mvorbrodt / blog
Code samples from https://vorbrodt.blog/
License: BSD Zero Clause License
At the moment I'm experimenting with a thread pool implementation in C based on your blog post (and also Sean Parent's stlab) and Java's ForkJoinPool, which employs similar ideas. The implementation seems to work correctly, yet it performs almost the same as a simple thread pool with a single blocking queue. So I built and ran src/pool_test.cpp to understand how your implementation performs on my system.
That's what I got:
$ ./bin/pool
********************************************************************************
simple (100,000 tasks, 100 reps) 1065.74 ms (10,000,000)
advanced (100,000 tasks, 100 reps) 1116.36 ms (10,000,000)
simple (100,000 tasks, 200 reps) 2279.97 ms (20,000,000)
advanced (100,000 tasks, 200 reps) 2296.35 ms (20,000,000)
simple (100,000 tasks, 300 reps) 3444.52 ms (30,000,000)
advanced (100,000 tasks, 300 reps) 3612.27 ms (30,000,000)
simple (100,000 tasks, 400 reps) 5077.66 ms (40,000,000)
advanced (100,000 tasks, 400 reps) 5010.16 ms (40,000,000)
simple (100,000 tasks, 500 reps) 6302.66 ms (50,000,000)
advanced (100,000 tasks, 500 reps) 6236.15 ms (50,000,000)
simple (100,000 tasks, 600 reps) 7547.81 ms (60,000,000)
advanced (100,000 tasks, 600 reps) 7472.9 ms (60,000,000)
simple (100,000 tasks, 700 reps) 8802.88 ms (70,000,000)
advanced (100,000 tasks, 700 reps) 8735.1 ms (70,000,000)
simple (100,000 tasks, 800 reps) 10043.1 ms (80,000,000)
advanced (100,000 tasks, 800 reps) 9964.45 ms (80,000,000)
simple (100,000 tasks, 900 reps) 11145.9 ms (90,000,000)
advanced (100,000 tasks, 900 reps) 10629 ms (90,000,000)
simple (100,000 tasks, 1,000 reps) 12423 ms (100,000,000)
advanced (100,000 tasks, 1,000 reps) 12330.8 ms (100,000,000)
So it looks like both pools in the benchmark perform on par on my machine, while I was expecting the advanced pool to behave significantly better. Is this expected, or am I interpreting something wrong?
uname -a output: Linux apechkurov-laptop 5.4.0-59-generic #65-Ubuntu SMP Thu Dec 10 12:01:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux; gcc version 9.3.0
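For context on what the "advanced" pool is meant to buy: its advantage over a single blocking queue comes from reduced lock contention when tasks are short and submitted at a high rate. The sketch below (my own minimal illustration, not the code from pool.hpp) shows the multi-queue idea: each worker owns a queue, submitters round-robin across queues, and an idle worker "steals" from its neighbours. With coarse tasks like the 100+ rep loops in the benchmark, the queue lock is held for a negligible fraction of the runtime, so both designs end up bounded by the task work itself:

```cpp
#include <atomic>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Minimal multi-queue pool sketch: one queue per worker, round-robin
// submission, and stealing from neighbouring queues when idle.
// Note: this sketch does not drain pending tasks on destruction;
// wait for completion externally before the pool goes out of scope.
class multi_queue_pool {
    struct work_queue {
        std::mutex m;
        std::deque<std::function<void()>> q;
    };
    std::vector<work_queue> queues_;
    std::vector<std::thread> threads_;
    std::atomic<bool> done_{false};
    std::atomic<unsigned> next_{0};

public:
    explicit multi_queue_pool(unsigned n = std::thread::hardware_concurrency())
        : queues_(n) {
        for (unsigned i = 0; i < n; ++i)
            threads_.emplace_back([this, i, n] {
                while (!done_) {
                    std::function<void()> task;
                    // Try our own queue first, then steal round-robin.
                    for (unsigned k = 0; k < n && !task; ++k) {
                        auto& wq = queues_[(i + k) % n];
                        std::lock_guard<std::mutex> lk(wq.m);
                        if (!wq.q.empty()) {
                            task = std::move(wq.q.front());
                            wq.q.pop_front();
                        }
                    }
                    if (task) task();
                    else std::this_thread::yield();
                }
            });
    }

    void submit(std::function<void()> f) {
        auto& wq = queues_[next_++ % queues_.size()];
        std::lock_guard<std::mutex> lk(wq.m);
        wq.q.push_back(std::move(f));
    }

    ~multi_queue_pool() {
        done_ = true;
        for (auto& t : threads_) t.join();
    }
};
```

A way to make the two designs diverge in a benchmark is to shrink the per-task work (e.g. a single increment instead of hundreds of reps), which shifts the bottleneck from computation to queue contention.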
The thread pools implemented in pool.hpp could use a small extension that supports OpenMP-style "parallel_for" functions. This would be desirable if you don't want to mess about with #pragmas and would rather configure your parallelised for-loops with function arguments instead.
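As a rough illustration of the suggestion, a free-standing parallel_for with OpenMP-style static scheduling might look like this. This is a hypothetical sketch using plain std::thread rather than the pool.hpp classes; it splits the index range into one contiguous chunk per thread, matching what `#pragma omp parallel for schedule(static)` does:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical parallel_for: apply body(i) for i in [begin, end),
// statically partitioned into one contiguous chunk per worker thread.
void parallel_for(std::size_t begin, std::size_t end,
                  const std::function<void(std::size_t)>& body,
                  unsigned num_threads = std::thread::hardware_concurrency()) {
    const std::size_t count = end - begin;
    if (count == 0) return;
    const std::size_t nthreads =
        std::min<std::size_t>(num_threads ? num_threads : 1, count);
    const std::size_t chunk = (count + nthreads - 1) / nthreads;

    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < nthreads; ++t) {
        const std::size_t lo = begin + t * chunk;
        const std::size_t hi = std::min(lo + chunk, end);
        if (lo >= hi) break;
        workers.emplace_back([lo, hi, &body] {
            for (std::size_t i = lo; i < hi; ++i) body(i);
        });
    }
    for (auto& w : workers) w.join();
}
```

Usage mirrors a plain for-loop: `parallel_for(0, v.size(), [&](std::size_t i) { v[i] *= 2; });`. A pool-backed version would submit the chunks as tasks instead of spawning threads per call.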
Hello,
Is there any way to tell thread_pool to stop accepting new work/tasks and wait for all threads to finish their current jobs before the program exits?
Thanks.
I ran the benchmarks as detailed here, and I get pretty much the same performance when using g++ and compiling with -O3.