In my work on group commit for MariaDB, I have the following situation: a group of threads are going to participate in group commit. This means that one of the threads, called the group leader, will run an fsync() for all of them, while the other threads wait. Once the group leader is done, it needs to wake up all of the other threads.

The obvious way to do this is to have the group leader call pthread_cond_broadcast() on a condition that the other threads are waiting for with pthread_cond_wait():
    bool wakeup= false;
    pthread_cond_t wakeup_cond;
    pthread_mutex_t wakeup_mutex;
Waiter:

    pthread_mutex_lock(&wakeup_mutex);
    while (!wakeup)
      pthread_cond_wait(&wakeup_cond, &wakeup_mutex);
    pthread_mutex_unlock(&wakeup_mutex);
    // Continue processing, group commit is now done.
Group leader:

    pthread_mutex_lock(&wakeup_mutex);
    wakeup= true;
    pthread_cond_broadcast(&wakeup_cond);
    pthread_mutex_unlock(&wakeup_mutex);
Note the association of the condition with a mutex. This association is inherent in the way pthread condition variables work. The mutex must be locked when calling pthread_cond_wait(), and it will be re-acquired before the call returns. (Check the man page for pthread_cond_wait() for details.)
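To see this mutex contract in action, here is a small self-contained demo (my own example, not from the MariaDB code). With an error-checking mutex, locking a mutex the thread already holds fails with EDEADLK, so the assert below confirms that pthread_cond_wait() really did return with the mutex re-acquired:

    #include <assert.h>
    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool wakeup= false;
    static pthread_cond_t wakeup_cond= PTHREAD_COND_INITIALIZER;
    static pthread_mutex_t wakeup_mutex;

    static void *waiter(void *arg)
    {
      (void)arg;
      pthread_mutex_lock(&wakeup_mutex);
      while (!wakeup)
        pthread_cond_wait(&wakeup_cond, &wakeup_mutex);
      /* We should hold the mutex again here; on an error-checking mutex,
         locking it a second time from the owning thread returns EDEADLK. */
      assert(pthread_mutex_lock(&wakeup_mutex) == EDEADLK);
      pthread_mutex_unlock(&wakeup_mutex);
      puts("pthread_cond_wait() returned with the mutex held");
      return NULL;
    }

    int main(void)
    {
      pthread_mutexattr_t attr;
      pthread_t thd;

      pthread_mutexattr_init(&attr);
      pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
      pthread_mutex_init(&wakeup_mutex, &attr);

      pthread_create(&thd, NULL, waiter, NULL);

      pthread_mutex_lock(&wakeup_mutex);
      wakeup= true;
      pthread_cond_signal(&wakeup_cond);
      pthread_mutex_unlock(&wakeup_mutex);

      pthread_join(thd, NULL);
      return 0;
    }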
Now, when I think about how these condition variables work, something strikes me as somewhat odd. The idea is that the broadcast signals every waiting thread to wake up. However, because of the associated mutex, only one thread can actually wake up at a time: the first thread wakes up holding the mutex, and all the other to-be-woken threads must now wait for that mutex! Only after the first thread releases the mutex can the second thread wake up holding it; only after the second releases it can the third wake up; and so on.

So if we have, say, 100 threads waiting, the last one has to wait for the first 99 threads to each be scheduled and each release the mutex, one after the other, in a completely serialised fashion.
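To put rough numbers on it (purely illustrative, not measured): if each wakeup-plus-mutex-handoff costs on the order of 10 microseconds of scheduling latency, then the last of 100 threads runs about 99 × 10 µs ≈ 1 millisecond after the broadcast, even though all the threads were told to wake up at the same instant.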
But what I really want is to just let them all run at once, in parallel (or at least as many as my machine has spare cores for). There is another way to achieve this: use a separate condition and mutex for each thread, and have the group leader signal each one individually:
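Here, each waiter is assumed to carry its own synchronisation state, something like this (the struct is my illustration, not actual MariaDB code):

    struct waiter
    {
      bool wakeup;
      pthread_cond_t wakeup_cond;
      pthread_mutex_t wakeup_mutex;
    };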
Waiter:

    pthread_mutex_lock(&me->wakeup_mutex);
    while (!me->wakeup)
      pthread_cond_wait(&me->wakeup_cond, &me->wakeup_mutex);
    pthread_mutex_unlock(&me->wakeup_mutex);
Group leader:

    for waiter in <all waiters>
        pthread_mutex_lock(&waiter->wakeup_mutex);
        waiter->wakeup= true;
        pthread_cond_signal(&waiter->wakeup_cond);
        pthread_mutex_unlock(&waiter->wakeup_mutex);
This way, every waiter is free to start running as soon as it is woken up by the leader; no waiter has to wait for the others. This seems advantageous, especially as the number of cores increases (rumour has it that 48-core machines are becoming commodity).
"Seems" advantageous. But is it really? Let us micro-benchmark it.
For this, I start up 5000 threads. Each thread goes to wait on a condition, either a single shared one or one distinct to each thread. The main program then signals the threads to wake up, either with a single pthread_cond_broadcast(), or with one pthread_cond_signal() per thread. Each thread records the time it woke up, and the main program collects these times and computes how long it took between starting to signal the condition(s) and the wakeup of the last thread. (Here is the full C source code for the test program.)
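In outline, the test program does something like the following (a condensed sketch of the broadcast case only; NTHREADS and diff_msec are names I use here, and the signal variant simply swaps in one mutex/condition pair per thread as shown above):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define NTHREADS 5000

    static bool wakeup= false;
    static pthread_cond_t wakeup_cond= PTHREAD_COND_INITIALIZER;
    static pthread_mutex_t wakeup_mutex= PTHREAD_MUTEX_INITIALIZER;
    static struct timespec wake_time[NTHREADS];

    static void *waiter_thread(void *arg)
    {
      long idx= (long)arg;

      pthread_mutex_lock(&wakeup_mutex);
      while (!wakeup)
        pthread_cond_wait(&wakeup_cond, &wakeup_mutex);
      pthread_mutex_unlock(&wakeup_mutex);
      /* Record when this thread actually got going again. */
      clock_gettime(CLOCK_MONOTONIC, &wake_time[idx]);
      return NULL;
    }

    static double diff_msec(struct timespec a, struct timespec b)
    {
      return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) * 1e-6;
    }

    int main(void)
    {
      static pthread_t threads[NTHREADS];
      struct timespec start;
      double worst= 0.0;
      long i;

      for (i= 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, waiter_thread, (void *)i);
      sleep(2);                  /* crude: let all threads reach the wait */

      clock_gettime(CLOCK_MONOTONIC, &start);
      pthread_mutex_lock(&wakeup_mutex);
      wakeup= true;
      pthread_cond_broadcast(&wakeup_cond);      /* the variant under test */
      pthread_mutex_unlock(&wakeup_mutex);

      for (i= 0; i < NTHREADS; i++)
      {
        double d;
        pthread_join(threads[i], NULL);
        d= diff_msec(start, wake_time[i]);
        if (d > worst)
          worst= d;
      }
      printf("last thread woke %.1f msec after the broadcast\n", worst);
      return 0;
    }

(Compile with gcc -O2 -pthread; 5000 threads may require raising the ulimit for max user processes on some systems.)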
I ran the program on a quad-core Intel Core i7 with hyperthreading enabled, the most parallel machine I have easy access to. The results are as follows:
    Wakeup method               Time until last thread woke
    pthread_cond_broadcast()    46.9 msec
    pthread_cond_signal()       17.6 msec
Conclusion: pthread_cond_broadcast() is slower, as I speculated. I would expect the effect to be more pronounced on systems with more cores; it would be interesting if readers with access to such systems could try the test program and comment below on the results.