I have a C++ program where I'm creating multiple threads and having them access a shared array. Every time I want a thread to access the array, I lock a mutex, access the array, and then unlock the mutex.
All threads loop continuously until they've accessed the array a certain number of times, so each thread accesses the array not just once but several times.
Now, when I execute my program just as it is, whichever thread gets the mutex first (usually the first thread created) executes until it completes, before allowing another thread to access the array.
If I add a simple sleep() after unlocking the mutex, then the threads alternate accessing the array (which is what I want). I would rather not have to use sleep() to accomplish this, however.
To the best of my knowledge I believe this is what's happening:
1. Thread A locks the mutex and begins accessing the array
2. Thread B tries to lock the mutex but finds it's locked, so it waits
3. Thread A finishes with the array and unlocks the mutex
4. Thread A loops around and re-locks the mutex before Thread B notices that Thread A unlocked the mutex
Thus Thread A continues to access the array until it has accessed it n times and completes, and only then does Thread B access the array n times.
Is there any way to make the threads waiting (for the mutex to unlock) notice the unlock faster and grab the lock as soon as it's released?

I would prefer the above output to be something more along the lines of:
1. Thread A locks the mutex and begins accessing the array
2. Thread B tries to lock the mutex but finds it's locked, so it waits
3. Thread A finishes with the array and unlocks the mutex
4. Thread B sees the mutex is unlocked and locks it
5. Thread A loops around and tries to lock the mutex, but finds it's locked, so it waits
6. ... etc.

Answer:
Instead of sleep() you can take a look at sched_yield(), which will result in Thread A yielding the CPU right after releasing the mutex, before trying to acquire it again. Mutexes don't queue waiting threads and don't guarantee fairness.
Alternatively, signal the other threads using a condition variable.