Programming with POSIX Threads

Category: Operating Systems
Author: David R. Butenhof


by anonymous   2019-07-21

The following explanation is given by David R. Butenhof in "Programming with POSIX Threads" (p. 80):

Spurious wakeups may sound strange, but on some multiprocessor systems, making condition wakeup completely predictable might substantially slow all condition variable operations.

In the following comp.programming.threads discussion, he expands on the thinking behind the design:

Patrick Doyle wrote: 
> In article , Tom Payne   wrote: 
> >Kaz Kylheku  wrote: 
> >: It is so because implementations can sometimes not avoid inserting 
> >: these spurious wakeups; it might be costly to prevent them. 

> >But why?  Why is this so difficult?  For example, are we talking about 
> >situations where a wait times out just as a signal arrives? 

> You know, I wonder if the designers of pthreads used logic like this: 
> users of condition variables have to check the condition on exit anyway, 
> so we will not be placing any additional burden on them if we allow 
> spurious wakeups; and since it is conceivable that allowing spurious 
> wakeups could make an implementation faster, it can only help if we 
> allow them. 

> They may not have had any particular implementation in mind. 

You're actually not far off at all, except you didn't push it far enough. 

The intent was to force correct/robust code by requiring predicate loops. This was 
driven by the provably correct academic contingent among the "core threadies" in 
the working group, though I don't think anyone really disagreed with the intent 
once they understood what it meant. 

We followed that intent with several levels of justification. The first was that 
"religiously" using a loop protects the application against its own imperfect 
coding practices. The second was that it wasn't difficult to abstractly imagine 
machines and implementation code that could exploit this requirement to improve 
the performance of average condition wait operations through optimizing the 
synchronization mechanisms. 
-- Compaq Computer Corporation | POSIX Thread Architect

by anonymous   2019-07-21

Section 3.3 of Dave Butenhof's famous book has a great explanation.

Also, you can find a very good discussion of signal/broadcast in

by anonymous   2019-07-21

If you will be working with UNIX-like systems, then I recommend Programming With POSIX Threads by David R. Butenhof.

If you will be working with Microsoft Windows, then I recommend Writing Multithreaded Applications in Win32 by Jim Beveridge and Robert Wiener.

Irrespective of which threading package(s) you will end up using, I recommend you look at two presentations I wrote: Generic Synchronization Policies and Multi-threaded Performance Pitfalls. Those short presentations contain useful information that, unfortunately, is not discussed in many other books and articles.

by anonymous   2017-08-20

Although you probably don't want to hear it, I would still recommend investigating HTTP servers first. Programming against them may have seemed boring, synchronous, and non-persistent to you, but that's only because their creators did such a tremendously good job of hiding the gory details. If you think about it, a web server is anything but synchronous (it's not as if millions of people have to wait until you finish reading this post... concurrency :). And because these beasts do their job so well (yes, we yell at them a lot, but at the end of the day most HTTP servers are outstanding pieces of software), they are the definitive starting point if you want to learn about efficient multi-threading. Operating systems, programming language implementations, and games are other good sources, but perhaps further away from what you intend to achieve.

If you really intend to get your fingers dirty, I would suggest orienting yourself toward something like WEBrick first - it ships with Ruby and is implemented entirely in Ruby, so you will learn all about Ruby's threading concepts there. But be warned: you'll never get close to the performance of a Rack solution that sits on top of a web server implemented in C, such as thin.

So if you really want to be serious, you would have to roll your own server implementation in C(++), and probably make it support Rack if you intend to support HTTP. Quite a task, I would say, especially if you want the end result to be competitive. C code can be blazingly fast, but it's all too easy to make it blazingly slow as well; that lies in the nature of low-level work. And we haven't even discussed memory management and security yet. But if it's really your desire, go for it - though I would first dig into well-known server implementations for inspiration. See how they work with threads (pooling) and how they implement 'sessions' (you wanted persistence). Everything you desire can be done with HTTP, even better with a clever REST interface; existing applications that support all the features you mentioned are living proof of that. So going in that direction would not be entirely wrong.

If you still want to invent your own proprietary protocol, base it on TCP/IP as the lowest acceptable common denominator. Going below that would end up in a project your grandchildren would probably still be coding on. That's really as low as I would dare to go when it comes to network programming.

Whether you use it as a library or not, look into EventMachine and its conceptual model. Overlooking event-driven ('non-blocking') IO on your journey would be negligent in the context of learning about/reinventing the right wheels. As an appetizer for event-driven programming, see the explanations of the benefits of node.js as a web server.

Based on your requirements: asynchronous communication, multiple "subscribers" reacting to "events" that are centrally published; well that really sounds like a good candidate for an event-driven/message-based architecture.

Some books that may be helpful on your journey (Linux/C only, but the concepts are universal):

(Those were the classics)

Projects you may want to check out:

  • Apache 2, thin, mongrel, nginx, lighttpd, ..., any web server there is
  • EventMachine (I'm sorry :)
  • node.js
  • Efficient networking in a game (Quake 3 sources)
by anonymous   2017-08-20

I guess you could take a look at these:

Programming with POSIX Threads

The sockets Networking API

Interprocess Communications

Advanced Programming in the UNIX environment

The first three are very specific and would serve only if you need to focus on that particular subject. The last link is a highly rated book on Amazon that you may be interested in.

All in all, if you already have a grip of threads, IPC, networking, filesystem, all you need is the internet because there is widely available documentation about the POSIX API.

by anonymous   2017-08-20

Won't this lock the quit mutex in the first thread, thereby blocking the second?


And if that's true, then how does the first thread release the lock, so that the other thread can begin?

When you wait on a condition_variable it unlocks the lock that you pass it, so in

cvQuit.wait_for( lock, chrono::milliseconds(10) )

the condition variable will call lock.unlock() and then block for up to 10ms. This happens atomically, so there is no window between unlocking the mutex and blocking in which the condition could become ready and be missed.

When the mutex is unlocked it allows the other thread to acquire the lock on it.

Another question: If I change the wait_for() call to wait for zero seconds, that thread is starved. Can someone explain?

I would expect the other thread to be starved, because the mutex is not unlocked long enough for the other thread to lock it.

am I correct to assume that a no_timeout is recv'd instead of a timeout?

No, if the time duration passes without the condition becoming ready then it "times out" even after zero seconds.

How can I call a wait_for() and specify a zero time, so that the wait_for() call doesn't block, instead it just checks the condition and continues?

Don't use a condition variable! If you don't want to wait for a condition to become true, don't wait on a condition variable! Just test m_bQuit and proceed. (Aside, why are your booleans called m_bXxx? They're not members, so the m_ prefix is misleading, and the b prefix looks like that awful MS habit of Hungarian notation ... which stinks.)

I'd also be interested to hear about good references on this subject.

The best reference is Anthony Williams's C++ Concurrency In Action which covers the entire C++11 atomics and thread libraries in detail, as well as the general principles of multithreading programming. One of my favourite books on the subject is Butenhof's Programming with POSIX Threads, which is specific to Pthreads, but the C++11 facilities map very closely to Pthreads, so it's easy to transfer the information from that book to C++11 multithreading.

N.B. In thrQuit you write to m_bQuit without protecting it with a mutex. Since nothing prevents another thread from reading it at the same time as that write, it's a data race, i.e. undefined behaviour. The write to the bool must either be protected by a mutex, or the variable must be an atomic type, e.g. std::atomic<bool>.

I don't think you need two mutexes; the second just adds contention. Since you never release mtxQuit except while waiting on the condition_variable, there is no point in having the second mutex: mtxQuit already ensures only one thread can enter the critical section at a time.