Threads are generally considered well adapted for systems made of
heavy-computing tasks that run in a preemptive context and need little
communication and synchronization. This is typically
the case of Web servers, in which a thread (usually picked from a
pool of available threads) is associated with each new request. In such
contexts, the advantages of threads are clear.
The benefit of using threads is less clear for systems made of tasks that need strong synchronization or a lot of communication. Indeed, in a preemptive context, communicating or synchronizing generally implies the need to protect the data involved in the communication or synchronization. Locks are often used for this purpose, but they have a cost and are error-prone (with possibilities of deadlocks).
Pure cooperative threads (sometimes called green threads) are better adapted for highly communicating tasks. Indeed, data protection is no longer needed, and one can avoid the use of locks. Moreover, cooperative threads have clear and simple semantics, and are thus easier to program and to port. However, while cooperative threads can be efficiently implemented at user level, they cannot benefit from multiprocessor machines. Moreover, they need special means to deal with blocking I/O.
Actually, programming with threads is difficult because threads generally have very "loose" semantics. This is particularly true of preemptive threads, whose semantics strongly relies on the scheduling policy. The semantics of threads also depends on other aspects, such as the way thread priorities are mapped at the kernel level.
Threads take time to create, and need a rather large amount of memory to execute. Moreover, the number of native threads that can be created is often limited by the system. Several techniques can be used to work around these problems, especially when large numbers of short-lived components are needed. Among these techniques are thread-pooling, to limit the number of created threads, and the use of small pieces of code, sometimes called "chores" or "chunks", which can be executed in a simpler way than threads are.
FairThreads proposes to overcome the difficulties of threads by giving users the possibility to choose the context, cooperative or preemptive, in which threads are executed. More precisely, FairThreads defines schedulers, which are cooperative contexts to which threads can dynamically link or unlink. All threads linked to the same scheduler are executed in a cooperative way, and at the same pace. Threads which are not linked to any scheduler are executed by the OS in a preemptive way, at their own pace. An important point is that FairThreads offers programming constructs for linking and unlinking threads.
FairThreads has the following main characteristics:
This paper describes FairThreads in the context of C, implemented on top of the Pthreads library.
The rest of the paper is organized as follows: section 2 presents the rationale for the design of FairThreads. An overview of the API of FairThreads is given in section 3. Section 4 contains the full API. Some examples are described in section 5. Related work is considered in section 6. Finally, section 7 concludes the paper. Man pages of FairThreads are given in the annex.