In FairThreads, schedulers can be seen as synchronization servers, in which linked threads automatically synchronize at the end of each instant. However, in order to synchronize, linked threads must behave fairly and cooperate with the other threads by returning control to the scheduler. Thus, linked threads are basically cooperative threads. Schedulers can also be seen as event servers, as they are in charge of broadcasting generated events to all the linked threads. In this way, a fair scheduler defines a kind of synchronized area made of cooperative threads running at the same pace and communicating through broadcast events.
A synchronized area can quite naturally be defined to manage some shared data that has to be accessed by several threads. In order to get access to the data, a thread has first to link to the area; it is then scheduled by the area and thus gets safe access to the data. Indeed, as the scheduling is cooperative, there is no risk of the thread being preempted during an access to the data. The use of a synchronized area is, in this case, an alternative to the use of locks. A synchronized area can also play the role of a location that threads can join when some kind of communication or synchronization is needed. FairThreads allows programmers to decompose complex systems into several threads and areas to which threads can link dynamically, following their needs. Moreover, a thread can be unlinked, that is, totally free from any synchronization provided by the schedulers defined in the system. Of course, unlinked threads cannot benefit from broadcast events. Unlinked threads run in the preemptive context of the OS and are thus just standard preemptive threads. Data shared by unlinked threads have to be protected by locks, in the standard way.
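To make this concrete, here is a minimal C sketch of a scheduler used as a synchronized area around a shared counter. The calls follow the FairThreads C API (ft_scheduler_create, ft_thread_create, ft_thread_cooperate, ft_scheduler_start), but the header name and exact signatures should be read as assumptions rather than as a definitive program.

    #include <pthread.h>
    #include "fthread.h"               /* assumed FairThreads header */

    static long shared_counter = 0;    /* data managed by the area */

    /* Body of a linked thread: between two cooperation points the
       thread cannot be preempted by the other threads of the area,
       so the shared data is accessed without any lock.             */
    static void worker(void *args)
    {
        int i;
        for (i = 0; i < 1000; i++) {
            shared_counter++;          /* safe: no preemption here   */
            ft_thread_cooperate();     /* explicit cooperation point */
        }
    }

    int main(void)
    {
        ft_scheduler_t area = ft_scheduler_create();  /* the synchronized area */

        ft_thread_create(area, worker, NULL, NULL);   /* two threads linked    */
        ft_thread_create(area, worker, NULL, NULL);   /* to the area           */

        ft_scheduler_start(area);      /* the area starts running its instants */
        pthread_exit(NULL);            /* keep the process alive for the area  */
    }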
Basically, a linked fair thread is a cooperative thread which can synchronize with other fair threads using events and can communicate with them through values associated with these events. The scheduler to which a fair thread is linked gives it the possibility to get the processor, and all threads linked to the same scheduler get an equal right to execute. More precisely, a fair scheduler defines instants during which all the threads linked to it run up to their next cooperation point. There are only two kinds of cooperation points: explicit ones, which are calls to the cooperate() function, and implicit ones, where threads are waiting for events. A fair scheduler broadcasts events to all the fair threads linked to it; thus, all threads linked to the same scheduler see the presence and the absence of events in exactly the same way. Moreover, values associated with events are also broadcast. Events are local to the scheduler in which they are created, and are non-persistent data which are reset at the beginning of each new instant.
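The following sketch illustrates the two kinds of cooperation points and the broadcast of events; the ft_event_create, ft_thread_generate and ft_thread_await names again follow the FairThreads C API, but the details are assumptions. Both waiters see the event in the same instant, since its presence is broadcast to every thread linked to the scheduler.

    #include <stdio.h>
    #include <pthread.h>
    #include "fthread.h"              /* assumed FairThreads header */

    static ft_event_t go;             /* event local to its scheduler */

    /* Implicit cooperation point: the thread stays blocked until
       the event is generated, possibly several instants later.     */
    static void waiter(void *args)
    {
        ft_thread_await(go);
        printf("waiter %s saw the event\n", (char *)args);
    }

    /* Explicit cooperation point, then generation of the event,
       which is broadcast to all threads linked to the scheduler.   */
    static void producer(void *args)
    {
        ft_thread_cooperate();        /* let one instant pass       */
        ft_thread_generate(go);       /* both waiters see it        */
    }

    int main(void)
    {
        ft_scheduler_t sched = ft_scheduler_create();
        go = ft_event_create(sched);  /* events belong to a scheduler */

        ft_thread_create(sched, waiter, NULL, "A");
        ft_thread_create(sched, waiter, NULL, "B");
        ft_thread_create(sched, producer, NULL, NULL);

        ft_scheduler_start(sched);
        pthread_exit(NULL);
    }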
Basically, unlinked threads are standard native preemptive threads. They are introduced in FairThreads for two main reasons. First, using unlinked threads, users can program non-blocking I/O in a very simple way; without unlinked threads, a blocking I/O operation performed by a linked thread would stall the whole scheduler, which makes programming problematic. Second, unlinked threads can be run by distinct processors, which is a plus in multiprocessor contexts.
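A typical pattern, sketched below with the assumed ft_thread_unlink and ft_thread_link calls, is to leave the scheduler just before a blocking read, so that the other linked threads keep running, and to relink afterwards to process the result inside the synchronized area.

    #include <unistd.h>
    #include "fthread.h"                 /* assumed FairThreads header */

    static ft_scheduler_t sched;         /* the thread's home area (set at creation) */

    static void reader(void *args)
    {
        char buf[256];
        ssize_t n;

        ft_thread_unlink();              /* leave the area: the thread becomes a
                                            plain preemptive native thread        */
        n = read(0, buf, sizeof buf);    /* blocking I/O no longer stalls the
                                            linked threads of the scheduler       */
        ft_thread_link(sched);           /* rejoin the area                       */
        if (n > 0) {
            /* process buf cooperatively, without locks, inside the area */
        }
    }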
FairThreads proposes automata to deal with auxiliary tasks, such as waiting for an event in order to stop a thread, that do not need the full power of a dedicated native thread to execute. An automaton is a special linked fair thread which executes using the native thread of the scheduler to which it is linked. Thus, an automaton does not have its own execution stack in which it could store its execution state. As a consequence, it can be implemented more efficiently than threads. Basically, automata are lists of states which are elementary pieces of sequential code. The current state is stored by the automaton, and execution restarts from it at the beginning of each instant. Execution leaves the current state when an explicit jump to another state is performed. When a state terminates without any explicit jump, execution automatically proceeds to the next state. Execution of the automaton terminates when the last state is exited. Thus, the fine-grain sequentiality of execution inside states is not memorized by automata; only the coarse-grain sequentiality of state execution is. Events can be used without restriction in automata. There is a special kind of state to await an event: execution stays in this state until the event is generated.
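As an illustration only, an automaton that waits for an event and then stops a thread could be written along the following lines; the macro-based syntax (DEFINE_AUTOMATON, STATE, STATE_AWAIT, GOTO) and the stop primitive should be read as assumptions about the FairThreads C interface, not its actual definition. The point is that only the current state index survives from one instant to the next, so no private stack is required.

    #include "fthread.h"              /* assumed header and macro names */

    static ft_event_t  stop_event;    /* generated elsewhere in the area */
    static ft_thread_t target;        /* thread to stop when it occurs   */

    /* Hypothetical automaton: each STATE is an elementary piece of
       sequential code; the automaton only remembers which state is
       current, and resumes from it at the beginning of each instant.  */
    DEFINE_AUTOMATON(stopper)
    {
        BEGIN_AUTOMATON

        /* state 0: stay here, instant after instant, until the event
           is generated (the special awaiting state described above)   */
        STATE_AWAIT(0, stop_event);

        /* state 1: the event is present; react and go back to state 0 */
        STATE(1) {
            ft_scheduler_stop(target);   /* assumed stop primitive      */
            GOTO(0);
        }

        END_AUTOMATON
    }

    /* created like a thread, but run by the scheduler's own native
       thread (assumed creation call):                                  */
    /* ft_automaton_create(sched, stopper, NULL, NULL);                 */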