2. Rationale


In FairThreads, schedulers can be seen as synchronization servers in which linked threads automatically synchronize at the end of each instant. In order to synchronize, however, linked threads must behave fairly and cooperate with the other threads by returning control to the scheduler. Linked threads are thus basically cooperative threads. Schedulers can also be seen as event servers, as they are in charge of broadcasting generated events to all linked threads. In this way, a fair scheduler defines a kind of synchronized area made of cooperative threads running at the same pace and communicating through broadcast events.

Synchronized Areas

A synchronized area can quite naturally be defined to manage some shared data that has to be accessed by several threads. To get access to the data, a thread must first link to the area; it is then scheduled by the area and thus gets safe access to the data. Indeed, as the scheduling is cooperative, there is no risk of the thread being preempted during an access to the data. The use of a synchronized area is, in this case, an alternative to the use of locks. A synchronized area can also play the role of a location that threads join when some kind of communication or synchronization is needed.

FairThreads allows programmers to decompose complex systems into several threads and areas to which threads can link dynamically, according to their needs. Moreover, a thread can be unlinked, that is, totally free from any synchronization provided by the schedulers defined in the system. Of course, unlinked threads cannot benefit from broadcast events. Unlinked threads run in the preemptive context of the OS and are thus just standard preemptive threads. Data shared by unlinked threads have to be protected by locks, in the standard way.


Cooperative Scheduling

Basically, a linked fair thread is a cooperative thread which can synchronize with other fair threads using events and communicate with them through values associated to these events. The scheduler to which a fair thread is linked gives it access to the processor, and all threads linked to the same scheduler have an equal right to execute. More precisely, fair schedulers define instants during which all linked threads run up to their next cooperation point. There are only two kinds of cooperation points: explicit ones, which are calls to the cooperate() function, and implicit ones, where threads are waiting for events. A fair scheduler broadcasts events to all fair threads linked to it; thus, all threads linked to the same scheduler see the presence and the absence of events in exactly the same way. Values associated to events are also broadcast. Events are local to the scheduler in which they are created, and are non-persistent data which are reset at the beginning of each new instant.
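The round-robin execution of instants can be sketched with a minimal model in plain C. This is an illustration only, not the FairThreads API: each linked thread is reduced to a step function that runs from one cooperation point to the next, and returning from the step plays the role of a call to cooperate(). All names below are invented for the sketch.

```c
#define MAX_THREADS 8

typedef struct {
    void (*step)(void);   /* code up to the next cooperation point */
    int  alive;
} fair_thread;

static fair_thread threads[MAX_THREADS];
static int nthreads = 0;

static void spawn(void (*step)(void)) {
    threads[nthreads].step  = step;
    threads[nthreads].alive = 1;
    nthreads++;
}

/* One instant: every linked thread runs exactly once, in a fixed
   round-robin order, so execution is fully deterministic. */
static void instant(void) {
    for (int i = 0; i < nthreads; i++)
        if (threads[i].alive)
            threads[i].step();
}

static char trace[16];
static int  pos = 0;

static void thread_a(void) { trace[pos++] = 'A'; }
static void thread_b(void) { trace[pos++] = 'B'; }

const char *run_demo(void) {
    spawn(thread_a);
    spawn(thread_b);
    for (int k = 0; k < 3; k++)   /* three successive instants */
        instant();
    trace[pos] = '\0';
    return trace;                 /* "ABABAB": strict alternation, instant by instant */
}
```

Running three instants produces the trace "ABABAB": at each instant both threads run once, in the same order, which is exactly the determinism property discussed below.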

Modularity

Events are a powerful synchronization and communication means which simplifies concurrent programming while reducing the risk of deadlocks. Events are used when one wants one or more threads to wait for a condition, without polling a variable to determine when the condition is fulfilled. Broadcasting is a means to achieve modularity, as the thread which generates an event need know nothing about its potential receivers. Fairness in event processing means that all threads waiting for an event always receive it during the very instant it is generated; thus, a thread releasing control at a cooperation point does not risk losing an event generated later in the same instant.
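The same-instant delivery property can be modeled by letting the scheduler re-run waiting threads until a fixpoint is reached before closing the instant. Again, this is an illustrative sketch and not the FairThreads API: a step returns 1 once its work for the instant is done, or 0 while it is still waiting for an event; all names are invented.

```c
#include <string.h>

#define MAX_EVENTS  4
#define MAX_THREADS 8

static int present[MAX_EVENTS];          /* event presence, this instant only */

static void generate(int e) { present[e] = 1; }   /* broadcast to all threads */
static int  await(int e)    { return present[e]; }

typedef struct {
    int (*step)(void);                   /* 1 = done for this instant, 0 = waiting */
    int done;
} fthread;

static fthread ths[MAX_THREADS];
static int nth = 0;

static void spawn(int (*step)(void)) {
    ths[nth].step = step;
    ths[nth].done = 0;
    nth++;
}

/* One instant: reset events, then run round-robin micro-steps until
   nothing changes any more; only then does the instant end. */
static void instant(void) {
    memset(present, 0, sizeof present);
    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < nth; i++)
            if (!ths[i].done && ths[i].step()) {
                ths[i].done = 1;
                progress = 1;
            }
    }
    for (int i = 0; i < nth; i++)
        ths[i].done = 0;                 /* ready for the next instant */
}

enum { E = 0 };
static int received = 0;

static int waiter(void)    { if (!await(E)) return 0; received++; return 1; }
static int generator(void) { generate(E); return 1; }

int run_event_demo(void) {
    spawn(waiter);          /* deliberately scheduled before the generator */
    spawn(generator);
    instant();
    return received;        /* 1: E is received in the instant it was generated */
}
```

Even though the waiter runs before the generator, it still receives E during the same instant: the scheduler simply runs it again after E has been generated, before declaring the instant over. This is the "no lost event" guarantee described above.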


Determinism

Cooperative frameworks are less nondeterministic than preemptive ones, as preemption cannot occur in an uncontrolled way. Actually, FairThreads pushes this to an extreme for linked threads: they are chosen for execution following a strict round-robin algorithm, which leads to deterministic systems. This can be a great help in programming and debugging.


No Priorities

Priorities are meaningless for linked threads, which always have equal rights to execute. The absence of priorities also contributes to simpler programming.



Preemptive Scheduling

Basically, unlinked threads are standard native preemptive threads. They are introduced in FairThreads for two main reasons. First, using unlinked threads, users can program non-blocking I/O in a very simple way: a blocking operation is simply delegated to an unlinked thread, so that the rest of the system keeps running. Without this kind of I/O, programming would become problematic. Second, unlinked threads can be run by distinct processors, which makes them a plus in multiprocessor contexts.
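Since unlinked threads are just native preemptive threads, the usual pthread patterns apply to them. The sketch below shows the delegation idea with plain POSIX threads and a mutex-protected result; the function names are invented for the example and are not part of the FairThreads API.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int result = 0;
static int ready  = 0;

/* Runs in its own preemptive thread; stands for a blocking
   operation such as read().  Here it just produces a value. */
static void *blocking_io(void *arg) {
    (void)arg;
    int value = 42;
    pthread_mutex_lock(&lock);       /* shared data protected by a lock, */
    result = value;                  /* in the standard preemptive way   */
    ready  = 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int run_io_demo(void) {
    pthread_t t;
    pthread_create(&t, NULL, blocking_io, NULL);
    /* In a real system, the cooperative part would keep running here
       instead of joining immediately. */
    pthread_join(t, NULL);
    pthread_mutex_lock(&lock);
    int r = ready ? result : -1;
    pthread_mutex_unlock(&lock);
    return r;
}
```

The cooperative part of the system is never blocked by the I/O: only the dedicated unlinked thread waits, and it hands its result back under a lock.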


Automata

FairThreads proposes automata to deal with auxiliary tasks, such as waiting for an event in order to stop a thread, that do not need the full power of a dedicated native thread. An automaton is a special linked fair thread which executes using the native thread of the scheduler to which it is linked. Thus, an automaton does not have its own execution stack in which to store its execution state. As a consequence, it can be implemented more efficiently than a standard thread.

Basically, automata are lists of states, which are elementary pieces of sequential code. The current state is stored by the automaton, and execution starts from it at the beginning of each instant. Execution leaves the current state when an explicit jump to another state is executed. When a state terminates without any explicit jump, execution automatically proceeds to the next state. Execution of the automaton terminates when the last state is exited. Thus, the fine-grain sequentiality of execution inside states is not memorized by automata; only the coarse-grain sequentiality of state execution is.

Events can be used without restriction in automata. There is a special state to await an event: execution stays in this state until the event is generated.
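An automaton can be sketched in plain C as a switch over a stored state, one arm per state. This is an illustrative model under the assumption that each call runs one state per instant; it does not use the actual FairThreads automaton macros, and all names are invented.

```c
/* The current state is the only execution context that survives
   between instants: no stack is needed. */
enum { S_INIT, S_AWAIT, S_FINAL, S_DONE };

typedef struct {
    int state;
    int out;
} automaton;

/* Runs the automaton for one instant; 'event' models the presence of
   the awaited event during that instant. */
static void automaton_instant(automaton *a, int event) {
    switch (a->state) {
    case S_INIT:                 /* first state: some sequential code */
        a->out = 1;
        a->state = S_AWAIT;      /* no explicit jump: proceed to next state */
        break;
    case S_AWAIT:                /* await state: stay here until the event */
        if (event)
            a->state = S_FINAL;
        break;
    case S_FINAL:                /* last state: exiting it terminates */
        a->out = 2;
        a->state = S_DONE;
        break;
    case S_DONE:                 /* terminated: nothing left to do */
        break;
    }
}

int run_automaton_demo(void) {
    automaton a = { S_INIT, 0 };
    automaton_instant(&a, 0);    /* runs S_INIT */
    automaton_instant(&a, 0);    /* event absent: stays in S_AWAIT */
    automaton_instant(&a, 1);    /* event present: leaves the await state */
    automaton_instant(&a, 0);    /* runs S_FINAL and terminates */
    return a.out;                /* 2 */
}
```

Note what is and is not remembered: the integer state field captures the coarse-grain progress from one state to the next, while everything inside a switch arm runs to completion within the instant, matching the description above.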



This page has been generated by Scribe.
Last update Sun Jun 9 15:48:08 2002