3. Fair Thread Framework

This chapter introduces a new framework made of fair threads executed by fair schedulers. It is presented as a list of questions and answers.

What are Fair Threads?

A fair thread is basically a cooperative thread which must never forget to cooperate with other threads, by calling the cooperate() method. Fair threads are run by fair schedulers; scheduler fairness is twofold: all threads get equal access to the processor, as they are always served in the same round-robin order, and all threads get the same view of events, as an event generated during a phase is seen by every thread waiting for it during that same phase.


Why Fair Threads?

The FairThreads framework is basically cooperative, and thus simpler than preemptive ones. Indeed, as preemption cannot occur in an uncontrolled way, cooperative frameworks are less nondeterministic. FairThreads actually pushes this to the extreme, as it is fully deterministic: threads are chosen for execution following a strict round-robin algorithm. This can be a great help in programming and debugging.
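
The following self-contained sketch illustrates the cooperate() discipline and the strict round-robin scheduling just described. The names (CooperativeThread, SimpleFairScheduler, step) are illustrative only and are not the FairThreads API, which is presented in the next chapter; the code a fair thread executes between two calls to cooperate() is modelled here as one call to step().

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not the FairThreads API.
// A fair thread is modelled as an object whose step() method contains
// the code executed between two cooperation points.
interface CooperativeThread {
    void step(int phase);   // returns when the thread cooperates
}

class SimpleFairScheduler {
    private final List<CooperativeThread> threads = new ArrayList<>();

    void add(CooperativeThread t) { threads.add(t); }

    // Each phase gives the processor to every registered thread exactly
    // once, always in the same order: strict round-robin, hence a fully
    // deterministic interleaving.
    void runPhases(int phases) {
        for (int phase = 0; phase < phases; phase++) {
            for (CooperativeThread t : threads) {
                t.step(phase);
            }
        }
    }

    public static void main(String[] args) {
        SimpleFairScheduler scheduler = new SimpleFairScheduler();
        scheduler.add(phase -> System.out.println("thread A, phase " + phase));
        scheduler.add(phase -> System.out.println("thread B, phase " + phase));
        scheduler.runPhases(3);   // prints the same interleaving on every run
    }
}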

FairThreads provides users with a powerful communication means: event broadcasting. This simplifies concurrent programming while reducing the risk of deadlocks.


Why Broadcast Events?

Events are used when one wants one or more threads to wait for a condition, without the need for them to poll a variable to determine when the condition is fulfilled. Broadcast is a means to get modularity, as the thread which generates an event does not have to know anything about its potential receivers. Fairness in event processing means that all threads waiting for an event receive it during the same phase in which it is generated; thus, a thread leaving control to cooperate with other threads does not risk losing an event generated later in the same phase. Note that scheduler phases actually define the time scopes of events.
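
To make the picture concrete, here is a small self-contained sketch (again with illustrative names, not the actual API) of one way to model broadcast: events generated during a phase are recorded for that phase, and a thread that declared its interest earlier in the phase still receives an event generated after it gave up control.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of broadcast events, not the FairThreads API.
// A thread waiting for an event is re-examined before the end of the
// current phase, so it cannot miss an event generated later in the
// same phase by another thread.
public class BroadcastSketch {
    final Set<String> events = new HashSet<>();              // events of the current phase
    final List<Runnable> pendingWaiters = new ArrayList<>(); // threads still waiting

    void generate(String event) { events.add(event); }       // broadcast to everybody

    // Register interest in an event; the action runs as soon as the
    // event is present, possibly later in the same phase.
    void await(String event, Runnable action) {
        pendingWaiters.add(() -> { if (events.contains(event)) action.run(); });
    }

    void endOfPhase() {
        for (Runnable waiter : pendingWaiters) waiter.run();  // same-phase delivery
        pendingWaiters.clear();
        events.clear();                                       // events do not cross phases
    }

    public static void main(String[] args) {
        BroadcastSketch scheduler = new BroadcastSketch();
        // Two independent receivers: the generator does not know them.
        scheduler.await("go", () -> System.out.println("first receiver got go"));
        scheduler.await("go", () -> System.out.println("second receiver got go"));
        scheduler.generate("go");   // generated after both waits were registered
        scheduler.endOfPhase();     // both receivers see the event in this phase
    }
}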


How is it Implemented?

Fair threads are implemented in the Java programming language and are usable through an API. The fair thread implementation is based on standard Java threads, but it is independent of the actual JVM and OS, and is thus fully portable. Fair schedulers actually sit at the level of the Java Virtual Machine; one thus has the situation shown in Figure Implementation.



Fig. 7: Implementation
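
To give a rough idea of how cooperation can be layered on standard Java threads, here is an illustrative sketch (the actual FairThreads implementation is more elaborate and its API different): each fair thread is backed by a plain java.lang.Thread, and cooperate() is a rendezvous in which the running thread blocks itself and wakes up the scheduler, which then resumes the next thread.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.SynchronousQueue;

// Illustrative sketch only: every cooperative unit runs on a standard
// java.lang.Thread, but the scheduler lets only one of them run at a
// time; cooperate() is a rendezvous handing control back to the scheduler.
public class HandoffSketch {

    static class CooperativeUnit {
        private final SynchronousQueue<Object> toUnit = new SynchronousQueue<>();
        private final SynchronousQueue<Object> toScheduler = new SynchronousQueue<>();

        CooperativeUnit(String name, int steps) {
            Thread backing = new Thread(() -> {
                try {
                    toUnit.take();                            // wait to be scheduled
                    for (int i = 0; i < steps; i++) {
                        System.out.println(name + " runs step " + i);
                        cooperate();                          // never forget to cooperate
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            backing.setDaemon(true);
            backing.start();
        }

        // Called from the fair thread's own code.
        void cooperate() throws InterruptedException {
            toScheduler.put("yield");                         // wake up the scheduler
            toUnit.take();                                    // block until resumed
        }

        // Called by the scheduler: run the unit until its next cooperate().
        void giveControl() throws InterruptedException {
            toUnit.put("go");
            toScheduler.take();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<CooperativeUnit> units = new ArrayList<>();
        units.add(new CooperativeUnit("A", 3));
        units.add(new CooperativeUnit("B", 3));
        for (int phase = 0; phase < 3; phase++) {             // strict round-robin phases
            for (CooperativeUnit unit : units) {
                unit.giveControl();
            }
        }
    }
}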


What about Locks?

As fair threads are basically cooperative, no lock is needed when accessing a shared object. While executing, a fair thread cannot be interrupted by another fair thread; thus, execution is atomic and there is no need for synchronized code. This helps minimize deadlock situations, which are the plague of concurrent programming.
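
The following sketch illustrates the point with the step-per-phase modelling used above (illustrative code, not the API): two fair threads increment the same counter without any synchronized block, and since only one of them runs at a time, and neither can be preempted between reading and writing the field, no increment is ever lost.

// Illustrative sketch: two cooperative threads share a counter without
// any lock.  Because the scheduler runs them one at a time and never
// interrupts them between the read and the write, no increment is lost.
public class NoLockSketch {
    static int counter = 0;                       // shared, unsynchronized

    interface CooperativeThread { void step(); }  // code between two cooperation points

    public static void main(String[] args) {
        CooperativeThread a = () -> { counter = counter + 1; };  // read-modify-write
        CooperativeThread b = () -> { counter = counter + 1; };

        for (int phase = 0; phase < 1000; phase++) {  // round-robin phases
            a.step();
            b.step();
        }
        System.out.println(counter);              // always 2000, no lock needed
    }
}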


What about Priorities?

Priorities are meaningless in a fair context, where threads always have equal rights to execute. The absence of priorities also contributes to simplifying programming. Note that the effect of priorities in Java is rather unclear (see [11] for a discussion of this matter).


What about Preemption?

A preemptive strategy is sometimes needed, for example to reuse a piece of code which was not designed to be run concurrently. In the context of fair threads, preemption is possible through the notion of a fair process, assuming that the operating system is preemptive. A fair process spawns a standard process which is executed by the operating system concurrently with the JVM running the fair scheduler. This gives the situation shown in Figure Fair-process, where the fair process is represented as a black box.



Fig. 8: Fair Process
Note that one gets an instance of the many-to-many approach presented in section Existing Threading Frameworks.
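
The sketch below gives a rough picture of the idea (illustrative code, not the actual fair process API): the legacy, non-cooperative work runs on an ordinary java.lang.Thread scheduled preemptively by the JVM and the operating system, while the cooperative side merely observes its termination, here through a simple flag standing for an event.

import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of the fair-process idea: a piece of code that was
// not written to cooperate is run preemptively by the JVM/OS on an
// ordinary thread; the cooperative world only observes its completion.
public class FairProcessSketch {
    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean finished = new AtomicBoolean(false);

        // The "fair process": legacy blocking code, executed preemptively.
        Thread legacy = new Thread(() -> {
            try {
                Thread.sleep(100);               // stands for blocking, non-cooperative work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            finished.set(true);                  // signal completion (an event, in FairThreads terms)
        });
        legacy.start();

        // The cooperative side keeps cycling through its phases and merely
        // checks the flag at each one, never blocking the other fair threads.
        while (!finished.get()) {
            System.out.println("cooperative phase: fair process still running");
            Thread.sleep(20);                    // stands for one scheduler phase
        }
        System.out.println("fair process terminated");
    }
}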


What about Parallelism?

Up to now, one has only considered uniprocessor machines. When several processors are available, several threads can be executed simultaneously; this situation is often called parallelism. As threads share the same address space, data protection becomes mandatory. This is actually very similar to preemptive scheduling: in both cases, shared data have to be protected against concurrent accesses, and it is the programmer's responsibility to avoid deadlocks. Fair threads are designed for uniprocessor machines, and adapting them to multiprocessor ones is left for future work.


What about Signals and Interrupts?

In operating systems, signals can occur at any moment during execution and are to be processed without delay. Signals are useful for implementing interrupts, for example asynchronous IO interrupts. In the context of fair threads, signals are quite naturally represented by events, which can be generated at any moment. Fair threads also offer the possibility for generated values to be processed immediately: values associated with events are broadcast to all components and are received by them without delay, that is, during their generation phase.
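
As a small variation on the broadcast sketch above (again with illustrative names, not the actual API), the following code attaches a value to an event, for example the result of an asynchronous input, and delivers it to every interested receiver during the phase in which the event is generated.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch: events carrying values, broadcast to every
// interested thread during the phase in which they are generated.
public class ValuedEventSketch {
    final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();
    final List<Runnable> deliveries = new ArrayList<>();

    // A fair thread registers what it wants to do with the value.
    void onEvent(String event, Consumer<Object> handler) {
        handlers.computeIfAbsent(event, e -> new ArrayList<>()).add(handler);
    }

    // Generating a valued event schedules immediate (same-phase) delivery
    // to every registered receiver.
    void generate(String event, Object value) {
        for (Consumer<Object> handler : handlers.getOrDefault(event, List.of())) {
            deliveries.add(() -> handler.accept(value));
        }
    }

    void endOfPhase() {
        for (Runnable d : deliveries) d.run();   // processed before the phase ends
        deliveries.clear();
    }

    public static void main(String[] args) {
        ValuedEventSketch scheduler = new ValuedEventSketch();
        scheduler.onEvent("io_done", v -> System.out.println("logger got: " + v));
        scheduler.onEvent("io_done", v -> System.out.println("parser got: " + v));
        scheduler.generate("io_done", "42 bytes read");   // e.g. an asynchronous IO interrupt
        scheduler.endOfPhase();                           // both receivers get the value now
    }
}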


