Why Fair Threads?
The FairThreads framework is basically cooperative; it is thus simpler than
preemptive ones. Indeed, as preemption cannot occur in an
uncontrolled way, cooperative frameworks are less
nondeterministic. FairThreads actually takes this to the
extreme, as it is fully deterministic: threads are chosen for
execution following a strict round-robin algorithm. This can be a
great help in programming and debugging.
FairThreads provides users with a powerful communication means,
namely event broadcasting. This simplifies concurrent programming
while reducing the risk of deadlocks.
What about Parallelism?
Up to now, only uniprocessor machines have been considered. When
several processors are available, several threads can execute
simultaneously; this situation is often called
parallelism. As threads share the same address space, data
protection becomes mandatory. This is actually very similar
to preemptive scheduling: in both cases, shared data
must be protected against concurrent accesses, and it is the
programmer's responsibility to avoid deadlocks.
Fair threads are designed for uniprocessor machines, and it is left
for future work to adapt them to multiprocessor ones.
What about Signals and Interrupts?
In operating systems, signals can occur at any moment during
execution and are to be processed without delay. Signals are useful
to implement interrupts, for example asynchronous I/O interrupts. In
the context of fair threads, signals are quite naturally represented
by events, which can be generated at any moment. Fair threads also
offer the possibility for generated values to be processed
immediately: values associated with events are broadcast to all
components and received by them without delay, that is, during their
generation phase.