4. Programming Style

Sections

4.1 Necessity of Cooperation
4.2 Granularity of Instants
4.3 Use of Events
4.4 Architectural Design


The specificities of Loft lead to a programming style that can be summed up by the following rules of thumb: don't forget to cooperate; define a good granularity of time for instants; don't lose events; choose a smart decomposition into distinct synchronous areas. We now consider these rules in turn.
4.1 Necessity of Cooperation

Fair threads are basically cooperative: they have to relinquish control periodically and return it to the scheduler they are linked to, in order to let the other threads execute. There are basically two ways for a thread to be non-cooperative: to fall into an instantaneous loop, or to execute a never-terminating atom. It must be noticed that recursion cannot by itself lead to a lack of cooperation. We now consider these points in more detail.

Instantaneous Loops

An instantaneous loop is a loop that never terminates and cycles forever during the same instant. Thus, it never cooperates and prevents the other threads from being executed. The first way to get an instantaneous loop is to forget an explicit cooperate statement, as in:

while (1) do 
   printf ("loop!"); 
end
The use of implicit cooperation points is not always sufficient to break instantaneous loops. For example, consider the case where waiting for an event triggers the execution of the loop body:

while (1) do
   await (evt);
   printf ("loop!"); 
end
Then, the infinite cycle does not start immediately, but it does as soon as the event becomes present. Indeed, from that moment on, the await statement will always consider the event as present, and the loop then becomes instantaneous. Falsely assuming that two events are never simultaneous can also lead to an instantaneous loop. Consider:

while (1) do
   await (evt1);
   await (evt2);
   { ... }
end
This code cannot lead to an instantaneous loop provided evt1 and evt2 are never simultaneous. However, there is an infinite cycle if both events are present at the same instant during the loop execution. Note that this may occur if evt1 and evt2 actually denote the same event. An instantaneous loop can also be produced in an indirect way, through the get_value primitive, by generating a potentially infinite number of values. For example, consider:

while (search) do
   get_value (evt,i,&res);
   {
      if (return_code () == ENEXT) {
         search = 0;
      } else {
         printf ("value #%d: %d ", i,res);
         i++;
         generate_value (evt,i);
      }
   }
end
This piece of code can lead to an infinite loop (and also to the generation of an infinite number of values). Indeed, getting the ith value leads to generating the (i+1)th one. Thus, an instantaneous loop appears as soon as the first value is generated.
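The way an await-guarded loop turns instantaneous can be simulated outside Loft. In the Python sketch below (all names are illustrative, this is not Loft or its API), a generator plays the role of the thread and a micro-step budget plays the role of a watchdog: once evt is present, every await in the same instant succeeds immediately, so the budget is exhausted.

```python
# Illustrative sketch, not Loft: a generator stands for a fair thread,
# and the yielded tuples stand for its instructions.
def looper():
    while True:
        yield ("await", "evt")   # returns at once when evt is present
        yield ("step",)          # the loop body

def run_one_instant(thread, present, budget=10):
    """Run `thread` during one instant; return the number of body steps."""
    steps = 0
    for _ in range(budget):
        instr = next(thread)
        if instr[0] == "await" and instr[1] not in present:
            return steps         # blocked on an absent event: thread stops here
        if instr[0] == "step":
            steps += 1
    return steps                 # budget exhausted: instantaneous loop detected

print(run_one_instant(looper(), present=set()))      # 0: evt absent, thread blocks
print(run_one_instant(looper(), present={"evt"}))    # 5: evt present, loop spins
```

With evt absent the thread blocks immediately and costs nothing; with evt present it spins until the watchdog budget runs out, which is exactly the instantaneous loop described above.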

Non-terminating Atoms

A non-terminating atom (which is really a misnomer in this case) also leads to an absence of cooperation. The purest form of a non-terminating atom is:

{
   for (;;) {}
}
There are, of course, many other ways for an atom to be non-terminating. Note that, in addition to terminating, an atom also has to do so properly: it is forbidden to jump, in one way or another (for example with a goto), outside the atom, as outlined in C Statements.

Recursion

There is no way to recursively create infinitely many threads during the same instant, as thread creation is always postponed to the next instant. Thus, the following code, while not useful, does not prevent cooperation:

module m ()
m_create ();
end module
Of course, one can exhaust the system memory using recursion, by creating infinitely many new threads, as in:

module m ()
m_create ();
halt
end module
Recursive creation of threads is also possible using the run primitive, as for example in:

module m ()
run m ();
end module
However, as with explicit thread creation, there is no way for run to cycle forever during the same instant.
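Why the recursive module m cannot loop within one instant can be sketched with a toy Python scheduler (illustrative names, not the Loft API): creations requested during instant n only take effect at instant n+1, so the number of threads grows by one per instant, and it is memory, not cooperation, that eventually runs out.

```python
# Illustrative sketch of: module m () m_create (); halt end module
def m(scheduler):
    scheduler.pending.append(m)    # m_create(): postponed to the next instant
    while True:
        yield                      # halt: cooperate at every instant

class Sched:
    def __init__(self):
        self.threads, self.pending = [], []
    def instant(self):
        # pending creations only take effect now, at the start of the instant
        self.threads += [make(self) for make in self.pending]
        self.pending = []
        for t in self.threads:
            next(t)                # run each thread up to its cooperate

s = Sched()
s.pending.append(m)                # initial creation of one instance of m
counts = []
for _ in range(4):
    s.instant()
    counts.append(len(s.threads))
print(counts)                      # [1, 2, 3, 4]: one new thread per instant
```

Each instant terminates normally; the population just grows linearly, which is the memory-exhaustion scenario of the second example above.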
4.2 Granularity of Instants

Supposing that there is no instantaneous loop to prevent the passing of instants, the problem remains of getting a correct granularity for instant durations. Indeed, if instants take too much time, the reactivity of the whole system can be lost. This problem is actually central in the context of Loft. The granularity of unlinked threads is not under the control of the user, but only of the scheduler. Of course, standard methods can then be used, for example based on the yield primitive, but with all their standard problems too. We do not consider this case in more detail here. With non-native threads, users have some control over the granularity of instants, based on cooperation points, either explicit (cooperate) or implicit (await). For example, consider the following atom, which calls a function f 2*N times during the same instant:


{
   int i;
   for (i=0; i<2*N; i++) {
      f ();
   }
}
It can be split into two instants simply by introducing a cooperate statement:

{
   int i;
   for (i=0; i<N; i++) {
      f ();
   }
}
cooperate;
{
   int i;
   for (i=0; i<N; i++) {
      f ();
   }
}
Then, the complete execution of the series of calls takes two instants instead of one, giving the other threads the possibility to execute after the first half of the calls has been performed. At the extreme, f can be called once per instant:

repeat (2*N) do
   f ();
   cooperate
end
Introducing supplementary cooperation points in a program gives other threads more opportunities to execute. The point, however, is that the introduction of supplementary cooperation points must remain compatible with the use of events. Remember indeed that events are not persistent data: they are automatically reset as absent at the beginning of each new instant.
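The same splitting can be sketched with Python generators (illustrative, not Loft), where each yield plays the role of cooperate: the coarse version finishes its 2*N calls in one instant, the fine version spreads them over 2*N instants.

```python
def coarse(n, calls):
    for _ in range(2 * n):
        calls.append("f")          # all 2*N calls in the same instant
    yield                          # a single cooperate at the end

def fine(n, calls):
    for _ in range(2 * n):
        calls.append("f")          # one call...
        yield                      # ...then cooperate, at every iteration

def instants(thread):
    # each resumption up to a yield counts as one instant of work
    return sum(1 for _ in thread)

calls = []
print(instants(coarse(4, calls)), len(calls))   # 1 8: same work, one instant
calls = []
print(instants(fine(4, calls)), len(calls))     # 8 8: same work, 2*N instants
```

Both versions perform exactly the same 2*N calls; only the number of instants over which the work is spread, and thus the reactivity left to the other threads, changes.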
4.3 Use of Events

Busy-waiting

A major use of events is to avoid busy-waiting on a condition (just like condition variables with standard pthreads). For example, consider a system where one thread sets a boolean variable X to true to trigger processing done by another thread. A possibility is that the processing thread tests X at each instant:

module processing_thread ()
   while (!X) do cooperate end;
   ....
end
Note that no race condition on X can occur, as the framework is cooperative. This solution is however unsatisfactory, because the processing thread tests X at each instant while it is false, which is of course inefficient. The correct solution is to wait for an event whose generation triggers the processing thread, which then becomes:

module processing_thread ()
   await (X);
   ....
end
Now, the processing thread does not consume any resources while waiting for X. As a side effect of using events, one is sure that the processing starts at the very same instant X is generated. This is to be compared with the previous solution using a boolean variable, where the processing is delayed to the next instant if X is tested before being set to true.
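The difference in cost between the two versions can be sketched numerically (plain Python, purely illustrative): the polling thread performs one test per instant until X is set, while the awaiting thread is only resumed at the very instant the event is generated.

```python
def simulate(generate_at, horizon):
    polls = 0        # instants in which the polling thread tests X
    wakeups = 0      # instants in which the awaiting thread is resumed
    for instant in range(horizon):
        present = (instant == generate_at)   # an event lasts one instant
        if instant <= generate_at:
            polls += 1                       # busy-wait: one test per instant
        if present:
            wakeups += 1                     # await: resumed exactly once
    return polls, wakeups

print(simulate(generate_at=5, horizon=10))   # (6, 1)
```

The later the event is generated, the more the polling version wastes: the awaiting version always costs a single wakeup, at the generating instant itself.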

Lost Generations

A rather delicate point when using events is that they may be awaited or generated at the wrong moment. For example, let us return to the previous example. If the processing thread is created after the generation of the event, then the processing will not be done, as the thread will never receive the event. For example, the following code is incorrect (more precisely, the processing will not be done immediately, despite the generation of X):

generate (X);
processing_thread_create ();
Indeed, the processing thread is only created at the next instant, and thus does not see X as present. Actually, the generation of X has been lost.
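The lost generation follows from the two facts involved, which the following Python sketch (illustrative) makes explicit: events are reset at each new instant, and a created thread only starts at the next instant.

```python
# X is generated at instant 0 only: events are automatically reset as
# absent at the beginning of every new instant
present_at = {0: {"X"}}

# the processing thread, created at instant 0, only runs from instant 1 on
thread_starts_at = 1

saw_X = any("X" in present_at.get(i, set())
            for i in range(thread_starts_at, 10))
print(saw_X)   # False: the generation of X has been lost
```

Swapping the two statements, so that the thread is created one instant before the generation, would make the thread see X.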

Deadlocks

As there are no locks, deadlocks, in the original meaning of the term, cannot occur. However, some analogies can be made with situations in Loft. Actually, the closest analogy with a deadlock would be a situation made of two threads, each of them preventing the other from closing the current instant. This would be the case if one thread tests the variable X with the atom:

{
   while (!X) {}
   Y=1;
}
while the other tests the variable Y with:

{
   while (!Y) {}
   X=1;
}
This is however an incorrect situation because the two atoms considered are actually non-terminating. Provided that all atoms always terminate, such a situation should never appear. Another analogy with a deadlock would be a situation where each thread prevents the other from passing a certain point of its execution. This is the case if one thread is executing the non-atomic instruction:

while (!X) do cooperate; end
{Y=1;}
while the other executes:

while (!Y) do cooperate; end 
{X=1;}
Then, the two threads remain blocked, but this does not in any way prevent the passing of instants. Using events instead of boolean variables avoids busy-waiting. The two instructions would be:

await (X);
generate (Y)
and:

await (Y);
generate (X)
Actually, such a situation could be called a "communication deadlock". Real deadlocks are at the implementation level: they appear because some data has to be protected by locks. A real deadlock introduces the risk that some data becomes inaccessible forever because a lock used to protect it is never released. This problem concerning data does not appear with communication deadlocks, which can be seen as occurring at a higher, more "logical", level.
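A communication deadlock can be simulated with a small Python sketch (illustrative; one step per thread and per instant, which is enough for this scenario): each thread blocks on an event that only the other could generate, yet instants keep passing for the rest of the system.

```python
def t1():
    yield ("await", "X")
    yield ("generate", "Y")       # never reached

def t2():
    yield ("await", "Y")
    yield ("generate", "X")       # never reached

def run(threads, n_instants):
    blocked_on = {t: None for t in threads}
    for _ in range(n_instants):
        present = set()           # events are reset at each new instant
        for t in threads:
            if blocked_on[t] and blocked_on[t] not in present:
                continue          # still awaiting: skipped, not spinning
            op, e = next(t)
            if op == "generate":
                present.add(e)
            elif op == "await" and e not in present:
                blocked_on[t] = e
    return [blocked_on[t] for t in threads]

print(run([t1(), t2()], 5))       # ['X', 'Y']: both blocked forever
```

After the first instant, both threads are permanently blocked, but the scheduler keeps closing instants: no data is made inaccessible, which is what distinguishes this from a real deadlock.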
4.4 Architectural Design

Architectural design is of major concern in Loft. Indeed, the language allows programmers to decompose a system into several sub-systems, implemented as distinct schedulers. These schedulers can be run either synchronously or asynchronously.

Synchronous Schedulers

Synchronous schedulers are schedulers whose instants are related. There are many different situations in which the presence of several synchronous schedulers can be useful. Let us, for example, consider graphical objects which should be painted only after they have all moved. A simple solution consists in defining two threads for each object, one for moving it and one for painting it, and in creating all the moving threads before all the painting ones. However, this forbids the dynamic creation of new objects, as their moving threads would be executed after the painting threads of the old objects. The solution is to use two schedulers, one to which the moving threads are linked, and the other for the painting threads:

scheduler_t move_scheduler, paint_scheduler;

main ()
{
   move_scheduler = scheduler_create ();
   paint_scheduler = scheduler_create ();
   ....
   while (1) {
        scheduler_react (move_scheduler);
        scheduler_react (paint_scheduler);
   }
}
The function to create an object is:

void object_creation (graphical_object obj)
{
   move_create_in (move_scheduler,obj);
   paint_create_in (paint_scheduler,obj);
}
One can also imagine many situations where several instants of one scheduler should actually correspond to one instant of another. Loft makes it very easy to code these various situations.

Asynchronous Schedulers

Schedulers that are run asynchronously define reactive areas in which threads share the same instants and the same events. Asynchronous schedulers are especially interesting in the context of multiprocessing, as each of them can be run by a dedicated native thread, on a distinct processor. The example of the sieve in section Lucky Prime Numbers shows how this can be achieved quite simply. However, one has to be very cautious when using unlinked threads, as all the problems of standard threads then come to the foreground. For example, unlinked threads should never access data used by linked threads, because such data are usually not protected. Data shared by two unlinked threads should be protected by locks, exactly as if they were standard threads (actually, they are standard threads!). The discipline to be obeyed when using unlinked threads has to be very strict, and it seems good practice to adopt the following rules:
  • any shared data should only be accessed by threads linked to the same scheduler, which is in charge of the data. In particular, any access to shared data by an unlinked thread should be forbidden. The unlinked thread must first link itself to the scheduler in charge of the data before accessing it.
  • any communication between distinct schedulers running asynchronously should use either some event, or a native thread acting as a go-between for transferring data.
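The second rule can be illustrated with plain preemptive threads (Python's threading module stands in for native threads here; this is not the FairThreads API): the two areas never touch each other's data directly, and a thread-safe queue plays the go-between.

```python
import queue
import threading

box = queue.Queue()         # the only shared object, protected internally

def area_a():               # producer area: only ever writes into the box
    for v in range(3):
        box.put(v)

def area_b(out):            # consumer area: only ever reads from the box
    for _ in range(3):
        out.append(box.get())

out = []
ta = threading.Thread(target=area_a)
tb = threading.Thread(target=area_b, args=(out,))
ta.start(); tb.start()
ta.join(); tb.join()
print(out)                  # [0, 1, 2]: data transferred, no direct sharing
```

Because all synchronization is confined to the go-between, neither area needs to know anything about the locking discipline of the other.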
Of course, decomposing a system into several areas can be a difficult question, with several different possible answers.

Native Modules

Unnecessary use of native modules should be avoided. Indeed, each instance of a native module requires a native thread to execute. This is required because such an instance can be unlinked, in which case it behaves exactly like a standard pthread. Instances of non-native modules do not need the full power of a native thread to execute; they can simply be implemented as automata. The reason is that, when a thread reacts, the stack is in the same state at the beginning and at the end of the reaction. Actually, the stack is only needed for atoms, which by definition always terminate instantly. Thus, only the state of the execution and the local variables have to be preserved between instants, which can be done using the heap. Instances of non-native modules can thus simply be run by the thread of the scheduler to which they are linked. Actually, there is no reason why a module in which the unlink instruction is never used should be native. One also has to keep in mind that there exist partial implementations of Loft which do not have any support for native modules (see Direct Implementation). This is for example the case when the targets are embedded systems with limited resources (memory and CPU).
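The automaton implementation of non-native modules can be sketched as follows (illustrative Python, not actual compiler output): between two reactions, only a program point and the local variables survive, both stored on the heap; no stack is needed.

```python
class CounterAutomaton:
    """Behaves like a non-native module: loop { i++; cooperate }."""
    def __init__(self):
        self.state = 0   # program point saved at the last cooperate
        self.i = 0       # "local variable", preserved on the heap

    def react(self):
        # one instant: resume at the saved program point, run to the
        # next cooperate, save the new program point, and return
        if self.state == 0:
            self.i += 1
            self.state = 0          # loop back: resume here next instant
        return self.i

a = CounterAutomaton()
print([a.react() for _ in range(3)])   # [1, 2, 3]
```

The scheduler's own thread can call react on any number of such automata in turn, which is why non-native instances cost no native thread at all.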

This page has been generated by Scribe.
Last update Wed Oct 22 18:41:04 2003