[std-interval] More on interval computations as proofs

Lawrence Crowl Lawrence at Crowl.org
Wed Oct 4 17:29:00 PDT 2006

On 10/4/06, Dr John Pryce <j.d.pryce at ntlworld.com> wrote:
> Here is my view of the *context* for the discussion:

Very nice.

> At 19:01 29/09/06, Lawrence Crowl wrote:
> >Please, no. Functions that modify global state are nightmares for
> >multithreaded code. We could use thread-local storage, but that
> >still means the function has a side effect, and compilers cannot
> >optimize well around functions with side effects.
> Wolff, I would hate to be the engineer who has successfully tested
> his APPFCN in floating point and now has to rewrite some possibly
> lengthy code putting in all these flags. We want to encourage people
> to join the interval fraternity: is this the way to do it?

My understanding is that interval computations tend to require
some rewriting anyway.  That is, straightforward translations of
plain floating-point computations tend to have wide intervals.
Am I mistaken?

> Lawrence, some queries.
> We want to make it user-friendly to write APPFCN functions a few
> hundreds of lines long, but in practice their length is a tiny
> fraction of the surrounding main program code.  First Q: does the
> fact that setting the flag is localized to a procedure, APPFCN,
> reduce your own misgivings?

My multi-threading misgivings can be mostly addressed by making
the flag local to the particular thread, which is also required
for the IEEE floating-point state.

> Our view in Sect 5.3 of the Pryce-Corliss paper is:
> >   For applications that need it, the value of the information [given
> >   by the flag] should far outweigh any speed penalty. There should be
> >   a way to remove [the flag], and its overhead, entirely for those
> >   applications that do not require it.
> Including the flag, or not, is surely best done
> by a compiler option at the file level, or a
> directive at the level of an individual function?

The first implementations will certainly _not_ have compiler support.
They will be pure user code, and so the cost of any additional flags
will be compiled into the executable.

> However, the calling code DESOLVE must have access to clear and
> query the flag. Any problem with this from a compiler viewpoint?
> Second, if this is done, is not the speed penalty also localized
> to the function?

The problem is that such a flag interferes with compiler analysis.
Compilers tend to put functions in one of a few categories:

   0: The function's computation is based only on its arguments
      and returns results only through its return value.  It reads
      no global memory (excepting unvarying tables of coefficients
      and the like).
   1: The function's computation is based only on its arguments,
      and it may read and write memory through pointer arguments,
      but it will not write to global variables nor read from
      non-constant global variables.
   2: The function's computation is as in 1, but it may additionally
      read from global variables.
   3: The function may do anything.

Category 0 implies no barrier to any compiler optimization
algorithms.  Category 3 implies a barrier to all compiler
optimization.  Categories 1 and 2 are intermediate, inhibiting
some optimizations and permitting others.  The problem is that
adding an out-of-domain flag moves (e.g.) division from category
0 to category 1 (explicit flag argument) or category 3 (implicit
flag argument), which can seriously inhibit optimization of the
function using the division.

> Third, the IEEE 754 flags OVERFLOW, etc., are global in the way
> that is being criticized by yourself and others. Why is this
> considered acceptable for them, but not for other flags?

It was a mistake in IEEE.

As an example, if you read the documentation for aggressive
floating-point optimization options in most compilers, you find
phrases like "does not set errno", "may yield different rounding
results from the IEEE standard", "may not set IEEE flags", etc.
These phrases are basically saying "these operations are hard to
optimize in their full glory, and we're treating them as operations
on reals".

> Several people have said to me "that's because
> they are in hardware".

And changing everyone's hardware and hardware plans is a bit too
much for a language standard to require.  :-)

> (a) Can you explain why this is so?

These flags are in the hardware in order to reduce the complexity of
the interface between the 8086 processor and the 8087 coprocessor.
The designers of the time said "of course you wouldn't do it this
way for mainframes".  Unfortunately, microprocessors grew into
mainframes without fixing the problems.

> (b) If it is so, the ultimate solution is to include the DISCONT
> flag in hardware. In which case one should not force the current
> standard to adopt bad design on account of temporary hardware
> deficiencies.

> Fourth, I entirely concur that an ideal DISCONT flag would be local
> to each thread. That applies equally to the IEEE flags. How is it
> proposed to achieve it for those?

The next C++ standard will have thread-local storage, so putting a
flag there will solve the interference problem.  Current operating
systems also keep IEEE flags local to a thread by explicitly saving
and restoring the thread state on each context switch.  This overhead
effectively limits the efficiency of multiple threads per processor.

> How is it done at present for, say, Java threads?

Probably similarly.

Lawrence Crowl
