[std-interval] Interval comparison operators
first.i.last at comcast.net
Tue May 30 17:22:26 PDT 2006
-------------- Original message ----------------------
From: "Fernando Cacciola" <fernando_cacciola at datafull.com>
> Hello Lee,
> For now just a question and a quick comment:
> Can you justify your claim that:
> given a non-degenerate interval X, the expression ceq(X,X) will
> return false
In order to preserve the containment invariant ("not lie") it is necessary for ceq() to assume the independence of any variables that cannot be proven dependent. So if intervals have no history (e.g., are PODs or close to it) and the comparison function is declared as ...
bool ceq(interval<T> lhs, interval<T> rhs);
... then it cannot distinguish the following cases:
interval X("[1,2]"), Y("[1,2]");
bool A = ceq(X,Y); // independent arguments
bool B = ceq(X,X); // dependent arguments
Y = X; // create inter-variable dependency
bool C = ceq(X,Y); // dependent arguments
So ceq() must return false for all of them.
However, if the function is declared as ...
bool ceq( interval<T> const & lhs, interval<T> const & rhs );
Then it can detect the dependency in case B and return true. Sun's documentation states that they do this for arithmetic functions, but I am not certain what they do for comparison functions.
In order to detect the dependency in case C either intervals require history (non-PODs with deep copies etc.) or the compiler has to be much smarter than any I've ever heard of.
> Maybe I'm just totally missing a point but your Principal Model
> doesn't look to me like a useful model of interval numbers.
It isn't particularly useful when the goal is complete rigor as is typical of certified computation. But it is useful when complete rigor is not required.
> Aren't "bounds" the central notion of anything called
> an "interval"??
I don't think so. They are often the exact focus of the user's attention when implementing reliable computing, but my understanding is that the central notion of IA is containment. The implicit statement about which "thou shalt not lie" is "the true/exact result is contained in such-and-such region". The region can be described by:
-- explicit bounds: X..Y
-- a point with a pair of directional error bounds: X +Y/-Z
-- a point with symmetric error bounds such as:
-- -- magnitude: X +/-Y
-- -- precision limit: X to N sig. digits
-- -- tolerance: X +/- Y%
Just to be clear, in addressing this issue it is necessary to distinguish representation from conceptual model. The vast majority of interval implementations use bounds to represent intervals. But alternate representations, such as center/radius are also useful, especially for problems/spaces of high dimension.
The conceptual models are not tied to the representation of intervals. All of the conceptual models can be applied to all of the representations.
> This models looks to me like the
> "arithmetic expansions" described by Priest here:
> In such an expansion, roughly, a "result" is given explicitly as a
> "principal value" and a sequence of error terms.
> The comparison semantics that you suggest are in line with
> those concepts, but not so much IMO with interval arithmetic.
Is it your conclusion that PV comparison semantics _cannot_ or _should_not_ be applied within interval arithmetic?
> IOW, the comparison semantics that take into account the
> "accumulated roundoff" (those you cite from Knuth) are
> well suited for a model which is centered around the fact
> that a result is always a real-point-value but which is
> represented as machine-point-value which differs from
> the actual value by some error. That's why these
> comparisons are essentially point-based.
We are in complete agreement up to this point except that our definitions of the term "interval arithmetic" may be distinct. I interpret your statements to indicate a belief that interval arithmetic consists only of operations that are useful for certified computations.
My definition of IA is the collection of techniques that apply to interval objects .AND. do not interfere with the definition of operations that are necessary for certified computations .AND. are useful for some reasonable purposes. Thus I think my definition would produce a proper superset of yours.
> But IIUC the very purpose of interval arithmetic when applied
> to certified computations in the presence of roundoff is
> precisely to provide an alternative view centered not on
> the "principal" value but the error bounds...
The key phrase in your statement is "... when applied to certified computations ...". I believe that interval arithmetic has wider applicability than certified computations. This is not to denigrate certified computations, but to suggest that there is a spectrum of rigor and utility, ranging from certified computations at the end that emphasizes rigor down to convenience for simple calculations at the end where rigor is less critical.
Is it your opinion that interval arithmetic should be used _only_ for certified computations?
> If you want a number conceptually centered around an
> "actual value" which can't be exactly represented due to
> roundoff, OK, but I wouldn't call that an interval.
Is this simply a terminology issue?
Reductio: Should we create a separate set of features which look/walk/quack like intervals and are implemented exactly as intervals are, but that we call something different, say "duplex numbers"? Is this a branding issue where the term interval arithmetic should be reserved for only those systems and applications based on certified computations?
I suspect this issue is a consequence of the fact that there could be several goals for the standardization effort. Is the goal to improve C++ for (only) users intending to create certified computations, or is the goal to improve C++ for as wide a set of users as possible?
The interest of my company is in the latter, but only to the extent that features designed for the wider audience do not detract from the utility of the proposal for users pursuing maximum rigor.
-- Lee Winter