[std-interval] Interval comparison operators
George.Corliss at marquette.edu
Mon May 29 14:16:39 PDT 2006
Warning: Rather long. My comments are interspersed throughout the entire thread.
On 5/28/06 12:02 AM, "first.i.last at comcast.net" <first.i.last at comcast.net> wrote:
> -------------- Original message ----------------------
> From: Guillaume Melquiond <guillaume.melquiond at ens-lyon.fr>
>> Now we are interested in what the opinions of
>> other interval developers are. Should the user be allowed to
>> compare intervals in C++? And if so, what should the semantic
>> of these comparisons be?
> The attached file contains our detailed response to the issue of comparison of
> intervals. Readers interested in specifics without explanation or rationale
> should not read it.
Some reactions ...
Copyright: Since I intend to quote portions of your document, I assert that
the quoted portions are subject to: "Permission is hereby granted to
distribute this document to all past, present, and future members of the
mailing list <std-interval at compgeom.poly.edu>; all past, present, and future
members of the team proposing the adoption of intervals into the C++
language; and all past, present, and future members of the C++ Standards
Committee and staff thereof. Any such reproduction, distribution, or
copying of this document must include the above copyright statement and this
permission paragraph. All other reproduction, distribution, or copying of
this document requires specific, written permission of the copyright
holder."
Personally, I find the requirement to include that slightly cumbersome.
However, I consider it a STRONG endorsement to the C++ committee of the
interest in an interval standard that your firm cares enough about intervals
to be that protective. Hence, I joyfully comply with your copyright notice.
Good analysis. Well-thought out. Thank you for the effort.
"... it is probably worth the effort to get as much as possible as right as
possible so long as it does not inhibit the timely adoption of the interval
standard."
I'd even allow a modest delay in adoption, if we can develop a consensus on
what "right" is.
Principal Value Model: Hmmm. Very interesting. The purist side of me is
appalled; the engineering side of me applauds. I agree that it can be a
stepping-stone. I fear that it allows the less-than-careful programmer to
violate containment, which I consider the touchstone of interval
algorithms. I'd have to try it out in a non-trivial application to see
which side is stronger.
"Empty interval arguments should always yield false." and following.
Then you are including "I don't know" with FALSE? Or TRUE means, "I am sure
this is true," and FALSE means, "I cannot assert TRUE." I can live with
that, but I assert that is not the common understanding of FALSE.
I prefer exception-free (or exception-nearly-free) execution, too, but I
think you lose valuable information to lump "I don't know" with FALSE.
Tough issue. I'll be interested to see whether we approach consensus.
"If improper intervals are not considered empty then improper intervals
should also always return false."
By "improper interval," do you mean semi-infinite, e.g., [1, inf]? If so,
then shouldn't some of the 'possible' operators be able to return TRUE in
that case?
Standard Terminology: Your proposal makes sense to me.
> George Corliss wrote:
>> #1 In my own interval programming work, I don't think
>> I have ever used certainly/possibly operators or
>> tri-valued logic, mostly because I have to think too hard.
>> Instead, I use <double> tests on interval endpoints, e.g.,
>> lo(A) <= hi(B), and similar. I must think carefully through
>> all cases, but boolean tests work in a way I am familiar with. I think
>> I am not alone in that practice. I believe it leads to code
>> that is more easily maintained by people with only
>> intermediate expertise.
> This is an effective endorsement of the KISS principle. We agree that
> comparison operations need to be as simple, clear, and readable as possible.
> We believe that expression templates or any other specialized technique, e.g.,
> exceptions, that is not widely used in other comparison contexts will retard
> the widespread use of intervals.
> Devil's Advocate question: how much of your habitual resistance to the use of
> higher level comparisons is due to the non-portability of the various
> implementations? AFAICT tests of the bounds are the only reliable
> cross-platform conventions, so they may be the only reasonable habits at
> present.
Answer to DA: None. Even comparing endpoints has slightly different syntax
in different packages.
I'm an old Fortran programmer (in any of several parsing alternatives). My
habits were formed long ago, when I wrote several of my own packages,
thinking they were for my own use, unconstrained by the practices of others.
Comparing endpoints gave me all the tools I ever needed. In retrospect, I
think that habit encouraged me to think carefully through sometimes
exhaustive cases. My codes are full of segments including character
graphics suggesting various cases of endpoint comparisons.
I think at least I understand what I am doing.
I do not oppose more sophisticated comparisons as you propose. However, I
am likely to advise non-experts to save themselves the effort of
understanding interval comparisons by restricting themselves to endpoint
comparisons until such time as they feel stifled. But that's a question of
habit, as you suggest below.
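For readers who have not seen the endpoint style, here is roughly what such tests look like. This is only a sketch; the type and the names lo/hi are assumptions for illustration, not any package's API:

```cpp
// A minimal interval type for illustration only.
struct interval {
    double lo;
    double hi;
};

// "Every point of a lies below every point of b" -- a certain ordering,
// written directly on the endpoints.
bool ends_before(const interval& a, const interval& b) {
    return a.hi < b.lo;
}

// "a and b share at least one point" -- possible equality, again as a
// plain endpoint test.
bool overlaps(const interval& a, const interval& b) {
    return a.lo <= b.hi && b.lo <= a.hi;
}
```

Every comparison one needs can be spelled out this way, at the cost of thinking through the cases by hand.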
>> #2 I do NOT favor making it as easy as possible for
>> programmers to take legacy code, change double to
>> interval, and expect to get anything sensible.
> This is a properly cautionary warning, but with respect to comparison
> operators, we disagree. The warning applies to retaining point-wise
> algorithms instead of adopting interval algorithms. We do not believe it
> applies to primitive operations. Rather, primitive interval operations should
> be as similar as possible to built-in floating point operations.
The question, "Is A < B?" is different for floating point numbers A & B and
for intervals A & B. As evidence of that assertion, I submit your document,
at least 25 years discussion in the interval community, and the fact that so
many variants exist.
Having primitive interval operators that LOOK like primitive floating-point
operators, but have different semantics OR that potentially violate
containment requires careful consideration.
If the floating-point and the interval question are different, ANY syntax
for posing the question is potentially confusing. That is why I asserted
above I'd even accept a modest delay in adoption, if we can develop a
consensus on what "right" is.
That said, your proposal is a good step in that direction. When I say
"potentially violate containment," I agree that your comparisons, used as
you intend, are (probably) safe. My fear is that they may be easy to
misuse. I fear WAY too many people will write code with violations of
containment and blame us, rather than themselves. I want to make it hard
to violate containment, even by misinformed programmers.
> Ron Avitzur wrote:
>> Being able to write certainly(X < Y) makes for readable
>> code. Allowing if(X < Y) to compile with some default
>> meaning introduces potential for confusion and subtle
>> run-time errors.
> The readability issue is important. I suspect that to a typical C++ user "if
> ( certainly( X < Y ) )..." looks more like a syntax error than it does
> readable code. Since "if ( certainly_lt(X, Y ) )..." is approximately as
> readable even to a person without knowledge of the specialized expression
> templates that affect only intervals, it is absolutely more readable than the
> operator form.
> The issue of default meaning is even more important. The default meaning of
> the comparison operators has to be so simple that it is hard to get it wrong.
> And it has to work everywhere, including within the STL.
We are in violent agreement there!
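One way to satisfy "simple, hard to get wrong, and works everywhere including the STL" is to leave operator< undefined and supply a named comparator that is a genuine strict weak ordering. The lexicographic bound order below is only an illustrative choice, not anyone's proposal:

```cpp
#include <set>

struct interval {
    double lo;
    double hi;
};

// Lexicographic comparison of the bounds: a genuine strict weak
// ordering, so std::set, std::map, std::sort, etc. behave sensibly.
struct bound_order {
    bool operator()(const interval& a, const interval& b) const {
        return a.lo < b.lo || (a.lo == b.lo && a.hi < b.hi);
    }
};

using interval_set = std::set<interval, bound_order>;
```

Such an order carries no interval semantics at all, which is precisely why it cannot mislead: it is useful for containers and nothing else.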
>> I suppose it's only a matter of habits though. People who
>> are used to handling bounds directly will keep handling bounds
>> directly, while people who are used to multi-valued logics will
>> keep using them, and so on. So, in a sense, the question could
>> become: into which habits do we want people who are new to
>> interval arithmetic to get? Personally, I wouldn't mind students
>> using possibly / certainly so that they have to think about what
>> they really want to express rather than directly going to lower
>> level manipulation of bounds :-).
I agree it is a matter of habit, where we each see the rationale for OUR
habit. Hence, it is wise to support multiple habits, UNLESS there is a
large complexity cost.
> The question of the desirable habits for new users is absolutely fundamental.
> The answer to that question forms the criteria that should be used to evaluate
> the entire interval proposal, particularly interval comparisons. My company
> has spent a great deal of effort attempting to make it easy for C++ users to
> incorporate intervals into their programs, but our results will not be
> ready for release for some months yet.
Wonderful. You may not wish to say, but if you did sound usability testing
and can (eventually) publish your results for peer review, any reservations
I have about your proposal would disappear.
> It may be worth soliciting an opinion from the author of the Frink programming
> language. He added support for interval arithmetic in the form of triplex
> numbers a year or so ago. His implementation is perfectly straightforward and
> lacks many of the features of other interval arithmetic implementations. But
> due to the simplicity of his implementation he may be able to offer useful
> insight into the issue of making intervals easy to use safely.
Good suggestion. Have you approached him? If not, may I or someone else
use your message and your "Comparison of Intervals in C++" to approach him?
Lee Winter responds to response from Guillaume Melquiond:
>> I have taken a quick look at your document and I have a
>> few questions.
. . .
>> I do not understand what you mean by "Present day STL
>> experts may be misled by the set-based description of
>> intervals, but would not be confused by a line-segment-
>> based description of intervals". According to the previous
>> paragraph of your document, I thought this was supposed
>> to be the exact same notion. Also how would it achieve
>> not to confuse them?
> The set and segment implementations are identical. It is the method of
> describing them that is different. IMHO the term segment is better than the
> term set precisely because it indicates a fundamental property of typical
> intervals -- convexity.
> Describing intervals as containing sets would suggest to an STL expert that
> they can hold any subset, which suggestion is false and thus misleading.
> Describing intervals as containing line segments would suggest that they can
> only hold convex/contiguous subsets, which is accurate and thus not
> misleading.
And talking about line segments might deflect the scorn of constructivists
who do not believe in set theory :-)
Do we all implicitly assume CLOSED line segments, with the possible
exception of semi-infinite [a, infinity)? If whole is an interval, it is
closed, isn't it, because its complement is open (and closed)? Then is the
semi-infinite line segment [a, infinity] closed because its complement
[-infinity, a) is open?
>> Later you also say that "they are also fully compatible with
>> the STL". This does not strike me as obvious since STL
>> algorithms usually require this property to hold:
>> !(x < y) && !(y < x) implies that x and y are equivalent.
> The sentence following the fragment you quoted states that they lack general
> applicability because they are often too strict. In those two sentences I
> meant that they meet the general requirements, but aren't always useful
> because they may fail specific requirements such as the one you stated above
> about deducible equivalence.
This example illustrates my point above that the questions are different for
floating-point and for intervals. We are making progress, but we have not
yet converged on a consistent view of how to reconcile the differences.
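The property Guillaume cites is exactly where certain comparisons fail as STL predicates: two overlapping but unequal intervals come out "equivalent" under certainly-less-than, and the induced equivalence is not even transitive. A small demonstration (the type and names are assumptions):

```cpp
struct interval {
    double lo;
    double hi;
};

// certainly-less-than: every point of a is below every point of b.
bool clt(const interval& a, const interval& b) {
    return a.hi < b.lo;
}

// The "equivalence" the STL derives from a comparator:
// neither argument is less than the other.
bool stl_equiv(const interval& a, const interval& b) {
    return !clt(a, b) && !clt(b, a);
}
```

[0, 2] and [1, 3] are "equivalent", [1, 3] and [2.5, 4] are "equivalent", yet [0, 2] and [2.5, 4] are not, so clt is not a valid Compare for sorted containers.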
>> Concerning your neighborhood model, I disagree. You are
>> saying that "the certain and possible comparison functions
>> are based on this model". But somebody performing global
>> optimization or value-range propagation will use them, while
>> the intervals are nothing like a neighborhood. So certain and
>> possible comparisons should not be associated to a "point
>> model" in my opinion.
> The sticking point here appears to be that "intervals are nothing like a
> neighborhood". If they are not, what are they like? I.e., what conceptual
> model do you believe the certain/possible comparisons to be based upon? Note
> that the conceptual models are not used to describe the high levels of the
> software such as functions, algorithms, programs, or applications. They are
> only useful to describe the rationale for having multiple sets of comparisons.
> So they are properly applied only to distinguish the kinds of low-level
> operations available.
Are neighborhoods open or closed line segments?
The AGREEMENT here is that a model is necessary to underlie the comparisons.
> But we have a fundamental disagreement. The reason certain and possible
> comparisons are characterized as based upon points is that the results of the
> comparisons describe the presence or absence of points. Possible comparisons
> indicate that there exists some point in each object that satisfies the
> relation in question. Certain comparisons indicate that there is one (and
> only one) point that satisfies the relation in question (except of course for
> cne(), which indicates that there is no point that satisfies the relation).
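For concreteness, the usual quantifier reading of these two families, written on endpoints, is sketched below. This is my rendering of the point-based description above, not the document's exact semantics, and the type and names are assumptions:

```cpp
struct interval {
    double lo;
    double hi;
};

// possibly(a < b): there EXIST points x in a, y in b with x < y.
bool plt(const interval& a, const interval& b) {
    return a.lo < b.hi;
}

// certainly(a < b): for ALL points x in a, y in b, x < y.
bool clt(const interval& a, const interval& b) {
    return a.hi < b.lo;
}

// cne(a, b): no point is common to a and b.
bool cne(const interval& a, const interval& b) {
    return a.hi < b.lo || b.hi < a.lo;
}
```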
>> Do you have some bibliographic references for the principal value model?
> Not yet. That is one of the reasons why it says that it would be
> understandable if they are not part of the standard. The reason for
> mentioning them at all is that it may benefit the team making the proposal to
> the committee to be aware that there is on-going research and development in
> this area. It may be that there should be no operator notation for comparison
> until the various candidates have been evaluated in the field. The candidates
> that have been suggested include set/segment, certainty, and principal value.
Have you plans to submit something for peer reviewed publication? Or does
that await your commercial release? I suspect several likely referees are
reading these discussions.
In my opinion, not-yet-published views deserve consideration, but more
careful consideration than published concepts receive. I suspect the level of
community oversight may be higher in this forum than in many review
processes, so I am open to a position paper on the principal value model.
>> This looks like a center-radius model of intervals, but this
>> isn't it, so I'm a bit lost. For now, I will just ask how different
>> is the "loose" comparison from the "possible" comparison.
>> Since the "loose" comparison is using "the implicit epsilon
>> provided by the radius of the actual intervals", it seems to
>> me the results will always be the same.
> The results will not be the same. The authoritative definitions of the
> principal value comparisons can be found in Knuth, D.E. /The art of computer
> programming/ (vol II). A brief description of its usage and utility can be
> found in the beginning of: <http://www.boost.org/libs/test/doc/components/
> The difference between loose and possible is that peq() is true whenever one
> of the intervals overlaps any portion of the other, but leq() is true only
> whenever one of the intervals overlaps the center of the other.
Intriguing. I would not mind being able to claim Knuth as an early, albeit
indirect, contributor to the theory of interval arithmetic.
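As described, the two predicates differ on intervals that overlap only near an edge. A sketch of that reading (the encoding is my assumption, since the PV definitions are not yet published; leq_pv is my name, to avoid confusion with less-or-equal):

```cpp
struct interval {
    double lo;
    double hi;
};

double center(const interval& x) { return (x.lo + x.hi) / 2.0; }

bool covers(const interval& x, double p) { return x.lo <= p && p <= x.hi; }

// possible equality: any overlap at all.
bool peq(const interval& a, const interval& b) {
    return a.lo <= b.hi && b.lo <= a.hi;
}

// "loose" equality as described above: one interval covers the
// other's center.
bool leq_pv(const interval& a, const interval& b) {
    return covers(a, center(b)) || covers(b, center(a));
}
```

So [0, 2] and [1.9, 10] satisfy peq but not leq_pv: the edges touch, but neither center falls inside the other interval.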
>> Also you are saying this model appeals to a "much wider
>> audience"; could you give some examples where it is used?
> At present it is not used AFAIK. But one of the claims is that the PV model
> makes intervals suitable for direct replacement of built-in floating point
> types (in the sense of the Liskov substitution principle). If that claim is
> valid then the PV model makes intervals appeal to every C++ programmer who
> uses built-in floating point types.
AFAIK = As Far As I Know?
>> Later, you say that "empty interval arguments should always
>> yield false". This statement contradicts your line-segment
>> model, since the empty set is included in every set.
> What is the contradiction? Equivalence and ordering comparisons do not deal
> with subsets. The functions in() and contains() should affirm that an empty
> interval is included in every other interval. But the empty interval is
> neither less than nor more than any other interval. And given that the empty
> interval is not part of the machine number line one can claim that it is Not
> an Interval (NaI). So, just as NaN is not equal to any other floating point
> value, not even an identical NaN, NaI should not be equal to any other
> interval, not even an identical NaI.
> In fact this conflict illustrates some of the problems caused by the
> interval-as-set terminology. It suggests that an empty interval is an empty
> set, and so seq(empty,empty) should return true. But the interval-as-segment
> terminology suggests that an empty interval is not a segment. (Nor is it a
> point. The width is not zero, it is NaN).
In my Real Analysis class many years ago, I was told that inf(empty) =
+infinity. I do NOT suggest that means empty == whole, but it might suggest
width(empty) = infinity.
The answer to width(empty) = ? does not matter. What matters is that we are
able to specify operator returns which are internally consistent and do not
lead to violations of containment. The best (only?) way to achieve that is
to have a single, sound underlying model.
> So based upon the interval-as-segment terminology seq(empty,empty) should
> return false.
> Is there a case to be made that empty intervals are less than or greater than
> non-empty intervals?
TRUE means, "I am sure this is true," and FALSE means, "I cannot assert
TRUE." That is your convention?
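The NaN analogy in the quoted text can be made concrete: IEEE 754 NaN is unequal even to itself, and the proposal gives an empty interval (NaI) the same comparison behavior. The NaN-bounds encoding below is my assumption, purely for illustration:

```cpp
#include <cmath>
#include <limits>

struct interval {
    double lo;
    double hi;
};

// Encode empty as NaN bounds (an assumed encoding, not a proposal).
interval make_empty() {
    double nan = std::numeric_limits<double>::quiet_NaN();
    return interval{nan, nan};
}

bool is_empty(const interval& x) {
    return std::isnan(x.lo) || std::isnan(x.hi);
}

// Set-equality that, per the quoted convention, is false for empty
// arguments -- mirroring NaN == NaN being false.
bool seq(const interval& a, const interval& b) {
    if (is_empty(a) || is_empty(b)) return false;
    return a.lo == b.lo && a.hi == b.hi;
}
```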
>> It also goes against your neighborhood model, since the
>> classical interpretation is: if something is not possible,
>> then its contrary is certain (and reciprocally).
> The classical interpretation fails anytime a NaN is encountered because they
> are not part of the number line. Similarly empty intervals are not intervals
> (they need have no sensible bounds nor center/radius). Moreover the classical
> interpretation cannot be applied to intervals at all because there are more
> relationships among intervals than there are among points on the machine
> number line.
NaN is taking a critical role in this discussion.
There is some sentiment to minimize dependence on NaN in the interests of
speed, although I cannot recall the specific examples. One problem,
as I recall, is that in IEEE 754 arithmetic, -infinity + infinity yields
NaN, while we might prefer, at least in outwardly directed rounding modes,
to get whole.
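Two small places where that IEEE 754 rule surfaces: the midpoint of whole = [-inf, +inf] evaluates -inf + inf, and the corner products of interval multiplication can evaluate 0 * inf; both are NaN, so a containment-safe library must special-case them. A sketch (type and names assumed):

```cpp
#include <cmath>
#include <limits>

struct interval {
    double lo;
    double hi;
};

// Naive midpoint: for whole = [-inf, +inf] this computes
// (-inf + inf) / 2, which IEEE 754 defines as NaN.
double midpoint_naive(const interval& x) {
    return (x.lo + x.hi) / 2.0;
}
```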
I recall someone telling me that IEEE 754 NaN admits multiple bit patterns.
Hence, rather than overloading NaN as we seem to be doing, we might have
several NaNs, each with distinct meanings, all of which propagate as IEEE
754. I suspect that is a BAD idea because propagation of different NaN bit
patterns is probably not portable across hardware.
The standard should specify the syntax and semantics, but not the
implementation. However, it would be unwise to specify something whose best
implementation is "too" expensive, whatever "too" is. Much of this NaN
discussion probably does not belong in the standard, although having it
available for implementers in another form should be a big help in having
the proposed standard accepted. This discussion is VERY helpful in
formulating a wise specification.
It sounds to me as though reliance on the properties of IEEE 754 NaN buys us
so many benefits that we probably prefer to accept any performance penalty
for now and add to the chorus for improved performance for any NaN
processing in future hardware.
Does anyone have hard experimental data on the performance costs of NaN?
>> Finally, you are proposing that the interval operators compare their
>> midpoint. This kind of goes against the whole point of providing
>> interval arithmetic so that people can do certified computations.
> The reason the conceptual models are worth discussing is that there is no
> single model for interval usage. There can be a single model for the
> computation of interval values (i.e., outward rounding), but there cannot be a
> single model for interval applications. C++ is often mentioned as a
> multi-paradigm language. If conceptual models are paradigms then there are
> definitely multiple interval paradigms.
> This is only a problem when one must choose a single paradigm to use in a
> linguistic feature such as operator notation. The problem is that one must
> choose zero or one paradigm. But selecting a single conceptual model for
> comparison operators does not "go against" the other models. They still exist
> and are exactly as useful as they were prior to the selection.
> In fact several accomplished users of intervals have written
> that they prefer explicit bounds tests rather than comparison functions. From
> this one can deduce that they can do certified computations without comparison
> functions, which certainly implies that they can do without an operator
> notation for those functions.
> The point of operator notation is to make it easier to express simple
> concepts. It may be that there is no simple concept related to intervals, in
> which case there should be no operator notation for comparison of intervals.
> But there is an opposing viewpoint. When adding up a column of figures, say
> measurements taken during an experiment, one might want to know how accurate
> the total was. Adding up intervals constructed from the measurements and
> their associated errors makes it trivial to find the error for the total of
> the measurements. So the utility of intervals for common-place calculations
> is extremely high. One need not be doing a global optimization to prefer
> intervals over built-in types.
> Comparison operators are quite useful in common-place calculations, and they
> can be used quite naturally without danger of misleading the programmer. For
> example given the need to sort some numbers the precise details of the sort
> mechanism are probably irrelevant. The only universal concern is that the
> result of the sort operation be reasonably reproducible (e.g., swapping of
> identical items is usually well tolerated). Comparison of principal values
> meets that need for simple and reasonably reproducible results. None of the
> other conceptual models can meet that need.
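The sorting use case reads naturally in code: order intervals by their midpoints. This is my sketch of the principal-value idea, and the comparator name is mine:

```cpp
#include <algorithm>
#include <vector>

struct interval {
    double lo;
    double hi;
};

// Principal-value ordering: compare midpoints. Comparing one double
// per interval is a strict weak ordering, so it is safe for std::sort.
bool pv_less(const interval& a, const interval& b) {
    return (a.lo + a.hi) / 2.0 < (b.lo + b.hi) / 2.0;
}
```

`std::sort(v.begin(), v.end(), pv_less)` then gives the "reasonably reproducible" result described above, at least for intervals with distinct midpoints.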
>> So I am a bit reluctant to consider your proposal.
> I quite understand. If you believe that the inclusion of PV comparisons would
> inhibit the development of certified computations you should definitely not
> add them to your standardization proposal.
I'm intrigued. I insist on "Thou shalt not lie." I favor simple. I favor
considering carefully constructs which lead the unwary to write code that
appears to lie.
Let's continue the discussion. Who else has opinions?
Dr. George Corliss
Electrical and Computer Engineering
P.O. Box 1881
1515 W. Wisconsin Ave.
Milwaukee, WI 53201-1881
George.Corliss at Marquette.edu
414-288-6599 (office); 288-4400 (GasDay);
288-6280 (Dept.); 288-5579 (fax)
Office: Haggerty Engineering 296