Yes, GH2 will provide two basic equality tests for values involving floating point numbers.

- Absolute difference: that is, two values are considered equal if |a-b| \leq t for some user-specified tolerance t.
- Discrete difference: that is, two values are considered equal if the number of representable values between them is equal to or less than some user-specified distance k.

The former approach is basically what Rhino uses when it's dealing with tolerances. It's reasonably intuitive and straightforward; however, it interacts poorly with the non-linear nature of floating point values. The latter approach tries to formulate an equality metric that tracks that nature.
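To make the difference concrete, here is a Python sketch of both tests. This is not GH2's actual implementation; the function names, and in particular the bit-twiddling that maps a double onto an ordinal integer, are my own (and NaN handling is ignored for brevity):

```python
import struct

def almost_equal_absolute(a, b, tolerance):
    # Absolute difference test: |a - b| <= t.
    return abs(a - b) <= tolerance

def float_to_ordinal(x):
    # Reinterpret the 64 bits of a double as a signed integer, then fold the
    # negative range so that consecutive representable doubles map onto
    # consecutive integers (NaN is not handled here).
    bits = struct.unpack("<q", struct.pack("<d", x))[0]
    return bits if bits >= 0 else -(bits & 0x7FFFFFFFFFFFFFFF)

def almost_equal_discrete(a, b, k):
    # Discrete difference test: at most k representable doubles
    # may separate a and b.
    return abs(float_to_ordinal(a) - float_to_ordinal(b)) <= k
```

Note that the discrete test needs no notion of magnitude: two values that are 3 representable doubles apart pass a k = 4 test whether they live near 0.001 or near a trillion.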

Case in point, 64-bit floats can represent numeric values up to roughly 1.8 \cdot 10^{308}, or 179,769,313,486,232,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000* if you insist on writing numbers like my girlfriend.

However, there are roughly as many distinct values between zero and one as there are between one and this upper limit. Which is to say: close to zero, consecutive values are packed *extremely* close together, while at the upper limit consecutive values have *extremely* large gaps between them. Gaps big enough to fit entire universes in.
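You can see this spread directly with Python's `math.ulp`, which reports the gap between a value and the next representable double:

```python
import math

# The gap between x and the next representable double, at three magnitudes:
print(math.ulp(0.0))    # 5e-324: next to zero the gaps are minuscule
print(math.ulp(1.0))    # 2.220446049250313e-16
print(math.ulp(1e308))  # roughly 2e+292: near the top, each gap is astronomical
```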

And yet despite this enormous range in magnitude, all numbers have about 16 significant decimal digits of accuracy. So when you perform a calculation on very large numbers vs. the same calculation on small numbers, the number of garbage digits you end up with is the same, but that inaccuracy manifests as a much larger absolute error for the large numbers.
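A quick illustration of that absolute error (using 2^53, the classic magnitude at which consecutive doubles are 2.0 apart):

```python
# Adding 1.0 is exact for small values, but is absorbed entirely once
# the gaps between consecutive doubles grow wider than the addend.
small = 1.0
large = 2.0 ** 53  # 9007199254740992.0; neighbouring doubles are 2.0 apart here

print(small + 1.0)           # 2.0, exact
print(large + 1.0 == large)  # True: an entire unit simply vanished
```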

It is very difficult to predict which particular value of k is a good choice for a specific equality test, but with some experimentation I think good values can be chosen that will then scale correctly up to whatever the magnitude of the numbers involved happens to be.
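That scaling behaviour can be demonstrated by asking what absolute tolerance a fixed k corresponds to at different magnitudes. The helper below is hypothetical (my own illustration, built on `math.nextafter`): it walks k representable doubles upward from x and measures the distance covered:

```python
import math

def ulp_distance_as_tolerance(x, k):
    # Hypothetical helper: the absolute difference that k representable
    # steps up from x amount to, at the magnitude of x.
    y = x
    for _ in range(k):
        y = math.nextafter(y, math.inf)
    return y - x

# The same k = 4 corresponds to wildly different absolute tolerances:
for magnitude in (1.0, 1e8, 1e16):
    print(magnitude, ulp_distance_as_tolerance(magnitude, 4))
```

So a single well-chosen k behaves like a relative tolerance that automatically tracks the magnitude of the operands, which is exactly what the absolute difference test cannot do.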

\* (this is the base-10 approximation of the maximum value. The real value doesn't end with lots of zeroes when written in base-10, but I felt writing it out exactly would just make it harder to read/comprehend.)