Calculations far from origin and floating point (?)

Hi guys, (@wim, @Gijs)
I did an analysis on a mesh that is far from origin and I got some strange results.
Could you please update Rhino so that all calculations are done relative to the bounding box of the relevant objects instead of using the absolute coordinates? (like moving the object to origin first)


I projected points onto a mesh, moved them, and did mesh.ClosestPoint() calculations.
Done close to origin everything works fine, but done at (500 000, 6 000 000, 0) it didn't work very well.

In my mind those aren't very big numbers, so I searched a bit and found this:

Single precision
Single precision Floating Point numbers are 32-bit. That means that
2,147,483,647 is the largest number that can be stored in 32 bits.

Double precision
Double precision Floating Point numbers are 64-bit. That means that
9,223,372,036,854,775,807 is the largest number that can be stored in 64 bits.

So if I have this coordinate: 594887.0, 6647047.0
Does that mean that single precision gives me 10 digits per number?
And if so, is X = 594887.xxxx? In my mind that gives me a tolerance of 0.0001, which appears to me to be plenty.

Y = 6647047.xxx should then still be enough for a tolerance of 0.001, so when working in meters I imagine I should get a 1 mm tolerance.
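To sanity-check the single-precision side of this, here is a quick Python sketch (the exact fractions are just example values) that rounds a coordinate through a 32-bit float using `struct`, showing how much of the fraction actually survives at these magnitudes:

```python
import struct

def to_float32(x):
    # Round a Python float (64-bit) through a 32-bit float and back.
    return struct.unpack('f', struct.pack('f', x))[0]

# X: 6 digits before the decimal point still leaves some fraction.
print(to_float32(594887.0625))   # -> 594887.0625 (spacing here is 1/16)

# Y: 7 digits before the point nearly exhausts the 24-bit significand.
print(to_float32(6647047.0625))  # -> 6647047.0 (spacing here is 0.5)
```

So single precision gives roughly 7 significant decimal digits in total, not 10: at Y ≈ 6.6 million the representable values are 0.5 apart, nowhere near a 1 mm tolerance.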

Obviously I must be missing something, can anybody clarify?
(I thought Rhino could do double precision calculations too, and that should eliminate any issues.)

Anyhow, being able to calculate far from origin should be looked into: either by handling the large numbers directly, by moving the objects, or by subtracting boundingbox[0] from the values and then reapplying it to the result.
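The subtract-the-bounding-box idea can be sketched in plain Python (simulating 32-bit storage with `struct`; the offset and point values are hypothetical examples):

```python
import struct

def f32(x):
    # Simulate storing a coordinate as a 32-bit float (like a mesh vertex).
    return struct.unpack('f', struct.pack('f', x))[0]

offset = (594887.0, 6647047.0)   # hypothetical boundingbox[0] corner
pt = (594887.25, 6647047.125)

# Stored directly: the Y fraction is lost (float32 spacing is 0.5 here).
naive = (f32(pt[0]), f32(pt[1]))

# Workaround: work relative to the bounding-box corner, then shift back.
local = (f32(pt[0] - offset[0]), f32(pt[1] - offset[1]))
restored = (local[0] + offset[0], local[1] + offset[1])

print(naive)     # (594887.25, 6647047.0)   <- fraction gone
print(restored)  # (594887.25, 6647047.125) <- fully recovered
```

Near the origin the small local coordinates fit comfortably in the available significand bits, which is exactly why translating to origin before calculating helps.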


Floating point numbers store a fixed number of significant bits; the exponent decides where the decimal comma sits within them. Using big numbers spends those bits on the integer part and takes away the number of places after the comma you can use.
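You can see this directly in Python with `math.ulp` (available since 3.9), which gives the gap between a double and the next representable value; the gap grows with magnitude:

```python
import math

# math.ulp(x) is the spacing from x to the next representable double.
print(math.ulp(1.0))        # 2.220446049250313e-16
print(math.ulp(6647047.0))  # ~9.3e-10 -- still tiny for doubles
print(math.ulp(1e15))       # 0.125 -- at this magnitude even doubles get coarse
```

For 32-bit floats the same effect kicks in about nine orders of magnitude earlier, which is why coordinates in the millions already hurt.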

Additionally, something to remember from our many earlier discussions about floating point arithmetic and number representation: not all numbers can be represented exactly with floating point values. Remember this Python snippet?

0.1 + 0.1 + 0.1 == 0.3
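Evaluating it shows the point (and the usual fix of comparing with a tolerance):

```python
s = 0.1 + 0.1 + 0.1
print(s == 0.3)              # False
print(s)                     # 0.30000000000000004
print(abs(s - 0.3) < 1e-9)   # True -- compare with a tolerance instead
```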

Double precision will give… more precision. But it ultimately has the same kind of limitations as single precision floating point.

Keep in mind that when you are operating on values you lose precision. Each transformation of a value essentially compounds the error. Translating points around is part of all that error-compounding transformation.
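As a small illustration of that compounding (plain Python; the rotate-and-undo loop stands in for any chain of transforms):

```python
import math

def rotate(p, angle):
    # 2D rotation about the origin.
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

p = (594887.0, 6647047.0)
q = p
for _ in range(1000):
    q = rotate(q, 0.1)
    q = rotate(q, -0.1)  # mathematically a no-op, numerically not

drift = math.hypot(q[0] - p[0], q[1] - p[1])
print(drift)  # small but nonzero, and proportional to the coordinate magnitude
```

Each individual rounding error is around one ulp, but far from the origin one ulp is already large in absolute terms, so a long chain of transforms drifts noticeably sooner.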