Bug? Rhino isn't accurate far from origin

This is not just a Rhino problem.
I use Autodesk products extensively and the “far from origin” issues are numerous.
I always use world 0,0,0 as a base point for any drawing elements, and when they have to be placed somewhere, e.g. at real-world coordinates far from the origin, a drawing is compiled from referenced files at the desired locations. That way, any individual plan / drawing can be worked on without the inherent problems…
Here in Australia, our survey datum point is close to Melbourne, most of my work is in Sydney, 1000 kilometres away, and working in millimetres means that the origin can easily be 1,000,000,000 units away, and objects just do not behave as they should. “Grip points” at the ends of lines failing to appear is a very basic example of far-from-origin behavior.

Barry.

OK, so what you are saying is that floating point has a FIXED NUMBER OF DIGITS? And the point floats around somewhere within that fixed number of digits? That makes a lot of sense. So “wasting” many digits before the point reduces the number of available digits after the point.

But there must be another bug then, because what doesn’t make sense is that the number of digits used is 11, which is half precision, isn’t it?

I know Y = 6 652 615.116 meters is quite far from the origin, but why can I snap the dimensions so accurately? And why can’t MESH PATCH do the same?

Are some tools running at a lower tolerance than others?

Also take a look at this meshPatch vertex deviation from its input points:


No, it does not work like a fixed number of digits. A floating point number is represented as (−1)^s · 1.m · 2^e, where s is the sign bit, m is the mantissa and e is the exponent. Together there are 32 (single precision) or 64 (double precision) bits to store the sign, mantissa and exponent. This way, an enormous range of numbers is representable, but the gap between the representable numbers becomes larger and larger as the numbers grow. This is the origin of the inaccuracies.

Adjacent representable numbers, I think, you can imagine as differing by a single, least significant bit of the mantissa. I’m no expert by a long shot, so please inform yourself via the Wikipedia articles :smile:
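
A minimal sketch in Python (standard library only; `math.nextafter` needs Python 3.9+, and this is my illustration, not something authoritative) showing how that gap grows with magnitude:

```python
import math

# Print the distance from x to the next representable double.
# The gap doubles each time x crosses a power of two.
for x in (1.0, 1_000.0, 1_000_000.0, 1_000_000_000.0):
    gap = math.nextafter(x, math.inf) - x
    print(f"x = {x:>13,.0f}   gap = {gap:.3e}")
```

At 1.0 the gap is about 2.2e-16; at a billion it is still only about 1.2e-7.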


My words exactly: I arrived at the same conclusion as you.

https://www.exploringbinary.com/the-spacing-of-binary-floating-point-numbers/

I dug deeper into the topic. I found that even when working 1000 km from the origin in mm, which is a billion units, the precision (gap size) should still be in the range of nanometers, but obviously Rhino has issues when working this far from the origin (why?).

I used the formula

gap = 2^(floor(log2(x)) − 52)

to calculate the gap size for a 64-bit floating point number, and for x in the billions of units from zero the remaining precision within that range is about 2^−23 ≈ 1.2 × 10^−7 units.
I might be wrong somewhere, but if this is correct it would mean it should be absolutely possible to work even further from the origin without issues, yet in reality there are many issues. My latest experience: the gumball position was off by millimeters from the correct position when working only 500 km from zero.
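
A quick check of the arithmetic in Python (my own sketch; `math.ulp` needs Python 3.9+):

```python
import math

x = 1_000_000_000.0                      # 1000 km expressed in millimetres
# gap = 2^(floor(log2(x)) - 52) for a 64-bit double (52 mantissa bits)
gap = 2.0 ** (math.floor(math.log2(x)) - 52)
print(gap)           # ~1.19e-07 mm, i.e. roughly a tenth of a nanometre
print(math.ulp(x))   # the same value straight from the standard library
```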

Can someone explain why there is a discrepancy between theory and reality (if there is one)?

Computer math functions usually have different versions for single-precision and double-precision floating point calculations. My guess is that in the Rhino code there are a few single-precision functions, probably left over from the past when double-precision math could be much slower than single-precision math. For most purposes the difference between single precision and double precision isn’t noticed, but occasionally, such as very far from the origin, it is noticeable.
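
Here is a quick way to see the difference (plain Python; the Y value is just the one quoted earlier in the thread):

```python
import struct

def to_f32(x: float) -> float:
    # Round-trip a double through a 32-bit float, as a single-precision code path would.
    return struct.unpack('f', struct.pack('f', x))[0]

y = 6_652_615.116    # the Y coordinate from above, in meters
print(y)             # 6652615.116 - double precision keeps the millimeters
print(to_f32(y))     # 6652615.0   - single precision snaps to a 0.5 m grid here
```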

I thought so, but it’s quite dangerous when the user has no way to see that, or isn’t warned (visually everything looks alright, so how could I notice being 1 mm off?).

That’s what I have been saying for two years: look under the hood and eradicate all issues like this once and for good. Instead I have been told I should model closer to zero (why should I, when it all depends on fixing known and resolvable issues?). Because right now it’s half-baked, and Rhino can’t be considered “industrial strength”, as I read in some other post which I liked.

I can work far from the origin and volume centroids are calculated correctly, BUT the gumball position is off. How can one be sure what is right and what is wrong? These are very important things, because this is CAD software which is supposed to be accurate, and people rely on it, so it should be a priority. I know there was a big leap from v5 to v6, but none from v6 to v7. How long should users wait until there are no more issues like this? Elon might land on Mars by that time…

The only limitation is 64-bit; everything else is the responsibility of the software developers. I know Rhino is not the only one, but it could be the only one which gets it right. This is the burden of 90s cores in almost every piece of software on this planet holding us back.

Take it as legitimate criticism, please.

My guess is the remaining single-precision math is buried in libraries or similar, and finding it is not just a matter of someone taking a couple of weeks to go through the code.

In that case it’s an even bigger challenge, but it needs to be done, just like the US should have adopted SI 100 years ago :slight_smile:

The display pipeline, i.e. your graphics card, works in single precision only (some expensive ones may work in double precision). Therefore the ecosystem is mostly built on single precision.

It’s similar to someone decades ago saying “IPv4 is enough” :rofl:


Rhino uses double precision for geometry. Rhino - How accurate is Rhino?

Rhino uses single precision for render/display meshes. Serious bug - moved object results in destroyed render mesh - #4 by scottd


So it’s a hybrid: double-precision coordinates mixed with some single-precision operations and a single-precision render mesh. Is there any CAD software out there which uses double precision solely, for everything, and thus has no issues far from the origin?

I tried the same thing in the supposedly “industrial strength” AutoCAD. It failed the same way Rhino does:

This could be solved by a maximum drawing bounding box, where all calculations handled by the graphics card use that bounding box as the “world”, eliminating objects outside that box. How difficult it would be to incorporate I don’t know, nor how much of a performance hit, or gain for that matter, it would bring.


It’d be a performance hit, as every point would be transformed.

In theory yes, but in reality I don’t know how much that would be. Subtracting a fixed number from those numbers is something a CPU does lightning fast. And working with smaller numbers could potentially be faster, so it might even out; it would be interesting to see it in real life.

I expect that many graphics cards will use single precision for rendering. That means that even though your render mesh may be defined in double precision, it will still look jagged on screen.

Almost all (if not all) graphics cards use single precision. The OpenGL APIs have functions for double precision, but I haven’t seen a card yet that really uses those numbers as doubles. When Rhino geometry is far from the origin, we place a scaled and translated copy of the geometry on the GPU with values that are “nice” for floats. We then apply the transform to the world-to-clip matrix in double precision on the CPU before setting the world-to-clip transform in a shader.
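
A toy sketch of that rebasing idea in plain Python (not our actual code; the coordinates are just the ones from this thread):

```python
import struct

def to_f32(x: float) -> float:
    # Round-trip through a 32-bit float, mimicking a GPU vertex buffer.
    return struct.unpack('f', struct.pack('f', x))[0]

pts = [6_652_615.116, 6_652_615.117]    # two points 1 mm apart, far from the origin

# Naive upload: both points collapse onto the same float32 value.
print([to_f32(p) for p in pts])         # [6652615.0, 6652615.0]

# Rebase first: subtract a reference point in double precision, upload the small
# residuals, and carry the offset in the (double precision) transform instead.
base = 6_652_615.0
print([to_f32(p - base) for p in pts])  # ~[0.116, 0.117] - the 1 mm difference survives
```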

There is a difference between visual inaccuracies and numerical inaccuracies for geometric calculations. We should have most of the visual inaccuracies fixed at this point. If there are samples where things look wrong in Rhino 7, please let me know.
