Ideal maximum distance from Origin

Hi,

Objects far from the origin can suffer from snapping problems, rounding errors, and inaccurate modelling.

I am interested in knowing the distance from the origin within which we should try to keep objects, given the choice.

Here’s how I think about it.

Since Rhino uses floating point numbers with about 16 digits of precision, the larger the number, the fewer digits are left to represent the decimal portion.

1234567812345678.0 (will not be accurate, since we left no room for the decimal portion)

12345678.87654321 (here we keep at least 8 digits for the decimal portion, so this is better than the above)

So is 99999999 (8 digits) units a good threshold to use?
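
A quick check in plain Python (outside Rhino, so just to test the idea) seems to show the effect:

>>> 1234567812345678.0 + 0.1 == 1234567812345678.0   # the 0.1 is simply lost
True
>>> 12345678.0 + 0.1 == 12345678.0                   # here there is still room for decimals
False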


Great question!

This is a bit OT:
I have wished for Rhino to, under the hood, do all calculations within the bounding box of the objects concerned instead of in world coordinates.
(In other words: get the bounding box, move everything from boundingbox[0] to the origin, calculate the stuff, and move it back from the origin to boundingbox[0].)
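
Something like this, as a rough sketch in plain Python/numpy (not actual Rhino code; operate just stands in for whatever calculation Rhino would run):

import numpy as np

def compute_near_origin(points, operate):
    # boundingbox[0]: the minimum corner of the bounding box of the points
    corner = points.min(axis=0)
    # move everything so that corner sits at the origin
    local = points - corner
    # do the numerically sensitive work close to the origin
    result = operate(local)
    # move the result back to where it came from
    return result + corner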

Thanks, Holo. Although computationally expensive, that is one way to address it.

It would be great if someone from McNeel could share what a good threshold is. @menno @scottd

The problem is that there is no clear, black-and-white threshold.

There are many dependencies in the chain. For instance, the object types involved make a difference. The command used also matters, since some commands are more or less tolerant than others.

Here is an example we see often: the model is started on the civil drawing, way out at state plane coordinates. It works fine for site work and general modeling of the massing.
But then, as we work down into smaller and smaller detail, intersections suddenly start failing, for example when detail-modeling a staircase railing intersection.

Hi @Devang_Chauhan,

Judging from your original post, I think you might be a little mixed up about the nature of the numbers used. You do accurately refer to them as “floating point numbers” (link) :slight_smile: but then proceed to describe what are actually called “fixed point numbers” (link).

If Rhino were using fixed point arithmetic, something like what you wrote might be reasonable. However, since all platforms Rhino runs on use floating point arithmetic (the IEEE 754 standard, to be precise), it isn’t relevant.

With floating point arithmetic, you have ~16 digits of precision along with an integer exponent. So I can represent numbers around 10^-16 just as well as numbers around 10^16; calculations stay accurate as long as the numbers involved are of roughly the same magnitude. The trouble starts when an operation mixes numbers whose magnitudes differ by more than the dynamic range of the format.
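
For example (a quick check in Python, nothing Rhino-specific):

>>> 1e16 + 1e16              # huge numbers of similar magnitude: no problem
2e+16
>>> 1e16 + 1e-16 == 1e16     # mixing magnitudes 32 orders apart: the small one vanishes
True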

The key concept here is something called “machine epsilon”, which is the distance between 1.0 and the next representable number above 1.0. For the numbers used in Rhino, this is about 2.2*10^-16. There are some things here that can be a little surprising and disconcerting. For example, if I fire up Python, I can do something like this:

>>> import numpy as np
>>> np.finfo(np.float64).eps
2.220446049250313e-16
>>> 1 + np.finfo(np.float64).eps      # eps is exactly the gap between doubles at 1.0
1.0000000000000002
>>> 10 + np.finfo(np.float64).eps     # at 10 the gap between doubles is larger, so eps vanishes
10.0

Oh no! In fact, it turns out that floating point operations aren’t “associative”: combining the same numbers in a different order can yield a slightly different result.
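
A classic illustration, continuing in Python:

>>> (0.1 + 0.2) + 0.3
0.6000000000000001
>>> 0.1 + (0.2 + 0.3)
0.6
>>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False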

Despite the apparent drawback of floating point numbers, it’s a very, very good thing we’re using them instead of fixed point numbers. The nightmares would be more frequent and more terrifying otherwise. :slight_smile:

Anyway, the answer to your original question is fairly straightforward: trying to keep the objects in your scene at roughly the same “scale” should help. But it is a little subtle: if you model something around the position (10^16, 10^16, 10^16) with detail on the order of (1, 1, 1), then you will completely blow out the dynamic range of the floating point format and lose all ability to model.
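
For instance (made-up coordinates, just to show the collapse):

>>> p = (1e16, 1e16, 1e16)
>>> q = (1e16 + 1.0, 1e16, 1e16)   # a vertex that was meant to be 1 unit away
>>> p == q                          # the 1-unit detail no longer exists
True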

Best,
Sam


Not everything uses double precision. Plug-ins, rendering, and meshes may involve single precision, which has only 7-8 digits of precision, even if the underlying Rhino model and geometry use double precision.

I would say a good practice is to take your Rhino unit precision (0.01m or whatever), multiply it by 10^7, and try to keep within that distance of the origin. You may be perfectly fine going well outside that region, but it is not always obvious when underlying tools or hardware are reducing coordinates from double to single precision, so this practice helps avoid weird issues happening inadvertently.
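
A rough back-of-the-envelope check in Python (the 0.01 tolerance is just an example value):

>>> import numpy as np
>>> 0.01 * 10**7                    # suggested working radius for a 0.01 tolerance
100000.0
>>> float(np.float32(12345678.9))   # single precision keeps only ~7 digits
12345679.0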

There is a ton of information about floating point numbers and their peculiarities out there. An often-cited and dense one is this. But you may want to start here.

The way I would explain it is: you have a fixed number of floating point numbers that can be represented exactly within each order of magnitude (10^x). Let’s say, for the sake of argument, that you can represent 10 numbers exactly; between 0 and 1 these are 0, 0.1, 0.2, 0.3, …, 0.9 and 1.0; between 1 and 10 they are 1, 2, 3, …, 9, 10; between 10 and 100: 10, 20, 30, …, 90, 100; and so on.

Now imagine you have an object that has a size of about 5 units. This is best represented with numbers that fall between 1 and 10. If you move that object to a location at 300, the closest numbers to represent it are 200, 300 and 400. This will not be accurate at all, while between 1 and 10 you’d have it at its most accurate.

Of course there are way more numbers between 0 and 1 when you use 32-bit or 64-bit floating point numbers, so you start noticing the granularity of numbers only when an object is really far away from the origin.
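
You can see that granularity directly in Python; np.spacing gives the gap between a number and the next representable double (assuming 64-bit numbers here):

>>> import numpy as np
>>> np.spacing(5.0)      # gap between doubles around 5 units from the origin
8.881784197001252e-16
>>> np.spacing(5.0e8)    # gap between doubles around 500 million units out
5.960464477539063e-08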

To close off: there is no real ideal maximum distance from the origin. You need to be mindful not only of the representation of numbers as outlined above, but also of the fact that operations on numbers, especially division and subtraction, introduce rounding errors.
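
A small Python example of such an error: adding a big number and a small number rounds the small one, and the subtraction afterwards exposes that rounding:

>>> x = 0.001
>>> (1e8 + x) - 1e8      # should be 0.001, but is off by roughly 2e-9
0.0010000020265579224
>>> (1e8 + x) - 1e8 == x
False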
