I have an architectural model in meters (0.001-unit tolerance) at world coordinates (very far from the origin).

I routinely transform it into project coordinates (nearer the origin) in a millimeters (0.01-unit tolerance) file.
There is no scaling, since the original meters file is worksessioned into the millimeters file.

Unfortunately, several objects don’t survive the journey.

This is frustrating, since manually copying, pasting and moving between files works; the same objects merely become Invalid rather than null.

I’d prefer invalid over null any day, since at least I can see the geometry, select sub-elements, etc.
Why is Grasshopper indiscriminately culling these elements? I’d like to decide whether invalid objects should be passed around or baked.

Unfortunately I can’t share the files publicly, but I will upload them. @dale

The transformation likely makes the Brep invalid, and since invalid breps are a crash-risk they are removed. The problem is figuring out a way to not get invalid breps in the first place.

So yes, we’ll need those breps and the transforms you’re applying to figure out what’s going wrong where, and whether or not it’s solvable. It might not be. Numbers very far away from zero have way fewer decimal places available than numbers close to zero. Whenever you have two small numbers which are definitely different, but you add some very big number to both of them, they may well either become identical or their difference may balloon, depending on rounding.

So any time you move something far away you lose a whole bunch of accuracy in your digits. This effect may compound if you do it repeatedly (depends on the steps in between the large transforms).
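Both effects are easy to reproduce with ordinary double-precision floats. A minimal Python sketch (not Rhino-specific; the coordinate values are made up for illustration):

```python
# Two clearly different small numbers become identical once a huge
# offset is added: near 1e13, consecutive doubles are ~0.002 apart,
# so both 0.0001 and 0.0002 get rounded away entirely.
big = 1e13
assert big + 0.0001 == big + 0.0002

# A round trip far from the origin and back is not exact either:
# near 1e9, consecutive doubles are ~1.2e-7 apart, so any detail
# finer than that is lost by the translation.
x = 0.123456789                # a coordinate near the origin
moved = x + 1_000_000_000      # translate far away
back = moved - 1_000_000_000   # translate back
assert back != x               # detail below ~1.2e-7 was rounded away
assert abs(back - x) < 1e-6    # but the error is bounded by the spacing
```

The second assertion pair is the “lose a whole bunch of accuracy” effect: the error per trip is bounded by the float spacing at the far-away location, which is why repeated large transforms can compound it.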

Consider this as a parallel:

Assume that numbers in the computer are written in the form 0.xxxx \cdot 10^{\pm ee}, where you only get to pick the x and e digits. The smallest possible positive number is thus 0.0001 \cdot 10^{-99}, and the next smallest positive number is 0.0002 \cdot 10^{-99}. The distance between these numbers is absolutely tiny. Like, we can fit more of those distances into a Planck length than you can fit Planck lengths into a lightyear.

But when we’re trying to represent big numbers, like 23,\!756, then we fail already. Best we can do is 0.2376 \cdot 10^{+5}, which already rounds away the last digit. The distance between consecutive numbers at this remove from zero is ten. You can’t even increment by whole numbers, only multiples of ten.

Now imagine adding a small number like 0.2500 \cdot 10^{0} to a big number like 0.6000 \cdot 10^{+4}. Should be 6,\!000 \frac{1}{4} right? Wrong. It’s just 6,\!000 because there simply isn’t room for the least significant digits.
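The four-digit analogy above can be played with directly using Python’s `decimal` module, which lets you cap the precision (this sketches the analogy, not Rhino’s actual 64-bit float arithmetic):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # only four significant digits, as in the analogy

# 23756 cannot be represented at this precision; the unary + applies
# the context rounding, turning it into 2.376E+4, i.e. 23760.
assert +Decimal(23756) == 23760

# Adding a small number to a big one: the least significant digits
# of the small number simply have no room to live.
total = Decimal(6000) + Decimal("0.25")
assert total == 6000  # the 0.25 has been rounded away entirely
```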

Thanks David, I’ve uploaded privately with a link to this topic. I’m thinking this is probably not a Grasshopper problem, though I’m sure I’ve seen Grasshopper more recently ask me whether I want to continue baking invalid objects or to skip them. I like that it gives a choice. In this example there is no choice.

I’ve done some more testing and found that the problematic breps turn invalid just by being worksessioned into a file with smaller units. E.g. the original file is in meters, tolerance 0.001 units:

1. In the original file, move the breps nearer the origin. Nothing is invalid at this point.
2. Create a new file in millimeters, tolerance 1 unit or 0.01 units (it doesn’t seem to make a difference).
3. Attach the original file to the new one via worksession.
4. The worksession now shows the objects as invalid.

This is not a problem if the active worksession file has the same or larger units than the original file (e.g. meters -> kilometers).

I have no idea what’s going on with worksession unit conversions under the hood.

I’ve just found that changing the document units from meters to millimeters makes the same elements invalid, regardless of any transformation or worksession.

Yes, just changing the units in Rhino will throw a “The "DocumentPropertiesPage" command created 2 bad objects” warning. I’ll make a YT. - RH-54566
-wim

I’m sure limitations of floating-point numbers are causing some of the grief. But so is tolerance. This is probably a good time to review tolerances in Rhino. A key point to take away from that article is that objects that were previously modeled at some tolerance will not be fixed if you change the tolerance.

Your model was constructed using the Meters unit system with an absolute tolerance of one millimeter (0.001 meters). When you change the model’s unit system from Meters to Millimeters, you’re scaling the geometry by 1000. After scaling, your geometry is no longer within the tolerance it was modeled with; this shows up specifically in Brep trim curves and their distance to edges. Thus, the geometry reports as invalid.
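The arithmetic behind this, sketched in Python (the gap value is hypothetical, just to illustrate the effect):

```python
# A trim-curve-to-edge gap that is legal in the meters file...
tol_meters = 0.001           # absolute tolerance of the meters model
gap_meters = 0.0008          # within tolerance, so the brep is valid
assert gap_meters <= tol_meters

# ...after converting the document to millimeters (scale by 1000):
tol_mm = 0.01                # tolerance of the millimeters model
gap_mm = gap_meters * 1000   # the gap scales with the geometry: 0.8 mm
assert gap_mm > tol_mm       # far outside tolerance -> brep reports invalid

# Going the other way (meters -> kilometers, scale by 0.001) shrinks
# the gap instead, which is consistent with the observation that
# converting to larger units causes no trouble.
gap_km = gap_meters * 0.001  # 8e-7 km
```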

If you know this is something you’ll be doing - modeling in meters and converting to millimeters - then consider beginning your modeling with a much tighter absolute tolerance, 10^{-6} in this case.

To fix your model, duplicating the edge curves and then untrimming/retrimming with them, using the absolute tolerance of the millimeters model, is probably what’s required.

FYI: I edited the wiki you linked to. I split relative tolerances from angular tolerances, and added a note about the lack of a relative-tolerance setting from Rhino V6 onward. Since the video still shows and mentions it, it’s best left as an item to mention, IMO.

Thanks Dale,
I haven’t come across that page before, and I need to think through a few more examples regarding floating-point error. My assumption was that an element modelled at 1.000 metres long with 0.001-unit tolerance would be identical to an element modelled at 1000 millimetres long with a tolerance of 1 unit.

Unfortunately we aren’t in control of the architectural file tolerances here, but it’s something to think about for future project digital work plans.

Hi - RH-54566 is now closed as Won't fix.
Make sure that any workflow that involves scaling Far From Origin geometry includes a step where the resulting bad objects are manually fixed before sending them to Grasshopper.
-wim