Extremely slow generation of GH Surface objects in Rhino

I am under the impression that something is broken in the generation and/or display of GH surfaces in Rhino.
For instance, I generate a few loft surfaces, and it will take almost a minute before the interface gets out of coma and actually displays them.
Strangely, the profiler doesn’t register this time and doesn’t even display any execution time.
Once the geometry is created, I can hide or preview the surfaces in a snap though.

Generation of Lofts takes forever.gh (29.2 KB)

The issue is you are working really far away from the origin. Just move your stuff closer to the origin and it works fine. Models far away from the origin are always a pain.

Generation of Lofts takes forever nomore.gh (27.9 KB)

Hi Michael, and thanks for the help.

This is part of an extremely large building, and the origin is in the center.
I wish I could generate the geometry where it lies in the general layout, and not have to translate it close to the origin.

Shouldn’t that kind of issue be something of the past ?


Shouldn’t that kind of issue be something of the past ?

I’m not sure, I don’t work for McNeel. At first I thought maybe your curves had a lot of control points (a usual cause of slow lofts or ruled surfaces), but they don’t. Moving to the origin resolved the issue (on my end at least); before I moved it, it froze my GH when trying to preview. One thing I do sometimes in GH is move stuff to the origin, perform the operations, then move it back to where it needs to be.
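That round-trip workaround is easy to sketch in plain Python. The helper names below (`to_origin`, `restore`) are made up for illustration; in an actual definition you would use Move/Transform components or RhinoCommon transforms instead.

```python
# Sketch of the "move to origin, operate, move back" workaround.
# Points are plain (x, y, z) tuples; helper names are illustrative only.

def to_origin(points):
    """Translate a point set so its first point sits at (0, 0, 0)."""
    ox, oy, oz = points[0]
    shifted = [(x - ox, y - oy, z - oz) for x, y, z in points]
    return shifted, (ox, oy, oz)

def restore(points, offset):
    """Translate the points back to their original location."""
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for x, y, z in points]

far = [(5_000_000.0, 5_000_000.0, 0.0), (5_000_000.1, 5_000_000.2, 0.0)]
local, offset = to_origin(far)
# ... run the loft / surface operations on `local` here ...
back = restore(local, offset)
```

While the geometry lives in `local`, all coordinates are small, so the operations keep far more effective precision than they would millions of units out.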

The other thing to review is the file units and document tolerances. I know we had issues in the past at my previous office with large projects where the file was in mm. Fair enough… Personally I prefer working in meters.

The issue was they had not changed the document tolerances, so the file was trying to be accurate to a micron (0.001 mm) or smaller, when the tightest most people can build to on site is about 0.1 mm. If you’re making something more complex, I would build it in a separate file and reference it in / move it after.

It is intrinsic to the way numbers are defined in computing. Solutions do exist (quadruple precision, arbitrary precision) but no hardware is capable of working directly with these more accurate types, meaning that if you start using them your performance will suddenly drop by about a factor oh-look-I-just-lost-all-my-clients.
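You can see this directly in any double-precision environment. A minimal Python sketch (standard library only): the spacing between adjacent representable doubles grows with magnitude, and subtracting two nearby large coordinates discards the digits they share.

```python
import math

# Doubles carry ~15-16 significant decimal digits, so the spacing between
# adjacent representable values (one "ULP") grows with magnitude:
print(math.ulp(1.0))  # 2.220446049250313e-16
print(math.ulp(1e9))  # ~1.2e-07 -- nine orders of magnitude coarser

# Catastrophic cancellation: subtracting two nearby coordinates far from
# the origin leaves only the few digits the two values did not share.
a = 1_000_000.0 + 0.1234567
b = 1_000_000.0
print(a - b)  # not exactly 0.1234567 -- the low digits were never stored
```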

Once CPUs and GPUs are able to operate directly on 128-bit floating point numbers, we can upgrade our code to make use of that.

For small objects that exist in their entirety far away from the origin, solutions do exist: keep all the coordinate numbers small and specify an offset for the shape as a whole. But this trick doesn’t work for a shape that is just huge in all directions, and Grasshopper doesn’t use these tricks anyway.

Grumble, grumble…

I agree that it should have been solved by now, but it requires a global effort and apparently those in positions of power are dragging their feet.

In the aerospace industry the standard tolerance for geometry is 0.001 inch. For parts of a large airplane that are located far aft of the nose (an airplane’s coordinate system has (0,0,0) at the tip of the nose) it is normal to define the part’s geometry with an independent (0,0,0) origin, along with a transformation matrix that relocates the part to the proper place in the airplane’s coordinate space.

As David pointed out, repeated mathematical operations on large numbers result in the loss of precision simply because computers maintain a fixed number of digits of accuracy. This is especially problematic when subtracting 2 numbers that are very close in value. Using (0,0,0) as the part’s origin keeps the numbers as small as possible, and this minimizes loss of geometric accuracy due to truncation and roundoff errors.
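That aerospace convention boils down to a homogeneous transform: model the part about its own origin, then carry a 4×4 matrix that places it in airplane coordinates. A toy sketch in plain Python (the function names and numbers are made up for illustration):

```python
# Part modeled about its own (0,0,0), plus a 4x4 homogeneous translation
# that places it in airplane coordinates. Illustrative names and values.

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def apply(m, p):
    """Apply a 4x4 matrix to a 3D point (implicit w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# A rib modeled about its own origin, placed 800 inches aft of the nose:
place = translation(800.0, 0.0, -20.0)
local_point = (1.5, 0.25, 0.0)
world_point = apply(place, local_point)  # (801.5, 0.25, -20.0)
```

The part’s own coordinates stay small (full precision), and only the final placement carries the large numbers.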

Then why not implement a scheme whereby local coordinate systems are arrayed in a cubic grid at some round spacing (say every 10 meters in a millimeter file), and the coordinates of objects are internally described relative to the closest coordinate system?
A quick and dirty criterion could be used to do the matching, and coordinates would then be stored internally as a simple offset vector plus small local coordinates.

Of course, this could be completely invisible to the user.
If that’s a bad idea, then look for a better one, do some research, innovate !
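For what it’s worth, the bookkeeping of such a scheme is simple to sketch. This is a toy illustration of the idea, not a description of Rhino’s internals:

```python
# Toy illustration: store each coordinate as a (cell index, local offset)
# pair on a grid of 10 000 mm (10 m) cells.
CELL = 10_000.0  # cell spacing in mm

def split(v):
    """Decompose one coordinate into (nearest cell index, small local offset)."""
    cell = round(v / CELL)
    return cell, v - cell * CELL

def join(cell, local):
    """Recombine a (cell, offset) pair into an absolute coordinate."""
    return cell * CELL + local

x = 123_456_789.25  # a coordinate ~123 km from the origin, in mm
cell, local = split(x)
# `local` stays within +/- 5 000 mm, so it keeps full double precision
```

The hard part, as noted in the reply below this post, isn’t the bookkeeping but making every geometric operation aware of pairs like this without falling back to slow software arithmetic.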

This has to be implemented at the level of the numbers themselves, as individual objects may cross these grid boundaries. And then we’re back to performance, as arithmetic on these custom number types is no longer doable as single operations on processing units.

Hello! Did you somehow generate the original curves or were they hand-drawn?
Just wondering if anemone was used.