Mass Addition, Multiplication, and Division bugs with large numbers

I am working in millimeters for large architectural projects, which has created some unique challenges. It appears that Grasshopper math components were not built to handle even modest area values expressed in square millimeters, and errors and erratic behavior quickly result.

“1. Exception has been thrown by the target of an invocation.”

This can be reproduced by simply placing an Area component in Grasshopper and referencing a few curves drawn at building scale in a millimeters document, and then doing a few simple math operations. Chaos ensues.

I have begun replacing all of my GH maths components with Impala components, but I have to write my own C# components to handle Mass Addition and others. Just thought I would post to let the dev team know. This is probably a known issue? Can’t imagine I’m the first to run across it.

Cheers,
Marc

GH supports the .NET Double and Int32 types. Values beyond their extremes simply cannot be represented. There are other structures for more exotic numbers, but GH has no support for them.

The Double value type represents a double-precision 64-bit number with values ranging from negative 1.79769313486232e308 to positive 1.79769313486232e308, as well as positive or negative zero, PositiveInfinity, NegativeInfinity, and not a number (NaN). It is intended to represent values that are extremely large (such as distances between planets or galaxies) or extremely small (the molecular mass of a substance in kilograms) and that often are imprecise (such as the distance from earth to another solar system).

Int32 is an immutable value type that represents signed integers with values that range from negative 2,147,483,648 (which is represented by the Int32.MinValue constant) through positive 2,147,483,647 (which is represented by the Int32.MaxValue constant). The .NET Framework also includes an unsigned 32-bit integer value type, UInt32, which represents values that range from 0 to 4,294,967,295.
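A minimal sketch of how these limits bite at building scale in millimeters. It's in Python, whose float is the same IEEE 754 double as .NET's Double; the 50 m × 50 m floor plate is an illustrative value, not taken from the original report:

```python
# How the limits above show up at building scale in millimeters.
import math

INT32_MAX = 2_147_483_647            # Int32.MaxValue

# A 50 m x 50 m floor plate, measured in millimeters:
area_mm2 = 50_000 * 50_000           # 2,500,000,000 mm^2
print(area_mm2 > INT32_MAX)          # True: too big for a 32-bit integer

# Doubles do not overflow here, but their spacing (ulp) grows with magnitude:
print(math.ulp(1.0))                 # ~2.22e-16
print(math.ulp(1.0e9))               # ~1.19e-07 (a billion mm: sub-micron)
print(math.ulp(1.0e18))              # 128.0 (squared terms lose whole units)
```

So any component that internally casts an area like this to a 32-bit integer will overflow, while doubles stay representable but gradually coarser.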

This is a bad idea, not only because of these problems, but because you are using an inadequate tolerance, which can lead to worse performance or unnecessary accuracy. Why not change units at the beginning and/or end of the process?
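The "change units at the boundary" idea can be sketched like this; the helper names are mine for illustration, not from any GH API:

```python
# Convert mm to m before heavy arithmetic, then back when reporting,
# so intermediate values stay in a well-conditioned range.
MM_PER_M = 1000.0

def mm_to_m(v_mm: float) -> float:
    return v_mm / MM_PER_M

# A 120 m x 80 m site, drawn in millimeters:
w_mm, h_mm = 120_000.0, 80_000.0
area_m2 = mm_to_m(w_mm) * mm_to_m(h_mm)  # 9600.0 m^2, far from any limit
area_mm2 = area_m2 * MM_PER_M ** 2       # 9.6e9 mm^2 for reporting back
```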


This issue resembles all the far-from-origin geometry problems. Is it a theoretical problem for frameworks like this, which work properly at normal distances and tolerances, when we're talking about earth-size dimensions and precision down to a fraction of a millimeter? I hate these kinds of workarounds, switching tolerances or converting units. 16-core computers with 64 GB of RAM could handle this, in my opinion, or where is the problem?

It is completely adequate to work in millimeters on large architectural projects… for example, you work on a railway track 30 km long where a precision of 1/100 of a millimeter is necessary, and you work billions of units from the origin, in real-life coordinates. How much worse would performance get if you really wanted to achieve this precision and accuracy?
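For scale, a quick Python check (IEEE doubles, same as .NET) of what a billion millimeters from the origin actually costs; in my understanding, the usual failure mode is intermediate squared terms in distance-style formulas, not the coordinates themselves:

```python
import math

x = 1.0e9                 # a point a billion mm from the origin
print(x + 0.01 == x)      # False: 1/100 mm still resolves at this magnitude

x2 = x * x                # a squared intermediate, as in distance formulas
print(x2 + 1.0 == x2)     # True: near 1e18, whole units vanish
print(math.ulp(x2))       # 128.0: adjacent doubles are 128 apart here
```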

You’re confusing modeling or simulation with reality. Tell me why you need so much precision to visualize or model something. What’s the point of computing an intersection to 1e-8 if you can’t even place one more millimeter of cement? :man_shrugging:

Don’t try to fight the machines; you can’t beat their game.


Because some errors are cumulative if you are not working with very high precision. But I really wonder how much it would cost to get such precision. Would everything become much slower? Is it a matter of hardware requirements, or some theoretically insurmountable problem? It is not about how precisely I can build, but how precise the virtual model is, no matter the purpose.
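Cumulative rounding is easy to demonstrate, and it is also one reason a hand-rolled Mass Addition can behave differently from a naive running sum. A Python sketch using the standard library's compensated summation, math.fsum; the numbers are illustrative:

```python
import math

values = [1.0e16] + [1.0] * 1000  # one huge term, then many small ones

naive = 0.0
for v in values:
    naive += v                    # each +1.0 is rounded away at 1e16

print(naive - 1.0e16)             # 0.0: a thousand additions lost entirely
print(math.fsum(values) - 1.0e16) # 1000.0: compensated summation keeps them
```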

There are real issues, like planar geometry that won’t close into a planar surface because it is out of tolerance, even though it was created on a plane, which is odd.

Hi Juan - please post such an example. Rhino 6 and up should be able to deal with that, even in “far-from-origin” situations.
-wim

I think what Daniel is saying is that you need to understand the limitations of your tools, and design your process around those limitations.
An example: a design approach which works in real life, for large projects, was devised a long time ago for use in surveying. You establish a baseline, survey the line (one way), then flip your instrument over and resurvey the line (the other way). Your baseline is the average of the two measurements. Lo and behold, the method works with old instruments, and accounts for the inherent errors of a surveying transit manufactured in 1890, then used in the field for a number of years (and calculated with a slide rule only good to 3 significant figures). Then, you set up a series of baselines at geometrically ‘strong’ angles to each other (usually between, say, 20 and 70 degrees) to establish control points. Local measurements are then made off of those control points; error is distributed among measurements to the control points. In this way surveying accuracy (a hundredth of a foot) can be reasonably maintained over a distance of many miles.
Now most of my work is no longer on a scale of miles; but in thousandths of an inch. That said, on a larger project (which for me is now about 24" in aluminum, subject to dimensional changes from thermal expansion) I still use a method of control points; the method and sequence of points largely adapted from the original survey of Puget Sound done in 1870. That is how I can achieve extreme precision using dimensionally unstable materials on an imprecise machine.
My point: think through your measurement approach, anticipate and account for the errors of your tools and calculations; engineers and architects have done this for a long time. While I haven’t referred to it in a long time, one of the reference works on my bookshelf is “Surveying” by Charles Breed, published in 1908. The approach to designing your control measuring system is still valid today, even though we use computers and software to achieve in 10 minutes what it might have taken Mr. Breed a year (or more) to accomplish.
You still need to understand the accuracy and limitations of your tools, quite nicely explained by Mr. Abalde.


This may be true, but it’s in some ways out of my hands – I am doing toolset work for architectural designers who work on projects in the way that they have always worked on projects. And in many firms and many contexts, large-scale work is done in millimeters. Our consultants do this as well. Can I recommend working in meters instead and then sharing their work with consultants in millimeters? Sure. But as a toolset developer, I want to support as many of the use cases my users throw at me as I can. For now, my workarounds seem to be working with adequate reliability and precision.

That said, if I continue unearthing issues, I may very well make this recommendation as part of the product onboarding process.

Thanks,
Marc