Change all point precision to int

Hi all,

I sometimes have problems with Rhino's precision in Python scripting.
For example, if I snap points to the same location, create a list with these points, and then try to count all equivalent members with collections.Counter or use these points as keys in a dictionary, they are sometimes not precisely at the same location and errors pop up.

If I change the precision of the point coordinates to int or to 2 decimals, I don't get any errors, but changing the precision for each axis individually every time is not an option.

Is there a way to tell Python in Grasshopper to change point precision globally to a maximum of 2 decimals and cull the rest?
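
Here is a small example of the kind of thing I mean (plain tuples instead of real points, just to illustrate):

from collections import Counter

# two coordinates that come from different computations and "look" identical
a = (0.1 + 0.2, 0.0, 0.0)   # actually 0.30000000000000004
b = (0.3, 0.0, 0.0)         # the float nearest to 0.3

print Counter([a, b])       # two separate entries instead of one entry with count 2

D = {a: "first", b: "second"}
print len(D)                # 2, even though I expect a single key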

Thanks for any help.

No, floating point numbers are ultimately in base-2, and any “fix” which is defined in base-10 is a really bad idea. Also rounding may severely exacerbate the difference between two values. Consider 0.499999999999 and 0.500000000002. If you round them to whole numbers their difference goes to 1.0.

You must always use a tolerance when comparing floating point values for equality, if those values come to you via different paths. This is just a sad fact true for all of computing.
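
In practice that just means something like this (only a sketch; pick a tolerance that suits your model units):

import Rhino.Geometry as rg

tol = 0.001   # example tolerance, not a magic number

def same_point(a, b, tol=tol):
    # compare the squared distance to the squared tolerance to avoid a sqrt
    return a.DistanceToSquared(b) < tol * tol

p = rg.Point3d(0.499999999999, 0.0, 0.0)
q = rg.Point3d(0.500000000002, 0.0, 0.0)

print round(p.X) == round(q.X)   # False: rounding pushes them a whole unit apart
print same_point(p, q)           # True: within tolerance they are the same point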

If however you feel that a certain value has unnecessary inaccuracy, then that’s a separate issue we can look at.

The difference between these numbers is 2.9999336348396355e-12

:wink:

Thanks for the explanation.
It is sometimes pretty annoying if I check the dictionary keys and see in the printed result exactly the same key twice, with two different values.

print L[0][0]
print L[0][1]
print D[L[0][0]]
print D[L[0][1]]
[345.66765305792205, 1037.0029591737655, 1035.4917210494687]

[564.27953376698224, 274.19471237816327, 411.62535739045899]

166.224680824916,39.4418824454224,0

166.224680824916,39.4418824454223,0

My solution this time is to compare the keys to each other with a tolerance, merge the values into one key, and delete the other key,
like this:

import itertools

# collect pairs of keys that are closer together than the tolerance
L = []
for a, b in itertools.combinations(D, 2):
    if a.DistanceToSquared(b) < 0.1:    # 0.1 is the squared tolerance
        L.append([a, b])

# merge the values of each near-duplicate pair and drop the duplicate key
for i in L:
    if i[0] in D and i[1] in D:         # a key may already have been merged away
        D[i[0]].extend(D[i[1]])
        del D[i[1]]

Edit:
Oops, the last digit of Point.Y is different.

Yeah, it’s a drag. I don’t know about Python but in .NET there’s no standard key-value type which handles floating-point-comparisons-within-tolerance. It’s something you have to write yourself. And then decide whether to change the key value over time. Rounding may simplify some of these steps, but you have to be very careful to never round before performing a comparison, as the difference post-round can be much bigger than it was before.
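
In Python a hand-rolled version might look something like this (just a sketch; find_key, add_value, and the tolerance are illustrative names and values, not a standard API):

def find_key(d, pt, tol=0.001):
    # linear scan for an existing key within tolerance of pt
    tol2 = tol * tol
    for k in d:
        if k.DistanceToSquared(pt) < tol2:
            return k
    return None

def add_value(d, pt, value, tol=0.001):
    k = find_key(d, pt, tol)
    if k is None:
        d[pt] = [value]       # first point seen at this location becomes the key
    else:
        d[k].append(value)    # merge into the key that is already stored

Note that the stored key is never changed and nothing is rounded before the comparison; the lookup is a linear scan per insert, which is where sorting the points first starts to pay off.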

If you sort the points first, you can reduce the number of comparisons by only checking the distance against recent nearby points.

A deque could be a good data structure for retaining the recent points, as you can efficiently discard the oldest points as you add more by setting the maxlen parameter.
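
Roughly like this (a sketch; the tolerance and the window size are placeholders you would tune to your data, and points stands for your own point list):

from collections import deque

tol = 0.001
recent = deque(maxlen=50)   # keeps only the last 50 points seen
unique = []                 # one representative per cluster of near-duplicates

for p in sorted(points, key=lambda pt: pt.X):
    match = None
    for q in recent:
        if p.DistanceToSquared(q) < tol * tol:
            match = q
            break
    if match is None:
        unique.append(p)
    recent.append(p)        # oldest entries fall off the front automatically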