Hello, I want to remove duplicate points from a list of around 1 million points. The distance between points is approximately 1 unit.
So, I left Grasshopper calculating and culling duplicate points for about 30 minutes, and when I got back the process was still running, which is understandable.
What would you do in this scenario to remove duplicate points? Should I wait, or play with the tolerance to speed up the process?
No idea whether 30 minutes is too much for 1 million points.
It looks like the cull algorithm is not optimised for this amount of data.
If you're into coding, RhinoCommon offers an RTree implementation.
In the first step, you initialise a spatial structure, which is the majority of the workload; in the second step, you can search the RTree with great performance.
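The two-step idea above (build a spatial structure first, then query it cheaply) can also be sketched without RhinoCommon. A minimal Python sketch using a hash grid instead of an RTree; the function name, the `tolerance` parameter, and the tuple point format are assumptions for illustration, not the RhinoCommon API:

```python
def cull_duplicates(points, tolerance=0.01):
    """Drop any point closer than `tolerance` to an already-kept point.

    Step 1: bucket kept points into a hash grid whose cell size equals
    the tolerance, so each candidate is compared only against the few
    points in its 27 neighbouring cells instead of all 1M points.
    """
    grid = {}   # (ix, iy, iz) cell index -> list of kept points
    kept = []
    for p in points:
        key = tuple(int(c // tolerance) for c in p)
        # Step 2: search only the neighbouring cells of this point
        duplicate = False
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    cell = (key[0] + dx, key[1] + dy, key[2] + dz)
                    for q in grid.get(cell, ()):
                        if sum((a - b) ** 2 for a, b in zip(p, q)) <= tolerance ** 2:
                            duplicate = True
        if not duplicate:
            kept.append(p)
            grid.setdefault(key, []).append(p)
    return kept
```

Because the cell size matches the tolerance, any duplicate must land in the same cell or an adjacent one, so the per-point cost stays roughly constant rather than growing with the list length.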
Honestly, I am not into coding, so I would like to learn.
The points represent a terrain surface; they are probably lidar scan data.
Thank you for providing a detailed guide on how to achieve this through coding. Unfortunately, I need to complete the culling process by the end of the day, and the coding route would take me much longer to complete.
Yes, I want to filter a point list (it isn't technically a PointCloud, at least in Rhino it isn't).