OK, so I test for point inclusion in a Box. The result is valid as long as the box is aligned to the world axes. When the box is rotated, the test tries to trick me: a point outside the box is reported as inside. How can that be?
I’m pretty sure that GH represents the (rotated) box as a Box (the box is greenish when I click the Brep component), but the RhinoCommon method Box.Contains(Point) definitely seems to think “BoundingBox” for the test.
(The blue line would be the edge of a BoundingBox, and as soon as I go “outside” of the blue line, the Included result goes “false”:
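For reference, a test against the rotated (oriented) box itself can be written by projecting the point onto the box’s own axes instead of using the world-aligned bounding box. A minimal C++ sketch; the struct and names are mine, not RhinoCommon:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Oriented box: center, three mutually perpendicular unit axes,
// and a half-extent along each axis.
struct OrientedBox {
    Vec3 center;
    Vec3 axis[3];
    double half[3];

    bool Contains(const Vec3& p) const {
        Vec3 d = sub(p, center);
        for (int i = 0; i < 3; ++i) {
            // Project onto each box axis; outside if beyond the half-extent.
            if (std::fabs(dot(d, axis[i])) > half[i]) return false;
        }
        return true;
    }
};
```

A world-aligned bounding-box check would accept points in the “corners” that this projection test correctly rejects, which is exactly the symptom described above.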
A very fast method that will work for any convex brep with planar faces is to create a list of Planes and for each plane test whether a point has positive or negative distance to it. You won’t need any expensive intersections and it’ll work for a wide variety of shapes.
However, if you know for a fact that your input is a Brep shaped like a box, then you should try and create the correct Box. There are various ways to approach that: either look at the vertices, or the edges, or start with a single face and take it from there…
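The plane-distance idea above could be sketched like this: store one plane per face with its normal pointing out of the solid, and a point is inside exactly when it is on or behind every plane. A hedged C++ sketch (my own types, not RhinoCommon):

```cpp
#include <vector>

struct Vec3d { double x, y, z; };

struct Plane {
    Vec3d origin;  // any point on the face plane
    Vec3d normal;  // unit normal, assumed to point OUT of the solid
};

// Signed distance from p to the plane: positive on the outward side.
static double SignedDistance(const Plane& pl, const Vec3d& p) {
    return (p.x - pl.origin.x) * pl.normal.x
         + (p.y - pl.origin.y) * pl.normal.y
         + (p.z - pl.origin.z) * pl.normal.z;
}

// A point is inside a convex solid iff it is on or behind every face plane.
bool InsideConvex(const std::vector<Plane>& faces, const Vec3d& p) {
    for (const Plane& pl : faces)
        if (SignedDistance(pl, p) > 0.0) return false;
    return true;
}
```

No intersections are computed — just one dot product per face, with early exit on the first failing plane.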
Do you now have code for correctly determining if a point is inside a box?
I am now working on point clouds and want to make a clipping box that only shows points inside the box. The cloud has 100 million points, so I am trying to come up with an efficient way to do the test. The algorithm is a key element. I will eventually move the detailed calculations to C++ for best performance.
I’m not exactly Rolf, but I did something similar but with way less points in C++ a few weeks ago.
What I did was define a struct that represents a bounding box and has a minimum and maximum point (glm::vec3 in my case), a containment method to see if some other point is inside of it, and a method to draw it as a box (it was not a Rhino project).
The imaginary line that would connect the minimum to the maximum box point would be the longest box diagonal.
The point containment method only has to check whether a three-dimensional point is component-wise greater than or equal to the box minimum point and less than or equal to the maximum point to find out if it is contained.
I don’t know whether this is super efficient though.
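Under those assumptions (an axis-aligned box stored as its min/max corners), the check is just a component-wise comparison. A minimal C++ sketch, loosely along the lines of the glm version described:

```cpp
struct V3 { double x, y, z; };

// Axis-aligned bounding box stored as its minimum and maximum corners.
// The (imaginary) min->max segment is the longest box diagonal.
struct AABB {
    V3 min, max;

    bool Contains(const V3& p) const {
        return p.x >= min.x && p.x <= max.x
            && p.y >= min.y && p.y <= max.y
            && p.z >= min.z && p.z <= max.z;
    }
};
```

This is about as cheap as a containment test gets (at most six comparisons, with short-circuit exit), but it only works for world-aligned boxes; a rotated box needs the projection or plane-distance tests discussed earlier.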
I have since discovered that I can use the existing Rhino Clipping Plane to do what I wanted with our pointclouds. So now there is no need for me to implement a clipping box with Python/C++ code. Rhino’s Clipping Plane is very fast to adjust and show changes. I think it is using the GPU to do most of the work so it works well even on large structures.
Here is an example of using a clipping plane on clouds with 200 Mpts:
It works really well for this application. I was totally ignorant as to the benefits of Clipping Planes before this, as I had not come across a compelling application. Now I have.
I use my Python/C++ script to read in the 200 Mpts in the clouds created from Lidar scans in about 30 sec, and then the clouds are ready to be clipped for inspection of interior details. This is being used in BIM and CAD workflows by builders in Poland. My script rapidly imports the 0.75 billion points from the scans by decimating, by a large factor, the clouds exterior to the house of interest while keeping all the points from inside the house. This results in clouds that retain all the interior details while keeping the total point count low enough for smooth adjustment of the Clipping Plane and of the perspective view of the clouds. This is turning out much better than I expected.
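The decimation scheme described (keep every point inside a region of interest, keep only every N-th point outside it) could be sketched roughly like this; the axis-aligned region and the decimation factor are placeholders of mine, not the actual script:

```cpp
#include <cstddef>
#include <vector>

struct Pt { double x, y, z; };

// Axis-aligned region of interest, e.g. a box around the house.
struct Region {
    Pt min, max;
    bool Contains(const Pt& p) const {
        return p.x >= min.x && p.x <= max.x &&
               p.y >= min.y && p.y <= max.y &&
               p.z >= min.z && p.z <= max.z;
    }
};

// Keep every point inside the region; keep every `factor`-th point outside it.
std::vector<Pt> Decimate(const std::vector<Pt>& cloud,
                         const Region& roi, std::size_t factor) {
    std::vector<Pt> out;
    out.reserve(cloud.size() / factor + 1);
    std::size_t skipped = 0;
    for (const Pt& p : cloud) {
        if (roi.Contains(p))
            out.push_back(p);                    // interior: keep everything
        else if (++skipped % factor == 0)
            out.push_back(p);                    // exterior: heavily thinned
    }
    return out;
}
```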
Now if I can just come up with an efficient method for generating surface normals, then the verticality of the surfaces in the house can be evaluated from colored maps. I know how to color multi-hundred-million-point clouds in under a second, but it is going to be a challenge to find adjacent points for computing the surface normals. Quite the sorting problem, but I have some ideas beyond using R-trees that should work well on computers with many cores. Still, it will be a challenging problem. Any ideas here?
Interesting! Some shader magic, probably? I reckon point clouds really don’t need to be created as Rhino geometry objects (unless the user explicitly wants that, in which case performance would drop a little), but can rather be computed/displayed in a shader instead. This should be really fast, even for millions of points, including some basic calculations like signed squared distances to a plane and such.
Impressive as always, Terry!! Great work.
Wow, do you need a rough massing model of the building to evaluate which points are part of its volume and which ones are exterior points?
This is really not my domain, but you could probably use “machine learning”? Density- and/or distribution-based clustering? Although I don’t know if that’s really all that efficient and fast, since machine learning models usually need to be trained and consume lots of computing power.
Another thing that comes to mind is KD-trees, which are hierarchical space-partitioning structures and, I believe, even used to accelerate some “machine learning” algorithms (e.g. k-means clustering). KD-trees are not that hard to implement in C++ and can possibly be built faster than the R-tree, especially for low-dimensional data (< 20 dimensions). I believe the R-tree, on the other hand, performs faster nearest-neighbour searches though.
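For what a KD-tree looks like in practice, here is a minimal in-place 3-d tree with nearest-neighbour search, as a hedged sketch (my own naming; assumes a non-empty point set, no balancing beyond median splits, no k-NN or radius queries):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct P3 { double c[3]; };

static double Dist2(const P3& a, const P3& b) {
    double s = 0.0;
    for (int i = 0; i < 3; ++i) { double d = a.c[i] - b.c[i]; s += d * d; }
    return s;
}

// The node of a subtree [lo, hi) is the median on axis = depth % 3;
// the two halves are the left and right subtrees.
class KdTree3 {
public:
    explicit KdTree3(std::vector<P3> pts) : pts_(std::move(pts)) {
        Build(0, pts_.size(), 0);
    }

    // Returns the stored point closest to q (points must be non-empty).
    P3 Nearest(const P3& q) const {
        std::size_t bestIdx = 0;
        double bestD2 = Dist2(pts_[0], q);
        Search(q, 0, pts_.size(), 0, bestIdx, bestD2);
        return pts_[bestIdx];
    }

private:
    void Build(std::size_t lo, std::size_t hi, int axis) {
        if (hi - lo <= 1) return;
        std::size_t mid = lo + (hi - lo) / 2;
        std::nth_element(pts_.begin() + lo, pts_.begin() + mid,
                         pts_.begin() + hi,
                         [axis](const P3& a, const P3& b) {
                             return a.c[axis] < b.c[axis];
                         });
        Build(lo, mid, (axis + 1) % 3);
        Build(mid + 1, hi, (axis + 1) % 3);
    }

    void Search(const P3& q, std::size_t lo, std::size_t hi, int axis,
                std::size_t& bestIdx, double& bestD2) const {
        if (lo >= hi) return;
        std::size_t mid = lo + (hi - lo) / 2;
        double d2 = Dist2(pts_[mid], q);
        if (d2 < bestD2) { bestD2 = d2; bestIdx = mid; }
        double diff = q.c[axis] - pts_[mid].c[axis];
        int next = (axis + 1) % 3;
        // Descend into the half containing q first ...
        if (diff < 0) Search(q, lo, mid, next, bestIdx, bestD2);
        else          Search(q, mid + 1, hi, next, bestIdx, bestD2);
        // ... and visit the far half only if it could hold a closer point.
        if (diff * diff < bestD2) {
            if (diff < 0) Search(q, mid + 1, hi, next, bestIdx, bestD2);
            else          Search(q, lo, mid, next, bestIdx, bestD2);
        }
    }

    std::vector<P3> pts_;
};
```

For normal estimation, a k-nearest-neighbour query on such a tree (this sketch only does 1-NN) is the usual way to collect the local neighbourhood around each point.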
Take all of this with a grain of salt, since I’m not an expert.
In terms of the Clipping Plane, I brought up my GPU monitor and can see that it works hard when rotating the Clipping Plane. Since my GPU is big and fast, the clipping action is very smooth for clouds with up to 220 Mpts.
Distinguishing the exterior points from the interior points was the easy part; they are in different scan files. I just prompt the user to select the two sets of files.
My most recent thoughts are to try to get more information about the Lidar scans. Knowing the XYZ of the scanner location could be used in multiple ways: (1) filtering out distant noisy points to reduce the point count, (2) enabling geometry calculations to identify surfaces and hence surface normals. To align the points from multiple scans, the location of the scanner must be known, so I just need to get that information. Never having used a Lidar scanner, I don’t know how available this data is, but I will ask the owners of the point clouds I am processing.
Getting down in the dirt and trying to recover the surface normals from the list of cloud points is not my first choice. Let’s hope there is an easier way.
I’ve been using Radovan Grmusa’s (@RadovanG) very fast cylinder inclusion test also for boxes.
The way I used it was by defining three center axes (and a cylinder enclosing the box in that direction) and testing for each axis/cylinder, and skipping (break) the first time a point isn’t inside the cylinder.
Unfortunately my final code is a bit “involved” and entangled in a library I had planned to adapt to GPU processing, but Radovan’s cylinder test is in the link, and well documented. Just nest the axes/cylinder tests to break early, something along these lines:
bool PointInsideBox(Point3d pt, Vector3d axis1, Vector3d axis2, Vector3d axis3)
{
    if (!PointInsideCylinder(pt, axis1)) return false;
    if (!PointInsideCylinder(pt, axis2)) return false;
    if (!PointInsideCylinder(pt, axis3)) return false;
    return true; // inside all three cylinders, so inside the box
}
The concept looks something like this if illustrated (no rotation is needed to align the axes with the world, but I have not tested whether that would be faster or not; all I know is that this test is quite fast):
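I don’t have Radovan’s exact code at hand, but the idea can be sketched as follows: a point is inside a finite cylinder if its projection onto the axis falls within the axis segment and its radial distance from the axis is within the radius. Each cylinder’s end caps clamp one box coordinate, so nesting the three tests (one per box axis, breaking early) gives the box test. All names and the squared-distance formulation below are my own, not Radovan’s:

```cpp
#include <cmath>

struct Vc3 { double x, y, z; };

static Vc3 Sub(const Vc3& a, const Vc3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double Dot(const Vc3& a, const Vc3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Finite cylinder from `base` along `axis` (axis length = cylinder height).
// Uses only dot products; no square roots needed.
bool PointInsideCylinder(const Vc3& pt, const Vc3& base, const Vc3& axis,
                         double radius) {
    Vc3 d = Sub(pt, base);
    double axisLen2 = Dot(axis, axis);
    double t = Dot(d, axis);                     // projection scaled by |axis|^2
    if (t < 0.0 || t > axisLen2) return false;   // beyond the end caps
    // Squared radial distance: |d|^2 minus the squared projection length.
    double radial2 = Dot(d, d) - (t * t) / axisLen2;
    return radial2 <= radius * radius;
}

// Box as the intersection of three enclosing cylinders, one per box axis,
// breaking on the first cylinder that rejects the point.
bool PointInsideBox(const Vc3& pt,
                    const Vc3& base1, const Vc3& axis1, double r1,
                    const Vc3& base2, const Vc3& axis2, double r2,
                    const Vc3& base3, const Vc3& axis3, double r3) {
    if (!PointInsideCylinder(pt, base1, axis1, r1)) return false;
    if (!PointInsideCylinder(pt, base2, axis2, r2)) return false;
    if (!PointInsideCylinder(pt, base3, axis3, r3)) return false;
    return true;
}
```

Each radius is the circumradius of the box’s cross-section perpendicular to that axis (so the cylinder encloses the box); the radial check then acts as a cheap early rejection while the cap checks make the combined test exact.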