I wrote a script with a recursive method to populate a 2D region with equally spaced points.
It works well, but I can’t get past the 3k-point threshold without crashing.
I’m wondering if my code is terribly inefficient, or if the problem lies somewhere else.
For “more or less even” rectangular/box collections I’d suggest creating a regular grid where each point is then “distorted” randomly: p + deltaX * rand.NextDouble(-distort, distort) + deltaY … + deltaZ …, where deltaX is a Vector3d with half the X step length and distort is anything between 0 and 1 (BTW: if distortX/Y/Z == 0 the result is a “normal” ortho grid). This is real-time fast and “looks” like a random collection. Try it on a challenging flat BrepFace (say a “thin” star-like topology that gives the classic random-point methods a hard time) or on some Brep/Mesh (use the BoundingBox and resX/Y/Z for the grid).
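The jittered-grid idea above is language-agnostic; here is a minimal Python sketch of the 2D case (the function name and parameters are mine, not from the thread). With `distort = 0` you get the plain ortho grid; values near 1 look almost random:

```python
import random

def jittered_grid(x_min, y_min, x_max, y_max, nx, ny, distort=0.5, seed=1):
    """Regular nx-by-ny grid where each point is nudged by up to
    `distort` * half a grid step in each axis."""
    rng = random.Random(seed)
    step_x = (x_max - x_min) / (nx - 1)
    step_y = (y_max - y_min) / (ny - 1)
    pts = []
    for i in range(nx):
        for j in range(ny):
            # offset of at most half a step keeps neighbours from colliding
            dx = rng.uniform(-distort, distort) * step_x / 2
            dy = rng.uniform(-distort, distort) * step_y / 2
            pts.append((x_min + i * step_x + dx, y_min + j * step_y + dy))
    return pts
```

Because the jitter never exceeds half a step, two neighbouring points can never swap places, which is why the result stays “more or less even” at any `distort` setting.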
But in general you can use an RTree or a Point3dList for candidate-point min-distance checks. Avoid at all costs defining a new RTree for each candidate (just add to the RTree each candidate that passes the check), meaning that these two must be class-level fields (public or private), not locals.
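To illustrate the “one shared index, insert only what passes” pattern without RhinoCommon, here is a Python sketch using a simple spatial hash in place of an RTree (class and method names are hypothetical):

```python
import math

class SpatialHash:
    """One shared index for min-distance checks: test a candidate against its
    neighbourhood and only insert it if it passes (never rebuild the index)."""

    def __init__(self, min_dist):
        self.min_dist = min_dist
        self.cells = {}  # (i, j) -> list of accepted points

    def _key(self, p):
        # cell size == min_dist, so any conflicting point lies in the 3x3 block
        return (math.floor(p[0] / self.min_dist), math.floor(p[1] / self.min_dist))

    def try_add(self, p):
        """Insert p and return True iff p is at least min_dist from all points."""
        cx, cy = self._key(p)
        for i in range(cx - 1, cx + 2):
            for j in range(cy - 1, cy + 2):
                for q in self.cells.get((i, j), ()):
                    if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < self.min_dist ** 2:
                        return False
        self.cells.setdefault((cx, cy), []).append(p)
        return True
```

The point of the design is the same as with the RTree advice: the index is built once, lives across all candidate checks, and grows only by the candidates that pass.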
I’m not a C# aficionado, but you can greatly improve speed by keeping things simpler.
For instance, on line 67 you construct a Circle instance, which is unnecessary, since you’re only using it to find points on it. Alternatively, you can use plain maths to do this:
x = circle_origin_x + radius * cos(angle) // point on circle x-coordinate
y = circle_origin_y + radius * sin(angle) // point on circle y-coordinate
Then, in your first loop - where you create points around the circle - also do the containment test; there’s really no need to loop twice. I’m also fairly certain that C# has Pi hard-coded as a constant (Math.PI), and since .NET 5 even 2 * Pi (Math.Tau)!
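Putting the two suggestions together (trig instead of a Circle object, containment test in the same loop), a Python sketch might look like this — `ring_candidates` and the `contains` predicate are my names, not from the original script:

```python
import math

def ring_candidates(cx, cy, radius, count, contains):
    """Points on a circle via plain trig, filtered by a containment test in
    the same pass - no Circle object and no second loop. `contains` is any
    (x, y) -> bool predicate (e.g. a region/BrepFace inclusion test)."""
    pts = []
    for k in range(count):
        angle = math.tau * k / count  # math.tau == 2 * pi
        x = cx + radius * math.cos(angle)  # point on circle, x-coordinate
        y = cy + radius * math.sin(angle)  # point on circle, y-coordinate
        if contains(x, y):
            pts.append((x, y))
    return pts
```

In C# the equivalent constants are `Math.PI` and (on .NET 5+) `Math.Tau`.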
When evaluating the closest point, use a quadtree or R-tree, as @PeterFotiadis mentioned above. Simply put, quadtrees are faster to build and index, while R-trees do faster nearest-neighbour searches.
If you want to stick with distance measuring, at least measure the squared distance and compare it to (radius * tolerance)^2, since that is much cheaper. Square-root calculations are slow!
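The squared-distance trick in one small Python sketch (the helper name is mine): comparing dx² + dy² against limit² gives the same boolean answer as comparing the true distance against the limit, with no square root in the hot loop.

```python
def within(p, q, limit):
    """True iff points p and q are at most `limit` apart.
    Compares squared distance to squared limit - no sqrt needed."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return dx * dx + dy * dy <= limit * limit
```

This works because both sides of the comparison are non-negative, so squaring preserves the ordering.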
Since you’re using recursion, you should also make sure that you’re passing by reference instead of copying, though I don’t really know how this is done in C#. For instance, when you recursively call Buscar you want to pass it your points, the initial point, and the region by reference. Value types (float, int, double, structs, etc.) are usually copied rather than passed by reference, but again my C# knowledge is limited.
I got dramatically fewer recursive calls this time, meaning my code is generating a lot of bloat somehow.
This would reduce the bloat a lot, as I wouldn’t create a new list of candidates with every call, but the issue is that I shuffle the candidates list to generate a more homogeneous and natural-looking result.
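One way to keep the randomness without allocating a fresh shuffled list per call is to draw candidates in random order from a single reusable list. A Python sketch of the swap-and-pop idea (the helper name is mine):

```python
import random

def pop_random(candidates, rng=random):
    """Remove and return a uniformly random candidate in O(1), by swapping it
    with the last element - same effect as shuffling up front, but mutates
    one shared list instead of rebuilding it on every call."""
    i = rng.randrange(len(candidates))
    candidates[i], candidates[-1] = candidates[-1], candidates[i]
    return candidates.pop()
```

Repeatedly calling this until the list is empty visits the candidates in a uniformly random order, which should preserve the homogeneous look.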
Right now I’m thinking of rewriting the code to avoid recursion. Any tips on how to achieve this are welcome. Or maybe there is a way of increasing the stack size cap?
It will depend somewhat on which .NET Framework version you’re running, but I’d recommend never writing a recursive process in C# that is expected to run more than 100 levels deep. You can usually rewrite a recursive algorithm as a loop, although that is certainly not always the case.
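The standard way to turn depth-first recursion into a loop is to manage an explicit stack yourself, so depth is bounded by heap memory rather than the call stack. A Python sketch of the pattern, with hypothetical `neighbours` and `accept` callbacks standing in for the script’s candidate generation and min-distance test:

```python
def fill_region(seed, neighbours, accept):
    """Depth-first growth with an explicit stack instead of call recursion.
    `neighbours(p)` yields candidate points around p; `accept(p, points)` is
    the acceptance test (e.g. min-distance against points placed so far)."""
    points = []
    stack = [seed]       # the explicit stack replaces the recursive calls
    seen = {seed}        # avoid pushing the same candidate twice
    while stack:
        p = stack.pop()
        if not accept(p, points):
            continue
        points.append(p)
        for q in neighbours(p):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return points
```

Popping from the end gives the same depth-first order as the recursive version; popping from the front (a queue) would give breadth-first growth instead, which can also look more even for point packing.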
Recursion (like anything else in coding, in fact) is kind of a loaded gun: use it the wrong way and you’ll be in big trouble.
That said, I’m a big fan of the right recursion (and I hate the wrong kind). For instance, the attached recursively builds a proximity Graph from any root point with 3 search options (shown: RTree) and 2 modes (shown: allow crossing [case: 2D graphs]). By Graph I mean the full combo including the classic 3 connectivity trees (VV, EV, VE). The elapsed time is around 2 milliseconds (on some i9 K) for 1K non-“uniform/even” random points. With a couple of “minor” changes in the code the time could grow 100x.