@Holo, @Helvetosaur, @nathanletwory,
Thanks for letting me know. I am no longer using MeshPatch, but does this issue also affect CreatePatch?
I checked my points list using rs.CullDuplicates and found no duplicates. I did find that CullDuplicates runs very slowly, taking as much time as the rest of my script for a 24,878-vertex patch mesh, so using it to save time when creating a mesh with MeshPatch may not be the way to go.
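For what it's worth, an all-pairs cull is not the only option. A minimal sketch of a hash-based cull (plain CPython, names hypothetical, not a Rhino API): quantize each point to a grid at the culling tolerance and keep the first point seen per cell, which makes the cull roughly linear in the number of points.

```python
# Hypothetical O(n) duplicate cull: quantize each point to a grid sized by
# the culling tolerance and keep the first point seen in each grid cell.
# (A strict cull would also check the 26 neighboring cells for near-misses
# straddling a cell boundary; this sketch keeps the idea simple.)
def cull_duplicates(points, tolerance=0.01):
    seen = set()
    kept = []
    for x, y, z in points:
        key = (round(x / tolerance), round(y / tolerance), round(z / tolerance))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, z))
    return kept

pts = [(0.0, 0.0, 0.0), (0.004, 0.0, 0.0), (1.0, 2.0, 3.0)]
print(cull_duplicates(pts))  # the first two points land in the same cell
```

On 660,000 points this does one set lookup per point instead of comparing every point against every other, which is the difference between seconds and the 2451 sec an n² approach can take.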
When I tried CullDuplicates on my large test case with 660,000 vertices, it took 2451 sec. After moving to CreatePatch, this test case runs in only 16 sec, so I am not inclined to use CullDuplicates. Maybe there is a more efficient function? CullDuplicates seems to run in O(n²) time; a better algorithm (e.g. sorting or hashing the points) could reduce that to O(n log n) or better. I had the same run-time problem with my own script. When I first started out, it took over an hour to do what it now does in 16 sec. Most of the time reduction came from eliminating 95% of the calls to:
curve_geo.Contains(pt, XYplane, 0.01) == PointContainment.Inside
which runs very slowly for a mesh with 2.2 million points. I fixed this by pre-binning all the mesh vertices into 5’ x 5’ bins covering the 340’ x 400’ area of the 3D mesh model. Building the bins took 4 sec, but afterwards it is very quick to look for mesh points inside the boundary curve by inspecting only the bins that overlap the xmin,ymin to xmax,ymax limits of the boundary curve. Bins that fall completely inside the boundary curve are not inspected at all; all their points are simply swept into my list of inside points.

In the old days this approach was not popular because the bins consume many MB of memory, but today, with computers holding GB of RAM, it is a cheap way to go fast. If I remember correctly, the combination of these improvements sped up the script about 50X (3600 sec down to 72 sec). The rest of the improvements came from using RhinoCommon calls instead of rs. calls, using CreatePatch instead of MeshPatch and, so far in one case, tasks.Parallel.ForEach.
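The binning idea described above can be sketched in a few lines of plain Python (function names are my own, not a Rhino API): drop every vertex into a grid cell keyed by integer bin coordinates, then gather candidates for the expensive containment test from only the bins overlapping the boundary's bounding box.

```python
from collections import defaultdict

# Hypothetical sketch of the pre-binning idea: put every vertex into a
# bin_size x bin_size grid cell, then answer "which points might lie inside
# this boundary?" by visiting only bins that overlap the boundary's bounding
# box. Only these candidates still need the expensive Contains() test;
# every other point is skipped with no test at all.
def build_bins(points, bin_size=5.0):
    bins = defaultdict(list)
    for x, y, z in points:
        bins[(int(x // bin_size), int(y // bin_size))].append((x, y, z))
    return bins

def candidates(bins, xmin, ymin, xmax, ymax, bin_size=5.0):
    out = []
    for bx in range(int(xmin // bin_size), int(xmax // bin_size) + 1):
        for by in range(int(ymin // bin_size), int(ymax // bin_size) + 1):
            out.extend(bins.get((bx, by), ()))
    return out

pts = [(1.0, 1.0, 0.0), (12.0, 3.0, 0.0), (200.0, 200.0, 0.0)]
bins = build_bins(pts)
print(candidates(bins, 0.0, 0.0, 14.0, 6.0))  # only the first two points
```

A further refinement, as noted above, is to classify bins that lie entirely inside the boundary (e.g. by testing their four corners) and sweep their points straight into the result without any per-point test.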
My next big challenge is getting tasks.Parallel.ForEach to live up to its potential of a 5X to 10X speed improvement on my 18-core CPU. I have many functions that can be executed in parallel with no interaction between them; I then collect the sub-lists from their outputs and combine them at the end. One example is gradient coloring of my mesh, where it takes a couple dozen simple calculations to compute the color for each of the 2.2 million vertices in the 3D mesh model. The calculation for each vertex is independent of all the others, so I divide the points into 36 sequential groups, which gives each thread about 61,000 colors to calculate before it returns its list of colors. But right now the code runs more slowly with tasks.Parallel.ForEach no matter how many threads are used: one thread gives the fastest result, and it gradually slows down as more threads are added. Obviously nothing is being computed in parallel. So far I have not gotten help that improves this result.