Hello,

is it possible to apply parallel computing in RhinoCommon and benefit from it using Brep.CreateContourCurves (with an interval defined), or is that only possible for custom loops?

Łukasz

Have you seen this Python parallel example:

```
import System.Threading.Tasks as tasks
import Rhino
import rhinoscriptsyntax as rs
import time, math
import scriptcontext

def radial_contour(brep, parallel, slice_count=360):
    """Generate a series of curve slices through a brep by rotating a plane
    multiple times and intersecting that plane with the brep. This function
    demonstrates the use of .NET Parallel.ForEach in order to run the
    intersections in parallel.
    Parameters:
      brep = the Brep to contour
      parallel = If True, this function will compute intersections on
        multiple threads using Parallel.ForEach. If False, all intersections
        will be performed on a single thread.
      slice_count = number of slices to generate. Slices are evenly
        distributed over a full circle.
    """
    if not brep: return
    results = range(slice_count)
    rotation_axis = Rhino.Geometry.Vector3d(0, 1, 0)
    intersect_tol = scriptcontext.doc.ModelAbsoluteTolerance
    # Local function that does the intersection work. It is called once for
    # each angle in "slice_count" and needs to be thread-safe.
    def slice_brep_at_angle(i):
        try:
            # float(i) avoids Python 2 integer division, which would make
            # every angle evaluate to zero
            angle_rad = float(i) / slice_count * 2.0 * math.pi
            plane = Rhino.Geometry.Plane.WorldXY
            plane.Rotate(angle_rad, rotation_axis, Rhino.Geometry.Point3d.Origin)
            rc, crvs, pts = Rhino.Geometry.Intersect.Intersection.BrepPlane(brep, plane, intersect_tol)
            if rc: results[i] = crvs
            else: results[i] = None
        except:
            pass
    if parallel:
        tasks.Parallel.ForEach(xrange(slice_count), slice_brep_at_angle)
    else:
        for i in xrange(slice_count): slice_brep_at_angle(i)
    return results

if __name__ == "__main__":
    brep = rs.GetObject("Select Brep", rs.filter.polysurface)
    brep = rs.coercebrep(brep)
    if brep:
        # Make sure the Brep is not under the control of the document. This
        # is done so we know we have a quick-to-access local copy of the
        # brep and nothing else can interfere during the calculations.
        brep.EnsurePrivateCopy()
        # Run the function on a single thread
        start = time.time()
        slices1 = radial_contour(brep, False)
        end = time.time()
        print "serial = ", end - start
        # Run the function on multiple threads
        start = time.time()
        slices2 = radial_contour(brep, True)
        end = time.time()
        print "parallel = ", end - start
        if slices2:
            for curveset in slices2:
                if curveset:
                    for curve in curveset:
                        scriptcontext.doc.Objects.AddCurve(curve)
            scriptcontext.doc.Views.Redraw()
```

I have created a parallel contour function. With a large number of sections (200) performance improves by about 20%, while with a small number of sections (for example 20) it drops by about 20%. Since a small number of sections still produces results fast enough, I will stick with the code below. Maybe someone could suggest some changes to make it faster. Tested on Windows 10 with an 8-core i7.

```
public static Curve[] ParallelContourCurves(Brep brepToContour, Point3d contourStart, Point3d contourEnd, double interval)
{
    Vector3d normaldirection = new Vector3d(contourEnd - contourStart);
    double distance = normaldirection.Length;
    int numberofintersections = (int)(distance / Math.Abs(interval) + 1);
    Plane[] cuttingPlanes = new Plane[numberofintersections];
    Curve[] curves = null;
    Curve[][] tempcurves = new Curve[numberofintersections][];
    Point3d[][] temppoints = new Point3d[numberofintersections][];
    bool first = true;
    var rangePartitioner = Partitioner.Create(0, numberofintersections);
    Parallel.ForEach(rangePartitioner, new ParallelOptions { MaxDegreeOfParallelism = 2 }, (range, loopState) =>
    {
        for (int i = range.Item1; i < range.Item2; i++)
        {
            cuttingPlanes[i] = new Plane(contourStart, normaldirection);
            if (i == 0)
            {
                Rhino.Geometry.Intersect.Intersection.BrepPlane(brepToContour, cuttingPlanes[i], Tolerance, out tempcurves[i], out temppoints[i]);
            }
            else
            {
                cuttingPlanes[i].Transform(Transform.Translation(normaldirection / (numberofintersections - 1) * i));
                Rhino.Geometry.Intersect.Intersection.BrepPlane(brepToContour, cuttingPlanes[i], Tolerance, out tempcurves[i], out temppoints[i]);
            }
        }
    });
    foreach (Curve[] arrcurves in tempcurves)
    {
        if (!first)
        {
            curves = curves.Concat(arrcurves).ToArray();
        }
        else
        {
            curves = arrcurves;
            first = false;
        }
    }
    return curves;
}
```
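One small thing that might help a little: the merge step at the end calls `Concat(...).ToArray()` inside the loop, which copies the whole accumulated array on every iteration, so merging n slice results costs O(n²) copies; a single flattening pass (e.g. LINQ's `SelectMany`) is O(n). A minimal sketch of the idea in plain Python (the function names are mine, just to illustrate the two merge strategies, not RhinoCommon code):

```python
def merge_quadratic(per_slice_results):
    # Mirrors the foreach/Concat pattern above: the accumulated list is
    # fully copied on every iteration.
    curves = None
    for arr in per_slice_results:
        if arr is None:
            continue
        curves = arr if curves is None else curves + arr  # copy each time
    return curves or []

def merge_linear(per_slice_results):
    # One pass, same result; the C# counterpart would be roughly
    # tempcurves.Where(a => a != null).SelectMany(a => a).ToArray()
    flat = []
    for arr in per_slice_results:
        if arr:
            flat.extend(arr)
    return flat

results = [[1, 2], None, [3], [], [4, 5]]
assert merge_quadratic(results) == merge_linear(results) == [1, 2, 3, 4, 5]
```

For a few hundred sections the difference is probably negligible next to the intersection cost, but it is free to fix.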

I had hoped to get something more, but it seems this task is not very well suited to parallel computing.

Why are you setting max parallelism to 2?

Higher values resulted in longer computation times; it seems this is due to overhead. Also, without partitioning, the parallel computation is slower than the existing contour function on my computer.
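For what it's worth, that is exactly what `Partitioner.Create(0, n)` is addressing: it splits the index range into a few contiguous chunks so each worker processes a batch of slices and pays the scheduling overhead once per chunk instead of once per slice. A sketch of the partitioning idea in plain Python (my own helper, just to show what the .NET range partitioner does conceptually):

```python
def partition(start, stop, chunk_count):
    """Split [start, stop) into chunk_count contiguous (lo, hi) ranges."""
    total = stop - start
    base, extra = divmod(total, chunk_count)
    ranges, lo = [], start
    for k in range(chunk_count):
        # The first `extra` chunks get one additional index each
        hi = lo + base + (1 if k < extra else 0)
        ranges.append((lo, hi))
        lo = hi
    return ranges

chunks = partition(0, 200, 4)
assert chunks == [(0, 50), (50, 100), (100, 150), (150, 200)]
# Every index is covered exactly once:
assert [i for lo, hi in chunks for i in range(lo, hi)] == list(range(200))
```

When each work item is as heavy as a Brep/plane intersection, the per-item overhead matters less, which is why the benefit of partitioning shows up mainly with many thin slices.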

I had forgotten that CreateContourCurves also has an overload that takes a single plane, and I have found that it scales very well in parallel, unlike the plane intersection: over a 100% boost for 200 sections and a 30% boost for 20 sections. No partitioning is needed. Code below:

```
public static Curve[] ParallelBrepContourCurves(Brep brepToContour, Point3d contourStart, Point3d contourEnd, double interval)
{
    Vector3d normaldirection = new Vector3d(contourEnd - contourStart);
    double distance = normaldirection.Length;
    int numberofintersections = (int)(distance / Math.Abs(interval) + 1);
    Plane[] cuttingPlanes = new Plane[numberofintersections];
    Curve[] curves = null;
    Curve[][] tempcurves = new Curve[numberofintersections][];
    bool first = true;
    Parallel.For(0, numberofintersections, new ParallelOptions { MaxDegreeOfParallelism = 8 }, i =>
    {
        cuttingPlanes[i] = new Plane(contourStart, normaldirection);
        if (i == 0)
        {
            tempcurves[i] = Brep.CreateContourCurves(brepToContour, cuttingPlanes[i]);
        }
        else
        {
            cuttingPlanes[i].Transform(Transform.Translation(normaldirection / (numberofintersections - 1) * i));
            tempcurves[i] = Brep.CreateContourCurves(brepToContour, cuttingPlanes[i]);
        }
    });
    foreach (Curve[] arrcurves in tempcurves)
    {
        if (!first)
        {
            curves = curves.Concat(arrcurves).ToArray();
        }
        else
        {
            curves = arrcurves;
            first = false;
        }
    }
    return curves;
}
```

I have set the parallelism limit to 8 because I only have 8 cores on my PC, and I don't know how it will behave with, for example, 16 cores… Someone could test whether it is worth setting a limit here at all. The boost is present even on just 2 cores, so it would be a shame not to use it. However, I don't know how it behaves on a Mac.
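Rather than hard-coding 8, the limit could be derived from the machine itself; in .NET that would be `Environment.ProcessorCount`, so the same code adapts to 2-core and 16-core machines. A minimal sketch of the pattern in plain Python (`contour_at` is a made-up stand-in for the per-plane contour call, not RhinoCommon):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def contour_at(i):
    # Placeholder for the per-plane work (e.g. Brep.CreateContourCurves
    # at plane i); here it just returns a dummy value.
    return i * i

# Python counterpart of .NET's Environment.ProcessorCount; fall back to 1
# if the core count cannot be determined.
workers = os.cpu_count() or 1

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(contour_at, range(20)))

assert results == [i * i for i in range(20)]
```

Sizing the pool from the core count is usually a safe default; whether capping below the core count ever helps would still need measuring on the target machine.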

I am happy now