V6 Feature: Multi-threaded GH Components

(Steve Baer) split this topic #29

9 posts were split to a new topic: GH Feature Request: Parallel Groups

(Steve Baer) #30

Update (Sept 5, 2017)

The Sept 5, 2017 WIP now includes an additional 10 components that use multiple threads to solve. Here are the additional components that could use some “kicking of the tires” for multi-threaded solving.

  • Mesh | Plane
  • Brep | Line
  • Brep | Curve
  • Brep | Brep
  • Brep | Plane
  • Curve | Curve
  • Curve | Curves
  • Point in Brep
  • Point in Breps
  • Curve Self-Intersection
(Steve Baer) pinned #31
(NARUTO) #32

Hi @stevebaer,
When I use the Contour component with multithreading enabled, the results are not the same as when it runs single-threaded.

Multi-threaded test.gh (7.3 KB)
Multi-threaded test.3dm (26.3 KB)
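One possible cause of such MT/ST mismatches (a general observation, not a diagnosis of this Contour case) is result ordering: a naive parallel implementation that collects results as workers finish can return them in a different order than the sequential loop. A generic Python sketch, not RhinoCommon; `contour_slice` is a made-up stand-in for a per-plane computation:

```python
import concurrent.futures

def contour_slice(z):
    # Hypothetical stand-in for a per-plane contour computation.
    return z * z

levels = list(range(8))

# Sequential reference: output follows the input order of the levels.
sequential = [contour_slice(z) for z in levels]

with concurrent.futures.ThreadPoolExecutor() as pool:
    # executor.map preserves input order, so it matches the sequential run.
    ordered = list(pool.map(contour_slice, levels))

    # Collecting results as futures complete gives no ordering guarantee,
    # which is one way a multi-threaded run can appear to "differ".
    futures = [pool.submit(contour_slice, z) for z in levels]
    unordered = [f.result() for f in concurrent.futures.as_completed(futures)]
```

If the values themselves differ (not just their order), the cause lies elsewhere, e.g. shared mutable state between workers.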

(Steve Baer) #34

Thanks Naruto, I’m investigating this

(Steve Baer) #36

This was actually available in last week’s WIP, but I forgot to mention it. David and I converted another set of components to be able to use multiple threads for solving. The current list is:

  1. Curve | Plane new
  2. Project Curve new
  3. Pull Curve new
  4. Split with Brep new
  5. Shatter new
  6. Split with Breps new
  7. Trim with Brep new
  8. Trim with Breps new
  9. Area new
  10. Area Moments new
  11. Volume new
  12. Volume Moments new
  13. Brep Closest Point new
  14. Mesh | Plane
  15. Brep | Line
  16. Brep | Curve
  17. Brep | Brep
  18. Brep | Plane
  19. Curve | Curve
  20. Curve | Curves
  21. Point in Brep
  22. Point in Breps
  23. Curve Self-Intersection
  24. Contour
  25. Dash Pattern
  26. Divide Curve
  27. Boundary Surface

I’m having some problems trying to use the Parallel option for the Brep | Line and Brep | Curve components … basically Rhino ‘stops working’ when using either of them, while running single-threaded works. The calculation is ~6.4M lines intersecting a Brep … although I’d like to actually check the 16 individual faces of the Brep, but first things first …

Related: is there any chance Mesh | Ray could be next on the list of threaded components? In the same script I’m currently testing 2818 rays from 4771 points for intersection with a (rather large) mesh; the ~13.4M tests take ~4 minutes, so I’d like to get this multi-threaded.


Has this list expanded?

(Steve Baer) #39

No; is there a component that you need us to investigate?


Do you have parallel faces on the Brep?

// Rolf


@DavidRutten would it be possible to parallelize the Surface Geodesic component? This is a computationally heavy component which I suspect could be easily parallelized. While you’re at it, I’m wondering if you can expose the tolerance value in the component?

Surface Split is another component which could be parallelized and is often computationally expensive.

(David Rutten) #42

Filed under RH-43240 and RH-43241.


Hi … sorry, my email notifications are still not working.

I don’t have the case in front of me, but I suspect there are parallel faces, and even if not, it is likely that other models would include parallel faces.

(It is building-envelope analysis, so there is a pretty good chance the models are ‘box-like’.)


Is it possible to parallelize the mesh collision component?


Have you considered an “Auto” mode for multithreading? I have found that when dealing with a small number of geometries, many of the MT operations are considerably slower than their single-threaded counterparts (e.g., 254ms vs 15ms) because of the set-up time for the threads themselves (as outlined in previous discussions/posts w/ @stevebaer).

In an ideal world, a scalable definition would use single-threading with a small number of input objects and multithreading with a large number, and there is likely a set of thresholds that represents the breakeven point between the two. I understand that this breakeven point may differ considerably between components, and may be further complicated by the fact that some multithreaded components accept different kinds of input types.

That said, I could see a one-time regression analysis done on this problem in a sandbox scenario to determine a set of thresholds based on an accounting of the input geometry/data, prebaked by McNeel after a set of tests. The solver would then simply apply ST vs MT based on those thresholds, and therefore scale appropriately as the amount of input data increases or decreases.

Right now I am considering implementing a hack that essentially routes data into an ST version of the component or an MT version based on my own real-world tests… but boy, it would be nice if GH could handle that.
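A minimal sketch of that routing idea, with a hypothetical `MT_THRESHOLD` and toy `solve_st`/`solve_mt` stand-ins for the two solver paths (none of these names come from Grasshopper itself; the real threshold would have to be found by measurement per component):

```python
MT_THRESHOLD = 500  # hypothetical breakeven point, found by measurement

def solve_st(items):
    # Single-threaded path: no thread set-up cost.
    return [x * 2 for x in items]

def solve_mt(items):
    # Stand-in for the multi-threaded path (thread set-up cost omitted here).
    return [x * 2 for x in items]

def solve_auto(items):
    # Route small inputs to the ST path and large inputs to the MT path,
    # as the post suggests.
    return solve_mt(items) if len(items) >= MT_THRESHOLD else solve_st(items)
```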




After testing some GH0.9.0076 => 1.0 migrations, we have found that components that are now MT-capable have multithreading enabled by default. This is becoming a major problem for us.

As this list of enabled components grows, the challenges of updating old definitions grow as well. I would like to make a case for having MT set to OFF by default, or at the very least on any components that were written with an older version of GH (and therefore were not explicitly enabled).


Many of our definitions actually do a significant amount of work on smaller amounts of geometry, and due to the overhead of setting up the threads (which is never made up in performance gains), MT operations are actually slowing performance down tremendously (adding 2s to a 300ms definition, for instance). This means we have to search our definitions for MT components, test them in various scenarios, and then decide whether MT is a benefit or a hindrance.

Moreover, as it stands, with MT ON by default, we will be subject to sudden slowdowns whenever new components are MT-enabled, and the end users of our GH applications will experience UX issues (and, more importantly, unreliable and inconsistent UX over time and across GH versions) until we are able to identify the new MT components, track them down, evaluate them, and push a new version to our users.

This is a real maintenance issue moving forward. Ultimately, if some kind of Auto mode is implemented to make a best-fit execution scenario, this problem will be solved, but in the meantime it seems that setting a default to OFF would be a good solution.


(Steve Baer) #47

I wonder how hard the analysis would be. It may be possible to set up something that runs the solution four times with the MT feature on and four times with it off and then chooses based on results from that.
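That empirical check could be sketched roughly like this; `solve_st`/`solve_mt` are hypothetical stand-ins for the two solver paths, and the busy loop merely simulates thread set-up overhead:

```python
import timeit

def choose_mode(solve_st, solve_mt, sample_input, trials=4):
    """Time both solver paths a few times and return the faster one."""
    t_st = min(timeit.repeat(lambda: solve_st(sample_input),
                             repeat=trials, number=1))
    t_mt = min(timeit.repeat(lambda: solve_mt(sample_input),
                             repeat=trials, number=1))
    return solve_mt if t_mt < t_st else solve_st

# Toy stand-ins: the "MT" path pays a fixed set-up cost per call.
def solve_st(items):
    return [x + 1 for x in items]

def solve_mt(items):
    for _ in range(200000):  # simulated thread set-up overhead
        pass
    return [x + 1 for x in items]

# With a small sample input, the set-up cost dominates, so the
# single-threaded path should be chosen.
best = choose_mode(solve_st, solve_mt, list(range(100)))
```

The min-of-several-trials measurement reduces noise from background work; the chosen mode could then be cached per component until the input size changes substantially.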

I don’t see any problem with defaulting to off (or even making the default an option). @DavidRutten any opinions here?


Awesome, thank you for considering it.

Another two cents: default as an option would be great, but if it could be OFF by default, that would reduce the amount of tinkering a large organization has to do when deploying Rhino at scale. I don’t mind if advanced users want to turn it on, as it’ll be easier to explain to them the ramifications of their decision and why our standard tools are more sluggish than their amateur colleagues’… :slight_smile:


(David Rutten) #49

No opinions. I can add an option for this into the Grasshopper preferences Solver category if you want, or we can just default to off the hard way.


Defaulting to off should be the standard approach. That simplifies all aspects of it.

Also, given that parallel performance nearly always depends on factors that aren’t obvious until testing, OFF should really be the default.

Usually only a conscious design decision (often after some testing) motivates going parallel.

// Rolf