Performance: CPU utilization at 5 GHz

Using Rhino and Grasshopper 64-bit on Windows 10.

5 GHz 4770K, 16 GB of 2133 MHz memory, and a GTX 970 4 GB.

I have begun learning Grasshopper and watching tutorial videos on YouTube.

I have a pretty decent computer, so Grasshopper and Rhino should run reasonably well.
Watching tutorial videos on YouTube I see people work through definitions in Rhino with little to no lag when changing properties like number sliders. However, while I am following along, my computer is not keeping up. I know Rhino and Grasshopper don’t utilize multiple cores very well, but my CPU clock is quite high and should be burning through these simple definitions.
I am wondering if I have Grasshopper or Rhino set up wrong, because I never see either one use more than a few percent of CPU in Task Manager.

Anyone have any ideas? I have been searching around for tips for a while.

Edit: I have also tried Rhino 5 64-bit and the Rhino 6 beta with no change in Grasshopper performance. For comparison, Task Manager itself shows higher CPU utilization than either program.

Thanks

My experience has been similar to yours - what I have figured out is that at least part of the reason for the slow response to changes in the slider control is that the slider forces a complete recalculation for every intermediate value it passes through.

For instance, suppose you have a slider set up to go from 10.0 to 12.0 with 1-decimal-place changes. If your slider is set to 10.5 and you change it to 11.0, the entire GH script will be recalculated 5 times before you get control back. If your GH script is at all complex, this can take quite a while.

What I do to work around this is to edit the slider, change the value to what I want the new one to be, and then accept it. That results in only one recalculation. It’s not as nice as seeing the resulting geometry change in real time, but until Rhino becomes fully multi-threaded (and I have seen no indication that it ever will) this is about the best solution I’ve found.

Given the current state of microchip manufacturing technology and the limitations of physics and optics, I don’t think we will see any great improvement in processing speed unless and until our applications become fully multi-threaded.

It’s always a good idea to enable the Profiler widget from time to time (‘Display | Canvas Widgets | Profiler’) and see which components take the most time. Typical culprits, especially when there are many objects involved, include solid Booleans (‘Difference’, ‘Intersection’, ‘Union’, etc.), ‘Volume’, ‘Area’ (both often used for nothing more than obtaining center points), ‘SrfSplit’, morphing and others.

Some can be avoided (there may be much faster ways of obtaining center points than ‘Area’ or ‘Volume’, for example) and some can be disabled while adjusting parameters interactively. You can still see the effect of changes on curves and points with much quicker response time. Enable the slow components only when you have the settings you want.
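
For what it’s worth, here is a rough C# sketch (the kind of thing you could drop into a C# script component) of the shortcut I mean for center points. The class and method names are placeholders, and the bounding-box center is only an approximation of the true centroid:

```csharp
using Rhino.Geometry;

// Illustrative only: two ways to get a "center" point for a Brep.
// The mass-properties centroid (what 'Area'/'Volume' effectively compute)
// is exact but expensive; the bounding-box center is approximate but cheap,
// and often good enough for sorting, labelling or culling.
public static class CenterHelpers
{
  public static Point3d ExactCentroid(Brep brep)
  {
    VolumeMassProperties vmp = VolumeMassProperties.Compute(brep);
    // Compute can fail and return null; fall back to the box center.
    return vmp != null ? vmp.Centroid : brep.GetBoundingBox(true).Center;
  }

  public static Point3d QuickCenter(Brep brep)
  {
    // Much faster: no integration over the surface, just the box extremes.
    return brep.GetBoundingBox(true).Center;
  }
}
```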

Also, if you are working at “high resolution” (an ‘Image Sampler’, for example, or thousands of points, curves, shapes, etc.), it’s far more effective to perfect your algorithms at a much lower geometry count (“resolution”) and only increase the numbers once you are satisfied the code is working as intended.

Finally, you have to keep a close eye on the output of all your components. A mistaken graft here or there can produce a shocking amount of geometry, tens or hundreds of times more than expected.

There is an indication. In fact, in V6 some components are already multi-threaded.

What encouraging news - thanks. I was not aware of these changes. Here’s hoping more will be forthcoming soon.

I’ll be looking out for more dot decorations.

We just need samples that show components running slowly so we can see if multi-threading them improves their situation.

I think I’ll be able to provide you with some samples. I presume you’ll want the GH file with all data internalized, along with specs on my CPU & RAM.

Would a GH file that has an SDiff that takes 44.2 seconds on a 3 GHz Intel i7 CPU be helpful for you?

Sure; I don’t have any specific requirements for how to get definitions. Internalized probably makes things easier since it would be one file.

Here’s a piece of code that, when run with the slider settings indicated below, takes most of an hour to complete the ‘SUnion’ (and ‘SrfSplit’ and ‘SrfMorph’). Ah, but these components are not yet on the multi-threaded list.

  • ‘# Horiz.’ = 27
  • ‘# Vert.’ = 33
  • ‘H Size’ = 1.3
  • ‘V Size’ = 1.4
  • ‘depth’ = 0.1
  • ‘height’ = 2.0



SrfMorph_2017Nov14a.gh (23.4 KB)
(updated later to simpler version)

Joseph’s example is one he helped me develop. Here is a slightly different variation that uses SDiff as the final component.

Sdiff-50ms.gh (23.6 KB)

My system has an Intel i7-3770S CPU clocked @ 3.10 GHz with 16 GB RAM and an Nvidia GT750 video card driving 2 monitors. I’ve got quite a few GH files like this, so I can provide more if you want them.

To give credit where it’s due, I lifted key parts of that code from Lorenz Weiss, along with bits (‘Consec’) I added in this thread:

Wow! Thanks for all the information, guys. I’ll have to check out your suggestions.

Here is an example of a relatively simple, small bit of GH code that, through ‘Mirror’ and ‘Graft’ is generating 67,734 curves (sixty-seven thousand+!), yet only 22 are visible. All the rest are useless duplicates. Very slow response to changing any parameters.
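
If it helps, a quick way to catch this kind of explosion (besides hovering over the output wires) is a few lines of C# in a script component that report how many branches and items a tree really carries. The input name and typed tree below are assumptions for illustration:

```csharp
using System.Text;
using Grasshopper;             // DataTree<T>
using Grasshopper.Kernel.Data; // GH_Path
using Rhino.Geometry;

// Sketch: summarise a data tree so an accidental Graft/Mirror blow-up
// shows up as a huge item count instead of a mystery slowdown.
public static class TreeReport
{
  public static string Summarise(DataTree<Curve> curves)
  {
    var sb = new StringBuilder();
    sb.AppendLine(string.Format("Branches: {0}", curves.BranchCount));
    sb.AppendLine(string.Format("Total items: {0}", curves.DataCount));
    foreach (GH_Path path in curves.Paths)
      sb.AppendLine(string.Format("  {0} -> {1} items",
        path, curves.Branch(path).Count));
    return sb.ToString();
  }
}
```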

Note that a component may take a long time doing a single iteration. If a component is slow yet the tooltip claims that it ran once (or any small number of times), then it cannot be sped up by making it multi-threaded. In these cases the only option is to make the underlying operation in Rhino itself multi-threaded, and that may be very difficult and sometimes impossible.

Examples of this are solid Booleans involving many inputs. Each union or difference changes the underlying shape, potentially affecting all subsequent unions or differences; as such, it is nearly impossible to parallelize this.

That is an excellent point; thanks for making it clear, David. When I made the part I posted above I was thinking “wouldn’t it be nice if GH could separate out the processing for each addition/subtraction, and then do 8 at a time”, sort of like how Photoshop divides an image up into separate pieces and processes them in parallel.

But you are totally correct - in the Photoshop case the underlying object always stays the same, whereas with geometry it is very difficult or impossible to predict whether any one Boolean operation will make a change that affects a subsequent one.

I guess one way to tell might be to see if any of the addition or subtraction objects intersect, but of course doing that would add a whole new set of Boolean calculations, and that would be totally self-defeating.

3D sure is fun, isn’t it?

This is exactly the case with the sample that @Birk_Binnard posted. The SDiff component only ran once, which means we can’t multi-thread it at the component level.

There are potential optimisations even here; for example, if the bounding boxes of all the cutting shapes are disjoint then perhaps part of the process can be parallelised. However, it might also make sense to move the parallel code even deeper than the solid boolean method, for example into the surface-surface intersection code. I do not know to what degree really low-level stuff like this has been made multi-thread aware in Rhino 6, but I do know the maths boffins are working on making these sorts of improvements.
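
To make the bounding-box idea concrete, here is a small C# sketch (purely illustrative, not how Rhino actually implements it) that checks whether the cutters’ boxes are pairwise disjoint; if they are, differencing them from the target could in principle be split across threads:

```csharp
using System.Collections.Generic;
using Rhino.Geometry;

// Sketch: cheap pre-check on the cutting shapes of a solid difference.
// BoundingBox.Intersection returns an invalid box when two boxes do not
// overlap at all, so a valid result means a potential clash.
public static class CutterCheck
{
  public static bool CuttersAreDisjoint(IList<Brep> cutters)
  {
    var boxes = new List<BoundingBox>();
    foreach (Brep cutter in cutters)
      boxes.Add(cutter.GetBoundingBox(true));

    for (int i = 0; i < boxes.Count; i++)
      for (int j = i + 1; j < boxes.Count; j++)
        if (BoundingBox.Intersection(boxes[i], boxes[j]).IsValid)
          return false; // these two cutters might interact

    return true; // every cutter occupies its own region of space
  }
}
```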

Though it is true we cannot multi-thread the solid boolean methods themselves, we can spread branches of inputs across threads (thank you, data trees). For example, when perforating individual panels, each with its own set of shapes to be differenced, you can run the boolean process for each panel on its own thread. It’s not truly multi-threading the boolean operation, but it still runs much faster for the branched use case. In fact, I imagine this could help any component that could not otherwise be multi-threaded.
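
Roughly like this, as a C# sketch - the input layout, the names and the tolerance parameter are assumptions for illustration, not the exact code from my file:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Rhino.Geometry;

// Sketch: one solid difference per panel, with the panels spread across
// threads. Each individual boolean is still single-threaded; the gain
// comes from processing independent panels at the same time.
public static class BranchedBoolean
{
  public static Brep[][] DifferencePanels(
    IList<Brep> panels,
    IList<List<Brep>> cuttersPerPanel,
    double tolerance)                    // e.g. the document tolerance
  {
    var results = new Brep[panels.Count][];

    Parallel.For(0, panels.Count, i =>
    {
      results[i] = Brep.CreateBooleanDifference(
        new[] { panels[i] },             // the panel for this branch
        cuttersPerPanel[i],              // its own cutting shapes
        tolerance);                      // null result means the boolean failed
    });

    return results;
  }
}
```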

Here is a generic swiss-cheese example with 64 slices:

I’ve attached the GH file for this example here:
RecursiveBooleanExample.gh (15.3 KB)

A side note: the boolean difference in the definition is done recursively so that any shapes that fail to difference can be discarded to a list for troubleshooting, without dumping the whole process (sphere in red). This takes a little longer than differencing the whole list at once, but the overhead is negligible and the troubleshooting benefit has proven useful for me.
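
In case it isn’t clear from the file, the idea is roughly the following - a C# sketch of the same approach, written iteratively rather than literally recursively; the names and tolerance parameter are placeholders:

```csharp
using System.Collections.Generic;
using Rhino.Geometry;

// Sketch: difference the cutters one at a time so a single bad cutter
// doesn't sink the whole operation. Failed cutters are collected for
// troubleshooting instead of aborting.
public static class SafeDifference
{
  public static Brep DifferenceCollectingFailures(
    Brep solid, IEnumerable<Brep> cutters, List<Brep> failed, double tolerance)
  {
    Brep current = solid;
    foreach (Brep cutter in cutters)
    {
      Brep[] result = Brep.CreateBooleanDifference(
        new[] { current }, new[] { cutter }, tolerance);

      if (result == null || result.Length == 0)
        failed.Add(cutter);   // set this cutter aside for inspection
      else
        current = result[0];  // carry on with the updated solid
                              // (taking the first piece for simplicity)
    }
    return current;
  }
}
```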


So I had this idea: if Rhino and Grasshopper are (mostly) single-threaded programs and Hyper-Threading “splits” each core in half, does that mean they only utilise half of a core? Well… I tested it: with Hyper-Threading ON I had around 12-15% CPU utilisation (i7 4770K, quad core) while performing some calculations in Rhino or Grasshopper. With Hyper-Threading OFF I get around 25-27% CPU utilisation, and it feels faster and smoother. (One busy thread out of 8 logical cores is about 12.5% in Task Manager; out of 4 physical cores it’s about 25%.) Am I imagining it, or do you think my reasoning has some logic? (I’m not a computer scientist.)

Hyperthreading is pretty much a scam as I understand it.