Model could not be confirmed because it took too long to compute!

Howsit guys,

After waiting years for the parameter order problem to be fixed, I actually started down the route of writing my own Rhino plugin based on my kite design code. I've spent 6 months or so getting that to a level of usefulness.

This morning I decided to have a look at my ShapeDiver online design tool and give it a much-needed update.

My local def loads on my 6-year-old i7 laptop in ~10 sec and recomputes in about 3.5 sec:

[Image 20220219002]

However, uploading to either the ‘old’ or ‘new’ platform hits me with the error in the thread title: “Model could not be confirmed because it took too long to compute!”

I’m really at the “WTF am I meant to do here” stage - my code is 10x faster than the SD ‘limits’, yet it fails on upload. It also fails without giving ANY idea of where to look among the just over 3,000 components in this GH file.

I really need more of a reply than a link to the SD optimization blog post, guys - what can you actually help me with?

Cheers

DK

OK - so this is something I’d never looked at before:

SD Tolerance component

My local Rhino files have for years been set to mm and a document tolerance of 0.1 mm.

There are a bunch of Boolean and Unroll operations in my def that have run times somewhat inversely proportional to the document tolerance setting.

The SD default tolerance is likely 0.001 and could easily have blown the run time of my calculations out past 30 sec.
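
For anyone else hitting this, a quick way to see what tolerance the solver is actually working with is a small GhPython component. This is just a sketch of the idea - locally it reports my 0.1 mm setting, while on ShapeDiver the platform default (or the SD Tolerance component) is what decides the value:

```python
# Minimal GhPython sketch: report the tolerances the active Rhino document is
# using locally. On ShapeDiver the SD Tolerance component / platform default
# sets the value the Boolean and Unroll operations will work with.
import Rhino

doc = Rhino.RhinoDoc.ActiveDoc
print("Model units:        {}".format(doc.ModelUnitSystem))
print("Absolute tolerance: {}".format(doc.ModelAbsoluteTolerance))
print("Angle tolerance:    {} deg".format(doc.ModelAngleToleranceDegrees))
```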

I'm very surprised this is not made clearer in the discussions on def run time.

I honestly can’t believe it’s taken me two years to work this out for myself - it’s head-in-hands obvious now that I think about it.

Cheers

DK

OK - what a difference 24hrs can make.

Even with the tolerance set to something that works better for my def, I was still seeing a server timeout on one very specific set of parameters that a client needed.

Managed to track that one down to a Boolean split that was cutting a little too close to an edge - extending some lines helped solve that.
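
For reference, the fix was conceptually just giving the cutter curves a little overshoot before the split. A rough GhPython/RhinoCommon sketch of the idea (the ‘curves’ input and the 5 mm overshoot are placeholders, not my actual values):

```python
# Rough sketch: extend cutter curves slightly past the edge they split, so the
# Boolean split is not operating right at the tolerance limit.
import Rhino.Geometry as rg

overshoot = 5.0  # model units (mm in my files) - placeholder value
extended = [c.Extend(rg.CurveEnd.Both, overshoot, rg.CurveExtensionStyle.Smooth)
            for c in curves]  # 'curves' is a placeholder list input
```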

Next up, I was projecting some curves onto a Brep; that ran OK locally, but I suspected it was blowing out the SD environment. It was pure decoration in the display - deleted and not going back.

After those I started looking at every single item in the Bottleneck Navigator:

[Image 20220220001]

In particular, I was using Area components to find points for sort operations. This is a BAD idea - the Area component has a long run time. I replaced a bunch of these with Deconstruct Brep etc. and found a huge speed increase.
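
To give a feel for the swap (just an illustration in GhPython, not the exact components I replaced): sorting Breps by a bounding-box centre, or by a vertex from Deconstruct Brep, avoids running a full area/mass-properties calculation for every item.

```python
# Illustration only: sorting a list of Breps along X without the Area component.
# AreaMassProperties.Compute does a full mass-properties integration per Brep,
# while a bounding-box centre is nearly free and is usually enough for sorting.
import Rhino.Geometry as rg

def slow_key(brep):
    return rg.AreaMassProperties.Compute(brep).Centroid.X  # what Area gives you

def fast_key(brep):
    return brep.GetBoundingBox(True).Center.X  # cheap stand-in for sorting

sorted_breps = sorted(breps, key=fast_key)  # 'breps' is a placeholder input
```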

My local run time has dropped from ~6 seconds to under 2 seconds (so quick that Grasshopper no longer reports it!).

It's still very strange how local run times of 6 seconds can blow out to 30 seconds on SD, but OMG - my whole def is so much better now!

Sorry for the rant yesterday guys - it was just from frustration.

Honestly can’t believe I had so much performance improvement available.

Cheers

DK

Glad to hear that you went down the rabbit hole of optimization and managed to get much better performance from your definition. Regardless of the performance of the ShapeDiver system and discrepancies you might experience, this is always going to help. That being said, you have pointed out some valid concerns regarding the system:

  • Tolerance settings: I agree that we do not mention this detail nearly enough and that the documentation should be improved on this point. It can really make a difference.
  • Remaining discrepancy between local and server computation time: an important point that has maybe not been discussed much in the past is the size of the definition itself (in terms of the number of components). Locally, the number of components can increase the loading time a bit but has almost no effect on further computations; it is different on our servers, because all components need to be parsed at some point, for multiple reasons (allowed and disallowed components, preview on/off, scripts, etc.). This parsing is more or less proportional to the size of the definition, regardless of how fast it runs locally. We are now investigating this impact further, along with cases where it can be improved. While it is difficult to measure, I can confidently recommend strategies to reduce the number of components where possible, by means of clusters or, ideally, smart manipulation of data trees, which can prevent the multiplication of instances of the same component (see the small sketch below).
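
As a small illustration of that last point (a simplified sketch, not a prescription): one script, or one component, iterating over the branches of a data tree can replace many copies of the same component wired to separate lists.

```python
# Simplified GhPython illustration: one script instance processing every branch
# of a data tree, instead of duplicating the same downstream logic per list.
import Grasshopper
from Grasshopper.Kernel.Data import GH_Path

tree = Grasshopper.DataTree[float]()
for i in range(3):                              # three branches of sample data
    tree.AddRange([10.0 * i + j for j in range(4)], GH_Path(i))

for path in tree.Paths:                         # one loop instead of N components
    branch = tree.Branch(path)
    print("{} -> {}".format(path, sum(branch) / branch.Count))
```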

I hope those comments clear up some details. Our parsing and dispatching algorithms have come a long way already, but we are well aware that there is still a lot of room for improvement. As a matter of fact, we consider this to be one of the most valuable parts of the ShapeDiver infrastructure: rhino.compute has made it much easier for developers to launch a cloud system running Rhino and Grasshopper, but the performance and scalability of such a system remain a considerable challenge, one we have been developing solutions for over many years now.

Hi @mathieu1

Thank you very much for those details.

That makes a lot of sense and does clarify what I was seeing.

Would it perhaps be possible to create a component that estimates the SD run time for a def? That would be a useful development and debugging tool, especially for more ambitious pieces of code like mine.

Cheers

DK

Later this year we will roll out our IDE for Grasshopper, which will make it even easier to upload models (directly from Grasshopper) and will also provide useful feedback about computation time, etc.
