How to avoid recomputing the whole definition and only compute newly added parts

I am writing a definition which will eventually be hosted with Shapediver.
The definition will have the scope to build one parametric model, then add another, and another, and so on.
Think of building a set of textile planar panels: once one is finished, you can add another, and so on.
Is there a way to stop the definition from completely recomputing, including the parts that are already finalised?

You can control parts of your definition with a toggle and either the native Stream Filter or Heteroptera’s Freeze/Gate components. Set toggles for each step of the production logic and enable them according to the sequence.
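
If you prefer scripting the gate yourself, the same idea can be sketched in a GhPython script component. This is a minimal sketch only: the input names `x` (data) and `gate` (Boolean toggle) and the output `a` are assumed component parameter names, not anything ShapeDiver-specific.

```python
# Minimal gate in a GhPython script component: pass the data input `x`
# through only while the Boolean toggle `gate` is True; otherwise output
# nothing, so downstream components have no data to work on.
a = x if gate else None
```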

Thank you Pavol. I will give this a go

I have probably not explained the issue correctly.
I have stream gates in my definition to turn downstream computation on and off.
The issue is that when I pass data through a gate to enable further computation, the whole definition recomputes, including the upstream data for which there is no need to recompute.
I want to optimise the compute time and need a way to compute only downstream from an opened gate, adding that to what has already been processed.

This is impossible within one definition, I’m afraid. Each new computation can be processed by a different worker at a different location, so nothing is allowed to be stored within the Grasshopper file; this keeps a one-to-one relationship between inputs and outputs. Learn more about the ShapeDiver infrastructure in this and this article.

A way to achieve this right now is to split your definition into separate definitions with partial outputs (for example mesh exports) and inputs (for example mesh imports) and connect them at the API level. The good news is that chaining ShapeDiver models will become a lot easier with new features coming next year.

Thank you. I look forward to the release of the new components next year.
I have written the definition as you suggest, as separate definitions: after item 1 is built, you have the option to build a second item added to the total output, and after items 1 and 2 are built, the option to add item 3.
It just seemed wasteful of processing time to have to recompute items 1 and 2 when adding item 3, since item 3 has no impact on items already built.

My apologies, I didn’t get your use case right the first time. What I suggested works well when you need to separate out, for example, nesting or the export of production meshes to improve model performance. As mentioned above, you can chain multiple definitions at the API level, but there is a brighter future ahead of us. Stay tuned.

Thanks Pavol.
I struggled to explain what I was trying to achieve.
I wanted to make sure the definition was as efficient as possible.
We have a separate definition being implemented on your professional hosting at present, and I did what you suggest: nesting is only performed when required, at the point of purchase or when a downloadable PDF is requested. At the point of purchase, the nesting is performed and the file is emailed through to us as a production file.

Hi @pavol,
Could you kindly provide an update on when these new features related to chaining definitions are to be released this year?
Thanks, Simon

The solution Pavol mentioned is to break down the algorithm into several definitions and chain them using the API. In other words, request a computation of the first one, get the outputs, use them as inputs of the next one, and so on. This is already possible using the API; a rough sketch of the flow is shown below.
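
Here is a minimal sketch of that chaining loop in Python. The `compute_model` helper is a hypothetical stand-in for the real Geometry Backend API calls (opening a session from a ticket, running a customization, collecting export URLs), and the `upstreamMesh` parameter and `meshExportUrl` output names are made up for illustration; refer to the official API documentation for the actual endpoints.

```python
# Hypothetical sketch of chaining ShapeDiver definitions: each model's export
# feeds the next model's import, so earlier items are never recomputed.
from typing import Any, Dict, List, Optional


def compute_model(ticket: str, parameters: Dict[str, Any]) -> Dict[str, Any]:
    """Placeholder for the real Geometry Backend API calls: open a session
    for `ticket`, request a customization with `parameters`, wait for the
    result, and return the output assets (e.g. a mesh export URL)."""
    return {"meshExportUrl": "https://example.com/exports/{0}.obj".format(ticket)}


def build_items(tickets: List[str]) -> List[Dict[str, Any]]:
    """Compute a chain of definitions, passing each finished item's mesh
    export into the next definition as an import parameter."""
    results: List[Dict[str, Any]] = []
    previous_export: Optional[str] = None  # nothing upstream of item 1
    for ticket in tickets:
        params: Dict[str, Any] = {}
        if previous_export is not None:
            # "upstreamMesh" is a made-up name for the mesh import parameter
            params["upstreamMesh"] = previous_export
        result = compute_model(ticket, params)
        previous_export = result["meshExportUrl"]  # made-up output name
        results.append(result)
    return results


# Adding item 3 only triggers one new computation; items 1 and 2 stay as-is.
items = build_items(["ticket-item-1", "ticket-item-2", "ticket-item-3"])
```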

As a side note, we are exploring the best way to make this a built-in feature, but there are still a lot of open questions, so we cannot give a timeline for this specific update.

Yes, I’m aware of the method to achieve this right now, so my question was referring specifically to the built-in feature mentioned in the side note above.
