Splitting GH Def into two parts to avoid computation time limit

Hi @pavol,
We have a Grasshopper definition consisting of a configuration part and a second part that generates fabrication data. It’s computationally heavy. The client is still on a Pro Plan and can’t afford to upgrade to the new Business plan, so we really need to optimize computation time in order to stay below the 10s limit.

As described in your blog post “Compute In The Cloud With ShapeDiver!”, the definition could be split so that the second part runs asynchronously in the background, since the user is not meant to interact with it.

Assuming the same computation time limit applies to the background part, each part could then take up to 10s.

Alternatively, we could keep the two parts in one definition but separate them with a gate that only feeds data into the second part when the user hits the export data button. Do we hit the computation limit if the first part takes 9 seconds and, after the toggle is switched, the second part takes another 9s? Does the sum of the two steps count towards computation time? Is the first part triggered again after switching the toggle?

Thanks for your help!

Hi @Simon_Vorhammer, let me clarify how the computation time limit works: any computation carried out for a Grasshopper definition may take at most the number of seconds allowed by the respective account’s plan. This limit applies regardless of the client application from which the computation is requested, i.e. it’s the same whether parameter values are changed in our web viewer or an export request is sent from a backend application.
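
For illustration, here is a minimal Python sketch of timing an export request sent from a backend application. The endpoint, token and payload below are placeholders, not our actual API; please refer to the API documentation for the real calls, and keep in mind that only the server-side computation time counts towards the limit, so a client-side measurement like this also includes network overhead.

```python
import time
import requests

EXPORT_ENDPOINT = "https://example.com/api/v1/export"   # placeholder URL, not the real API
API_TOKEN = "YOUR_BACKEND_TOKEN"                         # placeholder token
COMPUTATION_LIMIT_S = 10.0                               # limit discussed above (Pro plan)

payload = {"parameters": {"width": 1200, "height": 800}}  # hypothetical parameter set

start = time.monotonic()
response = requests.post(
    EXPORT_ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
elapsed = time.monotonic() - start
response.raise_for_status()

# Note: this measures round-trip time including network overhead, so it is
# only an upper bound on the server-side computation time that counts.
print(f"export request finished in {elapsed:.1f}s (plan limit: {COMPUTATION_LIMIT_S:.0f}s)")
```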

Now let’s touch upon your second question: I understand you have to solve two different tasks:

  • driving an e-Commerce configurator which outputs visualisation geometry for the end user
  • optionally generating fabrication data based on the choices the end user made

Whether you split the Grasshopper definition into two to handle these tasks, or use a single one, depends on your specific case and application. A combined model naturally comes with some overhead, so if you are close to the computation time limit it might be wise to split it. On the other hand, some of your logic might have to appear in both definitions and would therefore have to be maintained in two places.

I actually have a question and/or suggestion that is closely related to that.

There are two very distinct use cases for ShapeDiver that I see happening all the time. One is the public-facing front-end API, where you might have a configurator and a 3D viewer; the other is creating production data, which is usually tied to the backend API.

The huge difference between the two is that the first needs to be highly scalable, because many people might be browsing your web shop, while the other is usually triggered by ONE system, typically some sort of ordering system, and can be made linear quite easily. That means that even if you receive a lot of orders, it’s usually totally fine not to generate the production data simultaneously; it can be done serially, one order after the other.
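
To make that concrete, here is a minimal Python sketch of the idea: production-data jobs go into a queue handled by a single worker, so orders are processed strictly one after the other. The function name and order IDs are made up and stand in for whatever actually generates the files (e.g. a backend API call or a local Grasshopper process).

```python
import queue
import threading

def generate_production_data(order_id: str) -> None:
    # Placeholder: call the backend API / local Grasshopper process here.
    print(f"generating production data for order {order_id}")

jobs: "queue.Queue[str]" = queue.Ueue() if False else queue.Queue()

def worker() -> None:
    while True:
        order_id = jobs.get()        # blocks until an order arrives
        try:
            generate_production_data(order_id)
        finally:
            jobs.task_done()         # mark the order as processed

threading.Thread(target=worker, daemon=True).start()

# The ordering system just enqueues orders; they are never run in parallel.
for order in ["A-1001", "A-1002", "A-1003"]:
    jobs.put(order)

jobs.join()                          # wait until every queued order is done
```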

Could I therefore propose the following: how about ShapeDiver definitions that get high performance but have a parallel computation limit of 1? That would cover the cases where the backend API work is not time critical, but you would rather have the raw performance. For example, our patch that creates the production files runs in 450ms on my machine with an i7-9700k, but takes just over 11 seconds in the frontend viewer. Sure, there is some additional overhead from sending the parameters and then receiving the mesh, etc., but it’s still on the order of 10-20 times slower.

Upgrading to a Business account to get 30 seconds of compute time for more than twice the money seems like the wrong approach if you only need that time for backend API work that does not need to run in parallel.

We are actually building a sort of local equivalent to ShapeDiver for exactly this purpose. If it is not super time critical, which production files usually are not, it’s fine not to calculate it in the cloud, but to do it on a local machine that receives instructions via, for example, a Google Sheet, saves the result to a mounted Amazon S3 share, and writes the URL back into the sheet. Connect everything with Integromat and you have a much simpler (but admittedly less reliable) solution.
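
For what it’s worth, a stripped-down version of that loop could look roughly like the sketch below. The sheet name, column layout and bucket are assumptions for illustration only, and gspread/boto3 are just one possible way to wire it up (we actually glue it together with Integromat as described above).

```python
# Rough sketch of the local worker: poll a Google Sheet for new jobs, compute
# the production file locally, upload it to S3 and write the URL back.
import time
import gspread          # Google Sheets client
import boto3            # AWS S3 client

sheet = gspread.service_account(filename="credentials.json").open("orders").sheet1
s3 = boto3.client("s3")
BUCKET = "my-production-files"               # placeholder bucket name

def make_production_file(params: dict) -> str:
    # Placeholder: run the local Grasshopper/Rhino job here and return
    # the path of the generated production file.
    return f"/tmp/{params['order_id']}.zip"

while True:
    rows = sheet.get_all_records()           # header row defines the keys
    for i, row in enumerate(rows, start=2):  # row 1 is the header
        if row.get("result_url"):            # already processed
            continue
        path = make_production_file(row)
        key = f"orders/{row['order_id']}.zip"
        s3.upload_file(path, BUCKET, key)
        url = f"https://{BUCKET}.s3.amazonaws.com/{key}"
        sheet.update_cell(i, 5, url)         # assumes column 5 holds result_url
    time.sleep(30)                           # simple polling interval
```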

Now that more and more people seem to be interested in using the backend API, it would be great to see some improvements in this area.

What IS the roadmap for ShapeDiver anyway? I have not noticed any updates in months.


Thank you for your suggestions; we will consider the best way to make the backend more powerful. Your explanation of the use cases helps us understand the details of your workflow much better.

Don’t worry, ShapeDiver is moving forward. We have been quietly working on major upgrades, some of which have already been rolled out, and a lot more is coming soon, including a new platform with improved UX, a better Grasshopper plugin, desktop clients, a faster geometry backend…

Your model shouldn’t run that much slower on ShapeDiver, and there could be a number of reasons why it doesn’t perform as you would expect. Try to narrow down the issue and post a minimal version of your definition. Our video tutorials can give you ideas on what to improve as well. We have helped a lot of our clients with their models; if you are interested in private support, our sales team can give you more details, just let them know via the contact form at www.shapediver.com.

Hi @snabela,
Thanks for the reply. That all makes sense. However, it doesn’t answer my question and I just realized I did a really bad job at being precise. Basically, it boils down to this question:
Does a parameter change trigger a recomputation of the entire definition or only the elements downstream?

Hi @Simon_Vorhammer, many thanks for your feedback, and for restating your question. The answer is this:

There is a whole farm of Grasshopper workers processing the incoming requests, and our dispatching algorithm distributes the requests in an optimal way. There is never a one-to-one relation between a client and a worker. You might be lucky and your request might end up at the exact same worker as a previous request, in which case only the components influenced by the changed parameter values have to be recomputed. Our dispatching algorithm makes sure this happens as often as possible, but you can’t count on it.


Hi @snabela,

Thank you very much for the explanation. That clears it up for me. Should have asked this question way earlier :wink:

If I search “Data Dam and Shapediver” I get no results so I’ll ask this here…

Can you use Data Dams in ShapeDiver?

The Data Dam component is not allowed; see the details on forbidden Grasshopper components below.

Thank you.

It’s a shame about Data Dam… If the model requires a lot of computation time to generate manufacturing data that is of no interest to the user, then the whole model is slowed down. But I guess in most cases the model should just be split, and the manufacturing data can be calculated offline based on the parameters output from the web part of the model.

In cases where the “product” is a DXF file export that the user can either use themselves or send to a third party for manufacture, it would be useful to split the model into a “Configure” part and a “Generate Manufacturing Data” part to speed up the configuration.
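
As a made-up illustration of that split, the offline “Generate Manufacturing Data” step could be as simple as reading the parameters saved by the web configurator and writing a DXF from them; the parameter names and the rectangle outline below are placeholders for the real fabrication logic.

```python
import json
import ezdxf

with open("configuration.json") as f:       # parameters exported by the web part
    params = json.load(f)

width = params["width_mm"]                  # hypothetical parameter names
height = params["height_mm"]

doc = ezdxf.new()
msp = doc.modelspace()
# simple rectangular cutting outline; the closing point repeats the start
msp.add_lwpolyline([(0, 0), (width, 0), (width, height), (0, height), (0, 0)])
doc.saveas("cut_outline.dxf")               # file handed on to the manufacturer
```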

It looks like some of this is covered in the Pro Version Export DXF feature.