Varying upload/compute times

I’m experiencing an issue where a file that previously uploaded successfully will be rejected for taking too long if I try to upload it again. I find that if I repeatedly upload it, at some point it computes within the time limit. This is for a file that takes under 6 seconds to load from scratch on my computer, yet it usually ends up over the 10,000 ms cutoff by a few hundred milliseconds.

Would love it if the upload were more reliable, or had some leeway here, so I didn’t have to keep retrying the same file until it clears the time limit.
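
For reference, the manual workaround amounts to a retry loop. Here is a minimal sketch of that idea in Python; `upload_fn` is a hypothetical stand-in for whatever call actually performs the upload, assumed to raise `TimeoutError` when the server rejects the file over its computation cutoff:

```python
import time

def upload_with_retries(upload_fn, path, max_attempts=5, base_delay=2.0):
    """Retry an upload that intermittently trips a server-side time limit.

    upload_fn is a hypothetical callable performing the real upload; it is
    assumed to raise TimeoutError when the server rejects the file.
    Exponential backoff spaces out attempts so retries don't all land
    while the server is still under load.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return upload_fn(path)
        except TimeoutError:
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} hit the time limit; retrying in {delay:.0f}s")
            time.sleep(delay)
```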

At file upload, our servers run a number of steps to parse the file, and at the moment this process can create the kind of issues you are experiencing. We are working on more flexibility in the computations that happen when files are uploaded. We have also just added computation analytics to all commercial accounts (Designer, Designer Plus and Business), which allow you to explore potential bottlenecks and issues with your models on our servers.

Thanks for the response, Mathieu!

Overall, it is challenging to handle the differences in compute time between my computer and the SD servers. My model readily computes in well under 10,000 ms on my basic computer, but on SD it breaks often. It would be nice if the SD compute time reflected my Grasshopper run time; that would make my SD application much more robust.

Does your definition contain a lot of scripts? Does it output heavy meshes that need to be downloaded by the client? In general, it might help to have a look at our article regarding the differences between local and cloud computation times.

Mathieu,

My definition contains zero scripts. I have a few questions that may help me understand how to optimize my definition:

  1. Is the SD compute time roughly equivalent to my GH compute time? For example, are there instances where a certain component/workflow is faster on SD than it is in my local GH?

  2. I notice that the components that take the vast majority of time in my definition (using MetaHopper’s Bottleneck Navigator) are the Import Text components. Currently I have these pulling from Dropbox URLs. Are these components truly the bottlenecks, and if so, what can be done to increase their efficiency? (See the timing sketch after this list.)

  3. Do clusters have any impact on compute time? Would it be faster to have no clusters?

  4. What is your definition of a heavy mesh in terms of faces + vertices?
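
For question 2, here is the kind of check I have in mind: a minimal Python sketch timing a remote fetch against a local read of the same text, on the assumption that an Import Text component pointed at a URL mostly pays for the HTTP round trip. The URL is a placeholder, not my actual file:

```python
import time
import urllib.request

# Placeholder URL; substitute a real Dropbox share link (with ?dl=1)
URL = "https://www.dropbox.com/s/example/data.txt?dl=1"
LOCAL = "data.txt"

# Time the remote fetch (roughly what importing text from a URL does)
t0 = time.perf_counter()
text = urllib.request.urlopen(URL, timeout=10).read().decode("utf-8")
remote_ms = (time.perf_counter() - t0) * 1000

# Cache the payload locally, then time a local read for comparison
with open(LOCAL, "w", encoding="utf-8") as f:
    f.write(text)
t0 = time.perf_counter()
with open(LOCAL, encoding="utf-8") as f:
    f.read()
local_ms = (time.perf_counter() - t0) * 1000

print(f"remote fetch: {remote_ms:.0f} ms, local read: {local_ms:.1f} ms")
```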

Thanks, Mathieu!