Clearing cache for failed solutions


We have a number of configurators based on the same Grasshopper definition.

Everything works fine, except that if a given combination of parameters results in a temporary failure (e.g. a timed-out computation or a texture loading error), that same combination of parameters will never be recomputed and will always return an error, as if the failure were a cached result.
(something already mentioned here: )

This obviously doesn’t work, especially when the failing parameter combination is the default one.

In the above-mentioned thread, you indicated that re-uploading the definition was the only way to clear the cache, but since re-uploading the definition also changes the embed ticket, this requires updating all the configurators based on that definition.

Also, there is always the possibility that a computation fails in the future, so if that “failure” is stored as a “solution”, the same problem can occur again.

Right now, the only workaround I can think of for clearing the cache is to create a local table mapping locally defined configurator IDs to embed tickets, so that we can re-upload the models and update the tickets in a single place. However, this would not solve the main issue of the cached failed results, and I don’t think there is a way of knowing if/when a model has failed.
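For what it’s worth, the local ticket table described above could be as simple as a JSON file keyed by configurator ID. This is only a sketch of the idea; the file name, IDs, and ticket strings below are hypothetical placeholders, not real ShapeDiver values:

```python
# Minimal sketch of a local configurator-ID -> embed-ticket table,
# so that re-uploading a definition only requires updating one record.
# All identifiers here are hypothetical.
import json
from pathlib import Path

TABLE_PATH = Path("embed_tickets.json")

def load_table() -> dict:
    """Load the configurator-ID -> embed-ticket mapping."""
    if TABLE_PATH.exists():
        return json.loads(TABLE_PATH.read_text())
    return {}

def update_ticket(configurator_id: str, new_ticket: str) -> None:
    """After re-uploading a definition, record its new embed ticket
    in one place instead of editing every configurator."""
    table = load_table()
    table[configurator_id] = new_ticket
    TABLE_PATH.write_text(json.dumps(table, indent=2))

def ticket_for(configurator_id: str) -> str:
    """Each configurator looks up its current ticket here at load time."""
    return load_table()[configurator_id]
```

Each configurator would then resolve its ticket through `ticket_for()` instead of hard-coding it, though this still does nothing about the cached failures themselves.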

Am I missing something? What can we do to solve this?

Thanks in advance,


Hi @Marco_Traverso, we will provide you with a workaround solution at the beginning of next week. In general, we are working on a solution to mitigate the inconsistencies in computation time that inevitably occur.

My workaround for this would be to simply add an integer input (like a version number) which does not affect the final result, but tells SD that this is an unsolved input combination.


That is a good tip, thanks for sharing. I’ve been using a similar solution for texturing, adding a version number to the texture URLs (since the same caching problem occurs with images).
This works when you are aware of a computation error, but in general it doesn’t prevent the problem from happening in the first place.
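The texture-URL versioning mentioned above amounts to appending a version query parameter so a previously failed image load is not served from cache. A small sketch, with an illustrative URL and parameter name:

```python
# Sketch of versioned texture URLs: append a "v" query parameter so
# that bumping the version forces a fresh fetch instead of a cached
# (possibly failed) response. The URL and parameter name are examples.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def versioned_url(url: str, version: int) -> str:
    """Return the URL with an added/updated version query parameter."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["v"] = str(version)
    return urlunparse(parts._replace(query=urlencode(query)))
```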

Hi @snabela, thanks for the feedback. Good to know you are working on this; looking forward to the new system.

I hope it will also address some issues that occur from time to time, where the computation time limit is hit even though the definition takes very little time to solve on a local computer.

Hi @Marco_Traverso, the workaround for your user accounts is in place. Previously timed-out solutions should now be recomputed.

Yes exactly, this problem will be addressed at the same time.