Wish: Stupidity filter / solver time-out limit

Hi,
It happens to me more often than I would like to admit:
I just connect a wire while having forgotten to graft/ungraft a connection, and all of a sudden poor GH has to perform a few million calculations instead of a few hundred. That’s when it freezes and the big dilemma comes:
wait it out, or kill it and hope not too much of the work has been lost.

Is there (or could there be) a way to set a time-out limit for the solver (e.g. 30 seconds), after which it would automatically be disabled?
I was thinking of an on-canvas component, like a list or slider, with which you could select the maximum calculation time for the solver.
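As a rough illustration only (nothing here is a real Grasshopper API; it is just a minimal Python sketch of the idea, with made-up names), the wish is essentially a watchdog like this:

```python
import threading
import time

# Minimal sketch of the requested time-out: if a solve runs longer than
# `timeout` seconds, fire a hypothetical "disable the solver" callback.
def run_with_timeout(solve, timeout, on_timeout):
    done = threading.Event()

    def watchdog():
        if not done.wait(timeout):   # still busy after `timeout` seconds
            on_timeout()             # e.g. disable the solver

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        return solve()
    finally:
        done.set()                   # the solve ended (or raised) on its own


if __name__ == "__main__":
    slow_solve = lambda: time.sleep(2) or "result"
    run_with_timeout(slow_solve, timeout=0.5,
                     on_timeout=lambda: print("time-out hit: disable the solver"))
```

Note that a watchdog like this can only flag the time-out; it cannot forcibly stop the running solve, which is exactly the difficulty discussed further down in the thread.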


Nice idea. Maybe you already know this (and it doesn’t always work), but did you know that the Esc key can break the solution?

It is supposed to, but when you’ve made a big boo-boo even that cannot save you… :frowning:


Changed the topic to a GH2 wish


Wow! My first Serengeti post! Thanks! :slight_smile:


As solutions will no longer be running on the UI thread, this should no longer be necessary. A long-running solution can start and will automatically be cancelled when the next one starts. And even if it gets stuck forever, you can still save and quit Rhino properly. The only remaining risks are exploding memory use, call-stack overflows, or thread locking on certain shared objects.
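A minimal sketch of that behaviour (plain Python, not GH2 code, all names made up): each solution runs on its own background thread and carries a cancel flag; starting a new solution sets the previous one’s flag, and the old worker stops itself the next time it checks.

```python
import threading
import time

class Scheduler:
    """Toy 'latest solution wins' scheduler: the UI thread only starts workers."""
    def __init__(self):
        self._cancel = None

    def start_solution(self, work):
        if self._cancel is not None:
            self._cancel.set()                 # ask the previous solution to stop
        cancel = threading.Event()
        self._cancel = cancel
        worker = threading.Thread(target=work, args=(cancel,), daemon=True)
        worker.start()                         # the UI thread returns immediately
        return worker

def solve(cancel, steps=10):
    for i in range(steps):
        if cancel.is_set():                    # cooperative check between steps
            print("old solution cancelled at step", i)
            return
        time.sleep(0.1)                        # stand-in for real component work
    print("solution finished")

if __name__ == "__main__":
    s = Scheduler()
    s.start_solution(solve)
    time.sleep(0.25)                           # an edit arrives...
    s.start_solution(solve).join()             # ...and the new solve supersedes the old one
```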


“Stupidity Filter”. Though it can be “achieved” with Escape, there should be a button for it, just for the sake of the fantastic GH sense of humor @DavidRutten imbued into it.

Does this mean the solution will be computed in the background, and I can continue working on the GH definition while it is being computed?

Yes it does. The solution will also run on multiple threads, and at the moment there is still a faintly noticeable effect on UI responsiveness when all those threads are busy.

However, a tricky issue here is that it’s almost impossible to do anything in Grasshopper which doesn’t in some way invalidate the current solution (ready or not). So if a solution takes, say, 10 seconds to complete, and you make a change to the file every, say, 5 seconds, GH2 is forever busy cancelling old solutions and starting new ones without ever displaying a finished result. Seems like a lot of wasted processor time to me… I haven’t quite thought of a good way to remedy this.
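A back-of-the-envelope toy model of that timing argument, using the numbers from the post (an edit every 5 seconds, a solve that needs 10 seconds):

```python
# Toy timeline: each edit cancels the running solve, so a solve only finishes
# if it gets a full, uninterrupted `solve_time` window.
solve_time, edit_interval, session = 10, 5, 60   # seconds

started = finished = cancelled = 0
t = 0
while t < session:
    started += 1
    if edit_interval < solve_time:    # the next edit lands before the solve ends
        cancelled += 1
        t += edit_interval
    else:
        finished += 1
        t += solve_time

print(f"solves started: {started}, finished: {finished}, cancelled: {cancelled}")
# -> solves started: 12, finished: 0, cancelled: 12  (a minute of pure wasted work)
```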


If we make a graft/flatten mistake with a 100<>100 object matching, what should be 100 outputs (which could take, say, 10 seconds) instead becomes 10,000 outputs (about 16 minutes).
Currently, the Esc key is not always reliable. The times you really need it to work, it won’t.

A check telling you “hey, you are about to create 10,000 outputs!” before doing the actual operation would be very handy. (For example, with Divide Curve you can know the number of outputs before calculating them.)
(I remember asking for this in the past, too.)
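As a hypothetical sketch of such a check (not a real GH feature; just a rough Python model of data-tree matching): estimate the output count from the shapes of the input trees alone, and warn before actually computing anything.

```python
# Rough longest-list estimate: branches are matched pairwise and the last
# branch is repeated when one tree has fewer branches (GH-style matching).
def estimated_outputs(branches_a, branches_b):
    n = max(len(branches_a), len(branches_b))
    pad = lambda sizes: sizes + [sizes[-1]] * (n - len(sizes))
    return sum(max(a, b) for a, b in zip(pad(branches_a), pad(branches_b)))

flat    = [100]        # one branch of 100 items
grafted = [1] * 100    # grafted by mistake: 100 branches of 1 item each

print(estimated_outputs(flat, [100]))      # 100    -> fine
print(estimated_outputs(grafted, [100]))   # 10000  -> 100 branches x 100 items

LIMIT = 1000
count = estimated_outputs(grafted, [100])
if count > LIMIT:
    print(f"hey, you are about to create {count} outputs!")   # ask before running
```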


You mean that “fixing” the tree structure of the inputs will automatically stop the old calculation and start the new, correct one?


I know, it’s terrible.

Sort of. It’s a bit more subtle than that. You can’t just go around cancelling running processes or you’ll end up in an invalid application memory state before you can remember how to spell “concurrency-race-conditions”.

Stopping a running process is only safe when the process commits suicide, not when it gets murdered in the face.

So as soon as a new process starts, the old process is told to cancel itself, but it may take a long time for it to respond to that suggestion, possibly forever. Until it stops, it will continue to use processor cycles and claim memory, slowing down the rest of the computer.

If you make a mistake such as running a script with an infinite loop, then you may well end up with dozens or hundreds of non-cancelling solutions. At this point your only option is to (save your file and) shut down Rhino and start again.


I’ve stumbled into this problem more times than I care to admit. So I worked out these steps to minimize my recovery time:

  1. Close Rhino and abort the creation of the crash file.
  2. Restart Rhino (which for me auto-starts GH.)
  3. Open the GH Autosave folder (File/Special Folders/Autosave Folder)
  4. Drag the top file in the list, which is the last save before the one that started the long compute loop, onto the GH window.

That puts me back to where things were just before I made the impetuous change that started the CPU loop. (Well, maybe it’s not a true loop, but there’s no way to actually tell.) These steps work well for me because I only use GH and don’t create things in Rhino.

I have an idea of something we can try for GH1. Does anyone have a sample definition where hooking up a component puts GH into a tizzy?

Thanks for the detailed answer.

… is Tron a documentary?
Anyway, I love that example; it made me understand the problem.

But couldn’t every process have an “IsEscKeyPressed()” check inside, run every, say, 5 seconds? If so, Suicide().
A “public bit”, changed by the Esc key from outside, but checked from inside the function.
It shouldn’t hurt performance too badly…
As usual, I’m a frog looking at the sky from the bottom of the well. Don’t give too much weight to my words. :see_no_evil:


@stevebaer
tizzy.gh (5.7 KB) (components disabled)


(The error here is the graft.)
If I press the Esc key in time, it aborts.
But as soon as the cursor goes [screenshot omitted], the Esc key is no longer useful.
With the number in the panel at 100 there is no problem, but even at 500 it freezes my PC for 5 minutes. Exponential effect.

Thanks; I appreciate the sample.

Yes, I have this issue all the time. Nice idea.

Nope, because in a single thread the order of function calls is strictly defined by the source code. You cannot inject new functionality at runtime.

The only thing that can happen is that the process has predefined points in its code where it checks to see if the escape key is down. For example when a new component begins calculating, or between component iterations.

This is what’s happening now. If the iterations are short, I check for escape a lot. If a single iteration takes minutes because you’re solid unioning 10,000 objects all at once, I check for escape every few minutes. If you’re in an infinite loop, I’ll never check escape again.
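A rough Python sketch of those check points (escape_pressed() is a stand-in, not the real implementation): the test only happens between iterations, so a single long iteration, or an infinite loop inside one, is never interrupted.

```python
import time

def escape_pressed():
    return False            # placeholder; imagine this reads the keyboard state

def solve_component(iterations, one_iteration):
    for i in range(iterations):
        if escape_pressed():        # checked once per iteration...
            print("aborted before iteration", i)
            return
        one_iteration(i)            # ...so nothing can interrupt a single slow
                                    # iteration (or an endless loop inside it)

# 10,000 quick iterations: Esc is polled 10,000 times, so aborting feels instant.
solve_component(10_000, lambda i: None)

# One huge iteration (think of a giant solid union): Esc is only seen before it
# starts, never while it runs.
solve_component(1, lambda i: time.sleep(0.1))   # stand-in for a long operation
```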


I see how this could be a problem.
In other programs this is also what happens, and it can be frustrating in different ways (mostly overheating of the PC / battery drain when working with big files and continuous updates).
Some ideas on my part to make it easier:

  1. Have a button readily available to enable and disable the solver (a standard shortcut would also be great; I use F7).
  2. Parametric breakpoints with a UI to manually advance through the definition: kind of like a Data Dam, but with a good way of pressing play from a central location, preferably with shortcuts. This way I could silo off the definition very effectively. It would also be great to then have a way of seeing how much of the definition has been calculated with new data and how much is still using old data.

Just my 2c; I fully trust your judgement on this, DR. Thanks for the great work and your engagement with the community.