Connecting downstream expires/reevaluates upstreams - Why so laggy?

When I connect or disconnect a downstream component, like the Sphere output (Brep parameter) pictured below, it takes ~7 seconds before GH becomes responsive again.

This doesn’t seem normal to me. And it slows down the editing of the GH definition to an unbearable level.

I’ve tried to figure out what the cause might be using the Profiler and also MetaHopper’s “BottleNeck Navigator”, but as you may have noticed, I didn’t modify any component that requires reevaluation downstream, only a terminating Brep parameter, so any update should take no time.

Any clues about why this lagginess is happening? Why so zombie?

Edit: 6.8.18205.11111, 07/24/2018

// Rolf

So if you remove the connection to a parameter that has no recipients of its own, it takes a long time, but when you delete that parameter entirely it is quick?

Also check your definition size and autosave option. I think by default Grasshopper stores a copy at every component edit. With really large files and slow storage this may become noticeable.


Are you using events, multithreading, plugins which observe what your definition does, or any hacks that work against the downstream logic?
How many components are in your definition? Any custom drawing on the canvas?

(btw, nice to see that you are using the least-squares approach :slight_smile: )

Yup. Spot on. Apparently there’s some lookup going on in there which slows things down. To be clear: deleting happens in “no time”, whereas connecting and disconnecting takes 6-7 seconds before a dead-end parameter becomes responsive again.

// Rolf

Yes, I even kept your signature on it. I extended it with a dirty trick to make it adapt to a cavity I have. Very good. :slight_smile:

Yup, there it was!

Could settings like this have changed with an update? (I don’t uninstall when updating, so it shouldn’t(?)).
The slowdown was significant even with a very simplified model with almost no downstream objects and with the upstream objects internalized. For testing purposes I do have some internalized meshes, which probably make the auto-saving on modifications extra slow.

A zillion thanks. I had already started to become worried, despite TomTom’s fast algorithm. :wink:

// Rolf

Here’s my version of the SphereFit approach for my special case. Probably not so useful for others, but anyway.

SphereFit_LS_RC.ghx (597.0 KB)
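
For anyone curious, the core of it is the standard algebraic least-squares fit; here’s a minimal standalone Python sketch of that idea (using numpy; illustrative only, not the exact contents of the GHX):

```python
# Standard algebraic least-squares sphere fit (a standalone sketch,
# not the exact contents of the GHX above).
import numpy as np

def fit_sphere(points):
    """Fit a sphere to an Nx3 point array by solving, in the
    least-squares sense,  |p|^2 = 2*p.c + k  with  k = r^2 - |c|^2."""
    p = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * p, np.ones(len(p))])   # columns: 2x, 2y, 2z, 1
    b = (p ** 2).sum(axis=1)                          # x^2 + y^2 + z^2
    (cx, cy, cz, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(k + cx**2 + cy**2 + cz**2)
    return (cx, cy, cz), radius
```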

// Rolf


I think @HaLo is right, it’s probably autosave. When you delete an object which itself has no recipients, it is assumed that whatever happens in the file after the change is a strict subset of whatever happened before the change. Since it didn’t crash before, it probably won’t crash now, so no autosave is required.

If, however, you remove a wire, the recipient may start doing something it wasn’t doing before (for example operating on less data, trying to deal with a lack of data, or trying to operate on no data). This new behaviour may cause a crash, so an autosave is performed.
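
In code terms, the heuristic is roughly this (a minimal Python sketch of the logic just described; the names are invented, not Grasshopper’s actual internals):

```python
# Hypothetical sketch of the autosave decision logic; invented names,
# not Grasshopper's actual internals.
RISKY_EDITS = {"wire_added", "wire_removed", "value_changed"}
SAFE_EDITS = {"delete_object_without_recipients"}  # strict subset of old behaviour

def on_document_edit(edit_kind, save_document):
    if edit_kind in SAFE_EDITS:
        return               # can't introduce new behaviour, so no autosave
    if edit_kind in RISKY_EDITS:
        save_document()      # the multi-second stall with big internalized data
```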

You can disable autosaving altogether in the Preferences, or restrict how often it happens. Does that make a difference?

Yes, I disabled autosave altogether (did you miss my reply to @HaLo above?). Anyway, with this change things seem to be back to normal.

// Rolf

… but you just lost a safety feature.

It’s good to have so much freedom about when an autosave occurs, but with large definitions or lots of internalized data, the save noticeably stalls your work.

The idea of saving just before a possible crash can happen is basically sound. For small files you will never really notice the lag. But for larger files it’s like committing the whole coding project every time you start a new block or end a line. Most of the time you are just sketching out simple things on a limited subset of data, with no big risk of anything going wrong.

@DavidRutten, could you implement a timed autosave option in a future version?

Nothing to save anyway if every simple change in the model takes ~7 seconds. :wink:

Timed autosave is what I assumed was in place, so it never occurred to me that event-based saving could be the problem.

I work with big meshes, and some commands are very taxing on the CPU, so I have a number of DataDam components as well. As I understand it, a DataDam has a cache, but is that cache saved to file as well?

Anyway, during development I often have the big meshes internalized, which definitely takes time to save, so autosave on modification events is a no-no for me.

// Rolf

Certainly a ‘don’t save more often than once every N minutes’ option can be implemented, but speeding up the save itself would be preferable.
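
Such a throttle is simple enough to sketch (hypothetical Python, not an actual Grasshopper API):

```python
# Sketch of a "no more often than every N minutes" autosave throttle;
# hypothetical class, not a Grasshopper API.
import time

class ThrottledAutosave:
    def __init__(self, save_fn, min_interval_s=300.0):  # e.g. 5 minutes
        self.save_fn = save_fn
        self.min_interval_s = min_interval_s
        self.last_save = float("-inf")

    def request_save(self):
        """Called on every risky edit; only actually saves if enough
        time has passed since the previous autosave."""
        now = time.monotonic()
        if now - self.last_save >= self.min_interval_s:
            self.save_fn()
            self.last_save = now
```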

As far as I know the only thing that will seriously slow down a save operation is lots of internalised data. This can already be removed from the file itself by using a secondary file with a Data Output and Data Input pair. It’s a bit of extra work to set it up, but for large files I assume the time saved well exceeds the time invested.

Nope. That’s actually a bug of sorts; they probably should be.

Well, the problematic files in my case were multiple GH definitions of up to 50 MB without any internalized data. The definitions would then iterate over about 1500 individual sets of input to generate geometry and output data for production. We tried to split the operation into separate files as much as possible, but lots of things were interdependent, and the more data you cache (we did that with a shared Rhino file) the more prone you are to synchronization errors.

So we ended up disabling the autosave and relied on saving at least every time we ran the iteration or hit a specific milestone.

That’s what I had. And even when I saved to an SSD it was slow. I’ll try Data Output and Data Input to see if that plays better.

// Rolf

If there’s some looping happening when using Data Input/Output, it slows the solution down even more (not talking about the save).

@RIL, how about making backups? In my experience making a backup happens a bit faster than saving the file.

@DavidRutten, is it possible that, instead of autosaving, it overwrites one or two backups? This would also allow the user to pick which one to restore if a crash happens.
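
Something like this rotation scheme, sketched in Python (the .bakN naming is made up, just to illustrate the idea):

```python
# Hypothetical sketch of the rotating-backup idea; the .bakN naming
# is invented, not an existing Grasshopper convention.
import shutil

def rotate_backups(path, copies=2):
    """Keep the last `copies` snapshots: path.bak1 is the newest."""
    for i in range(copies, 1, -1):
        try:
            shutil.copy2(f"{path}.bak{i-1}", f"{path}.bak{i}")  # shift older snapshots down
        except FileNotFoundError:
            pass
    shutil.copy2(path, f"{path}.bak1")  # snapshot the current file
```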

That’s what I do all the time when I think I have something I don’t want to lose.

I didn’t even know about the auto-save on modification events. I thought GH saved at some time interval, like every 5 minutes or so, which is why I always used Save Backup from the file menu when I wanted to make sure something wouldn’t get lost. It’s also why I don’t care about auto-save. As soon as I realized that there is such a thing as auto-save, I immediately shut it off. And life feels just great again. :slight_smile:

// Rolf


GH files are compressed; the bulk of the time is most likely spent writing the data to in-memory byte arrays and then deflating them. The speed of the disk probably doesn’t matter that much.
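
That claim is easy to sanity-check; for example (a rough Python test, and random bytes won’t compress like real GH data):

```python
# Time the deflate step against the disk write for a blob roughly
# the size of a large definition. Random bytes compress differently
# than real GH data, so treat this as a rough test only.
import os, time, zlib

blob = os.urandom(50 * 1024 * 1024)          # ~50 MB test payload

t0 = time.perf_counter()
packed = zlib.compress(blob, 6)              # the "deflating" step
t1 = time.perf_counter()
with open("save_test.bin", "wb") as f:       # the actual disk write
    f.write(packed)
t2 = time.perf_counter()

print(f"deflate: {t1 - t0:.2f} s, write: {t2 - t1:.2f} s")
```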

What’s the difference between a backup save and any other save?

There is a Save Backup option in the Grasshopper file menu, and you can modify the shortcut to it. Is this not what you are after?

I don’t know what’s different. I just have the feeling backups are created faster than saves (autosaves). This could also be because the ‘master’ file is in use while the backup is not open. Maybe when performing a save/autosave some space allocation is redone that is skipped when creating backups.