I have some nodes that take about 20 minutes to complete.
I then use the output of those nodes in other nodes that take ages to complete.
This means that when I open the Grasshopper file it takes AGES to do its stuff.
Some of these nodes are plugins that I wrote in C# and sometimes I want to debug changes in nodes that are near the end of the pipeline. It is a PITA to tweak a plugin that takes forever to start running.
So…
To speed stuff up…
I take the output of a slow node, put it into a Brep node, and internalize the data. I can then disable the node that generated the data.
This means that I only need to run the nodes that I want to change and things run faster.
Would it be possible to have a node that auto-internalizes the data if the source of its data goes away (is disabled)? Or could the output of a plugin be marked as auto-internalize?
First and foremost, if your performance is bad, you should spend some effort on improving it. One less technical option is to ask within this forum, providing an isolated example.
I know that with Breps there often isn’t an easy way to improve performance; sometimes it’s simply a heavy computation with no room for improvement.
Grasshopper is quite resilient to computational errors. It prefers to continue with an invalid Brep or with null states rather than fail outright. But in the real world, this might not be what you want. Instead, you should design your definition so that it validates as much as possible before even starting the heavy computation.
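For instance, a tiny sketch of such a gate in a C# component; the individual checks are just placeholders and depend on what your heavy step actually needs:

```csharp
using Rhino.Geometry;

// Cheap validation gate, called before the expensive operation.
// The individual checks are placeholders for whatever your heavy step requires.
static bool IsWorthComputing(Brep shell)
{
    if (shell == null || !shell.IsValid) return false;  // broken or missing input
    if (!shell.IsSolid) return false;                   // example check: the heavy step needs a closed shell
    return true;
}

// In SolveInstance: if (!IsWorthComputing(shell)) return; // skip the 20-minute step
```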
Other than that, I usually work with more than one definition in such cases: one definition per stage. Only when a stage passes do you start the next one. What I usually do is bake the geometry of definition A and then read it back from a given layer in definition B. For my use cases, this often remains a manual process: pressing a button to read from layer X. There are plugins out there for this, or you can simply read it by code.
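Here is a minimal sketch of the “read it by code” route, e.g. from a C# script component. The layer name is a placeholder; it just collects the geometry of everything on that layer via RhinoCommon:

```csharp
using System.Collections.Generic;
using Rhino;
using Rhino.DocObjects;
using Rhino.Geometry;

// Collect the geometry of every object on a given layer.
// "StageA_Output" is a placeholder layer name.
List<GeometryBase> ReadLayer(RhinoDoc doc, string layerName)
{
    var result = new List<GeometryBase>();
    RhinoObject[] objects = doc.Objects.FindByLayer(layerName);  // null if the layer does not exist
    if (objects == null) return result;

    foreach (RhinoObject obj in objects)
        if (obj.Geometry != null)
            result.Add(obj.Geometry.Duplicate());                // duplicate so Grasshopper owns its own copy

    return result;
}

// Typical call from a script component:
// var geometry = ReadLayer(RhinoDoc.ActiveDoc, "StageA_Output");
```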
Reading from layers is easy if you design your definition so that it simply does not matter in which order the data comes in, e.g. by sorting it again after reading. If you are not dependent on which geometry is read in first, you are really free.
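For example (again just a sketch), you can impose a deterministic order on whatever comes off the layer by sorting on a geometric key such as the bounding-box center:

```csharp
using System.Collections.Generic;
using System.Linq;
using Rhino.Geometry;

// Sort geometry by its bounding-box center so the result is independent
// of the order in which the objects were read from the layer.
List<GeometryBase> SortDeterministically(IEnumerable<GeometryBase> geometry)
{
    return geometry
        .OrderBy(g => g.GetBoundingBox(true).Center.X)
        .ThenBy(g => g.GetBoundingBox(true).Center.Y)
        .ThenBy(g => g.GetBoundingBox(true).Center.Z)
        .ToList();
}
```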
This also works with files. If you work with files, you can even maintain the order of your elements. You can dump class instances as byte arrays and recreate them from there. A ‘FileSystemWatcher’ (.NET class) helps you detect changes in those files, if that is useful to you. But again, you should not overengineer things.
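A small sketch of the FileSystemWatcher part; the folder and file filter are placeholders:

```csharp
using System.IO;
using Rhino;

// Watch a cache folder for changes to serialized stage results.
var watcher = new FileSystemWatcher(@"C:\temp\gh_cache")
{
    Filter = "*.bin",
    NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName
};

watcher.Changed += (sender, e) =>
{
    // React to the change, e.g. expire the component that reads this file
    // so it re-reads it on the next solution.
    RhinoApp.WriteLine("Cache file changed: " + e.FullPath);
};

watcher.EnableRaisingEvents = true;
```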
That’s too much work, and I consider the stuff in the main Rhino file fragile. I could be zoomed in on an area, select all surfaces, delete them, save the file, and move on.
I have a component that cuts spikes out of a shell to make windows. I might not run it for months.
Perhaps I deleted my spikes but don’t notice the issue until I want to regenerate my window holes. The operation appears to work because I see some holes and haven’t counted them everywhere.
I set off some batch renders, review them later and find that there are some windows missing.
How did that happen? When did it happen? I could use an old file, but do I KNOW which version of the windows was used?
All organization problems can be avoided by simply paying infinite attention and by not making mistakes.
But, to me, life would be easier if the computer just helped me out. Does the dog wag the tail or does the tail wag the dog?
Maybe I could mark a couple of outputs of a node as auto-internalized. I disable the node and everything downstream that uses those outputs continues to work. Since the wires are coming out of an auto-internalized output, I know that they carry the latest data from the last time the node was active.
Not having this feature is not a fatal flaw in Rhino. I can live without it. But, for me, it would be nice to have.
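For what it’s worth, a custom component can approximate this today by caching its last good result and persisting it through the SDK’s Write/Read overrides, which store data inside the .gh file much like internalizing does. A rough sketch, with purely illustrative names and a single number standing in for real geometry:

```csharp
using System;
using Grasshopper.Kernel;
using GH_IO.Serialization;

// Illustrative component that keeps its last computed value alive
// even when the upstream source is disabled or missing.
public class AutoInternalizeComponent : GH_Component
{
    private double _cached;
    private bool _hasCache;

    public AutoInternalizeComponent()
      : base("AutoInternalize", "AutoInt",
             "Outputs the live value when available, otherwise the last cached one.",
             "Params", "Util") { }

    public override Guid ComponentGuid => new Guid("a3d2f6f1-0000-4000-8000-123456789abc");

    protected override void RegisterInputParams(GH_InputParamManager pManager)
    {
        pManager.AddNumberParameter("Value", "V", "Live value from the slow upstream node", GH_ParamAccess.item);
        pManager[0].Optional = true;    // missing input is allowed: we fall back to the cache
    }

    protected override void RegisterOutputParams(GH_OutputParamManager pManager)
    {
        pManager.AddNumberParameter("Value", "V", "Live or cached value", GH_ParamAccess.item);
    }

    protected override void SolveInstance(IGH_DataAccess DA)
    {
        double value = 0;
        if (DA.GetData(0, ref value))
        {
            _cached = value;            // refresh the cache while the source is alive
            _hasCache = true;
        }
        else if (!_hasCache)
        {
            return;                     // nothing live and nothing cached
        }
        DA.SetData(0, _cached);
    }

    // Persist the cache inside the .gh file, similar to internalized data.
    public override bool Write(GH_IWriter writer)
    {
        writer.SetBoolean("hasCache", _hasCache);
        if (_hasCache) writer.SetDouble("cached", _cached);
        return base.Write(writer);
    }

    public override bool Read(GH_IReader reader)
    {
        if (reader.ItemExists("hasCache")) _hasCache = reader.GetBoolean("hasCache");
        if (reader.ItemExists("cached")) _cached = reader.GetDouble("cached");
        return base.Read(reader);
    }
}
```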
As I said, you can serialize (“save”) the data into a file by code. You can save the geometry as byte arrays in a custom file and re-instantiate it from a different custom component. Internalizing data does the same thing inside the .gh file. You said you write C# code; just google for binary serialization in C# and you’ll find tons of examples.
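As a minimal sketch of that idea, plain .NET binary serialization on a hypothetical [Serializable] data class looks like this (BinaryFormatter is flagged obsolete in recent .NET versions; for pure Rhino geometry, writing a small .3dm via Rhino.FileIO.File3dm would be an alternative):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

// Hypothetical container for the results of one pipeline stage.
[Serializable]
public class StageResult
{
    public string Label;
    public double[] Values;
}

public static class StageCache
{
    // Dump the instance as a byte array and write it to a cache file.
    public static void Save(StageResult result, string path)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, result);
            File.WriteAllBytes(path, stream.ToArray());
        }
    }

    // Read the bytes back and recreate the instance.
    public static StageResult Load(string path)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream(File.ReadAllBytes(path)))
        {
            return (StageResult)formatter.Deserialize(stream);
        }
    }
}
```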
But still, in the end, it is a workaround for a deeper problem. I would definitely disagree that working in a strictly organized way is a hard thing. It requires some discipline, yes, but in the end it’s easier. You can even use version control systems like Git to jump back to older states, although you might not fully benefit from the power of that tool.

I mean, there are plenty of options for staying organized, so there is no single good or bad way. But as a rule of thumb, if you think about agile development, you should always have something working within a short period of time. E.g. you could say that for each week you work on something, by Friday you definitely have something working, which is properly stored somewhere. If you come back to a problem in 6 months, you have a couple of weekly states you can refer to. Since you guarantee that these states are working, you are less likely to run into the situation you are describing.