GH Large project Canvas Management

Are there any tools for rolling/unrolling or folding/unfolding sections of modules like you would do with a code editor for example?
Layers perhaps? Pop ups?
Grouping multiple modules into a single module with inputs and outputs exposed?
Anything to manage the sprawling spaghetti I am looking at : (

Separate GH files connected by Data Output and Data Input, though it has some quirks and limitations.

Make clusters. And more clusters. And more clusters. But avoid including clusters within clusters.

Group components by functionality, add input and output parameters, and don’t wire anything from a component inside the group to the outside; use these parameters, placed in columns to the left or right of the group, to connect it to parameters in other groups. It seems like a very redundant practice, but it cleans up definitions a lot. It atomizes complexity and preserves the modularity that is otherwise lost in large definitions.

Use named views and the Jump component to move around, and the Relay component to display fewer wires. You can hide some wires, but I recommend doing this only with irrelevant inputs (like the plane input of the Offset component, for example, not the curve input).


Don’t be afraid to start from scratch a few times. I typically end up redoing things 3-4 times, each time the definition gets smaller and smaller due to insights generated along the way.


Typically my definition ends up as one C# component. :joy:


I have been using Telepathy, which is beautifully written and documented, to get to about 3000 components arranged into about 20 blocks that each have discrete Sends/Recvs at start and finish. The functionality falls into two broad categories: creating the solid Brep model, and detailing the model. However, the detailing blocks also need some of the Point/Curve/Brep sends from the model-construction blocks.

The response time of GH when connecting wires has finally become annoying, so today’s task is to find a way to break up the model.

I am unable to profile the project. Turning on the profiler does not result in the times being displayed underneath the modules. If I create a new small file it works fine. For no reason I can put my finger on, I suspect it is a conflict with Telepathy on large projects somewhere. I can’t tell where the recent slowdown began, but paneling, multiple Brep splits, or Remote Control Panel are possibly the cause.

BiFocals also does not work on the large file but works on a new smaller file.

This is in no way a complaint about the speed of GH, which, given that it is a JIT compiler that stores geometry as it goes, is frankly astonishing.

I have played with one cluster, but I am not clear what the benefits of a cluster are other than canvas-space management. The modules inside are recomputed with each recompute, right?

I’m also not clear on what happens when a wire is connected.
I have probably 100+ Send/Recv components; is this slowing down the wire connections?
It looks like the downstream modules are recomputed, but when I’m building a new block there are no downstream modules, and yet there is still a 2-3 second pause.

Finally based on this thread:

It seems the only way to lay off large sections, such that they are not loaded into memory and recomputed, is to save swaths of data to disk with DATA OUTPUT and DATA INPUT. Is that correct?

Unless I missed something, there is no way for sends to store their data in a .gh file that can be found and read by a recv on the canvas without opening that file? Is that correct?

Also, I’m not entirely clear what happens when two files are open in GH with sends in one file and recvs in the other. Are both files recomputed by a recompute event in one file, or does the file NOT in the editor pass its send data without recomputing?

This question also applies to CLUSTERs. Does the CLUSTER output just pass data to the input, or is it recalculated with each recompute?

Previously read:

Update: after uninstalling some non-working add-ons, some components are showing their time (about 10%). A Multiple Brep split is 397 ms, a Boundary Surface 6 ms.

3000 components can already affect the canvas drawing. But it doesn’t matter whether you split it into multiple definitions or clusters; the definition will stay too big. That’s the actual problem here. I know that many people are proud of such big definitions, but they are simply not manageable anymore. It is literally spaghetti, and I bet 2/3 of all components are data management, simple arithmetic or data validation (panels, breakout components, custom display etc.), which doesn’t solve your problem by any means.

The only real solution is scripting (or even creating pre-compiled plug-in components). Seriously, you can reduce it to 50 well-structured and encapsulated modules if you use very basic code.
I also know that scripting has a steep learning curve, but it’s really worth it!


While I know what you say is true, that would shift the focus from geometry to coding. The power of GH is that I can focus on geometry problems, using simple pieces wired together to do very complex stuff.

All I want to do is break it into 2 or 3 pieces.

Well then you should go for clusters or multiple definitions, just as the others have said. But anyway, you already do some sort of coding. There isn’t much of a difference between writing a line of code and connecting two “batteries” together. Coding can actually be simpler than Grasshopper, especially regarding edge cases or complex data management. So it’s not necessarily harder, nor is the process more abstract by any means.

Actually, it is. Inside a scripting component you cannot preview any geometry line by line, while with GH components you can clearly see the geometry being created at every step, in every component. At least, that is my experience so far.

I think this might be in line with what @Proterio is saying here:

In any case, I agree 100% with what @TomTom has suggested; there is no magic solution here, and for best results the only way is coding.

Now, if you really just want to do this:

Then what others have already suggested could work for you, especially the Data Input and Data Output components. Just try them; most of your questions will be answered by trying.

Yes, correct. But for anything related to geometry, I would always start by doing the rough “sketching” in Grasshopper and then refactor it into script components. The longer you do this, the less you need the sketching step, because you already know what each library function is doing. Constant refinement of what has been done is a fundamental skill in any development process, in my opinion. (Something I learned while working in the industry, and sadly not during university.)

Another drawback of scripting is that debugging is very limited in Grasshopper scripts. In Grasshopper’s C# components it is not possible to set a breakpoint and step through the code. This is actually extremely bad, because it makes debugging rather annoying. However, when I create C# scripts I always add an out parameter called ‘Debug’, and whenever I want to preview geometry for testing, I just push it to this output parameter as a temporary solution. Still not optimal, but it counters the problem a bit.
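
A minimal sketch of that Debug-output pattern outside Grasshopper, with plain Python standing in for a script component (the function and the “geometry” are made up for illustration; in a real GH component you would assign the list to an output parameter named Debug):

```python
# Sketch of the "Debug output" pattern: compute a result, but push each
# intermediate stage into a debug list so it can be previewed separately.
# All names here are illustrative, not a real Grasshopper API.

def offset_and_average(points, offset):
    """Pretend geometry routine: offset each value, then average neighbours."""
    debug = []                                 # plays the role of the Debug output

    shifted = [p + offset for p in points]
    debug.append(("after offset", shifted))    # intermediate stage, for preview

    result = [(a + b) / 2 for a, b in zip(shifted, shifted[1:])]
    debug.append(("after averaging", result))

    return result, debug

result, debug = offset_and_average([0.0, 2.0, 4.0], 1.0)
print(result)        # [2.0, 4.0]
print(debug[0][0])   # after offset
```

The point is simply that every stage worth inspecting gets pushed to the same debug channel, so one extra output gives you a crude substitute for stepping through the code.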


Yes, yes, yes. Keep in mind that many engineers can be given a GH file with encapsulated data at the front end and work on one piece of the model detailing without much training, if they know CAD. That’s the beauty of GH. The price of that is the JIT compiler.

Expecting designers to dive into coding is not realistic and detracts from the creative process. The depth of knowledge required is an order of magnitude higher.

I am trying DATA INPUT / OUTPUT, but my central questions remain unanswered )

  • What happens when a wire is connected?
  • I am not clear what the benefits of a cluster are other than canvas-space management. The modules are recomputed with each recompute, right?
  • It seems the only way to lay off large sections, such that they are not loaded into memory and recomputed, is to save swaths of data to disk with DATA OUTPUT and DATA INPUT. Is that correct?
  • Unless I missed something, there is no way for sends to store their data in the .gh file of a sub-segment of the model such that it can be found and read by a recv in the currently open file without opening that underlying file? Is that correct?
  • If the model is in two files and both files are open in GH, will they both be recalculated, or is only the current file recalculated?
  • Is it possible to pass data between files loaded into GH without recomputing them all?

The question about what occurs when a wire is connected is key to my strategy for breaking up this model.
I presume that the way GH computes a model is by starting at the first module and running down the wires from there, re-performing each calculation and storing geometry until it gets to the last module in every branch.

It seems logical that connecting a wire from module n will result in recalculating modules from n to the end of the chain.

The problem is that the delay caused by connecting a wire suggests that many other chains are being evaluated as well, or that GH is going through the entire model evaluating what needs to be recomputed.

This is what I am trying to understand to best break up the model.


That is a week’s work. There are hundreds of individual pt/crv/breps to be stored. I would like to have a better idea of the destination before I embark on that journey )

Also keep in mind that each pt/crv/brep needs to be read back as an individual item; that means 100+ inputs/outputs on the DATA INPUT/OUTPUT modules. That’s a module 3 feet tall on the canvas…

Is there some clever way to pack/unpack a number of items so that they can be read in by their label?
for example “pt.stairwell.corner1.floor4”
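
One way to get that kind of label-based lookup through a plain list channel (which is roughly what Data Output/Input gives you) is to write the dotted labels out as a text list parallel to the geometry list, then rebuild a dictionary on the reading side. A hypothetical sketch, not a GH API:

```python
# Pack a label -> item dictionary into two parallel lists for transfer,
# then rebuild the lookup on the receiving side. Labels and values here
# are made up for illustration.

def pack(items):
    """items: dict of label -> geometry. Returns two parallel lists."""
    labels = sorted(items)                 # fixed order keeps the lists in sync
    return labels, [items[k] for k in labels]

def unpack(labels, values):
    """Rebuild the label -> geometry lookup in the receiving definition."""
    return dict(zip(labels, values))

store = {
    "pt.stairwell.corner1.floor4": (12.5, 3.0, 14.0),
    "crv.facade.edge2": "curve-data",
}
labels, values = pack(store)         # these two lists go out via Data Output
recovered = unpack(labels, values)   # ...and come back via Data Input
print(recovered["pt.stairwell.corner1.floor4"])   # (12.5, 3.0, 14.0)
```

The label list is the “lookup table” discussed further down the thread: as long as it travels next to the data, any item can be addressed by name instead of by index.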

(Unfortunately, most of the Food4Rhino options have not been updated for R6 and are abandoned projects.)

I think the reality is that I must dump Telepathy and just write every pt/crv/brep to disk if it is to be used anywhere else in the project.

Possibly the simplest way to do this is to MERGE all the data, then use ITEM many times to separate it all out after the INPUT from disk.

If I use a CLUSTER to do this, the names will be preserved on the input and output sides at least, but I will have to insert a pt/crv/brep primitive and copy the name for each input so that I can delete the Telepathy send/recv afterwards. That’s hundreds of copy-paste operations and the removal of Telepathy as a workflow methodology, but in these days of SSDs it’s almost as fast, AND I get the flexibility of breaking a project into infinite parts.

The only downside I can see is the file location: when that changes, all the INPUT/OUTPUT modules would have to be updated. I suppose I could use a folder at the root of the C: drive and then a folder alias from the project folder, or something…

Ideally, the data would be saved with the file like a Rhino file, but it’s hard to complain about GH because it does what it does so well.

It’s just 2 components. No journey. It takes less time than writing those two long replies.

No. Plug a list of points into Data Output. Then, in the other GH definition, use Data Input to get that list of points.

You can then use List Item to select any point you need individually. Or, if you really want all of them simultaneously, you can graft the list and use Explode Tree.

Do this with all types of geometry that you have: brep/curve/surface.

You can also use Entwine to group all your data into one data tree, and then use the data-managing components to explode that tree and get back to the previous data structure.
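
A rough analogue of that Entwine / Explode Tree round trip, with plain nested lists standing in for data trees (illustrative only):

```python
# Bundle several separate lists into one structure for the trip through
# Data Output/Input, then split the bundle back into the original lists.

def entwine(*branches):
    """Merge any number of lists into one 'tree' (a list of branches)."""
    return list(branches)

def explode(tree):
    """Recover the individual branches from the bundled tree."""
    return tuple(tree)

pts  = [(0, 0), (1, 0)]
crvs = ["crvA", "crvB", "crvC"]

tree = entwine(pts, crvs)              # single payload to transfer
pts2, crvs2 = explode(tree)            # back to the original structure
print(pts2 == pts and crvs2 == crvs)   # True
```

The caveat from the rest of the thread still applies: the branches come back by position, not by name, so you must remember which branch held which data.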

Also, consider making the partitions in the places where you need the least amount of data transferred from one definition to the other.

I think I understood that…
If the data is recovered without its label, then somewhere there has to be a lookup table to remember the name of each data item. There is a lot of work in the naming convention )

Unless I missed something, the only way to have a named terminal on the INPUT module is to save each data item individually.

My CLUSTER is acting as the terminal-naming device while storing the data as a merged lump, in order, by flattening each merge input.

Like this

Where each side is a cluster

Grasshopper is not doing JIT compiling, but for obvious reasons it may not solve problems efficiently. Anyway, nobody said a designer or engineer shouldn’t use vanilla Grasshopper, and I’m not saying replace everything with a script. Just that scripting is an efficient way to reduce component usage and to increase reliability, performance and encapsulation. Those two professions are not done by dumb people; I never understood this argument. It’s quite common, but usually it just means something else. Same as the “distraction” argument and “no time”. It’s just about the right mindset. A lot of designers can write code, and nowadays almost any engineer can as well.

Because these questions don’t solve the initial problem. If the large canvas is your problem, then yes, a cluster helps greatly. But now you are talking about performance issues, and a lot of your assumptions are not correct; it doesn’t help at all if you are not willing to find a more direct way. I mean, maybe someone more skilled could refactor your definition into a quarter of the components used, without any clusters, scripts or data components. Who knows.

I don’t wish to seem ungrateful for your thoughtful response, but I really just want to know what happens when a wire is connected ))

When the state of an input changes, its component is marked as expired, i.e. it will be recalculated in the next solution, and all downstream components expire as well. As long as there are expired components, GH runs a new solution, in which only the expired components are recalculated, upstream first, to respect the dependencies.

However, if the source of an input was recalculated and gave the same value, the input will still expire its component unnecessarily. As far as I know, this is the only inefficiency in the GH expiration flow, but it is possible that clusters work worse and their whole internal definition is recalculated instead of keeping the last state of the already-calculated components; of this I am not completely sure.
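
The expiration flow described above can be sketched as a toy dependency graph (this is only an illustration of the idea, not GH’s actual engine; all names are made up):

```python
# Toy model of Grasshopper-style expiration: changing one input expires that
# component and everything downstream of it; the next solution recomputes
# only the expired components, in upstream-first (topological) order.

from collections import defaultdict

class Graph:
    def __init__(self):
        self.downstream = defaultdict(list)   # component -> dependents
        self.expired = set()
        self.computed = []                    # what the last solution recomputed

    def wire(self, src, dst):
        self.downstream[src].append(dst)
        self.expire(dst)                      # connecting a wire expires the target

    def expire(self, comp):
        if comp in self.expired:
            return
        self.expired.add(comp)
        for d in self.downstream[comp]:       # propagate expiry downstream
            self.expire(d)

    def solve(self, topo_order):
        # recompute only expired components, in dependency order
        self.computed = [c for c in topo_order if c in self.expired]
        self.expired.clear()
        return self.computed

g = Graph()
g.wire("A", "B"); g.wire("B", "C"); g.wire("B", "D")
g.solve(["A", "B", "C", "D"])          # first solution after wiring up
g.expire("B")                          # e.g. a value feeding B changed
print(g.solve(["A", "B", "C", "D"]))   # ['B', 'C', 'D'] -- A is untouched
```

This also shows why connecting a wire mid-definition costs something: the target and its whole downstream chain expire and must be re-solved, even though everything upstream keeps its cached result.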


Respecting that at this time you do not want to learn to program, it really is the solution to your problem. The world of design and art has been using programming as a tool for several decades; its practitioners are often called creative coders, procedural designers or generative artists. Much depends on the design niche you are in or your specialty, but just as learning GH amplifies your creative range beyond pure Rhino, programming amplifies it by several orders of magnitude beyond GH. And learning to program while knowing GH is a third of the way there: not like learning to speak again, but like learning another language, one in which instead of using input-process-output capsules you use variables, loops, conditionals, objects, and also functions with the same input-process-output structure.

It takes one or two weeks to become familiar with the syntax and the way of working if you are a typical self-learner. And from day one, with good information-search skills, you can learn to program 3D graphics, and only leave this domain when you need to, where the really big beasts start to appear. That’s how I learned, and thanks to that, my GH level rose another notch once I knew the basics. For these things, for creating plug-ins and more, the designer who is not limited by the tools of others will always have an advantage.