Could Grasshopper2 improve cognitive management through a dedicated component?

This post stems from a need I feel when working with complex Grasshopper designs. I apologize in advance if this subject has been dealt with before or if a solution already exists.

In my case, a complex design is one with many linked Grasshopper components where, at a certain stage, I forget the geometrical meaning behind a component. The geometrical “meaning” is not its definition (for instance circle, or list, or …) but its role within the whole design. For instance, a “Merge” component might mean “the unit shelf of the library”.

To compensate for my poor memory of component meanings, I tend to add:

  • a “Scribble” component, grouped with the component, stating the literal goal of the component (“unit shelf of the library”)
  • a “Panel” component, where I output the component’s data, in order to grasp its meaning
  • an image of the geometrical output of the component in Rhino, captured at some stage of development

None of these cognitive scaffolds is satisfying:

  • The literal meaning demands a cognitive translation from the text to the geometrical meaning.
  • The image is frustrating, since you need to capture the output in Rhino and then paste it into Grasshopper. Also, the image represents the output at one stage of development; you do not see an up-to-date geometrical translation.

How could a dedicated Grasshopper component communicate the meaning of a related component directly to my brain, graphically?


I don’t know. A combination of Group and Scribble as a single object would be handy. Perhaps you’d like to attach a recording of your own voice saying “Shorten lines to avoid overlaps” every time the mouse moves across the group? Do you want to associate a custom-drawn image with a set of components?

Whatever the solution ends up being, it’s going to be either text, graphics or audio, so where does that leave us? Discuss.

In my opinion, the optimal cognitive channel is graphical. I was expecting a graphic window showing the output of the component.

Could you tell a line has been shortened slightly just by looking at it? What if the group you’re trying to document contains non-previewable data? What if the purpose of a group of components isn’t to change the data itself, but to re-organise it within the data tree?

In this case, I would expect the graphic component to have a “selected” input for the shortened line (the line would be displayed in green, just as an object selected in Grasshopper is displayed in Rhino).
The component would also have “environment” inputs (0…n); the environment would be drawn in Rhino with the default style.

When a component reorganizes the data tree, the graphic output doesn’t change. From my cognitive point of view, tree reorganization is intuitive enough that it doesn’t need a graphical translation.

This REALLY interests me. I’m currently studying parametric tools for early design stages, and I did this kind of thing to keep track of the latest definitions for my research and also at the office (where I have to build definitions for other colleagues to use).

Not long ago I found this paper by Davis, Burry and Burry. I don’t know if any of them are around here, but it addresses the topic.

I don’t know if it is feasible, as it is a completely different approach, but… could GH components be highlighted when the mouse is over the geometry they produce in the Rhino viewport, or vice versa, could the geometry be highlighted in Rhino when its component is moused over on the GH canvas? Like a lighter version of the state change on selection.


Let’s not worry too much about practical limitations. Even if something isn’t possible now, we might be able to make it possible. After all, we have full control over both the Rhino and Grasshopper source.

The only thing I’m really not interested in is a feature that would put a huge amount of additional work/responsibility on plug-in developers. I don’t want to make their lives harder than necessary. If it’s just a large amount of work for me and other McNeel employees, I’m happy to discuss it.


Indeed, the paper describes the need. They point to modular programming (which is a good idea); I am pointing here to a different solution.

On managing my poor memory of component meanings, there is something I forgot to mention. When I go back to a component to understand its meaning, I usually repeat the same steps:

  • Memorize the current preview context
  • Select all the components on the GH canvas, then “Disable Preview”
  • Select the component of interest, then “Enable Preview” (enabling the preview of other components if needed)
  • Look at the result in the Rhino viewport and grasp the meaning
  • Set the preview context back to its initial state
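For what it’s worth, the routine above can be sketched in plain Python. `Component` and `Document` here are hypothetical stand-ins, not the real Grasshopper SDK objects; the point is only to show the save/isolate/restore logic that a dedicated component would automate.

```python
# Hypothetical sketch of the manual preview routine described above.
# Component/Document are stand-ins, not the Grasshopper SDK.

class Component:
    def __init__(self, name, preview=True):
        self.name = name
        self.preview = preview  # is this component previewed in Rhino?

class Document:
    def __init__(self, components):
        self.components = components

    def snapshot_preview(self):
        # Step 1: memorize the current preview context.
        return {c.name: c.preview for c in self.components}

    def isolate(self, *names):
        # Steps 2-3: disable preview everywhere, then re-enable it
        # only for the component(s) of interest.
        for c in self.components:
            c.preview = c.name in names

    def restore_preview(self, snapshot):
        # Step 5: set the preview context back to its initial state.
        for c in self.components:
            c.preview = snapshot[c.name]

doc = Document([Component("Merge"), Component("Circle"), Component("Loft")])
saved = doc.snapshot_preview()
doc.isolate("Merge")        # step 4: inspect only "Merge" in Rhino
doc.restore_preview(saved)  # back to where we started
```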

The idea of the component would be to capture those operations. I can now pin down some proposed requirements for a “Preview Context” component:

  1. The “Preview Context” component shall have no effect on the geometric content
  2. The “Preview Context” component shall memorize a preview context
    2a. The “Preview Context” component shall have inputs that are components with preview enabled
    2b. The “Preview Context” component shall have, among the above-mentioned inputs, one specific input that is “selected”
  3. The “Preview Context” component shall memorize the elements of the view (point of view, center, zoom level, others?)
  4. The “Preview Context” component shall display the preview context, with the “selected” entry displayed in green
  5. The “Preview Context” component shall provide an action: when activated by the user, the component is computed from the current Rhino viewport
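A minimal sketch of how requirements 2–5 might hang together, again in plain Python. `PreviewContext`, its field names, and the `viewport` tuple are all hypothetical stand-ins, not the Grasshopper SDK:

```python
from dataclasses import dataclass, field

@dataclass
class PreviewContext:
    # Req 2b: one specific "selected" input.
    selected: str
    # Req 2a: 0..n "environment" inputs with preview enabled.
    environment: list = field(default_factory=list)
    # Req 3: elements of the view (placeholder representation).
    camera: tuple = (0.0, 0.0, 0.0)
    target: tuple = (0.0, 0.0, 0.0)
    zoom: float = 1.0

    def display_colors(self):
        # Req 4: selected entry in green, environment in the default color.
        colors = {self.selected: "green"}
        colors.update({name: "default" for name in self.environment})
        return colors

    def capture(self, viewport):
        # Req 5: user-triggered action reading the current Rhino viewport
        # (here just a (camera, target, zoom) tuple).
        self.camera, self.target, self.zoom = viewport

ctx = PreviewContext(selected="Shortened Line", environment=["Context Walls"])
ctx.capture(((10.0, 5.0, 3.0), (0.0, 0.0, 0.0), 2.5))
```

Requirement 1 is implicit: the object only stores display state and never touches geometry.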

For req 4, the display could be a resizable Rhino view visible on the GH canvas. Another solution would be that selecting the “Preview Context” component automatically sets the Rhino viewport according to the settings memorized by the component; deselecting it goes back to the initial preview context.

That would help.

That is the modular programming approach described by you and by Davis et al. in the paper.

Why not use the ‘Selected Only’ preview option on the canvas toolbar?

For Grasshopper1, Metahopper by @andheum has some neat features for automated structuring/labelling (including “Best Practicize Selection” and “Label Selected Groups with Scribbles”).

I find that a graphical representation on its own is not enough after a long hiatus. A graphical representation paired with a textual addendum I find more useful. Even more so when more people are involved: what is obvious to one person isn’t to the next, and we all interpret symbols differently, especially as their number increases.


True. There’s a reason why we don’t use hieroglyphs. Icons are only an attempt to force us back to ancient history. :wink:

But of course, a combination of both (hieroglyphs and text) is the way to go.

Preview Future

You may have heard this name before. Anyway:

“Change the future,” said Bret Victor in this clip (5 min). Watch from the start, but wait for it; the interesting part runs from 2:00 to 3:xx. Don’t skip it just because of the code on display. This may give ideas about an “auto-preview” or “anim-preview” of a component or of part of the diagram.

Bret on "Understanding Systems"

“Understanding systems” is the subject, right? Thinking the unthinkable. Watch this clip from the start too, although the peak is from 5:00 plus a few minutes after that.

Edit: See also, at 22:00, the part about searching and studying the behaviour: h_ttps://
// Rolf

(This clip should contain both examples, and more.)

Rhino 7 & GH2 disclosed

Accidentally, Bret Victor disclosed (at 24:53) the inner secrets of Rhino 7 & GH2 and, note, the Help files that come with it. Ahem.

// Rolf

Thank you, I did not know the feature. I just tried it and it helps a lot. I will use it in the future.
However, the “Selected Only” preview is a graphical representation that requires user interaction. The “Preview Context” component described above is a representation without user interaction; it is therefore more like a Rhino preview window on the GH canvas.

Quite often I see people inspecting data using the Text Panel, and then (horror) using the output of the panel to continue with the data. Clearly this is what people expect to be able to do, and the fact that it’s a really bad idea in GH1 is therefore a kind of bug. I had the idea of creating a proper Data Viewer object which wouldn’t just display a summary textual description of the data flowing through it, but would actually display the data in useful detail. For text that would involve displaying whitespace characters; for numbers, perhaps both rounded and round-trip formats; for geometric types, a 3D preview.

It sounds like this is something quite close to what you’re after, but maybe not exactly the same.
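A rough sketch of the per-type formatting idea, as a hypothetical Python function (not GH2 code): whitespace made visible for text, and both a rounded and a round-trip form for floating-point numbers.

```python
def describe(value):
    """Hypothetical per-type Data Viewer formatting."""
    if isinstance(value, str):
        # Make whitespace characters visible: middle dot for spaces,
        # escape sequences for tabs and newlines.
        return (value.replace(" ", "\u00b7")
                     .replace("\t", "\\t")
                     .replace("\n", "\\n"))
    if isinstance(value, float):
        # Rounded display plus the exact round-trip representation.
        return f"{value:.3f}  ({value!r})"
    return repr(value)

print(describe("a b\tc"))    # spaces and tabs become visible
print(describe(0.1 + 0.2))   # rounded vs. round-trip
```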

Quick mockup:


Yes, I was expecting something like that. At first I wasn’t expecting anything for real numbers or matrices, but that is certainly useful. (Sometimes the text panel is cryptic if your bionic brain is under repair.)

That is what I expected! Bravo! The mockup fits the need. I expect you will need to find a way to store the 3D preview parameters in the component (point of view, zoom relative to the object of interest, …).

Thank you, a very interesting presentation. One major difference I see with the “3D preview context” is that Victor displays all the stages of his design strategy.

My need is to display only some stages: the complex ones where I need to grasp the meaning again. In the end, I expect all the “3D preview contexts” to update simultaneously when parameters are changed, as in the video.


It’s unclear at the moment whether it makes more sense to use Rhino to render the geometry or to draw it directly in Eto. Or indeed whether to draw a single fixed view, a stately rotating view, or a fully navigable viewport. Note that RMB+drag already means panning in Grasshopper and Scroll means zoom, so giving those actions a different meaning based on where exactly the mouse pointer is would be a mistake in my opinion.

Please excuse my ignorance, but what does “Eto” mean?

Also, what does “RMB+drag” mean?