Struggling with GH Definitions – Design Principles or Best Practices?

Hi everyone,

I’m finding myself struggling a bit with Grasshopper from a design perspective. I often invest quite a bit of time creating large definitions for specific applications (mainly laser cutting), but I notice there’s little reuse between projects. There’s almost always some difference in geometry, logic, or output that ends up requiring extensive re-tweaking of the definition.

This has me wondering – are there any established design principles or best practices for structuring Rhino/Grasshopper scripts? Would it be better to focus on smaller, more robust definitions that can be reused and combined?

Can anyone point me to guidance, resources, or even examples on how to approach this more systematically?

Thank you!

I am a software architect, not a CAD/manufacturing professional, so take my answer sceptically.

Always factor for the lowest common denominator. Create functions (units of functionality) that do one thing well, then build on these to do more work. In my VERY limited knowledge of Grasshopper (I just got Rhino last week and am starting to go through some basic Grasshopper courses), this is just “programming”.

So if I am not wrong: build small tasks/functions, chain them together into more complete tasks, then make those usable by giving end-users clear inputs, and you should be good to go.
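A plain-Python sketch of that “small units, chained” idea (the function names are made up for illustration; this is not any real Grasshopper API):

```python
# Small, single-purpose "components" (illustrative names only):
def offset_x(points, dx):
    """Shift every 2D point by dx along X."""
    return [(x + dx, y) for x, y in points]

def mirror_y(points):
    """Mirror 2D points across the Y axis."""
    return [(-x, y) for x, y in points]

# A larger task built by chaining the small units:
def mirrored_offset(points, dx):
    return mirror_y(offset_x(points, dx))
```

Each unit is trivially testable on its own, and the chained function stays readable because its parts are.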

1 Like

the most difficult thing that usually happens to me is the time that elapses before I need to re-use a particular definition :slight_smile:
I was very, very bad at leaving hints and good descriptions of what each portion of a definition was doing, and that didn’t much help the “future me” understand what each piece of the puzzle was supposed to accomplish

I believe I have improved over time, but probably the best advice I can give you is this: a bad, poorly structured, overcomplicated definition that is very well tagged and described will probably be more helpful/reusable than a slender definition where you don’t really know what exactly it is doing, or where you can’t identify what is where. Two months from now, when your internal memory has reset, you won’t recall anything whatsoever about that particular definition

consider that my typical client wants the fish and doesn’t give a heck about the fishing rod… he doesn’t want to be able to solve the same problem in the future, he doesn’t even want to be able to solve it AT ALL: he just wants a solution NOW (actually, he needed it yesterday, as he says… as always: this is an EMERGENCY)
so, generally speaking, my main problem is almost always the available time, and the ability to provide solutions fast

consider that the following are very personal choices, and probably not everyone will agree with all of them: it also depends on the field you work in, how complex your typical definitions are, and how many different people are going to work with them…
I’m a one-man team, so I have no one else to blame :slight_smile:

this said, if I had to list a few of those that I consider the best practices in gh, I would say:

  • first and foremost: bind the radial-menu option to spacebar by editing the .xml file
  • always structure the definition as if your main inputs carried data trees with many branches, even if they contain a single item
  • never Flatten (in case, use Shift Paths)
  • never Simplify (in case, use Shift Paths) {Do as I say, not as I do :slight_smile: I still use Simplify a lot}
  • never connect multiple wires to the same input with Shift+drag; use a Merge component instead, even for obvious stuff (multiple wires + flatten = summons the devil)
  • never rename components or inputs/outputs, and [almost] never embed input expressions; instead, make those explicit in a dedicated Expression component (nowadays I make even x-1 explicit)
  • scatter here and there Parameters as mere data-containers, to create some sort of checkpoints along your definitions, group those alone, and name groups with descriptive text like “section curves: branched by +Z and sorted by +length” (I shorten ascending/descending with +/-)
  • don’t make groups with more than just a few components (I mean like 5 components is already a lot for me, I do that only when they are obvious stuff like Construct Domain + Remap / Range and similar stuff)
  • never double-group components; use Scribbles and spacing to create visual separation if/when needed (Scribbles are difficult to move: you can group each one alone to have some more “meat” to click on even when zoomed out; to move Scribbles together with the components they refer to, you might need to temporarily group them together)
  • Scribbles have a slider to control their size: use it to your advantage to create different sections in your definition
  • if you insert a slider, people will play with it: if the value is a constant, then put it in a Panel
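The “always structure for data trees” bullet can be mimicked in plain Python. In this toy model (my assumption, not a real GH structure), a tree is a dict mapping branch paths to item lists; code written per-branch then handles one branch or many without any Flatten:

```python
def map_tree(tree, fn):
    """Apply fn to every item, preserving the branch structure."""
    return {path: [fn(item) for item in items]
            for path, items in tree.items()}

# One branch with a single item, and a multi-branch tree,
# go through the exact same code path:
single = {(0,): [5]}
many   = {(0,): [1, 2], (1,): [3]}
```

Because nothing in `map_tree` cares how many branches exist, the same logic keeps working when an input that “usually” has one branch suddenly has fifty.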

I might edit this with more entries… at the moment this is all that comes to my mind :slight_smile:

7 Likes

GH data trees trip up everyone. Watch this: (and understand it :bangbang:)

7 Likes

Thanks a lot for these insights!

the most difficult thing that usually happens to me is the time that elapses before I need to re-use a particular definition :slight_smile:

This. I often barely recognize the definitions I made a month ago.

Another issue is that my Grasshopper definitions often break when used with different geometry—point or edge lists end up in a different order, or surfaces lie on different planes.

I hardly use any of the principles mentioned above, it’s time I start incorporating them.

Many thanks!

2 Likes

I stand by @Matt_Runion’s and @inno’s principles (with very few differences) and Andrew Heumann’s video posted by @Joseph_Oster, and I usually summarize those principles in 3 keywords: robust, readable, modular.

A definition should be as robust as possible, meaning it does not break under the widest possible variety of cases. One way of doing this is to use implicit data in place of explicit data as much as possible: for example, if you want the direction normal to a surface lying in the XY plane, use the surface normal instead of the global Z vector; this way, if you use the code for a surface with a different orientation, it will still work (more robust). Of course, the degree of robustness also depends on the use case; it’s a trade-off.
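The surface-normal example can be sketched with bare tuples standing in for Rhino vectors (illustrative only; real code would use RhinoCommon types):

```python
def cross(a, b):
    """Cross product of two 3D vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_normal(u_axis, v_axis):
    # Implicit data: the normal is derived from the surface's own
    # axes, so it stays correct whatever plane the surface lies in.
    return cross(u_axis, v_axis)

WORLD_Z = (0.0, 0.0, 1.0)  # Explicit data: only right for XY-plane surfaces.
```

For a surface in the XY plane the two coincide, but for a tilted surface only the derived normal stays correct; that is the robustness gain.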

A definition should be readable in hierarchy and visual structure at different zoom levels: at zoom extent scale one should be able to recognize a structure of macro parts, then zooming in should progressively reveal the smaller scale structures until you get to the component level.

In terms of readability, I can add that cable arrangement can make a lot of difference, and I suggest using SnappingGecko.

I treat groups as if they were “exploded clusters”, using relays as clear inputs and outputs, and colour-coding them as well. No number of components is too small for me to form a group; it depends on the importance and modularity of the operation, and I do nested groups as well (but this is more personal preference, I guess). I found this helps with modularity and readability at once. Check also this post.

For modularity, I save those groups as individual .gh definitions and/or copy-paste them into text files (GH is XML), then save those text files and copy-paste their content back into GH when needed.

For comments and notes, I use scribbles for titles and/or short, structured (i.e. bullet lists) tips, and use panels in case a longer text paragraph is needed. You can define “text styles” for scribbles by saving preformatted scribbles as User Objects and call them into the canvas with a given nickname. They also help a lot with that recursive readability I was mentioning above (first you read the big titles, then the secondary ones, etc.).

4 Likes

@inno,
1.
what are your thoughts on using general containers like ‘data’ instead of dedicated components such as ‘int’ or ‘float’? Similarly, would you use ‘geometry’ over more specific types like ‘brep’ or ‘mesh’?

To keep things tidy in my scripts, I usually follow the data format of the dataflow—for instance, if the input is a mesh or a surface, I stick to those specific nodes. However, when I want to reuse parts of the script, I often encounter issues where the same logic needs to handle a ‘brep’ instead, and I end up redoing sections of the definition. The same happens with floats and integers.

The plane orientation example by @ale2x72 is a good illustration of this challenge. I presume that using dedicated nodes might be faster than generalized ones, although I could be wrong here. Also, I wonder how node casting checks (like testing input compatibility or format conversion) might play into this decision—are they worth incorporating to avoid issues down the line?

I’m curious to hear how others approach this—do you prioritize flexibility by using general containers, or do you stick with more specific components for clarity, performance, and casting reliability?

2.
Like most, I’ve used various ways to organize and document Grasshopper definitions, but I’m wondering if there could be a more unified approach to improve the workflow. Currently, the only methods are some combo of Scribbles, Panels with text, labeled groups, Clusters, and personalized group color schemes. While these solutions work, they can feel a bit scattered, and each user tends to develop their own system, which can lead to inconsistencies in collaborative settings or when sharing files here.

Wouldn’t it be great if someone clever at McNeel (or David Rutten himself) came up with a more unified implementation for documentation/comments? For example, built-in tools (AI-supported) that allow standardized tagging, group-logic visualization, or even predefined color schemes could greatly enhance clarity and reusability.

1 Like

I see Parameter usage as personal preference; in my case, if the container is along the flow of the definition, I tend to prefer parameters of the same specific type as what they’re going to contain, mainly to save a word in their tags

if it’s the final output, ready to be baked, I tend to use generic Data containers and over-describe their content in detail, because often I’ll just be looking at that text, and I want to know as fast as possible all the details about the content: its type, how it’s sorted, how it’s branched…

the final output that makes you happy mostly depends on the way you are going to bake that data back to Rhino: consider that even though Rhino 8 and the new set of gh components have already been around for a while, I have only recently started to really implement those in my workflow (due to lack of time to explore them the proper way…) but they are definitely gamechanging: they really do allow you to completely rethink the way you structure a definition from its very beginning, and also allow for an incredible amount of data extraction that earlier was only possible if you had some coding skills and a good understanding of RhinoCommon

casting stuff, in the context of gh, is a very delicate topic about which we could write a whole book :slight_smile: I personally assume casting will not work and tend never to cast something to something else; instead I use the dedicated components to get to the very same, but reliable, result, the explicit way… well, to be honest, maybe the only time I still sometimes cast stuff is Integers into Booleans, instead of using the Greater Than component with a zero, but I’m slowly losing that habit
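That integer-to-boolean example has a concrete failure mode, which a plain-Python sketch can show (Python’s truthiness stands in for GH’s implicit cast here; this is an analogy, not GH’s actual casting code):

```python
def is_positive_explicit(n):
    # The explicit route: a comparison against zero, like wiring
    # the value through a dedicated greater-than component.
    return n > 0

def cast_to_bool(n):
    # The implicit route: a straight cast, like feeding an
    # Integer parameter directly into a Boolean input.
    return bool(n)
```

The two agree for zero and positive values but disagree for negatives: `cast_to_bool(-3)` is `True` while `is_positive_explicit(-3)` is `False`. That silent divergence is exactly the kind of surprise the explicit component avoids.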

1 Like

I give you 10 ordered programming principles which work for me in most cases:

1.) KISS → “Keep it stupid simple”, because
2.) YAGNI → “You aren’t gonna need” the extra features/abstractions in most cases,
3.) DRY → even when you “don’t repeat yourself”, general-purpose components are not fully applicable in most cases. You are often wasting your time on over-abstractions; but of course, when you identify a common operation, replace it with a custom component (or cluster, or group of components)
4.) SRP → You should divide definitions/code into “Single Responsibility” units. Be careful: it does not necessarily mean a unit does only one thing, but you should be able to explain in 3 short sentences what a group of components is actually doing.
5.) OCP → You should create a definition which is “Open for extension, but Closed for modification”. This means each SRP unit can be replaced by another version of it, but you should not modify it. Each unit …
6.) LSP → …should be tested in isolation and simply substituted at the right place in the definition. This requires:
7.) ISP → “Interface segregation”, which basically describes a clear input and output boundary of your units within the definition. Keep it simple, maintainable and reduce the amounts of parameters. In GH, parameter reduction is an underestimated task. Smaller interfaces are better
8.) DIP → “Dependency inversion” means that your units should not rely on each other. You want encapsulation of your units. In GH this is less applicable; it basically means you only pass data via simple data types and do not expect any preconditions from other components.
9.) Samurai → In your units, assert the plausibility of your inputs and computations. Immediately stop the computation (=“kill yourself”) if your assumptions are not fulfilled. This saves you from chasing null states upstream and prevents other weird computational errors.
10.) DIY* → “Do it yourself” (*if feasible). Plugins and Rhino functions are extremely mature and feature-rich. However, every dependency (even on Rhino) will change over time. There is nothing as bad as a definition you try to run 2 years later where nothing works, because your plugins don’t work anymore or Rhino has changed its programming interface. It’s a trade-off to use 3rd-party code. You only want it for the important tasks you cannot solve on your own.

4 Likes

It will echo what has been said above, but:

  • Annotate (scribbles/panels; or sometimes I group an object alone and give the group a name, so I know the type of data from the GH component logo, and I know which object it is thanks to the name)
  • Maybe use a systematic approach for scripting: I came up with this simple structure that I use consistently in my files, so when I open an old one I know how it works:

Input set by user (selection in Rhino) = Light blue (circle groups for geometry)
Inputs from the script (resulting of an output at some point) = light purple (rounded rectangle)

I think I have seen people automating their systematic approach, with more colors and smarter ways. I have to admit I kept a very simple way of doing it, but it keeps everything readable, even for future me.

Here is an example :

2 Likes