Best Practices for Maintaining Nested GH Definitions

Hi everyone,

I’m currently facing a challenge maintaining a large Grasshopper script that relies heavily on nested clusters. Managing these clusters has become cumbersome, especially when the same cluster is duplicated at different nesting levels and the copies drift out of sync.

To streamline this, I’ve considered exporting clusters to separate .ghcluster files, which seems to offer some modularity and maintainability. However, I’m uncertain how to preserve this “architecture” when deploying the Grasshopper script for computation on a remote instance with rhino.compute. Note that I could also use Hops definitions instead of clusters; the situation is essentially the same in my case.
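For context, here is roughly what the Hops route would look like on my side, using the CPython flavor (Flask + ghhops_server) to expose a piece of logic that a Hops component in the parent file references by URL instead of embedding a cluster. This is only a minimal sketch; the endpoint path, component names, and the trivial math are placeholders standing in for one of my nested clusters:

```python
from flask import Flask
import ghhops_server as hs

app = Flask(__name__)
hops = hs.Hops(app)

@hops.component(
    "/offset_sum",  # the Hops component in the parent definition points at this URL
    name="OffsetSum",
    description="Placeholder standing in for one of my nested clusters",
    inputs=[
        hs.HopsNumber("A", "A", "First value"),
        hs.HopsNumber("B", "B", "Second value"),
    ],
    outputs=[hs.HopsNumber("Sum", "S", "A + B")],
)
def offset_sum(a, b):
    # Trivial example logic; the real cluster would do much more here.
    return a + b

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The appeal of this pattern (or the .ghcluster equivalent) is that each piece lives in its own file or endpoint, which is what I’d like to keep when everything moves to rhino.compute.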

My primary questions are:

  1. Is it possible to reference external file definitions for clusters or Hops components without internalizing them, so that they keep their modularity and version control, while the parent Grasshopper script is sent for computation on a remote instance with rhino.compute?
  2. If the answer to 1 is yes:
    Should the linked definitions for clusters or nested Hops components live on the caller or the callee machine? If they live on the caller machine, will they be passed automatically along with the main file’s request? If they live on the callee, where should they be located? (A sketch of how I imagine the compute call is included after this list.)
  3. If the answer to 1 is yes:
    Is it possible to store these linked definitions on git platforms like GitHub, allowing for version tracking and seamless updates to the nested cluster/Hops definitions?
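To make question 2 concrete, here is roughly how I imagine the caller-side call using the compute_rhino3d Python client. This is only a sketch under my current assumptions: the URL, API key, the input name "RH_IN:radius", and the file name parent.gh are placeholders, and I’m precisely unsure how any .ghcluster files linked from parent.gh would be resolved in this flow:

```python
import compute_rhino3d.Util
import compute_rhino3d.Grasshopper as gh

# Point the client at the rhino.compute instance (placeholder URL and key).
compute_rhino3d.Util.url = "http://localhost:6500/"
compute_rhino3d.Util.apiKey = ""

# One input tree for the parent definition; "RH_IN:radius" is just an
# example of an input exposed by the parent file.
tree = gh.DataTree("RH_IN:radius")
tree.Append([0], [5.0])

# Only parent.gh is referenced in the request. As far as I understand,
# the client either embeds the file contents or passes a pointer that the
# server resolves, which is exactly why I don't know where the linked
# .ghcluster / Hops definitions should live.
output = gh.EvaluateDefinition("parent.gh", [tree])
print(output)
```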

Any insights, best practices, or experiences shared would be greatly appreciated!

Thank you a million times for your time and help already.


+1 I’m interested in knowing this as well, as I start to break up a 9,000+ component definition into more manageable pieces. :upside_down_face: