Streamlining the process of compiling multiple GHPY components

Hi all,

I'm uncertain about how to transition from a development workflow with open source code to a workflow with compiled components. How are others achieving this?

I lay out my precise question below under "HERE is where I am currently stuck!", but first I've shared how I'm currently developing and laid out two potential paths forward for context.

Option 1 will definitely work, but it is quite manual: I would repeat the process for each of the 20 components on a canvas, compile each one, and manually replace the .ghpy files as needed.

Option 2 is less certain, but I think it can be automated. It requires significant changes to the source Python code to ensure it can be compiled. I've hit some road bumps with Latin-1 encodings and with batch-processing multiple components.

Current Workflow:

  • The Grasshopper canvas has 20-ish Python components that point to .py files using the pattern in the image / minimal example below. (my previous post on setting this up)

  • The components run in the Procedural Script mode, not GH_Component SDK Mode.

  • Code is distributed directly using the same canvas.

  • Example of the pattern & files (you will need to re-link Path 1 and Path 2 to the two attached .py files, respectively, in order for this to run): (14.5 KB) (3.4 KB) (3.4 KB)

  • I prepare for distribution by tapping the ‘minus sign’ on the ‘code’ input parameter. That stores the last version of the Python code which was read inside the component, and breaks the connection with the .py files that I actively edit in Sublime Text.

This process works great… but has these disadvantages:

  1. The source code is open
  2. A 30-60 minute manual process of preparing a canvas for distribution.

Option 1 - Manual workflow

This essentially follows the steps in Giulio's tutorial: manually adjusting ghenv to self, and then also manually adjusting the documentation strings. This will be a pain in the butt to do for each individual component every time I distribute.

Option 2 - Automated workflow

  • The ‘development’ components always run in the GH_Component SDK Mode, so that I can run a batch-export process for the compiled components at any time.

  • I have two separate canvases, one for ‘development’ (above) and one for ‘distribution’.

  • The ‘development’ canvas continues to link to the source .py files directly. (Same pattern as above)

  • The ‘distribution’ canvas is built out of compiled components. Code in these components updates each time we update the GH Canvas. I only need to edit the ‘distribution’ canvas when I change the number of input or output wires on a component or add new components.

  • Here are example files: (13.1 KB) (4.3 KB) (4.3 KB)

  • I am trying to batch-process the export of these files using this canvas, drawing on code in Giulio’s example:

    Here is the code for that: (40.5 KB) (4.3 KB) (4.3 KB)

  • @piac HERE is where I am currently stuck!.. When I use this code to disconnect all the relevant 'code' and 'out' parameters, I receive an error saying the class no longer derives from component.

      if Toggle:
          for obj in ghenv.Component.OnPingDocument().Objects:
              if str(type(obj)) == "<type 'ZuiPythonComponent'>":
                  if obj.NickName in NamesOfComponentsToPrep:
                      if obj.Params.Input[0].Name == 'code':
                          pass  # disconnect/remove the 'code' input here
                      if obj.Params.Output[0].Name == 'out':
                          pass  # disconnect/remove the 'out' output here
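
    For what it's worth, here is a hedged sketch of how the removal step itself might look with the Grasshopper SDK once the right components are found. It is untested outside Rhino, and I am assuming `UnregisterInputParameter` / `UnregisterOutputParameter` / `OnParametersChanged` on `obj.Params` are the right calls for this:

    ```python
    # Sketch only: assumes it runs inside a GhPython component in Rhino,
    # with Toggle and NamesOfComponentsToPrep defined as inputs.
    if Toggle:
        for obj in ghenv.Component.OnPingDocument().Objects:
            if str(type(obj)) == "<type 'ZuiPythonComponent'>":
                if obj.NickName in NamesOfComponentsToPrep:
                    code_param = obj.Params.Input[0]
                    if code_param.Name == 'code':
                        code_param.RemoveAllSources()  # drop the wires first
                        obj.Params.UnregisterInputParameter(code_param)
                    out_param = obj.Params.Output[0]
                    if out_param.Name == 'out':
                        obj.Params.UnregisterOutputParameter(out_param)
                    obj.Params.OnParametersChanged()   # refresh the component UI
                    obj.ExpireSolution(True)
    ```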

Next Steps:

  • Assuming it is possible to automatically disconnect the 'code' and 'out' parameters, then I want to…

  • write out the strings that come from Giulio’s component to .py files

  • Compile those with another component that runs code similar to this (I have tested it already, and that works):

    import clr
    outputPath = FolderPath + "AlexanderCompiledTool6.ghpy"
    inputPath = FolderPath + ""
    clr.CompileModules(outputPath, inputPath)
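
    Since clr.CompileModules accepts the assembly path followed by any number of source files, the batch step might be a single call over the whole folder. A sketch (IronPython inside Rhino; the folder scan and output file name are my assumptions, not tested code):

    ```python
    # Sketch: batch-compile every .py file in FolderPath into one .ghpy
    # assembly. Assumes FolderPath is an input to the GhPython component.
    import clr
    import os

    source_files = [os.path.join(FolderPath, name)
                    for name in os.listdir(FolderPath)
                    if name.endswith(".py")]
    output_path = os.path.join(FolderPath, "AlexanderCompiledTools.ghpy")

    # CompileModules takes the assembly path followed by any number of
    # source files, so one call can cover all 20 components.
    clr.CompileModules(output_path, *source_files)
    ```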

(I’ll tackle the distribution in a separate post later… but the GHPY files will either 1. be distributed via an internal Yak server or 2. via Microsoft Sharepoint, whereby our IT department pushes Rhino settings that point Grasshopper to check the SharePoint folder for plugins)

Summary of manual edits to source code between ‘Procedural’ and ‘SDK’ modes:

  • ghenv.Component must change to self for the compiled code i.e.:

    warning = gh.Kernel.GH_RuntimeMessageLevel.Warning
    message = 'Some warning message'
    ghenv.Component.AddRuntimeMessage(warning, message)

    needs to be this:

    warning = gh.Kernel.GH_RuntimeMessageLevel.Warning
    message = 'Some warning message'
    self.AddRuntimeMessage(warning, message)
  • All python files use Latin-1 characters, so this must be added to the top of each .py file
    # coding=Latin-1

  • I have been using aliases for Grasshopper and RhinoCommon… so I am simply running the import twice, i.e. SDK mode requires these import statements:

    from ghpythonlib.componentbase import executingcomponent as component
    import Grasshopper, GhPython
    import System
    import Rhino
    import rhinoscriptsyntax as rs

    whereas my current code uses these import statements:

    import Grasshopper as gh
    import Rhino as rc

    which means that Grasshopper and Rhino are being imported twice…
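
The two mechanical edits above (ghenv.Component → self, plus the encoding declaration) could be scripted rather than done by hand. A minimal sketch in plain Python, assuming the source files are otherwise SDK-ready (the function names are mine):

```python
import io

def prep_for_sdk(source):
    """Rewrite procedural-mode GhPython source for SDK-mode compilation:
    swap ghenv.Component for self and prepend the encoding declaration."""
    out = source.replace("ghenv.Component", "self")
    if not out.startswith("# coding="):
        out = "# coding=Latin-1\n" + out
    return out

def prep_file(in_path, out_path):
    # Read and write as Latin-1 so accented characters survive the round trip.
    with io.open(in_path, encoding="latin-1") as f:
        src = f.read()
    with io.open(out_path, "w", encoding="latin-1") as f:
        f.write(prep_for_sdk(src))
```

This could run as a pre-compile pass over the whole source folder, so the 'development' files stay untouched.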

Remaining gaps:

  • I also had problems writing out the code that Giulio's component outputs to .py files that can be compiled… something with encodings and the interpretation of a Unicode single-quote character.
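
    If the culprit is the typographic single quote (U+2019), which Latin-1 cannot encode, normalising it before writing may help. A hedged sketch; the mapping below is an assumption and may need extending for whatever other characters appear:

    ```python
    # Sketch: replace characters that Latin-1 cannot encode with ASCII
    # stand-ins before writing the .py file to disk.
    import io

    QUOTE_FIXES = {
        u"\u2018": u"'",   # left single quote
        u"\u2019": u"'",   # right single quote
        u"\u201c": u'"',   # left double quote
        u"\u201d": u'"',   # right double quote
    }

    def to_latin1_safe(text):
        for bad, good in QUOTE_FIXES.items():
            text = text.replace(bad, good)
        return text

    def write_py(path, code):
        with io.open(path, "w", encoding="latin-1") as f:
            f.write(to_latin1_safe(code))
    ```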

Hi Alexander,

Why is the latin-1 encoding needed?

In the plug-in I wrote, uncompiled user objects are created, not compiled components (the user can’t tell the difference). Building releases is all automated, a single main GhPython component builds all the components except for clusters and a custom text panel for the readme.txt. I even run it from a batch file, so there’s hardly any manual work at all anymore.

If it’s OK to leave your source code visible to your users, I can explain further tomorrow. If compilation is a fixed requirement, I’m still confident this workflow is automatable, and I have availability at the moment. If you send me a quick message describing any NDAs or other paperwork that needs signing, I can get this working for you quite quickly.

Best regards,


I just sent a PM about the collab.

As for Latin-1: you’ll see another component on that page that uses clr to do the compiling. That script reads the .py files that I’ve saved to disk, and the encoding declaration was required for the read process.

This script does the same thing as Giulio’s file… see step 2.9 of this example.

Thanks Alexander. I’m replying now. The plug-in is sDNA_GH | Food4Rhino


@Christian_Kongsgaard … perhaps this thread becomes relevant.