Solving and expiring a cluster correctly with Rhino Inside Python

Hi –

I am working on a project where I would like to set up a Grasshopper computation engine/server.

Right now, my setup is a React frontend which connects via socket.io to the backend; the backend passes data via OSC to rhino.inside python.

Rhino.Inside python loads a GH Definition file.

My original plan was to have the GH definition file set up as a bunch of “callbacks”, essentially - a container for different individual objects/clusters with no inputs and outputs. Each object/cluster would correspond to a different computation that could be triggered by a user action on the front end.
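
For context, a /compute message sent to the Python process would look roughly like this (sketched with python-osc for brevity; the ghCallback/args/outs field names match what the command handler below expects, while the parameter nicknames and values are made-up placeholders):

from pythonosc import udp_client
import json

backend = udp_client.SimpleUDPClient("127.0.0.1", 3335)  # port the OSC server below listens on
payload = {
    "ghCallback": "sun-coords",                    # key into the nicknames table
    "args": {                                      # inputs, keyed by param nickname
        "Lat": {"type": "number", "value": 51.5},  # "Lat"/"Lon" are placeholder nicknames
        "Lon": {"type": "number", "value": -0.12},
    },
    "outs": {"SunVector": "Vector3D"},             # outputs to harvest, with their result type
}
backend.send_message("/compute", json.dumps(payload))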

I first tested this out with just individual objects, and it seemed to be working correctly - new data would come in, the correct object was identified in the GH_Document, the inputs were set, the solution computed, the output harvested, etc etc. I could update the inputs as many times as I wanted and get new results on demand - wonderful, very cool.

I have tried moving on to adding a cluster, but I can’t seem to get it to work properly. The first time data is sent into the cluster, it computes correctly, but on all subsequent computations it just returns that first result again. Obviously this indicates some sort of issue with either (a) not correctly clearing the input data or (b) not correctly expiring the cluster’s solution; however, I have not figured out how to reset the cluster. Calls to the other individual objects (as opposed to clusters) do update correctly every time, even after the cluster becomes stuck.

Any tips would be greatly appreciated! I have a feeling it is something simple that I am missing. Here’s the code - there’s some additional scaffolding for parsing the data delivered via OSC, but the core of it follows a standard pattern that has cropped up in a few posts on here for solving GH components with Rhino Inside.

import rhinoinside
from pathlib import Path
import clr

sysdir = Path(r"C:\Program Files\Rhino 7\System")
plugdir = Path(sysdir, "..", "Plug-ins").resolve()
rhinoinside.load(f"{sysdir}")

GrasshopperDll = f'{Path(plugdir, "Grasshopper", "Grasshopper.dll").resolve()}'
GH_IODll = f'{Path(plugdir, "Grasshopper", "GH_IO.dll")}'
GH_UtilDll = f'{Path(plugdir, "Grasshopper", "GH_Util.dll")}'

clr.AddReference(GrasshopperDll)
clr.AddReference(GH_IODll)
clr.AddReference(GH_UtilDll)

# Set up ready, now do the actual Rhino usage

import System
import Rhino

from pythonosc import dispatcher
from pythonosc import osc_server
from pythonosc import udp_client
import Grasshopper
from Grasshopper.Kernel import GH_Document, GH_SolutionMode, GH_Component
from GH_IO.Serialization import GH_Archive
from Grasshopper.Kernel.Data import GH_Path
from Grasshopper.Kernel.Types import GH_Number, GH_String, GH_Integer
import time
import json
print("Finished Loading Libs")

definition = GH_Document()
archive = GH_Archive()
archive.ReadFromFile(r"./rhino-application/operations.gh")
archive.ExtractObject(definition, "Definition")
print("Finished loading Document")


# Cast input/output data based off of argument type
args_typecasters = {
    'integer': lambda x: GH_Integer(int(x)),
    'number': lambda x: GH_Number(float(x)),
    'string': GH_String,
    'json': lambda x: GH_String(json.dumps(x)),
}
results_typecasters = {
    'Vector3D': lambda v: [v.X, v.Y, v.Z],
    'number': lambda x: x,
}

# Register objects/clusters in the definition to keys
nicknames = {'sun-weather' : 'SunVectorCalculatorWeatherFile', 'sun-coords': 'SunVectorCalculatorLatLong', 'cost-calculator': "CostCalculator"}


def commandHandler(addr, additional_args, payload):
    print("\n\n\n")
    client=additional_args[0] # Where to send response to
    data = json.loads(payload) # The data
    callbackName = data['ghCallback'] # Use this to identify which object to call
    args = data['args'] # The arguments, with type and value data
    outs = data['outs'] # the outputs to cull, with type data

    # Get the object nickname, if it's not registered bail out
    try: 
        objectNickName = nicknames[callbackName]
    except KeyError:
        print("Command not supported.")
        return

    # Get the GH object by its nickname.  There should be a better way to do this...
    # Bail out if it's not found
    gh_obj = None
    for ob in definition.Objects:
        if ob.NickName == objectNickName:
            gh_obj = ob
            break
    if gh_obj is None:
        print(f"Object '{objectNickName}' not found in definition.")
        return

    
    # Clear and update input data.
    for in_param in gh_obj.Params.Input:
        if in_param.NickName in args:
            in_param.VolatileData.Clear()
            arg = args[in_param.NickName]
            gh_param = args_typecasters[arg['type']](arg['value'])
            print(in_param.NickName)
            print(gh_param)
            in_param.AddVolatileData(GH_Path(0), 0, gh_param)

    # Recompute the definition; True expires all objects before solving and
    # Silent suppresses any UI feedback.
    definition.NewSolution(True, GH_SolutionMode.Silent)

    # Set up the response payload
    response = {"id":0, "ghCallback": callbackName, "results": {}}

    results = response['results']
    for output in gh_obj.Params.Output:
        if output.NickName in outs:
            print(f"Computing {output.NickName}")
            result_type = outs[output.NickName]
            caster = results_typecasters[result_type]
            output.CollectData()   # make sure the param's volatile data is up to date
            output.ComputeData()
            results[output.NickName] = []

            # Walk every branch of the output's data tree
            for idx in range(output.VolatileData.PathCount):
                branch = output.VolatileData.get_Branch(idx)
                print(branch)
                for item in branch:
                    print(item)
                    results[output.NickName].append(caster(item.Value))

            # Unwrap single-item results
            if len(results[output.NickName]) == 1:
                results[output.NickName] = results[output.NickName][0]
    
    # Cleanup
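    # ClearData drops the object's cached volatile data; ExpireSolution(False)
    # marks it as expired without immediately triggering another solution.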
    gh_obj.ClearData()
    gh_obj.ExpireSolution(False)

    # Respond to client
    response = json.dumps(response)
    client.send_message("/response", response)

# Configure the client
client = udp_client.SimpleUDPClient("127.0.0.1", 3334)



# Configure the server
dispatcher = dispatcher.Dispatcher()
dispatcher.map("/compute", commandHandler, client)
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 3335), dispatcher)

server.serve_forever()



I even replaced the interior of the cluster with two text objects acting as a simple passthrough, to make sure the problem was not coming from any of the components inside.

I understand I could probably use a different architecture - for instance opening the cluster as its own definition/document, or creating a new document and inserting the cluster every time, but I appreciate the cleanliness of this setup, and I assume it must be possible to correctly expire and solve the cluster!
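
For reference, the “new document each time” variant I was imagining would look roughly like this - an untested sketch that reuses the imports from the script above, where set_inputs/read_outputs stand in for the same input/output handling as in commandHandler, and where I’m assuming the parameterless GH_Cluster constructor and GH_Document.AddObject behave the way I expect:

from Grasshopper.Kernel import Special

def solve_cluster_in_fresh_document(cluster_path, set_inputs, read_outputs):
    # Build a throwaway document, drop a freshly loaded cluster into it and
    # solve it, so there is never any stale cluster state to expire.
    doc = GH_Document()
    cluster = Special.GH_Cluster()
    cluster.CreateFromFilePath(cluster_path)
    doc.AddObject(cluster, False)  # add the cluster without starting a solution
    set_inputs(cluster)            # AddVolatileData on cluster.Params.Input, as in commandHandler
    doc.NewSolution(True, GH_SolutionMode.Silent)
    return read_outputs(cluster)   # harvest cluster.Params.Output VolatileData, as in commandHandler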

Okay, I added this to the section where the Grasshopper object is extracted, and it is working now:

# Special comes from Grasshopper.Kernel (from Grasshopper.Kernel import Special)
if isinstance(gh_obj, Special.GH_Cluster):
    gh_obj.CreateFromFilePath(f'./rhino-application/{objectNickName}.ghcluster')
    gh_obj.ExpireSolution(False)

This is sufficient to get the correct answer every time, though I do still wonder if it is an anti-pattern. Reloading the cluster from file on every call seems like overkill, but it is also pretty simple, and if the alternative is iterating over all the objects within the cluster and expiring them, I will stick with this!