[FEEDBACK] Add a compiler to Grasshopper 2

After making my first nice graph in the Grasshopper visual programming language, it looks slow. I’m coding the components in C# for performance reasons, but this should be automatic. Usually a visual programming language includes a compiler that produces more optimised final machine code. Making a build is important: for example, compiling a cluster, or building the final project into a DLL or EXE.
from: Polar Array using C#?


And whose property would that DLL/EXE be?

Interesting idea though :stuck_out_tongue_winking_eye: .

What would you do with that exe/dll?

  • distribute it?
  • sell it?
  • run it without Grasshopper?
  • without Rhino?

For my own projects I first develop in C# inside Grasshopper, and when my classes are robust enough I compile a DLL in Visual Studio, then reuse my code (via the DLL and Manage Assemblies) in a C# component inside Grasshopper. It is certainly not optimal, but it lets me keep my definitions quite small.
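As a hypothetical sketch of that workflow (the library, class, and method names here are made up for illustration): a small class library is compiled in Visual Studio, and the resulting DLL is then referenced from a GH C# script component through its Manage Assemblies dialog.

```csharp
// --- MyGeometryLib.dll, compiled in Visual Studio ---
using System;
using Rhino.Geometry;

namespace MyGeometryLib
{
    public static class PolarArray
    {
        // Rotate-copy a piece of geometry 'count' times around 'center'.
        public static GeometryBase[] Make(GeometryBase geo, Point3d center, int count)
        {
            var result = new GeometryBase[count];
            for (int i = 0; i < count; i++)
            {
                var copy = geo.Duplicate();
                copy.Rotate(2 * Math.PI * i / count, Vector3d.ZAxis, center);
                result[i] = copy;
            }
            return result;
        }
    }
}

// --- Inside a GH C# script component, after adding the DLL
//     via the component's Manage Assemblies dialog ---
// A = MyGeometryLib.PolarArray.Make(G, P, N);
```

The definition then only needs one script component and the DLL, instead of the full graph of components the logic replaces.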


I’m doing the same, and it looks like that is the accepted workflow. This manual process is time-consuming.


Sell it on the Rhino marketplace?

A compiler that turns a cluster into a .gha could be a nice start!
It would also help for inspecting and reading the code, so we could learn the Rhino C# API faster.
If Rhino C# could just merge with Unity’s Burst compiler, that would be super awesome!

I believe you want too much for very little contribution from yourself.

If you want a compiled .gha, build it yourself with Visual Studio.
If you want to export the cluster and promote it, put a password on it and export it as a user object.

By your suggestion you could take a bunch of 3rd-party plugins.
Plugins whose creators, GH users, spent countless hours coding in C#, offering their creations for free, most of them even for commercial purposes. And you want to apply a little creativity and sell their work?

Dude, I am all for an open-source planet, but what you wish for is plain wrong.

No, I have a better suggestion for you.
How about you create something with C# that takes you a year of coding, then share it all under a permissive license like MIT or BSD, allowing people to profit from your creation.

A secondary suggestion:
Compile clusters into GHAs, then sell them, but pay royalties to David and Bob, and also to every other creator of a 3rd-party plugin you use in your cluster, leaving yourself about 10% of the profit. How about that?

Why 10%? Because that would be your little piece of work compared to the work of the others involved.

@ivelin.peychev, I am not used to getting into debates, but @AlanMattano’s request for a “GH DLL compiler” seems legitimate as a way to simplify a workflow. I don’t make a DLL to sell it but to simplify my work. I think it would be difficult to say I don’t share my work; what I don’t share is the whole workflow behind some of my own creations.


The issue I see is that there is no single, canonical way to turn a definition into code. Scripts and definitions do not follow the same logic (it is not as simple as tacking each component’s code onto the next; that would be the same speed as a cluster). Execution in a script is different (loops, threading, no data trees needed, etc.).

The issue is multi-fold: legality is one, performance is another…

@laurent_delrieu do you share the source of your components? Do you allow people to distribute your DLLs/GHAs and sell them?

I don’t think that your post is responding fairly to AlanMattano’s suggestion about a “GH compiler”.

Whether the idea Alan is suggesting is easily implemented or not is a technical question. But questioning Alan’s intents is a moral question.

You seem to assume bad intent from Alan, and that’s not how we should perceive other people’s intents, based on an argumentum ad silentio.

Let’s instead assume best intent.

There would, by the way, be nothing wrong with designing a complex Grasshopper definition and then selling this “brain work” to others. That’s not violating McNeel’s efforts to provide us with exactly such a tool. And there would be no difference if such a GH definition were compiled into compact, efficient code based on the standard components. In terms of IP it would be exactly the same thing; only one solution would be plain code, the other a visual workflow model.

It would actually be a good idea to be able to compile components/clusters into compact, optimized C# code, and from there even take another step, as Unity does with its Burst compiler.

What would be wrong, though, would be to decompile GH components and recompile them yourself, without any permission or license from McNeel to do so, and then sell such a compiler.

To me your post looks like an attack on Alan and his intents that makes him look bad. But I cannot see any valid reason for painting his intents with such dark colors.

If McNeel actually produced such a compiler one day (it would definitely be possible to compile GH definitions in many, though perhaps not all, cases), I would probably use it more often than not. One problem would be similar to what I said about decompiling McNeel components, though: such a compiler would not automatically have permission to decompile (and then re-compile) third-party components.

All in all I think Alan presented an interesting idea and I didn’t see any bad intent at all behind the basic idea.

// Rolf


Stop bashing me. I already said it’s a good idea, and requested further info about what he’s planning to do with the compiled result.


Problem is, anyone reading your post may well think that you had a reason for painting his idea with such dark colors. Questioning moral intent is not balanced by any other statement about technical benefits in your post (and there can never be a balance between the two).

// Rolf


No reason at all, but before you develop something like that you have to think about the consequences and the problems two, three, five steps ahead.

Nonsense. I explained that creating a GH definition and selling your brain work is a perfectly legitimate business, and it is done by many people frequenting this forum. No problem at all. McNeel would only be happy if all of us did just that. It would make no difference if such solutions were “compacted” into compiled code.

I don’t think I can be clearer about that part. What remains is your questioning of Alan’s moral intents. Not nice, my friend. (You can do better.)

// Rolf


Node-in-code is available.

Compiling a cluster vs. Node-in-code inside a compiled scripting component?

I prefer the latter, you can do it right inside GH. :wink:
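For readers unfamiliar with node-in-code: it exposes Grasshopper components as callable functions inside a script. A minimal sketch, assuming the `Rhino.NodeInCode` API in RhinoCommon (the component name and the exact `Evaluate` signature are assumptions; check the current RhinoCommon documentation):

```csharp
// Sketch: calling a Grasshopper component as a function from a C# script.
using Rhino.NodeInCode;

// Look up a component by name ("Voronoi" is just an example).
ComponentFunctionInfo voronoi = Components.FindComponent("Voronoi");
if (voronoi != null)
{
    // Evaluate feeds the component's inputs and returns its outputs.
    string[] warnings;
    var outputs = voronoi.Evaluate(new object[] { inputPoints }, false, out warnings);
}
```

This keeps everything inside a single compiled scripting component, rather than exporting a cluster.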

Was that said to me or Alan? :thinking:

I mentioned earlier your questioning of Alan’s intents.

But I don’t plan to continue this discussion. If you don’t see in your post what I see, then I cannot do anything about it. But let’s keep this forum the friendly place it has been for so long. Before I quit: we can all become frustrated and perhaps say things in a way we normally wouldn’t, and I have fallen into that trap at times myself, for which I apologize. But questioning, or even seemingly questioning, someone’s moral intents is not one of those things that goes unnoticed by anyone.

With that I have said what I think needs to be said so that Alan knows for sure that there was nothing wrong with coming up with ideas about how to further Rhino/GH.

One more thing about models and compiling, though, because I have extensive experience with such concepts, and I know from that experience that there are some typical traps you can get stuck in. If the “model” side is not designed with code generation (and compilation) in mind from the start, then, if not a mess, at least lots of problems are going to surface.

In my case I worked for years with UML models, which would spit out code that was then compiled, but also the other way around: sucking up code and generating a visual model from it, so-called “round-trip engineering”. That kind of concept has been around for decades, but, as said, if it is not the goal already in the early design stage of a visual modeling concept, keeping things in sync will not be an easy life.

The basic idea of compiling pre-existing concepts (visual components or not) is also often used in DSL solutions: you pre-design algorithms and functions (somewhat like components are pre-existing functions in GH), and then you let different “problem domains” (areas of knowledge) put their own names on those concepts and use them as a programming language, which in turn lumps (compiles) those underlying functoids together into efficient compound software. In Finland a world-renowned system for this kind of thing had already been developed in the nineties, and it was used by Nokia (phones) and also by the US military, among others. Really different “problem domains”, each using its own words and terms, which made it easier for “domain experts” in a specific field to do even advanced software development themselves, without having to consult IT experts (with the well-known “language barriers” that follow when explaining the domain-specific problems to be solved).
That’s what DSL stands for: Domain Specific Languages. One language or set of terms for designing mobile-phone software, and another set of terms and concepts for controlling drones and autonomous vehicles in enemy lands and… oh well.

So compiling existing models and concepts into more efficient code is not new. It’s actually a very powerful idea, if the tools involved are designed with that intent from the start.

Have a nice weekend.

// Rolf


Just got schooled again. :slight_smile:

One of these days I’ll learn not to defend people who didn’t ask for a defense :slight_smile: . I call it the Western way (letting injustice slide). :slight_smile:

Apparently not just you, but who knows what’s behind my words better than myself?

Just to be clear, Alan stated his motives, and that led me to write that controversial post:

Legal/ethical issues aside, this is not possible as there is no code which can be compiled. It’s not like the components provide textual code which gets composited into a program when you hook them up. Components are already compiled and run as fast as they can.

The performance difference between a Grasshopper file and hand-crafted code is all about the stuff in between components. The data exchange and conversions and type-checking that happens when a wire is connecting two parameters. When you do this in pure C#, you get the benefit of a strongly typed language and the compiler will simply refuse to work if your method arguments aren’t of the correct type.

Another fairly large performance drain is the continual conversion to and from IGH_Goo types. This approach to data storage is gone in GH2, so some improvement can be expected out of the box, but those type checks still need to happen for every single value access at runtime.

A decision was made long ago that Grasshopper should be forgiving rather than strict, and that directly interferes with high performance.
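As a rough illustration of the per-value cost described above (a sketch, not GH1’s actual internals; `CastTo` is the conversion method on `IGH_Goo` in the Grasshopper SDK), compare unwrapping a goo-wrapped value against working on a typed value directly:

```csharp
using System;
using Grasshopper.Kernel.Types;

// Goo path: every access needs a runtime type check and conversion.
double DoubleIt(IGH_Goo goo)
{
    double x = 0.0;
    if (!goo.CastTo(ref x))          // runtime check; may fail
        throw new InvalidCastException("value is not a number");
    return x * 2.0;
}

// Plain C# path: the type is enforced once, at compile time.
double DoubleIt(double x) => x * 2.0;
```

Handwritten code pays that check once per method signature; a definition pays it per value, per wire, per solution.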


Hi David

Then I would turn Alan’s request into:
“What about a strict mode ?” :slight_smile: