GH is completely biased toward a node-to-node approach. Fine, I love it. But opening it up to a process-to-process approach would break through that ceiling, and doing so in parallel would make it an ideal platform for experimenting with AI.
For a developer, the fundamental class today is GH_Component, but ideally it would be something like GH_Graph or GH_Definition: an object that the document contains, which in turn contains the components (so they could at least be handled parametrically from code), and in which the solution occurs. Being able to treat processes as objects themselves, natively, seems to me the natural step that GH2 should aspire to. It would allow, for example, plugging a process (a circuit of components) as an input into another component (the process carrying input and output restrictions, such as data type or data structure), making GH more OOP.
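A minimal sketch of the idea, with hypothetical names (this is not the actual GH SDK, just an illustration of a graph of components behaving as a single component with typed boundaries):

```python
class Component:
    """One node: a function plus input/output type restrictions."""
    def __init__(self, name, fn, in_types, out_types):
        self.name, self.fn = name, fn
        self.in_types, self.out_types = in_types, out_types

    def solve(self, *args):
        for a, t in zip(args, self.in_types):      # enforce input restrictions
            if not isinstance(a, t):
                raise TypeError(f"{self.name}: expected {t.__name__}")
        return self.fn(*args)

class Graph(Component):
    """A circuit of components that itself behaves like a component."""
    def __init__(self, name, components, in_types, out_types):
        super().__init__(name, self._run, in_types, out_types)
        self.components = components                # solved in order

    def _run(self, *args):
        data = args
        for c in self.components:                   # pipe each output onward
            out = c.solve(*data)
            data = out if isinstance(out, tuple) else (out,)
        return data[0] if len(data) == 1 else data

# Two "nodes" chained into a process, then the process reused as one node:
double = Component("double", lambda x: x * 2.0, (float,), (float,))
shift  = Component("shift",  lambda x: x + 1.0, (float,), (float,))
process = Graph("double_then_shift", [double, shift], (float,), (float,))
print(process.solve(3.0))   # -> 7.0
```

Because `Graph` subclasses `Component`, a whole definition can be wired in anywhere a single component is expected, which is exactly the composition the paragraph above argues for.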
Think of Kangaroo: you can only define the initial process and adjust its parameters on the fly, but what if you could redefine the process parametrically (like a decision tree) while it is running? Then GH would be ideal for AI. It is not the most efficient approach, but it is the most powerful, and I prefer that because I rarely work with huge amounts of data. My bottlenecks are boolean operations, and those are in other hands.
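A toy sketch of that distinction (hypothetical code, not Kangaroo's API): instead of fixing the process before the solver starts and only tweaking parameters, a decision rule picks or redefines the process between iterations.

```python
def solve(state, steps, decide):
    """Iterative solver: each step, decide(state) returns the process to apply next."""
    for _ in range(steps):
        process = decide(state)        # redefine the process mid-run
        state = process(state)
    return state

# Decision tree: step aggressively while far from the goal, gently when close.
goal = 10.0
coarse = lambda x: x + 0.5 * (goal - x)
fine   = lambda x: x + 0.1 * (goal - x)
decide = lambda x: coarse if abs(goal - x) > 1.0 else fine

result = solve(0.0, 20, decide)
print(round(result, 3))   # converges toward 10.0
```

The `decide` callback is the part a fixed-process solver lacks: here it is a two-branch rule, but it could just as well be a learned policy choosing among circuits of components.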