Hi @DavidRutten,
I saw this thing in a video about blender rendering nodes and it really looks handy. Do you think that’s possible to implement also in Grasshopper?
Houdini is also working that way with auto connections.
What would be the algorithm for choosing which inputs and outputs are connected? Sometimes it’s obvious, but oftentimes there are multiple good or only bad choices.
How about dragging and dropping the component onto the wire, and upon release (mouse up) being prompted to choose the desired component input to connect to, from a quick menu showing a list of available options?
This would be quite handy, although it should be integrated carefully, since large definitions with lots of components and wired connections could pose a problem, when adding new components in heavily populated areas. I believe in Houdini, you have to drag the component over the wire and shake it with the mouse to make a connection.
Another handy feature would be a mouse shortcut to slice wire connections in order to delete them quickly. Like cutting them with a knife!
Hello,
Since “Rhino.NodeInCode”, I have dreamed of a side panel in Rhino similar to the Blender modifier panel, to modify the content of a “Rhino.Geometry.InstanceDefinitionGeometry”.
I had tried the idea with C#… Yes, how do I know which output to connect to which input? And which inputs have to be defined manually?
For inputs this is easy: the property “IGH_Param.IsPrincipal” can help (although that is not quite its role).
But there is nothing equivalent for outputs.
In UE4, this is also possible. Inputs and outputs are strongly typed, so they are probably looking for a match by type: typeof(Input).IsAssignableFrom(typeof(Output)).
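As a rough illustration of that assignability check outside .NET, here is a minimal Python sketch; the class names and the `find_match` helper are invented for the example and are not part of the Grasshopper or UE4 APIs:

```python
# Hypothetical sketch: pick the first output whose type is assignable to
# an input's expected type. issubclass(Output, Input) plays the role of
# C#'s typeof(Input).IsAssignableFrom(typeof(Output)).

class Curve: pass
class NurbsCurve(Curve): pass   # a NurbsCurve "is a" Curve
class Number: pass

def find_match(output_types, input_type):
    """Return the first output type assignable to input_type, else None."""
    for t in output_types:
        if issubclass(t, input_type):
            return t
    return None

print(find_match([Number, NurbsCurve], Curve))  # NurbsCurve matches, Number does not
```

With only exact or inherited types this is unambiguous; the harder cases in Grasshopper are the convertible types (a curve that could become a rectangle), which a pure assignability check cannot rank.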
In GH2, do inputs or outputs have custom metadata? If we could attach “UserData” or something similar to the GH_Param, that could resolve many dead ends…
Other than that, I don’t see anything beyond something like:
Component_MassAddition.PreferOutput(GH_TypeLib.t_gh_number, "R")
I already faced this in the past, both in Axon Widget and in the upcoming version of Peacock. My solutions have been the simplest version, but at least they show the idea can go further. There is no perfect solution, but it can be approximated well enough to make this feature worth having. In my experience Axon works more than well.
Given S source parameters (outputs) and T target parameters (inputs) of two respective components, there is a pair {S(i), T(j)} that maximizes the probability of predicting the user action.
You can go down this gradient in several ways, but in my opinion they all lack enough factors to take into account in GH1, as I explain at the end. One way is to weight the factors with scores (between -1 and maybe 1), add them all and keep the highest positive pair. For example, we have:
- S(i) and T(j) must be of the same type, or convertible. The similarity or proximity of the types gives the score: a curve is more likely to become a rectangle than a number is.
- Count existing S->T connections and take the most used pairing (this is what Axon does), or use a smarter discriminator.

Other methods that take into account the context (definition) or the intention of the user would be a Deep Markov Model or Reinforcement Learning, but this is too much for this particular problem in my opinion.
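The weighted-score idea above can be sketched in a few lines. This is a minimal illustration, not Axon's actual code; the two factors, their weights, and the dictionary-based parameter records are all invented for the example:

```python
# Sketch: each factor returns a score roughly in [-1, 1]; sum the factors
# per (source, target) pair and keep the highest-scoring positive pair.

def type_score(src, tgt):
    # 1.0 for an exact type match, 0.5 for a convertible type, -1.0 otherwise
    if src["type"] == tgt["type"]:
        return 1.0
    if tgt["type"] in src.get("convertible_to", []):
        return 0.5
    return -1.0

def usage_score(src, tgt, history):
    # Normalized count of how often this S->T pairing was used before
    total = sum(history.values()) or 1
    return history.get((src["name"], tgt["name"]), 0) / total

def best_pair(sources, targets, history):
    best, best_score = None, 0.0   # only positive totals are accepted
    for s in sources:
        for t in targets:
            score = type_score(s, t) + usage_score(s, t, history)
            if score > best_score:
                best, best_score = (s["name"], t["name"]), score
    return best

sources = [{"name": "R", "type": "number"},
           {"name": "C", "type": "curve", "convertible_to": ["rectangle"]}]
targets = [{"name": "Rec", "type": "rectangle"}]
history = {("C", "Rec"): 3, ("R", "Rec"): 1}
print(best_pair(sources, targets, history))  # the curve wins: ('C', 'Rec')
```

The curve output scores 0.5 (convertible) + 0.75 (past usage) = 1.25, while the number scores -1.0 + 0.25 = -0.75, so the curve-to-rectangle pairing is chosen. Adding more factors is just adding more terms to the sum.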
GH1 lacks support to qualify the parameters. If I have a component that performs a T operation on a G geometry and has P scalar parameters, for example, the similarity between T and any P parameter is less than that of one P parameter with another P parameter, and not only because it has a different type (which could even be the same), but because G is the subject of the operation and the P parameters are its modifiers. And just as you can qualify the parameters, you can do the same with the components: some are inputs or outputs of the definition (like a slider, which technically is not a component in GH1 but in my opinion should be, or the preview), others are modifiers, others are creators… Not all components are equal; what matters are the different types of operations. This difference seems trivial but it is not, because it is a building brick for new features like this one, since it adds more meaning. So if you compare a component A of modifier type with another component B of creator type, the most likely outcome is that A connects to B in a parameter of modifier type, not subject type. This is a small example, but there would be a whole family of topologies of operations that would support the AI of GH definitions.
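To picture the qualified-parameter idea, here is a minimal sketch assuming parameters could carry a "role" tag (subject, modifier, …). This metadata does not exist in GH1; the affinity table and all names are invented for the illustration:

```python
# Hypothetical sketch: role compatibility contributes to the connection
# score, so a creator's geometry output (role "subject") prefers the
# subject input of a modifier component over its scalar modifier inputs.

ROLE_AFFINITY = {
    ("subject", "subject"): 1.0,
    ("modifier", "modifier"): 1.0,
    ("subject", "modifier"): -0.5,
    ("modifier", "subject"): -0.5,
}

def role_score(src_role, tgt_role):
    return ROLE_AFFINITY.get((src_role, tgt_role), 0.0)

# A modifier component with one subject input G and two scalar modifiers:
inputs = [("G", "subject"), ("P1", "modifier"), ("P2", "modifier")]
scores = {name: role_score("subject", role) for name, role in inputs}
print(max(scores, key=scores.get))  # "G": the subject input wins
```

Even with identical types on G, P1, and P2, the role tag breaks the tie, which is exactly the information a type-only matcher cannot recover.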