Found this article about the new Hops functionality for Grasshopper.
It looks interesting, but how is it different from the regular external-cluster functionality?
Hops functions take longer to calculate and produce some unexpected tree mismatches when parallel computing is on.
Are there any particular scenarios where using Hops over clusters is beneficial?
In very basic terms: CPUs run processes sequentially, which means one process must finish before the next can start; the cores are more powerful individually, but slower overall for parallel workloads. GPUs run in parallel: each core can run processes asynchronously, so they are very fast when a task can be spread across many processors, as with graphics. Running in parallel usually means the same task is computed separately on the GPU cores and all the results are then pooled together.
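The "compute each item separately, then pool the results" pattern described above can be sketched in plain Python (this is just an illustration of the pattern, not Hops or GPU code; threads stand in for the many cores, and `heavy_task` is a made-up placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_task(x):
    # stand-in for an expensive per-item computation
    return x * x

items = [1, 2, 3, 4]

# each item is processed independently; map() pools the
# results back together in the original order
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(heavy_task, items))
```

The key property is that the iterations are independent, so they can run in any order and still pool back into a deterministic result.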
A cluster's definition is stored inside the document that contains it, so if you edit a cluster it only affects that particular instance, which made maintaining a cluster-based plugin like Peacock a nightmare. With Hops, on the other hand, editing the source document affects every Hops component that references it, making it behave like a block of code and opening the door to cloud services: collaborating on definitions, and maybe some day running heavy processing tools on powerful computers in the cloud. This also helps to unify Rhino and Grasshopper, since from one definition you can create a component in GH and a command in Rhino.
Can you share a sample where you are getting unexpected results? You really should never have to turn off parallel computing for Hops. Right now, only one additional process handles the remote definition solving by default, which isn't giving you a great boost in performance. This will change in the near future.
Dani is spot on with a good description of what is going on; thanks, Dani.
In regards to parallel computing with Hops, solving changes from a multi-threaded solve to a multi-process solve, where multiple child processes can be solving different iterations of a Hops component at the same time. These are separate, independent exes doing the solving, which is overall safer. Each exe solves a single iteration at a time using the good old single-threaded code that GH has always used.
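The multi-process model described above can be sketched like this (an illustration of the idea only, not actual Hops internals: each "child exe" is modeled as an independent interpreter process that solves one iteration and returns its result to the parent; `solve_iteration` and the doubling "solve" are made up):

```python
import subprocess
import sys

def solve_iteration(value):
    # launch an independent process (stand-in for a child exe) that
    # runs a single-threaded "solve" for exactly one iteration
    out = subprocess.run(
        [sys.executable, "-c", f"print({value} * 2)"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# the parent dispatches iterations and pools the results; because each
# child is a separate process, a crash in one cannot take down the rest
results = [solve_iteration(v) for v in [1, 2, 3]]
```

Process isolation is what makes this "overall safer" than threads sharing one address space: each iteration runs in its own memory, so state cannot leak between iterations.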
What I've been missing in GH is an input-output signature: a kind of contract/guarantee that a component will have parameters of specific types (plus maybe some metadata matching). It would allow processes to be controlled/defined as templates, like an abstract function in C#, where the developer defines a script schema/template and the user defines the logic under those constraints. It would produce algorithms as objects in themselves, definitions with an expected behaviour; the path of a Hops component could then be used as a variable input to switch between algorithm steps or definition parts in a clean way, and it would only work when the GhPlayer definition fits the input-output constraint.

It would also allow definitions to be understood at a higher level of abstraction, as pseudo-code, both by the user and by software, since it is a way of compressing the script information so it can be easily measured, analysed and exploited in new ways. So it would be great if this new feature left the door open to an input-output signature, to validate the source file rather than change the Hops component's parameters.
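A hypothetical sketch of that "input-output signature" idea, to make it concrete: a small contract object that a definition either satisfies or not. Everything here (the `Param`, `Signature`, and `matches` names, the param tuples) is made up for illustration; nothing like this exists in Hops today.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Param:
    name: str
    type: str  # e.g. "Number", "Curve"

@dataclass(frozen=True)
class Signature:
    inputs: tuple
    outputs: tuple

def matches(definition_params, signature):
    """True if a definition's declared params fit the contract."""
    ins, outs = definition_params
    return tuple(ins) == signature.inputs and tuple(outs) == signature.outputs

# the developer publishes the template/contract...
sig = Signature(inputs=(Param("Radius", "Number"),),
                outputs=(Param("Area", "Number"),))

# ...and any candidate definition is validated against it before being
# accepted, e.g. to gate which .gh files a Hops slot may reference
candidate = ((Param("Radius", "Number"),), (Param("Area", "Number"),))
ok = matches(candidate, sig)
```

With such a contract, swapping the path of a Hops component would be safe by construction: only definitions whose signature matches would be accepted.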
Parallel and linear computing give different results, and neither of them matches the cluster (which is correct). Even if you provide a single value as an input, an extra null item gets added to the resulting list when parallel is on.
P.S. It would be nice to have a relative-path option for Hops; otherwise it's hard to share them.
I'm not sure what you mean; from what I've read about it, it doesn't seem to fit. It's like interfaces or abstract objects and methods in C#.
It's about creating definitions with components that don't transform the inputs into outputs (no implementation), but just declare the parameters and metadata of the components and the connections between them, like an algorithm template.
It seems to me a simple idea that could open up new applications, so I will take a closer look this weekend at an alternative version of the Hops component that takes the path as an input, with the parameters defined when the component is created.
Seemingly it just names "function": e.g. Math.Abs is a function/closure/lambda expression here, used as List.SortByFunction's input. (The function here can also be a group of components.)
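That "function as an input" idea maps directly onto a key function in plain Python: passing Math.Abs to List.SortByFunction corresponds to passing `abs` as the `key` of `sorted()` (just an analogy to the pattern being discussed, not Hops code):

```python
values = [-5, 2, -1, 4]

# abs is passed as data: the sort calls it on each item
# to decide the ordering, just like List.SortByFunction
by_abs = sorted(values, key=abs)
```

The sort itself stays generic; the behaviour is supplied by whatever function is plugged into it.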
Ah, these are like delegates in C#. But no, it's not that. What I mean is not about using functions as arguments (although that could be part of it), but about modularising at a higher level of abstraction, because it allows writing and reading definitions in a more conceptual way: focusing on tasks rather than implementations.
Delegates can be wrapped in GH using GH_ObjectWrapper, which is useful for a developer who wants to generalise processes, but maybe not for the user. It's easy with code functions, but with GH definitions I think it's not yet possible with the current version of the Hops component.
Isn't an interface generally a bunch of functions? I don't get why the function level isn't enough and multiple functions are required. (The function here can include multiple components.)
Say daylight simulation: you can wrap multiple components as one daylight function, and that function (as a single component) can be used by the end user. Isn't that just what clusters and Hops do?
Currently with that you cannot go through a set of GH files and automatically pick out those that are guaranteed to perform a task simply by supplying a few pieces of data, because GH doesn't understand algorithms as operational objects, yet.
The parallel computing option doesn't affect the output anymore (for this example, at least), but it still doesn't match the cluster. I'll look into it…