This is just the way it works now; components don’t have to do anything special. It’s possible to provide slightly more granular progress feedback by writing some extra code, but that’s only really useful if a single iteration takes a long time (like quad remeshing, for example).
The UI must always run on the UI thread, which is also the main thread in any normal application. Some things still happen on the UI thread, like expiration, file saving and viewport updates, and they can still cause the UI to grind a bit now and again.
The parallelism works in three main ways:
- A new solution can start without the previous one ending. However, when a new solution starts, the previous one is always cancelled. The delay between cancellation and actually stopping may be anything from mere microseconds to never. If the old solution is stuck in some infinite loop, for example, it won’t stop running and using up processor cycles until Rhino is shut down. However, the UI remains live, so at least you can save your files first if you need to restart.
- Components are executed simultaneously if they are independent of each other. This is actually not that big a deal, since most components spend their time computing things rather than waiting for something to happen, and the total computational power of your system is fixed. So instead of component A taking 2 seconds to complete and then component B taking 2 more seconds, A and B now run simultaneously but both take 4 seconds to complete, because each can only use 50% of the resources.
- Component iterations are parallel. This is the major speed-up. When a component takes a long time to finish, it’s usually because it’s operating on a collection of data. For example, curve offsets are reasonably fast in Rhino, but if you’re calculating 2500 of them it’ll take a few seconds. Those 2500 different offset operations are now distributed across all your available cores (see the sketch below).
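To make the third point concrete, here is a minimal C# sketch of what iteration-level parallelism with cancellation could look like. This is not GH2’s actual scheduler; `IterationSolver` and `SolveAll` are hypothetical names, and the real bookkeeping is certainly more involved.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch of iteration-level parallelism with cancellation.
// Not GH2 source; all names here are made up for illustration.
static class IterationSolver
{
    // Applies 'solveOne' to every item on the thread pool. A new solution
    // would cancel the previous one through the shared CancellationToken.
    public static TOut[] SolveAll<TIn, TOut>(
        TIn[] items, Func<TIn, TOut> solveOne, CancellationToken cancel)
    {
        var results = new TOut[items.Length];
        var options = new ParallelOptions { CancellationToken = cancel };

        // E.g. 2500 curve offsets become 2500 work items spread over all cores.
        Parallel.For(0, items.Length, options, i =>
        {
            results[i] = solveOne(items[i]);
        });

        return results; // throws OperationCanceledException if cancelled
    }
}
```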
This is what is happening in the video, right? The component doesn’t finish solving, but the UI doesn’t freeze because the solution runs on another thread. That’s cool for sure.
I’m not so sure the document should be responsible for solving all objects. Maybe as default behaviour, but in GH1 that has proven to be a big constraint. That is still the case, isn’t it? Instead, I would like a dedicated object that controls the processing of a group of components within the same document, i.e. the ability to expire and solve a subset of components independently, without affecting the rest of the definition (expired or not, connected or not). Loops, snippets and remote processing could then be made clean by mechanisms that guarantee isolated processing of a group of components. Or do you have a different solution for loops? This controller could be brought to the canvas as a group-like object, but with buttons to compute or stop its processing; these controllers could then evolve into loopers and samplers, and other objects could reference these snippets for remote use.
Yes it is. Every new solution attempts to solve all stale objects. Objects that complete before the solution is cancelled will not need to be calculated again in the next solution, but if a component is even 99% finished when it gets cancelled, it still has to start from scratch.
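For illustration, a hypothetical sketch of that rule: components that finish before cancellation keep their results, while a cancelled one discards everything and stays stale. `IComponent` and its members are made up; GH2’s real internals are not public.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical illustration of the "stale objects" rule; not GH2 code.
interface IComponent
{
    bool IsStale { get; }
    void Solve(CancellationToken cancel); // throws if cancelled mid-way
    void MarkFresh();
    void DiscardPartialResults();
}

static class Solution
{
    public static void Run(IEnumerable<IComponent> components, CancellationToken cancel)
    {
        foreach (var c in components)
        {
            if (!c.IsStale) continue;      // finished in an earlier solution: skip
            try
            {
                c.Solve(cancel);
                c.MarkFresh();             // won't be recalculated next time
            }
            catch (OperationCanceledException)
            {
                c.DiscardPartialResults(); // even at 99%, nothing is kept
                return;                    // stays stale; restarts next solution
            }
        }
    }
}
```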
If they’re not connected, they won’t get expired and won’t recalculate. So what you want is the ability to expire a load of objects, but only recalculate some of them? Basically, a more efficient way to disable/enable groups of components?
There’s already a mechanism in GH1 to limit the preview to components inside a hand-drawn region; the same principle could be extended to only solve objects within some other hand-drawn region…
Not yet. I wanted to come up with a solution for loops for Wip 1, but that’s not going to happen. I just hope that whatever idea I come up with later isn’t going to require a massive redesign of important core code.
I had thought that too, but it’s more than that. You currently prohibit an object from connecting downstream because it would form an infinite loop. But if that loop is created inside a special group, a process controller, you can let the user configure the number of iterations and whatever other settings are needed to simulate for-each, for-i and while loops. So it is not so much about enabling or disabling a linear process flow, which can be done in GH1 in a cumbersome way with filters or data streams, but about rethinking the origin of the problem: GH is not really multi-circuit if it cannot process circuits independently. A loop within a loop within a loop doesn’t sound very promising if you can only compute the entire definition.
I made my own version of Anemone (with a start and an end component) to build an algorithm permutator, which computes all possible combinations of input parameters, as well as a bunch of other utilities that required recomputing the definition. My experience was a struggle to embed that logic under the constraint that GH computes globally, at the document level, because every recomputation affects all other expired objects, which may come from timers, manual actions at runtime, or many other use cases that also ask GH to solve again and may have nothing to do with the new recomputation request. There are several ways to orchestrate this; my suggestion is to divide the processing into separate circuits with dedicated controllers to override their behaviour, so the default works as it does now and can be extended with non-linear processing flows.
Imagine that the document does not solve the necessary components itself, but that a dedicated class, created for each circuit, takes care of that. When a component is placed, the document internally creates a new controller and binds it to the component. When a second component is added, the document creates another controller for that new (single-object) circuit. When the two components are connected, a new controller linked to both is created and the other two controllers are deleted. Each controller expires and solves the components it contains as if it were an isolated document, ignoring all other components but notifying the controllers it is connected to. Each circuit can therefore be processed independently, giving you flow units, the modules needed to chop up definitions and make loops. And instead of orchestrating components as if the whole document were a single function, the document orchestrates these controllers at the circuit level, since each circuit is its own function. We could override their behaviour by creating objects on the canvas (similar to the image below) with play and stop buttons, a number of iterations, or whatever; you can do many things thanks to this.
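As a rough sketch of what I mean (all names are hypothetical, this is not a proposal for an actual API), the document would keep one controller per connected circuit and merge controllers whenever a wire joins two previously separate circuits:

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of the circuit-controller idea; every name here is hypothetical.
class CircuitController
{
    public readonly HashSet<string> Components = new HashSet<string>();
    public void ExpireAndSolve() { /* solve only this circuit's components */ }
}

class Document
{
    // Maps each component id to the controller of its circuit.
    readonly Dictionary<string, CircuitController> byComponent =
        new Dictionary<string, CircuitController>();

    // Placing a component creates a new single-component circuit.
    public void AddComponent(string id)
    {
        var c = new CircuitController();
        c.Components.Add(id);
        byComponent[id] = c;
    }

    // Connecting two components merges their circuits into one controller.
    public void Connect(string a, string b)
    {
        var ca = byComponent[a];
        var cb = byComponent[b];
        if (ca == cb) return; // already the same circuit
        foreach (var id in cb.Components)
        {
            ca.Components.Add(id);
            byComponent[id] = ca; // the old controller is discarded
        }
    }

    // The document orchestrates controllers, not individual components.
    public void Solve()
    {
        foreach (var circuit in byComponent.Values.Distinct())
            circuit.ExpireAndSolve();
    }
}
```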
Perhaps driver is a better name than controller for this object.
Without going any further, I have built this kind of group that controls the processing of a circuit in Tenrec, a plugin for unit and integration tests of GH definitions, which run from the Visual Studio test runner using Rhino.Inside. But it’s impossible to get it to the level it should be at without changing how GH works. The same goes for Anemone or any non-linear processing flow.
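For context, the kind of setup this enables looks roughly like the sketch below. It is a simplified illustration, not Tenrec’s actual API: the test class, file path and assertions are made up, and the details of loading the Grasshopper plug-in inside the headless Rhino are omitted. `RhinoCore`, `GH_DocumentIO` and `NewSolution` are real RhinoCommon/Grasshopper SDK members.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DefinitionTests // hypothetical test class
{
    static Rhino.Runtime.InProcess.RhinoCore rhinoCore;

    [AssemblyInitialize]
    public static void Start(TestContext _)
    {
        // Boots a headless Rhino inside the test process via Rhino.Inside.
        rhinoCore = new Rhino.Runtime.InProcess.RhinoCore();
    }

    [AssemblyCleanup]
    public static void Stop() => rhinoCore?.Dispose();

    [TestMethod]
    public void OffsetCircuit_ProducesExpectedResults()
    {
        var io = new Grasshopper.Kernel.GH_DocumentIO();
        Assert.IsTrue(io.Open(@"definitions\offsets.gh")); // hypothetical file
        var doc = io.Document;
        doc.Enabled = true;

        // Solves the whole document; solving a single circuit in isolation
        // is exactly what GH does not expose today.
        doc.NewSolution(true);

        // ...assert on the outputs of the components under test...
    }
}
```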