I've often encountered the situation where I need to pan around and search for quite a while before I find which component takes time to solve (and currently I can't even find it). It would be really nice to be able to spot this more quickly. Is there a way to list the computationally heavy components?
// Runs inside a Grasshopper C# script component, which references
// Grasshopper.Kernel and System.Linq by default.
var activeObjects = GrasshopperDocument.ActiveObjects(); // get all active objects
var results = new Dictionary<string, double>();
for (int i = 0; i < activeObjects.Count; i++) // for each active object...
{
  IGH_ActiveObject ao = activeObjects[i];
  // Use TotalMilliseconds: Milliseconds only returns the 0-999 ms component
  // of the TimeSpan and would misreport anything that runs longer than a second.
  results.Add(string.Format("{0} {1}", i, ao.NickName), ao.ProcessorTime.TotalMilliseconds); // ...store its time and name...
}
foreach (KeyValuePair<string, double> result in results.OrderBy(v => v.Value)) // ...and sort the collection ascending...
{
  Print("{0}\n {1} ms", result.Key, result.Value); // ...so the winner prints last
}
…besides that, you should never, ever be in a situation where you can't find the bottleneck. If your definitions exceed 100 components, you are doing something wrong. Use scripts, clusters, expression components, or simply a better algorithm to shorten your definitions. Otherwise performance doesn't matter anyway, because you cannot maintain big definitions at all, and that costs far more than any calculation time.
Thanks a bunch to the both of you! Your script is really neat TomTom.
And you are right about your best-practice advice. (In this particular case I realized that the CPlane was miles off, which created large numbers that bogged things down without showing up on the profiler widget.)
I’m not sure I agree with this – some problems are big and cannot be reduced to smaller algorithms.
Breaking up large problems into separate GH files is cumbersome, adding complexity as data must be packed and unpacked. Clusters have limits; there are routine operations that don't work inside them. Scripts and clusters are difficult to share consistently between files, requiring constant cross-updating, which is again cumbersome.
IMO finding bottlenecks is valid functionality.
Well, let's put it like this: in my experience the average Grasshopper definition gets very big because of many small operations related to data management, handling edge conditions, and simple math.
I never use clusters, but I script a lot. This spares me the tree handling and eliminates a lot of unnecessary components. Sure, in the beginning scripting is more difficult and much more time-consuming, but the moment you start solving complex problems you get an immediate benefit. However, not all scripting is equal: it depends on the programmer's ability to find an appropriate level of abstraction, to document the work, and to solve complex problems by writing simple code.
Last week I reduced a definition from 200 components down to 30. And it now works more reliably, runs faster, and handles more cases.
In theory you could reduce even further, but it's always a matter of keeping things modular, extendable, and simple. I would also not recommend scripting while you're still in the creative phase, but at a certain size you should definitely begin to rework your definitions, replacing logical blocks with code, clusters, and/or mathematical formulas. And I strongly believe that complexity does not equal big definitions.
Edit: I also often deal with too many input parameters in definitions. Everyone knows this: "Which slider do I have to move to invoke this change?" I believe reducing input to a meaningful minimum is also important. What good is creating trillions of variants if none of them satisfies the quality needed? If you know which parameters lead to good results, you can fix them as constants or create algorithms to solve for their best values.
Yes, well, “be a better programmer” and “refactor early and often” are always good advice. Nonetheless some problems are gnarly, and sometimes neither of those options is practical tonight, which is when I need this thing to work.
If GH is going to have a wide user base and a lot of applicability, it needs to support lazy and amateurish programming practices as well as intensely focused and professionalized ones. Lack of analysis and debugging tools is not a virtue, and "you need to work smarter" isn't a complete answer to "why is it so hard to figure out why my program won't run?"
It’s not a bad answer, and I don’t doubt it comes from a place of love, but on some level you can answer every feature request in this way.
I agree. GH1 didn’t focus at all on developer tools, and that will have to change in the future. Much better profilers and some sort of debugger are clearly called for, as well as perhaps some automatic refactoring and best-practice schemas.
Well, let us not forget that Siemen and I provided a solution to the request.
However, the script I provided addresses the symptom; the real problem is something else, I believe. This is what I was just saying.
David is developing Grasshopper essentially alone. A lot of these requests do have a certain justification, and this is definitely a nice feature to have. But when it comes to further development of Grasshopper, we should focus on much more critical issues and requests first, even though this feature is relatively simple.
In my opinion the deeper problem underneath this request is of different nature:
A.) Node editors become difficult to handle when they get big. This is a problem inherent to them, so everyone using one should aim to reduce component count, as hard as that might be (or accept the problems involved).
B.) Parametric designers always argue that their workflow is superior to manual work in terms of speed and complexity: I can quickly change this and that, I can create these repetitive elements faster. It's more efficient, more diverse, more accurate, etc. I believe this is a false claim. People doing generative design, including myself, are much more often in a situation where deadlines become a problem. Why? Because of people constantly claiming to be faster. Four hours here, two days there. In the end our work is worth less. It's not that the designer himself benefits much from the speed. I do 20-50% of my professional work with Rhino and Grasshopper, and I consider that a lot; hardly anyone I know can fill a year with more time on Rh+GH. Still, I constantly try to get more time for my projects. This is really tough, because others say, oh, I can do it faster. But you know, hardly any of them can produce a final product, and if they can, they never do it in the contracted time. That's why I'm saying: never get into a situation where you have trouble isolating the bottleneck. If you find yourself there, your time and file management is the problem, not missing features of GH.
The current Esc key behaviour is the best I can do for GH1. Solutions run on the UI thread in GH1 meaning that any mouse or keyboard event will not get processed until after the solution is finished. The only way around this conundrum is to actually check the pressed state of the Escape key every now and again and abort from within the solution. If there’s a long loop running which doesn’t participate in the escape-key-checking, then there’s really nothing anyone can do short of killing Rhino.
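A minimal sketch of that escape-polling pattern as it could look in user code, inside a Grasshopper C# script component (items and DoExpensiveStep are hypothetical placeholders; the polling interval of 100 iterations is arbitrary):

// Check the Escape key periodically inside a long loop and ask the
// solver to abort; polling on every single iteration would add overhead.
for (int i = 0; i < items.Count; i++)
{
  if (i % 100 == 0 && GH_Document.IsEscapeKeyDown())
  {
    GrasshopperDocument.RequestAbortSolution(); // abort the current solution
    return;
  }
  DoExpensiveStep(items[i]); // placeholder for the real per-item work
}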
Isn't this more a problem of Rhino? In theory, if you call a RhinoCommon method on a separate thread, does killing that thread during execution save you from crashing Rhino? I would guess not. It seems a dangerous thing to do with unmanaged code.
However, I know that ICEM Surf runs every command in a separate thread, same as the VSR plugin. So in case a command fails, only the command gets killed. On the other hand, you can't script in there.
We’re not talking about crashing though, we’re talking about long-running, uninterruptible processes. Wait long enough and the computation will either complete, or you’ll eventually run out of memory.
You almost never want to kill threads; that can leave the program in an undefined state. Threads prefer to commit suicide. So two things are needed to make this work in a user-friendly way:
1. During calculations, the UI must remain responsive. This means performing all long-running processes on background threads.
2. Threads must be cancellable to a degree. Maybe not immediately, but shortly after a cancel request.
This way you can always cancel a computation, either by specifically cancelling one or by starting another one. And even if a computation refuses to cancel (for example because it called into a long-running method which isn’t cancel aware), you can still keep on working with the remaining processing power you have left. Or, worst case, choose to save your file and restart Rhino the proper way.
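A minimal sketch of that cooperative pattern using the standard .NET cancellation machinery (nothing Grasshopper-specific; the Solver class and loop body are illustrative):

using System.Threading;
using System.Threading.Tasks;

class Solver
{
  private CancellationTokenSource _cts;

  public void StartSolution()
  {
    _cts?.Cancel(); // starting a new computation cancels the previous one
    _cts = new CancellationTokenSource();
    CancellationToken token = _cts.Token;

    Task.Run(() => // long-running work stays off the UI thread
    {
      for (int i = 0; i < 1000000; i++)
      {
        token.ThrowIfCancellationRequested(); // the thread ends itself shortly after a cancel request
        // ... one step of the expensive computation ...
      }
    }, token);
  }

  public void CancelSolution()
  {
    _cts?.Cancel();
  }
}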
I know, but isn't this a bit of a cheat?
I thought @Bathsheba actually meant interrupting/stopping the current execution without crashing, but you can't do this with RhinoCommon methods(?). Of course, if you call an RC method 1,000 times you can abort between calls, but if you feed an RC method 1,000 objects at once you have no influence over its termination.
So by saying "run until you're out of memory or the computation finishes", you work around this issue, which is probably the best you can do.
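To illustrate the granularity point with a sketch (assuming RhinoCommon's Mesh.CreateFromBrep; the wrapper itself is hypothetical): if you drive the loop yourself, you get an abort point between calls, whereas one call that processes everything internally offers none.

using System.Threading;
using Rhino.Geometry;

static Mesh[] MeshManyBreps(Brep[] breps, MeshingParameters mp, CancellationToken token)
{
  var result = new Mesh[breps.Length];
  for (int i = 0; i < breps.Length; i++)
  {
    token.ThrowIfCancellationRequested(); // abort point between RhinoCommon calls
    Mesh[] parts = Mesh.CreateFromBrep(breps[i], mp); // one call per object
    if (parts == null) continue; // meshing can fail on bad input
    var merged = new Mesh();
    foreach (Mesh part in parts)
      merged.Append(part);
    result[i] = merged;
  }
  return result;
}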
My assumption was just that, to actually stop an external method mid-execution, you need some sort of violent thread termination, which other programs indeed offer as a kind of "last resort" functionality.
I thought it was about stopping without killing Rhino via the Task Manager, which also isn’t, technically, a crash.
You can run most RhinoCommon methods on threads. If you find one that can’t, either there will be a very good reason why, or we will try and improve it.
The real problem however is thread safety. A lot of methods that change the underlying data structure cannot be threadsafe. If you transform a Brep at t = 0 and that operation takes a total of 3 milliseconds, and you start transforming the same Brep with a different transform at t = 1 ms, then there's really no way of knowing what it'll look like after all transforms are complete. Probably just a gigantic mess.
However, methods that operate on different instances, or methods that do not modify the underlying data, ought to be threadsafe. This is something we have worked hard on for Rhino 6, but doubtless there will be cases we missed.
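As a sketch of that distinction (Brep.Transform mutates the instance in place, so each thread gets its own duplicate to work on; the parallel loop is illustrative):

using System.Threading.Tasks;
using Rhino.Geometry;

static Brep[] TransformSafely(Brep source, Transform[] xforms)
{
  var results = new Brep[xforms.Length];
  Parallel.For(0, xforms.Length, i =>
  {
    Brep copy = source.DuplicateBrep(); // reading the source is safe; each thread mutates only its copy
    copy.Transform(xforms[i]);          // no shared mutable state between threads
    results[i] = copy;
  });
  return results;
}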
We’ve also added a lot of cancellation options to traditionally expensive methods such as solid booleans or Make2D. This means that these operations can be cut short by whoever invoked them.
If that method doesn't take a CancellationToken or provide some other mechanism, then yes. And if you cannot afford to just let it run to completion upon a cancel request, then you will have to do some thread termination. But that is so difficult to get right that it almost doesn't bear thinking about. You'd have to essentially execute the external method in a separate AppDomain (which can be safely terminated), and figure out a way to pump the data from that secondary AppDomain into your own when the method completes.
Luckily, more and more APIs these days provide cancellation mechanisms, and indeed RhinoCommon is one of them.
I suppose this isn't a new idea or suggestion, but I've often wished that Grasshopper (GH2 maybe?) could provide some sort of pre-emptive warning, or an opportunity to back away from a solution consisting of many, many iterations.
Often this is the result of accidental data-tree mismatching or forgetting to change from Item to List access, but it can easily happen that, e.g., a param consisting of 2,000 data points somehow gets wrongly grafted or wired to the wrong input. This multiplies the number of iterations 2,000-fold and locks up the solver, leaving no choice but to force-quit Rhino.
So what if, at the start of a component's calculation, it detects that it's being asked to run a huge number of iterations and raises an internal flag to start a timer for, e.g., the first 10 steps, or counts the iterations completed in one second? Extrapolating from that would provide an estimate of the expected time for the entire calculation, and if it's some absurdly long period the user could be shown a pop-up warning like:
The component XYZ is scheduled to run 4,000,000 times according to its number of inputs. This might take approximately 56 hours (2.3 days) to complete. Are you sure you want to continue?
[Abort] [Continue] (20 s countdown timer) ☐ Do not show again
Aborting would then restore the previous solution state from the undo record and possibly undo the most recent wire-connection actions. Of course, I'm assuming that the main thread somehow regains control between iteration steps to do data management, but this might not be the case, or there may be other complications?
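A minimal sketch of the extrapolation idea (all names here are hypothetical, not part of the GH SDK): time a small sample of iterations and project the total.

using System;
using System.Diagnostics;

static TimeSpan EstimateTotalTime(Action singleIteration, long totalIterations, int sampleSize = 10)
{
  var sw = Stopwatch.StartNew();
  for (int i = 0; i < sampleSize; i++)
    singleIteration(); // run a few representative steps
  sw.Stop();

  double msPerIteration = sw.Elapsed.TotalMilliseconds / sampleSize;
  return TimeSpan.FromMilliseconds(msPerIteration * totalIterations);
}

// If the projection crosses a threshold, the solver could show the
// Abort/Continue prompt described above, e.g.:
// if (EstimateTotalTime(step, 4000000) > TimeSpan.FromMinutes(5)) { /* warn the user */ }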
This still happens way too often to me. I prevent such accidents by doing some sort of proof of concept with a few members beforehand and by constantly saving, but still, one moment of distraction and you kill everything.