The profiler is very useful for finding execution-time bottlenecks, but it doesn’t help you find out whether the definition is overloaded by components that duplicate and hold large amounts of data.
That’s why I’m putting forward the idea of a “data scale” widget as well.
Duplicates and the amount of data have to do with the cardinality of the data (number of elements), not with its weight (bytes/space in memory). They are two different things; which one do you mean?
A million booleans can weigh less than a complex brep.
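To make that distinction concrete, here is a minimal GhPython-style sketch. It assumes a hypothetical component with one List Access input named geo (type hint set so items arrive as RhinoCommon geometry) and the default output a, and that GeometryBase.MemoryEstimate() is available in your RhinoCommon build (Rhino 6+):

```python
# 'geo' is a hypothetical List Access input holding RhinoCommon geometry.
count = len(geo)                                   # cardinality: item count
weight = sum(g.MemoryEstimate() for g in geo)      # weight: rough bytes in RAM

# A single complex brep can outweigh a million booleans (~1 byte each in .NET),
# so the two numbers can rank the same wire very differently.
a = "%d items, ~%.2f MB" % (count, weight / 1048576.0)
```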
The software also can’t know (under the current GH paradigm) whether it is a user error or the user’s intention, and I don’t see a direct relation between this measure (in either interpretation) and something being good or bad; it depends on each case and on the complexity of the algorithm the component uses… It is also quite easy to see which component is returning more data than it should: with a panel, with the tooltips, with the cardinality components, with sunglasses by zooming in… so what am I missing?
Another question: what should it measure, the amount/weight of information in the inputs plus the outputs (which is usually output0 = input0 * input1 or output0 = input0 + input1 or similar, conditioned by param access and data structure), or the differential between the output data and the input data (wouldn’t that give more precise information for mapping it across the definition)?
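For what it’s worth, the two readings can diverge a lot. A toy example (plain Python, made-up numbers) for a component that cross-references two 100-item lists:

```python
# Made-up numbers for a component that cross-references two 100-item lists
# into a 100 x 100 grid of results.
in0, in1 = 100, 100
out0 = in0 * in1                         # 10,000 items out

inputs_plus_outputs = in0 + in1 + out0   # 10,200 items handled in total
differential = out0 - (in0 + in1)        #  9,800 items newly created
print("%d vs %d" % (inputs_plus_outputs, differential))
```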
I can see its usefulness in very particular cases (for example when the user makes a mistake with the data structure, which can be spotted in many other ways, e.g. the profiler itself reporting longer times than usual), but do you think it is worth the cost (execution and graphical) of visualising that information?
Maybe the memory weight has some use (I see that profiler in VS when debugging), but how do you know when a result is anomalous without being a god-level wizard, in which case you wouldn’t really need this anyway?
I know how to get the cardinality of a tree.
Of course, I mean the space in memory.
For example, I’m dealing with blocks, and I’d like to spot components that uselessly store duplicate geometry.
Well, the same goes for the profiler. That doesn’t make it any less useful.
No, I’m concerned with RAM usage, that’s all.
I recently had a 130 MB Rhino file that took 700 MB of RAM when opened in Rhino, and my definition gobbled up over 10 GB.
That’s why.
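For what it’s worth, something close to a “data scale” can already be prototyped. Here is a rough sketch (a GhPython component with the default output a wired to a panel; it relies on the Grasshopper SDK’s IGH_Param.VolatileData / VolatileDataCount and IGH_Goo.ScriptVariable(), and assumes RhinoCommon’s GeometryBase.MemoryEstimate(), Rhino 6+) that walks the document and lists the outputs currently holding the most geometry by estimated bytes:

```python
# Rough "data scale" sketch: list the heaviest outputs in the current document.
import Grasshopper.Kernel as ghk
import Rhino.Geometry as rg

def param_weight(param):
    """Rough byte estimate of the geometry a parameter currently holds."""
    total = 0
    for goo in param.VolatileData.AllData(True):   # every item, skipping nulls
        obj = goo.ScriptVariable()                 # unwrap the GH goo
        if isinstance(obj, rg.GeometryBase):
            total += obj.MemoryEstimate()
    return total

rows = []
for obj in ghenv.Component.OnPingDocument().Objects:
    if isinstance(obj, ghk.IGH_Component):
        params = list(obj.Params.Output)           # every output of a component
    elif isinstance(obj, ghk.IGH_Param):
        params = [obj]                             # floating parameter
    else:
        continue
    for p in params:
        rows.append((param_weight(p), p.VolatileDataCount, obj.NickName, p.NickName))

rows.sort(reverse=True)                            # heaviest first
a = ["%.1f MB, %d items: %s / %s" % (w / 1048576.0, n, comp, par)
     for w, n, comp, par in rows[:20]]
```

It only weighs geometric payloads and has to recompute every time the script runs, which is exactly the execution/graphical cost question raised above, but it does surface the components that are quietly holding duplicate breps.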