Question on computing large amounts of geometry

Hi everyone,

I find myself needing to generate boundary surfaces from contour curves and then extrude them, but GH crashes if I try to do it all at once. I have 32 GB of RAM and Windows still runs out of memory.

I wonder: could a Python script, for example, divide the list of curves into branches, animate the slider that controls the branch number, and bake each list? Would that use less RAM, or would it be the same?
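The chunking part of that idea can be sketched in plain Python (names like `chunk_size` and `chunk_index` are illustrative; in GHPython the input would be your flattened curve list, with a slider driving `chunk_index` one step per animation frame):

```python
def split_into_chunks(items, chunk_size):
    """Divide a flat list into consecutive chunks of at most chunk_size items."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

# In Grasshopper these would be your contour curves; plain ints stand in here.
curves = list(range(10))
chunks = split_into_chunks(curves, 4)
# chunks -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]

# An animated slider would step chunk_index from 0 to len(chunks) - 1,
# so only one chunk gets surfaced, extruded, and baked per frame.
chunk_index = 2
current = chunks[chunk_index]
```

Note this only lowers peak RAM if each chunk's result is baked and the component does not keep the previous outputs alive between frames.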

Edit: One more question: how is the “animate slider” command able to give an “Estimated Time Left” for the operation? From the discussions around GH having a progress bar, I thought you could never estimate an algorithm’s runtime before it actually runs, hence my question: what is going on with animate slider?

Well, a quick hack that comes to mind: generate the solids in an Anemone loop, or use some other method so you are not generating them all at once. Break your geometry up into areas and test for curve inclusion (or for a centerpoint inside a boundary). Another way is to break your data tree into chunks and work through them chunk by chunk. Using a version of this method in RH5, I found a huge increase in speed and a large decrease in the memory needed for a bunch of solid Booleans; I haven’t tried it in 6 yet.
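The “break your geometry up into areas” step could look something like this as a pure-Python sketch, using axis-aligned rectangles as a stand-in for Grasshopper’s point-in-curve inclusion test (the function name and region format are illustrative, not an actual GH API):

```python
def bucket_by_region(points, regions):
    """Assign each (x, y) point to the first region (xmin, ymin, xmax, ymax)
    that contains it, so each bucket can then be processed separately."""
    buckets = {i: [] for i in range(len(regions))}
    for pt in points:
        x, y = pt
        for i, (xmin, ymin, xmax, ymax) in enumerate(regions):
            if xmin <= x <= xmax and ymin <= y <= ymax:
                buckets[i].append(pt)
                break
    return buckets

# Two side-by-side regions; in GH these would be boundary curves and the
# points would be curve centerpoints.
points = [(2, 2), (7, 1)]
regions = [(0, 0, 5, 5), (5, 0, 10, 5)]
buckets = bucket_by_region(points, regions)
```

Each bucket then becomes one branch/chunk to surface and extrude on its own, instead of computing everything in one pass.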

Probably more, actually. Baked breps also require attributes, whereas Grasshopper breps are stored ‘naked’, as it were. How many different extrusions are we talking about here?

It doesn’t know, which is why it’s an estimate, just like the Windows file-copy progress. The assumption it makes is that all frames will take a roughly similar amount of time to compute and render, so given how many frames have been done so far and how long it’s been since the process started, you can extrapolate and give an estimate. It may be wildly wrong if later frames take significantly less or more time than earlier ones, though.
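That extrapolation fits in a few lines (a hypothetical helper for illustration, not the actual animate-slider code):

```python
def estimated_time_left(frames_done, frames_total, elapsed_seconds):
    """Linear extrapolation: assume the remaining frames take, on average,
    as long as the frames completed so far."""
    if frames_done == 0:
        return None  # no data yet, so no estimate is possible
    avg_per_frame = elapsed_seconds / frames_done
    return avg_per_frame * (frames_total - frames_done)

# 40 of 100 frames done in 80 s -> 2 s per frame -> 120 s remaining.
eta = estimated_time_left(40, 100, 80.0)
```

The estimate improves as more frames complete, which is why such progress bars tend to jump around early on and settle later.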

About 1k, but I think the biggest issue here is not the quantity but the size and complexity. I am working in km units right now, and the curves are polylines with many varying segments (contour lines). I already applied “Reduce” to them; should I smooth them too? Would that help? Should I also scale to smaller units?

Is it possible to have a ‘nested’ view of the progress, indicating the number of components calculated, with the progress of the current component shown under it?

It might not be accurate in terms of time prediction, but it can show the overall progress well, if I’m not mistaken?
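Assuming each component contributes equally to the total (a simplification, since components vary wildly in cost), the combined fraction for such a nested view could be computed like this (all names are illustrative):

```python
def overall_progress(components_done, components_total, current_fraction):
    """Overall completion fraction: finished components plus the partial
    progress of the component currently computing, equally weighted."""
    return (components_done + current_fraction) / components_total

# 3 of 8 components finished, current one 50% done -> 43.75% overall.
p = overall_progress(3, 8, 0.5)
```

This is why such a bar advances smoothly even when individual components take very different amounts of time: it tracks work units, not wall-clock time.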

Maybe something like this?