Why is working with many block instances so slow in GH?

Something I am just curious about and maybe @andheum or @krahimzadeh might know:

Why is working with many block instances using Human, Elefront, etc. so super slow in Grasshopper? It seems like it doesn’t matter what the block contains (it’s only a rectangular box in my case), and even if I am not exploding the block but just getting some info, it is excruciatingly slow.

I have 5329 instances of a block which is literally just a box arranged in a certain way in Rhino.

Now I need to basically get the bounding box of each instance and nothing else.

Doing this takes 8.2 minutes using Human on a PC with an i9-11900K.

That means it is getting the bounding box and some info at a rate of roughly 10 instances per second (5329 instances in 492 seconds).

What is the limiting factor here? What is the thing that is slowing it down so much? Can it be made parallel?


Turns out it takes 8.2 minutes for all of them WITH preview of the component on, but only 30 seconds with the preview off.

Then the real question is: why does it take so long to preview a box and a plane for each block?

I’m probably doing something unnecessary and dumb in the code. I doubt this is a grasshopper limitation.

It’s likely because the component has to interrogate each block individually. It has no idea if you’re providing it a list of 4000 different blocks, or 4000 of the same block, so it just figures stuff out each time. There are likely some optimizations that can be done to cache the expensive operations per block definition (but I’m not going to do that in human any time soon).
Attached is a script that assumes it’s getting a list of mostly the same blocks, and only unpacks information about each definition once. Should be a lot faster.
fast block info.gh (6.5 KB)
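The per-definition caching idea described above can be sketched in plain Python. The `Definition`/`Instance` classes and `expensive_unpack` below are hypothetical stand-ins, not RhinoCommon types, just to show the pattern of unpacking each definition once:

```python
# Toy stand-ins for a block definition and an instance of it -- NOT
# RhinoCommon types, just a sketch of the per-definition caching pattern.
class Definition:
    def __init__(self, name, geometry):
        self.name = name
        self.geometry = geometry  # pretend this is expensive to unpack

class Instance:
    def __init__(self, definition, transform):
        self.definition = definition
        self.transform = transform  # e.g. a translation, kept abstract here

CALLS = 0  # counts how often the expensive work actually runs

def expensive_unpack(definition):
    """Stands in for interrogating a definition: exploding geometry,
    reading attributes, computing a base bounding box, ..."""
    global CALLS
    CALLS += 1
    return {"name": definition.name, "base_geometry": definition.geometry}

def info_for_instances(instances):
    cache = {}  # definition -> unpacked info, computed once per definition
    results = []
    for inst in instances:
        key = id(inst.definition)
        if key not in cache:
            cache[key] = expensive_unpack(inst.definition)
        results.append((cache[key], inst.transform))
    return results

# 5000 instances of a single definition trigger only ONE expensive unpack.
box = Definition("box", ((0, 0, 0), (1, 1, 1)))
instances = [Instance(box, (i, 0, 0)) for i in range(5000)]
infos = info_for_instances(instances)
```

The point of the dictionary keyed by definition is that the expensive work scales with the number of unique definitions rather than the number of instances, which is exactly the situation here: thousands of instances, one definition.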

Thanks for the explanation, Andrew. I thought as much. No, I didn’t expect you would optimise Human for this admittedly quite unusual case of working with thousands of block instances.

I will take a look at your script and see what I can do with it. I was just curious whether some expensive operation is taking place. I noticed, for example, that the component outputs, for each block instance, how many instances of it exist in the document. Maybe it searches the document for them every time, meaning that with 5000 instances it finds the same 5000 instances over and over again, or something like that.
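If the count really is re-derived by scanning the document once per instance, that is quadratic work over 5000 instances; counting everything once up front makes it linear. A toy illustration in plain Python (nothing Rhino-specific, the "document" is just a list of definition names):

```python
from collections import Counter

# Pretend document: 5000 instances, all referencing the "box" definition.
doc_instances = ["box"] * 5000

def naive_counts(instances):
    # For every instance, scan the whole document again -> O(n^2).
    return [sum(1 for other in instances if other == inst) for inst in instances]

def cached_counts(instances):
    # Count all definitions in ONE pass, then just look up -> O(n).
    counts = Counter(instances)
    return [counts[inst] for inst in instances]

# Same answers, very different cost (naive kept to a small slice here).
assert naive_counts(doc_instances[:100]) == cached_counts(doc_instances[:100])
```

Whether the plugin actually works this way is speculation on my part, but it would explain why the time grows so quickly with the instance count.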

Fingers crossed GH2 will eventually have some native way of dealing with blocks and instances and have better ways of handling Rhino geometry directly without referencing and baking.

Hi Armin -

I’ve at least put that on the list now as RH-67826.

I have no idea what that means.

I mean working on Rhino geometry directly using GUIDs, so we can transform, duplicate, change parameters of geometry in Rhino without having to reference it in GH and then bake it again.

IIRC, Elefront and Human treat each input as an individual one without any caching, which might be slow.

You can try my experimental plugin (but the plugin is still very early-stage, so it’s only a speed test).

pancakealgo_blocktest.gh (129.8 KB)

PancakeAlgo.gha (265.5 KB)

Calculating only the bounding box can be even faster, e.g. by transforming the original bounding box (not very accurate if your original shape isn’t aligned with the X/Y axes).
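That shortcut can be sketched as follows (toy Python, not RhinoCommon): transform the definition box’s eight corners by the instance transform, then take the axis-aligned box of the result. For rotated instances this gives a conservative, possibly oversized box rather than a tight one, which is the inaccuracy mentioned above; the rotation helper here is just a stand-in for an arbitrary instance transform.

```python
import math

def bbox_corners(bmin, bmax):
    # The 8 corners of an axis-aligned box given its min/max corners.
    (x0, y0, z0), (x1, y1, z1) = bmin, bmax
    return [(x, y, z) for x in (x0, x1) for y in (y0, y1) for z in (z0, z1)]

def rotate_z(p, angle):
    # Toy instance transform: rotation about the Z axis.
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def transformed_bbox(bmin, bmax, xform):
    # Transform the definition's 8 corners, then take the axis-aligned
    # box of the result. Fast, but for rotated instances this is only a
    # conservative (possibly oversized) bound unless the shape happens
    # to stay axis-aligned.
    pts = [xform(p) for p in bbox_corners(bmin, bmax)]
    xs, ys, zs = zip(*pts)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# A unit box rotated 45 degrees: the transformed bbox grows to sqrt(2) wide.
lo, hi = transformed_bbox((0, 0, 0), (1, 1, 1),
                          lambda p: rotate_z(p, math.pi / 4))
```

The win is that the definition’s base box is computed once and each instance only costs eight point transforms, instead of re-walking the block geometry per instance.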



Attached is an example workflow using the eleFront v5 beta that doesn’t do quite what you want, but at least starts to get there on some tangential level. The core issue still holds, though: to push something from GH to the “real world”, it has to be created in the active document or exist in an active document somewhere. The issue we run into often, and likely you do too, is selectively manipulating a subset of objects from a larger set. Here is what happens in the attached example:

  1. An external file is referenced in as “Ghost Geometry” and a subset of objects are chosen using a key/value pair filter.
  2. The objects are then operated on, in this case translated in space.
  3. The new objects are baked to the active document, with the new bake component returning the GUIDs of the newly created objects. An attribute is added to the moved objects to help filter or identify these later as needed.
  4. A new component “Modify Rhino File” is used to replace the originally referenced geometry with the translated geometry with all attributes inherited.

If you run the example, you should see something like this below, 3 groups of objects moved from their original position:

I did find some issues with the way layers are being created in the beta, so good I did this little thought exercise! Also exposes some issues with multiple BakeNames, but that is something for another time.


Elefront v5 Test.3dm (2.3 MB)
Elefront v5 Test.gh (16.4 KB)


Thank you to @andheum and @gankeyu for your solutions to the issue of getting bounding boxes and/or transforms for block instances fast.

Really blown away by how fast it is now:

The Pancake solution takes just under 500 ms to explode the geometry of the blocks.

The Human solution takes just 87 ms to get bounding boxes for the same 5329 block instances.

Both are really impressive and will definitely come in handy in the future. We create a lot of patterns out of block instances in Grasshopper and this will help with those as well.

@elevelle Also big thanks for sending the Elefront v5 Test. It looks super interesting and I am sure will open up a lot of cool possibilities in the future!

ps: @andheum I really love using the Objects by Selection component, but in normal use I always want to use a button to trigger the selection. Normally the button then only holds the data while it is pressed, so I always combine it with the Stream/Freeze Gate from Heteroptera. Would you consider adding a “Hold Data” option to the right-click menu, where you already have “Live Mode”? Or can you think of another quick way of holding the data without any other plugins?

Here’s some more food for thought:

The new eleFront (v5) does more of the caching mentioned above, so getting the block information for thousands of blocks is much faster. In this case, almost 12,000 blocks in 90 ms.

As you can see though, about 37 s are spent referencing the blocks in. I’ll look into speeding this up, but part of the reason is that eleFront is caching 74 unique definitions here, so if you are using fewer definitions this won’t take as long. (You can tell the caching is happening because referencing all instances takes only slightly longer than reading the instances per definition.)

Also, all 74 blocks in this example are Linked Blocks, so eleFront is actually reading the source file rather than the version cached in Rhino, which also adds some time. If they’re embedded in the local file, this time will likely be reduced as well.