For work I build quite detailed houses and other projects in Rhino. These files contain hundreds of linked blocks and even more embedded blocks; most drawings have more than 1800 blocks. A common file size is 100 MB or bigger.
Of these 1800+ blocks, most are the same.
Inside these 1800+ blocks are more blocks, and inside those even more blocks in some cases (we use blocks to standardise detailing).
Towards the end of a project I notice significant lag in the perspective view, in loading times for commands, and when copy-pasting objects.
If I check Task Manager I don't see an extreme workload in the performance tab.
At most my memory usage is a bit high.
First I thought it was because the file was getting too big, but I see the same behaviour in a 70 MB file (I reduced the file size by making even more blocks linked). To me it looks like a render/video problem, but my GPU is barely used.
Does anyone know a way to check whether it's my PC, or whether Rhino just can't handle the extreme number of objects and blocks in my drawing?
Lenovo Legion T530-28ICB
Intel® Core™ i7-9700 CPU @ 3.00 GHz
RAM: 16.0 GB
NVIDIA GeForce RTX 2060 Super with 8 GB
The most likely scenario is that your blocks are structured badly, with blocks inside of blocks inside of blocks… especially if you use assemblies from an online model bank or a manufacturer's website.
Take a window, for example. If you grab one from a window manufacturer, they often nest the block with all the parts of the assembly, sometimes hundreds of parts deep. Rhino has to store transform data for every block within the block, and that can be hundreds or thousands of data points depending on how badly the block is structured.
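To get a feel for how nesting multiplies that transform data, here is a toy back-of-envelope model in plain Python (not Rhino code; the block names and counts are made up for illustration). Each definition lists which child blocks it inserts and how many times:

```python
# Toy model of nested block definitions: each definition maps to a list of
# (child_definition, instance_count) pairs. Plain geometry has no entry.
def transform_count(definition, blocks):
    """Number of nested instance transforms implied by one insertion
    of `definition` (one transform per child instance, recursively)."""
    total = 0
    for child, count in blocks.get(definition, []):
        # each inserted child carries its own transform, plus whatever
        # is nested inside it
        total += count * (1 + transform_count(child, blocks))
    return total

# A manufacturer-style nested window (hypothetical structure):
nested = {
    "window":   [("frame", 1), ("sash", 2)],
    "frame":    [("profile", 4), ("hardware", 2)],
    "sash":     [("profile", 4), ("hardware", 4)],
    "hardware": [("screw", 6)],
}
# The same window re-blocked as one flat definition of raw geometry:
flat = {"window": []}

print(transform_count("window", nested))  # 85 nested transforms
print(transform_count("window", flat))    # 0 nested transforms
```

Even this modest four-level example carries 85 transforms per insertion; real downloaded assemblies are often far deeper.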
The fix is simple. If you only need the parts for reference, mesh them to a reasonable level, assign materials as needed, block the entire assembly, and then import it into your model.
OR… if you need to edit the parts, explode the block down to its core (you will have to run ExplodeBlock until SelBlock no longer highlights anything).
Then, when the part is no longer a block at all and contains no internal blocks, create a single block from the entire part. You have now reduced hundreds or thousands of data points down to one.
Import that new block into Rhino and you will be amazed how much faster your model runs.
The issue is that, especially with a window or light fixture, if it's structured wrong you will replicate that block many times in your scene (in a skyscraper, thousands of times). Those thousands of data points become millions, which adds massive, unnecessary computational overhead and can make Rhino crawl.
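The scene-level blow-up is just multiplication; here is the arithmetic as a tiny sketch, with all figures hypothetical:

```python
# Rough scene-level arithmetic: nested transforms per instance
# multiplied by how often the block appears in the model.
per_instance_nested = 850   # a badly nested window (hypothetical figure)
per_instance_flat = 1       # same geometry re-blocked as one flat definition
instances_in_scene = 2000   # e.g. windows across a large project

print(per_instance_nested * instances_in_scene)  # 1700000 data points
print(per_instance_flat * instances_in_scene)    # 2000 data points
```

The geometry on screen is identical in both cases; only the bookkeeping differs.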
Thanks for the reply. I was always under the impression that blocks would simplify the data points.
In my drawings it's mostly the same block used in a lot of other blocks, and those blocks are then used in a lot of other blocks.
The exploding option would not be ideal for us. We also draw parts that we need to buy (screws and fixings, but also sewage, electricity and water supply, in the least amount of detail that is still usable). If we explode those blocks, we lose the ability to count them (I hope that makes sense).
For example, this is one of my blocks:
All blocks in there are linked, and are repeated there (or in other blocks) to keep the detailing of a block universal/modular.
Just for my understanding: do you mean that a block tree (blocks in blocks in blocks) containing meshes at the deepest level performs a lot worse than a single block containing all those (identical) meshes?
I always figured that Rhino would internally construct some kind of single display mesh after loading a file and its (referenced) blocks, so that a tree of blocks performs just as well as a single mesh.