Graphics performance with very large 3dm files

Hi Micha,

Thanks for alerting me to the bounding box feature. I look forward to
reviewing it when it is available.

I have been trying to develop a way to test object-addressing
functionality (I believe this is where the problem starts). I load
MainFile, which has a series of object links nested two to three levels
deep, and SaveAs under a different name (NewFile). I then delete over
50% of the objects from NewFile … and Save. When I insert (Ctrl I)
NewFile as a linked object in a new project, all the objects that were
associated with MainFile appear in the Layer section of the side toolbar.

I find this problematic, and I suspect it is slowing down viewport
rendering excessively. I am bundling some files to send to Jeff for his
testing and will post results as we proceed.

Talk to you soon

Ed

I always save small, but if your linked files are not saved small (in which case you would still have control over the rendering mesh), I might go through each file, change the mesh size, and re-save it.

I went through an entire project replacing every polysurface I could with lightweight extrusions.

Look for complicated objects. In my case, I override the meshing for particular objects, such as the few true threaded fasteners I have in a project. Pipes formed over curves often have a lot of polygons in their meshes. I have a chain drawn in a file, but its mesh is block-edited and overridden to be quite sparse.

For display speed, I often shut off layers, or select what I need and hide the inverse.

Anyway, you might check that the anti-aliasing and anisotropic filtering settings are reasonable in both Rhino and Nvidia’s driver.

I am also slightly curious how much memory Rhino is using for a file that large, because if it’s over your system RAM, you are using virtual memory.

I’ve suggested an imposter function for toggleable low-resolution placeholders for blocked objects. In other words, you would have a saved block and perhaps a filename.imp.3dm, where the .imp file would simply be a stripped-down version of the other, kind of like a 3D thumbnail.

Perhaps a diagnostic display mode in which we could see everything as rendering meshes would be a good utility for speeding these things up, like id Software’s gl_showtris 1. If the user could see all the rendering meshes, they could pick the most complicated ones and dumb them down. McNeel might even have this built into Rhino already, nudge, nudge. ShowRenderMesh only works on non-blocked objects.

Lots of linked blocks will slow a file down considerably, especially with higher (or any) anti-aliasing and Ghosted display mode, or any mode with shadows.

Lastly, @Micha has suggested it before, and it’s made a massive difference for me: use a mesh version of your linked blocks, as low-poly as is reasonable for your purposes (and join all the meshes in each file, too). I see you saved a less-detailed NURBS version, but just try out meshes for a couple of the blocks with the most instances and see if it makes a difference. Also try the existing bounding-box display feature, turn off anti-aliasing, and use a wireframe mode.
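To put rough numbers on why converting the most-instanced blocks pays off, here is a quick back-of-envelope sketch in Python. The instance and triangle counts are made-up examples, not measurements from Ed’s files:

```python
def total_triangles(instances: int, tris_per_instance: int) -> int:
    """Triangles the display pipeline must draw for one block definition."""
    return instances * tris_per_instance

# Hypothetical fastener block: 40 instances of a 50,000-triangle render mesh
heavy = total_triangles(40, 50_000)   # 2,000,000 triangles
# Same block swapped for a 2,000-triangle joined low-poly mesh
light = total_triangles(40, 2_000)    # 80,000 triangles
print(f"{heavy // light}x fewer triangles")  # 25x fewer triangles
```

The savings multiply with the instance count, which is why the blocks with the most instances are the ones worth converting first.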

Let us know if any of that helps.

Hi Brenda,

Thanks for the insights. How do you override the meshing for the
complicated objects … are there commands for this functionality? I
am not familiar with the concept of changing mesh size … do you have
a link that describes these issues?

Much appreciate your help

Ed

http://wiki.mcneel.com/rhino/meshfaq
If you haven’t looked at mesh settings before, you are probably at the default “Jagged and faster”, which is likely fine, though it’s not very high fidelity for fine objects. You can run FlatShade first if you want to see the effect of different settings.

With settings at “Jagged and faster”, try ExtractRenderMesh, delete the original NURBS, select all meshes and Join. Then save as a new file and change the Block Manager pointer to that file.

Hi Greg,

I have tried using the wireframe mode … unfortunately it does not give
enough information when showing the project to the client. I will be
sending Jeff a file with the object addressing issues … please review
if you have a chance and send your comments

Thanks

Ed

Firstly, in Document Properties/Mesh you can choose “Smooth and Slow” or “Fast and Jaggy”, or custom settings. I am not sure how this affects linked blocks; if the settings are different, Rhino may or may not remesh them. If it does try to remesh a file as large as yours, it will take a very long time. And if it does want to remesh, that means you saved the files normally, in which case it might be better to change the meshing on the linked blocks instead, so it will not have to be recalculated every time.

[I always save small, so it only takes a few coffee sips to load a 50MB file. BTW, 7-zipped Rhino save-small files give you a fantastic 80% compression ratio! You can save a lot of versions that way.]
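If you wanted to script that archiving step, Python’s standard-library lzma module implements the same LZMA algorithm 7-Zip uses by default; a minimal sketch, where the sample bytes just simulate the repetitive structure of a save-small file:

```python
import lzma

def compression_ratio(data: bytes) -> float:
    """Fraction of the original size saved by LZMA compression."""
    return 1 - len(lzma.compress(data)) / len(data)

# Save-small .3dm files hold lots of repetitive object records, which is
# why ratios around 80% are plausible; simulated here with repeated bytes.
sample = b"3dm-object-record " * 10_000
print(f"{compression_ratio(sample):.0%} saved")
```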

You can override the mesh on a particular object by selecting it and pressing F3 to open its properties; you should see “Render Mesh Settings”, where you can change the settings per object.

Although it might be tedious, perhaps ReplaceBlock might let you swap blocks. You could create placeholders for your factory areas, machinery, or systems, keep those in your main file on load, and when you want to work on an area, ReplaceBlock the placeholder/imposter with the real section. For this to really pay off, Rhino would have to avoid loading a block definition that has no instances.

Basically Rhino has the underlying functionality to do swaps, but the Block Manager isn’t written for it. If someone were handy with scripting, though, they could write a toggle function that uses a special character sequence or extension…
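As a sketch of that “special extension” idea, the toggle itself could be pure filename bookkeeping. The `.imp.3dm` suffix below is just the hypothetical convention from earlier in this thread, and the actual geometry swap would still have to go through ReplaceBlock inside Rhino:

```python
IMP_SUFFIX = ".imp.3dm"  # hypothetical imposter-file naming convention

def toggle_imposter(path: str) -> str:
    """Map a linked-block path to its imposter twin, and back again."""
    if path.endswith(IMP_SUFFIX):
        return path[:-len(IMP_SUFFIX)] + ".3dm"
    if path.endswith(".3dm"):
        return path[:-len(".3dm")] + IMP_SUFFIX
    raise ValueError(f"not a .3dm path: {path}")

print(toggle_imposter("machinery.3dm"))      # machinery.imp.3dm
print(toggle_imposter("machinery.imp.3dm"))  # machinery.3dm
```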

You can also place factory areas on separate layers, and turn them on/off, but you have to be careful as there are a lot of gotchas with blocks and layers.

Hi Steve,

I will work on your suggestions. I have sent a file to Jeff explaining
the object addressing issues that I have … maybe he will be able to
have some comments on why the linked objects keep appearing in the
Layers toolbar even though they have been deleted.

I will keep working on this issue, as I have to find a way to demonstrate
factory-floor layouts to clients. The files are large (2–3 GB) but are
well within the capacity of a 64-bit system with 32GB RAM.

I will keep everyone updated. Thanks for the help

Ed

Hi Brenda,

I have set the Document Properties -> Mesh to “Fast and Jaggy” … it
doesn’t give the performance needed. If I change to another window
(browser etc) and then go back to the Rhino window (large file > 1 GB),
the window takes several seconds to fully render and become active. This
issue is perplexing since the other techs have tested hi-res games with
no issues.

I really like your advice on SaveSmall … I am going to test this further

Thanks

Ed

For the demo, I would switch the video card’s power management mode from Adaptive to Maximum Performance.
http://nvidia.custhelp.com/app/answers/detail/a_id/3130/~/setting-power-management-mode-from-adaptive-to-maximum-performance

Still, your rendering mesh is your best attack.

You can change the settings to custom, just as you can for any given object, but I don’t know which settings win with linked files.

If you can see what your video card is doing, that might be helpful too. I’ve not read any input about SLI and Rhino. If you were on a 4K monitor, a second video card might help as far as fill rate is concerned.

What is your memory usage? Check Ctrl+Alt+Del > Task Manager > Processes, and look for Rhino.exe under Image Name.
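The reason to check is simple arithmetic: once Rhino’s working set plus Windows’ own needs exceed physical RAM, you are in the pagefile. A rough sketch, where the 2 GB OS allowance is an assumption rather than a measured figure:

```python
def likely_paging(process_gb: float, installed_ram_gb: float,
                  os_overhead_gb: float = 2.0) -> bool:
    """Rough check: is a process likely pushing past physical RAM?

    os_overhead_gb is an assumed allowance for Windows and other apps.
    """
    return process_gb > installed_ram_gb - os_overhead_gb

print(likely_paging(4.6, 32.0))  # False: plenty of headroom on 32 GB
print(likely_paging(4.6, 4.0))   # True: the same process would page on 4 GB
```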

The Resource Monitor has more detailed information, but I’ve had issues starting it under heavy usage. LOL!

I changed the power management mode to maximum performance - it helped about 3-4% (everything is worthwhile!). Memory usage is about 4.6 GB, average CPU usage is 15%. There seems to be a lot of overhead available in the system … I just can’t understand where the lag is.

Thanks

Ed

Lag on task change? If your system has a conventional hard drive, Windows might be using virtual memory. Windows makes such questionable use of virtual memory that I turned it off on both of my systems, but be warned: when RAM is out, it’s out.

Though with a file that large, I wonder whether you are using virtual memory. If you need it, you can create a contiguous swap file, set up RAID 0 striping or a 15k-RPM hard drive, or try to find a server motherboard with a PCI Express slot that can take all the memory you need; some can take 128GB of memory! Careful, server boards are a ghetto of unreliable products for some reason, and not all have PCI Express 3.0.

(Sadly, the hysteresis for Nvidia’s Adaptive mode is not adjustable. Rotations mean the video card can’t reuse much of the previous frame, and turns are usually not predicted well.)

The CPU percentage is an okay indication of workload, but if a single core is maxed out, that could also be your bottleneck. In the Task Manager, you can select “Show Kernel Times”, which shows Windows’ background work a little more clearly.
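You can see numerically how a “low” average hides a pegged core; a small sketch with made-up per-core utilization figures:

```python
def single_core_bottleneck(per_core_loads, threshold=0.95):
    """True if one core is pegged even though the average load looks low.

    per_core_loads: utilization per core as fractions (0.0 to 1.0).
    """
    average = sum(per_core_loads) / len(per_core_loads)
    return max(per_core_loads) >= threshold and average < 0.5

# An 8-core box reporting "15% CPU" can still be display-bound on one core:
loads = [0.98, 0.05, 0.03, 0.04, 0.02, 0.03, 0.02, 0.03]
print(single_core_bottleneck(loads))  # True
```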

It would be interesting to see whether there is a performance gain with a second video card. I don’t recommend buying one, but do you know anyone who has one you could borrow?

This is very disappointing for 3 years!

I’ve noticed that on load, Rhino only meshes objects within its view frustum/wedge, so for a demo, I would make sure that you zoom out enough to see the entire thing, so everything is meshed.

I have not read through the entire thread, so some of these tips might have been covered.

  • Turn off isocurves for the objects; Rhino does not like too many curves.
  • If you use Rendered mode, turn off “Shadows” and “Advanced GPU lighting” to speed it up.
  • Avoid complex nested blocks.
  • Have a well-organized layer structure and turn off everything you don’t need to see.

A CPU with fast cores feeds data to the graphics card as fast as possible. Many cores don’t help, so a fast dual-core i5 can beat a slower quad-core i7.
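This follows from Amdahl’s law: if only a small fraction of the display work parallelizes, extra cores buy almost nothing, while higher clock speed scales everything. A sketch with an assumed 10% parallel fraction (the fraction is illustrative, not a measured figure for Rhino):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 10% parallel work, quadrupling the core count barely moves the needle:
print(round(amdahl_speedup(0.1, 2), 3))  # 1.053
print(round(amdahl_speedup(0.1, 8), 3))  # 1.096
```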

Also, if Rendered mode is what you work in, you can extract the render mesh and hide the NURBS; it’s faster.

Hi Holo,

Thanks for the tips. I am using all your recommendations … I do need to nest blocks (up to three layers). I hope that Jeff can see something in the files that I sent.

Talk to you soon

Ed

Also, 512 MB isn’t much graphics card RAM for the type of project you’ve described. Maybe you’re maxing out that card?

This is the gadget I use to monitor GPU and video RAM usage, but I’m sure there are other options, too: GPU Observer

Hi gregb

We have tried both the GeForce 660 (2GB RAM) and the GeForce 200 series. The GeForce 200 series is actually faster … interesting.

Ed

Is it faster for simple files then? The above from the original post doesn’t sound like it’s working at all. Just saying it’s an old card with not much RAM and could be a bottleneck for the type of very complex files you describe. Using a GPU monitor will at least let you see what’s happening with the card.

Hi greb

Rotating with the mouse works well with files up to 700MB, but then performance really starts to go downhill. I thought the issue was a function of the graphics card, but 'net research indicates that this is an addressing or CPU issue.

Hopefully it can be resolved.

Ed

It really depends on your model, but as mentioned before, working with meshes makes display performance faster, especially when the number of objects is not very high.

A while ago I made a script that builds a ‘light’ version of the model from extracted, joined, per-layer render meshes. I use it a lot on very heavy and complex scenes, where I use the model snapshots as preview placeholders but keep the main NURBS source files for any edits. Maybe it helps:
http://jarek-rhinoscripts.blogspot.com/2011/02/quick-model-snapshot.html

Buy a cheap used GTX 285 with 2GB RAM and you get the best price-to-performance ratio out there. :wink: