Hello everyone,
We have been using our design workstation with the following specs to try to run what seems to be a very resource-heavy, unoptimized model in Rhino 7.
Some specs of our machine: 20-core/40-thread Intel Xeon Gold 6230 x2 (dual processor)
192 GB DDR4-3000 RAM
Dual RTX 2080 Ti with 11 GB of memory each
Samsung 970 NVMe SSD.
I mean, it’s no machine to laugh at, yet what we see is Rhino running on one core, maxing that out at 3 GHz, with no other cores in use and no GPU utilization, and the model is unworkable; it’s a slideshow.
Steps we took:
We checked in the Nvidia Control Panel that Rhino uses the dedicated GPU at full performance.
In Rhino's options under Cycles, we set the throttle to 1 ms (does this actually help?), made sure the RTX is selected under CUDA, turned Viewport resolution sharpness down to about 30%, and turned Viewport responsiveness all the way down.
Rhinocycles plugin is installed and running.
So my questions are:
Is there a way to optimize Rhino from the OS side (it uses one core and no GPU!)?
Is there a way to optimize how Rhino computes the model, or to optimize the model itself with tweaks, without making it unworkable?
And finally, do you see a viable upgrade path for the machine or, if that's not possible, a suitable replacement?
Thank you for your time in advance, your assistance is much appreciated.
This is only for Raytraced and Rhino Render. I’m thinking that you’re not referring to that. When you do switch to Raytraced or use Rhino Render, you will see that your GPU most definitely is being utilized a lot. A good tool to see the actual utilization properly is HWMonitor (https://www.cpuid.com/softwares/hwmonitor.html).
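If you prefer the command line to HWMonitor, `nvidia-smi` can report per-GPU utilization and VRAM use directly. A minimal sketch in Python, assuming the NVIDIA driver's `nvidia-smi` is on the PATH; the sample line at the bottom is made-up illustrative output, not captured from this machine:

```python
import subprocess

def parse_utilization(csv_text):
    """Parse 'nvidia-smi --query-gpu=... --format=csv,noheader,nounits' output.

    Returns a list of (name, gpu_util_percent, mem_used_mib) tuples.
    """
    rows = []
    for line in csv_text.strip().splitlines():
        name, util, mem = (field.strip() for field in line.split(","))
        rows.append((name, int(util), int(mem)))
    return rows

def query_gpus():
    # Shells out to nvidia-smi; requires an installed NVIDIA driver.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=name,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_utilization(out)

# Illustrative output for a dual-2080 Ti box (made-up numbers):
sample = "GeForce RTX 2080 Ti, 3, 612\nGeForce RTX 2080 Ti, 0, 8"
print(parse_utilization(sample))
```

Watching those numbers while orbiting the viewport would tell you immediately whether the wireframe/shaded display is touching the GPU at all.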
To identify what you’re running into: what exactly is slow?
When we switch to Raytraced or Render, the GPU does indeed get used. But this is of little interest to us; it’s as if the CPU is the bottleneck, perhaps?
We can’t move/pan the object without it becoming a slideshow nightmare; I’d say it can take up to 10 seconds from frame to frame.
And only one processor is utilized, so performance is terrible.
Are you moving an object or panning a view? Without specifics, it’s very hard to offer advice. How big is the file? When you select everything in a viewport, what does the command line report in terms of object types and numbers that are selected?
-wim
I’m guessing the others would know better, but I think the CPU side of Rhino’s OpenGL processing is largely single-threaded, so all of those cores won’t help you anyway.
In reality, you are still on the Intel Skylake architecture (albeit in a somewhat evolved form), and the throughput is probably not all that fantastic. Furthermore, for that particular generation, and most that followed, you would normally (I think) use a Xeon-W type CPU for CAD/CAM, which is formally for workstation use. What you have is something more meant for proper servers (Xeon Gold, Scalable).
I think as a “modern” alternative, you could consider either of the following:
AMD Threadripper 7000 (Threadripper 7970X, for example)
Intel Xeon-W (w9-3475X, for example)
You seem happy enough to run proper gaming GPUs, so something like an RTX 4060 Ti 16 GB or an RTX 4080 Super 16 GB would work, as long as you aren’t reliant on (11 + 11) GB of pooled GPU memory, assuming you currently have an SLI setup.
Even with all of those cores, some standard new consumer desktop CPUs can run rings around server-class Xeons with 80 threads, most certainly in single-threaded applications. You could have a look at benchmarks and see whether something like a standard AMD Ryzen 9000 would be fine for you, even if it meant stepping back to 32 threads.
That’s mostly raw OpenGL performance on one GPU, assuming the model fits in memory, and having 2 GPUs isn’t going to do anything to help that at all; SLI was only ever for games.
Regarding polygons: there are 28,893,279 quadrilateral polygons and 42,893,971 triangular polygons in this model.
There would be 100,680,529 triangular polygons in total after forced triangulation.
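As a sanity check on those counts: forced triangulation splits each quadrilateral into two triangles while the existing triangles are unchanged, so the total follows directly:

```python
quads = 28_893_279
tris = 42_893_971

# Each quadrilateral splits into two triangles on forced triangulation;
# existing triangles stay as they are.
total_tris = quads * 2 + tris
print(total_tris)  # 100680529, matching the reported figure
```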
It’s not ideal but it was what we were delivered.
Any ideas?
Regarding whether it fits on the GPU: tough to say, since the GPU has yet to see any utilization with this model.
The realtime display uses the GPU; you’ve been using it, if the model fits on it. What’s its VRAM usage?
I rendered 500 million polygons once on a 1080 Ti, but that was with heavy use of blocks. What are you even supposed to do with this model? The number of distinct “objects” in a Rhino file, more than polygon count, has a huge impact on performance, and you have an order of magnitude more than anyone would consider a ‘large’ file. Is this routine for your work?
0 seems unlikely; that looks like you’re reading the wrong number, even if the model is too big and it has fallen back to software rendering…which I didn’t even know was still possible.
I think the model might be too big, and indeed we are experiencing a software fallback. It was originally a 10 GB model and we managed to reduce it to 4 GB; do you believe that is still too big?
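A back-of-envelope check on whether ~100 million triangles can fit in 11 GB of VRAM. The per-vertex byte counts here are assumptions (3 floats of position plus 3 floats of normal, 32-bit indices), not numbers measured from Rhino, and they ignore wireframe buffers, textures, and driver overhead:

```python
TRIANGLES = 100_680_529          # total after forced triangulation
BYTES_PER_VERTEX = 24            # 3 floats position + 3 floats normal (assumed)
BYTES_PER_INDEX = 4              # 32-bit indices (assumed)

# Worst case: fully unwelded mesh, 3 unique vertices per triangle.
worst = TRIANGLES * 3 * (BYTES_PER_VERTEX + BYTES_PER_INDEX)

# Friendlier case: welded/shared vertices, roughly 0.5 vertices per triangle.
shared = int(TRIANGLES * 0.5) * BYTES_PER_VERTEX + TRIANGLES * 3 * BYTES_PER_INDEX

gib = 1024 ** 3
print(f"unwelded: {worst / gib:.1f} GiB, welded: {shared / gib:.1f} GiB")
```

Even the unwelded worst case nominally fits under 11 GB on these assumptions, so if there really is a software fallback it may be driven as much by the huge distinct-object count (and the buffers each object needs) as by raw triangle count.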