I am having resource issues when simulating complex projects in RhinoCAM. My machine is giving me all the symptoms of running out of memory, but I’m not sure if it is system memory or GPU memory.
I asked RhinoCAM about the issue, and their reply was that RhinoCAM does not utilize the GPU for calculations.
I don’t know if this is the bottleneck, but the GPU memory gets filled up when simulations are running, and there’s commonly another 5GB shared GPU memory being used. (Card is Nvidia Quadro P2200 w/ 5GB)
My computer has an i9-11900KF processor, MSI Z590 motherboard, 64GB of 3200MHz RAM, 2TB of M.2 storage, and an Nvidia Quadro P2200 GPU, running Windows 10 Pro.
This might be a rabbit hole, but Rhino never uses more than ~31GB ram, even though the message at the bottom of the Rhino window always says that the entire installed memory is available. When the program is running super slow and is non-responsive, like it is out of memory, the RAM being used will be pegged at 31GB.
And I noticed that my computer says “Shared GPU Memory” is 31GB, half of the available 62.2GB.
Is Rhino limiting how much RAM is being used? Or is RhinoCAM limited in how much RAM it can use? I can’t find any settings for memory usage.
Or is Windows not letting me use any of this shared GPU memory, thinking my GPU needs it?
Would upgrading either system memory or the GPU make a difference? And if upgrading would help, which will make more of a difference, RAM or GPU?
Can’t say I’ve ever seen that shared-GPU-memory allocation ACTUALLY lead to running out of RAM, and googling it says that’s expected…it has nothing to do with it, unless the problem is just trying to display it all when you haven’t really got enough VRAM (5GB is just above spec, but the polygon count would have to be insane for that to start to become a thing.)
What are your “symptoms of running out of memory?”
Just checked. Virtual memory was system defined and ~9GB. I forgot about that. I turned it off to test.
The symptoms are that the program is slow to respond, freezes occasionally, and sometimes momentarily greys out and says “Not Responding” when there are no CPU-intensive tasks running. In general it runs slower and slower until I can’t do anything. Thinking about yesterday in particular, I rebooted my computer at least 3x during the day.
It is easy for me to understand that the program is slow to respond when I’m running the simulation, because the CPU is maxed out on all cores. But after the simulation the CPU is no longer working very hard. The program gradually gets slower over time as I add and edit tool paths.
I do have the detail cranked up pretty high. I’m making molds that are nothing but curved surfaces, and I need to be able to see how well the tool paths are following the geometry. Some are 2 part molds that have to fit together fairly precisely.
The question I asked RhinoCAM was how much GPU can Rhino + RhinoCAM actually utilize. They said they don’t use the GPU, which was entirely unhelpful!
If I need a better GPU, how much memory should I be looking for? How much is too much?
Nonono, don’t turn it off; Windows literally needs some to WORK. Allocate as much as you can, 9GB is a uselessly small percentage of 64. Or just let Windows manage it automatically.
Well they don’t, they just use Rhino programming hooks to draw stuff.
The official recommendation is 4GB per monitor, which…is not really necessary, but sure, why not? It’s only useful as a proxy: any half-decent modern video card will have 8GB+, preferably 10+. For crying out loud, my highly outdated card has 11.
It’s really hard to narrow down anything just from a description like that…when Windows runs out of memory, it will tell you.
I guess you could run SystemInfo and post the results.
I agree it isn’t exactly broken. What is happening is that I’m spending too much time trying to program tool paths but not accomplishing anything due to sluggish machine performance, and I want to improve it.
I picked the i9-11900KF CPU for single-core performance, and it runs happily at 5.3GHz on multiple cores. I think that is still pretty respectable. However, if someone thinks there is another CPU that would be a massive improvement, I’m willing to listen.
The 64GB of RAM I have should be more than enough. But when I’m using exactly 50% of the RAM and Rhino is slow and freezing…I start to think it is unable to access all the RAM? (Admittedly it could be waiting on a single-threaded process and have nothing to do with RAM…) I know of instances where a program’s architecture, or even the specific Windows version (i.e. Home vs. Pro), limits how much RAM can be accessed.
The storage is pretty fast. Samsung 980 pro M.2 drives.
If the CPU is fast enough, and if it is just a coincidence that Rhino never uses more than half of the available RAM, then the GPU is probably the thing to look at. That’s my thinking. Am I totally wrong or missing something? The problem is specifically viewing and manipulating the rendering of the RhinoCAM simulation. Rotating jumps from position to position, and the same with zoom. And then selecting different tool paths, or even switching back to the programming tab, becomes extremely slow.
The ‘simulation model’ is just a mesh object in Rhino. If you have the visualization settings very fine, the mesh will be enormous and Rhino will probably have a hard time displaying it. Plus they may be using some plug-in technology that is not even Rhino’s for that, so it may be hard to find where the bottleneck is. If you want to see how big your mesh is, IIRC you can export the simulation model as an STL. Then look at the poly count/file size.
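If you export it as a binary STL, you don’t even need to open the file to get the poly count: the triangle count is stored right after the 80-byte header. A minimal Python sketch (the file path is hypothetical, and this assumes a binary, not ASCII, STL):

```python
import struct

def stl_triangle_count(path):
    """Return the triangle count of a binary STL file.

    Binary STL layout: 80-byte header, then a little-endian
    uint32 triangle count, then 50 bytes per triangle.
    """
    with open(path, "rb") as f:
        f.seek(80)  # skip the 80-byte header
        (count,) = struct.unpack("<I", f.read(4))
    return count

# Hypothetical usage:
# print(stl_triangle_count("simulation_model.stl"))
```

As a sanity check, the file size should come out to roughly 84 + 50 × count bytes; if it doesn’t, the export is probably ASCII STL instead.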
I would not want to depend on RhinoCAM’s simulation model to gauge surface quality.
Thanks Jim. Yeah, it isn’t the best card. I’ll do some research on what a decent upgrade would be.
I was hoping to gauge what a “reasonable” level of performance would be? You said your GPU has 11GB and is fine. I’m seeing cards that are 12, 16, 20GB, etc.
Well, you don’t really need huge amounts of VRAM unless you’re doing GPU rendering or driving 16 4K monitors; it’s just a proxy, since the higher-end cards have more. Just get the best Nvidia you can afford, that’s all. No point in even looking at anyone else; their OpenGL drivers are not reliable.
That’s an interesting point. RhinoCAM has 2 simulation models, Polygon and Voxel. It makes sense that the polygon model is like an STL. Adjustments are limited compared to an STL, but the settings make more sense when I think about it like that.
[edit] I keep mixing up the two strategies. I thought I remembered that Voxel was faster, and polygon was more accurate? I’ve been using Voxel and increasing the precision.
As for trusting the simulation, I find it quite useful. Any issue I can see in the simulation translates to the machined part. Screen shot is to demonstrate examples of common issues that show up in simulations, allowing them to be fixed. (I programmed this wrong as an example…)
I know it isn’t the best CAM solution, but it’s working pretty well for me. And I can afford it.
In the screen shots above, the Voxel model used “custom spacing” of 0.001". And the Polygon model was set to “fine”. They took roughly the same amount of time.
I ran a couple back to back tests comparing the different models on this small part, and I concluded:
The Polygon model is advantageous in that it doesn’t put as much strain on the GPU when viewing the result, but generating the simulation at the “fine” setting takes 3x longer than the Voxel model for the same level of detail.
The Voxel model then puts a lot more strain on the GPU when viewing the simulation, but the simulation is 3x faster for the same level of detail.
My understanding is that the voxel model is primarily for faster speed in generating but the polygonal model is more accurate - at the expense of being slower. From the RhinoCAM Help:
But in any case, I do not think that ‘voxels’ can be displayed directly on screen, so I think it likely that there is also a mesh being generated from the voxel model - which is what you actually see. Rhino uses meshes behind the scenes for displaying all types of surface/volume objects.
Update: I ordered an RTX 4070 graphics card (Nvidia “Founders Edition”). It seems like a reasonable upgrade in general, and I’m expecting it will make some difference. Not sure how much, though.
I’ve also been tempted by a newer processor. I’d like some opinions about the actual benefit of an upgrade. Cost isn’t a problem if it is actually beneficial. I don’t have much time to work, so getting more done in that time has tangible value.
I have the i9-11900KF processor, and benchmarks indicate the i9-13900K processor is ~40% faster in single-core and 2x the speed in multi-core. If my simulations ran at 2x the speed, that would be a very noticeable increase! But will that performance actually translate to Rhino?
And what about DDR4 vs DDR5? The internet says DDR5 is slower, the same speed, and marginally faster…
Upgrading my CPU requires a new motherboard, and DDR4 and DDR5 motherboards are the same price. So choosing DDR5 would just cost me the price of the new DIMMs: $200 for a kit of two 32GB DIMMs (Crucial).
The consensus from the internet as a whole seems to be that DDR5 really doesn’t matter right now, and it can cause problems. They’re all talking about gaming, though. So I’m curious what you guys think.
I upgraded to an i9-14900k with DDR5 ram. It makes a big difference!
This upgrade of the new CPU (and ram?) fixed viewing Polygon models in RhinoCAM. Now I can easily view and manipulate CAM simulations that were previously overloading my system. I could simulate them, but I couldn’t zoom, move, or rotate them without a painfully long delay. But now, manipulating them is almost as smooth as with Voxel simulations.
Be warned that it took me an entire day to figure out the best configuration to make it run stably. And I may keep working on it. It took me a while to figure out why it was crashing initially. It appears it was crashing because of the way it was automatically enabling and disabling cores, plus changing the core frequencies. Cinebench generated some type of “timing error” when frequencies exceeded 5.7GHz. So I manually locked the frequencies and set the minimum CPU frequency to 100%. No more crashes since then! I have since bumped the frequencies up to 6GHz, and it is still stable…so the problem was not the frequency.
Now it is clear I am hitting the thermal limits of the CPU. It is maybe 30% faster at rendering/simulating. I expect it would be faster if I could cool it better, but I already have a 360mm AIO water cooler, so there are no easy gains from improving cooling.
A bonus benefit is that the “Compare” option (RhinoCAM) seems to be a lot more accurate now. Previously it did not seem very reliable. Now it looks much better, but it is still extremely resource-intensive! I can view the comparison, but I can’t move or manipulate it without waiting an extraordinary amount of time. The CPU maxes out all cores. Maybe a Threadripper would handle it better?
(Screen shot is the Compare feature in operation, with core usage on the left. It takes 30+ sec of running at full capacity before Rhino becomes responsive again.)