Cores vs Clock Speed, and does DDR5 offer any benefits? (RhinoCAM)

General hardware questions. I’m hoping to get some input that will help me gauge the benefit (or lack thereof) I would get from a computer upgrade.

I’m using Rhino 7 and RhinoCAM, and the only slowdowns I experience are when generating RhinoCAM simulations. Simulations can take 20-30 min or more to generate. It’s taking long enough that a computer upgrade would make sense to me, if the upgrade would significantly reduce simulation time.

(I was previously having issues viewing those simulations, but I upgraded my GPU, which fixed the issue.)

Would upgrading to something like an i9-13900k processor make a significant difference for RhinoCAM simulations? I currently have an Intel i9-11900kf processor, and I see activity on all cores when running simulations. But would RhinoCAM utilize the extra cores/threads if I upgraded?

I assume single core performance is still the more important metric? I’ve seen benchmarks that say that the i9-13900k is 40% faster in single core than the i9-11900k, but is it reasonable to expect Rhino to be 40% faster with that CPU? And if multicore is 200% faster, would RhinoCAM simulations take half as long?
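For intuition on why a "200% faster multicore" score rarely halves real run times, Amdahl's law is the usual rule of thumb: overall speedup is capped by the serial fraction of the work. A minimal sketch (the fractions below are illustrative assumptions, not measured RhinoCAM numbers):

```python
# Amdahl's law sketch: overall speedup when only part of a workload benefits
# from more/faster cores. The 0.7 parallel fraction is a made-up illustration.

def speedup(parallel_fraction: float, core_speedup: float) -> float:
    """Overall speedup when the parallel portion runs 'core_speedup' times faster."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / core_speedup)

if __name__ == "__main__":
    # If 70% of a simulation parallelizes and that part gets 3x faster,
    # the whole job only speeds up by a factor of 1.875: a 30-minute
    # simulation drops to about 16 minutes, not 10.
    print(speedup(0.7, 3.0))  # → 1.875
```

So a 40% single-core gain tends to show up almost fully, while a 200% multicore gain only helps the portion of the simulation that actually scales.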

And bonus question…does DDR5 make any difference yet? The internet has a split opinion about it.

No

You might get more feedback in the Windows Hardware category.

It really was not a yes/no question.

Um, yes it was? No, a claimed theoretical “40%” per-core improvement between 2 pretty modern CPUs will barely be noticeable, assuming it’s not just marketing FUD. Unless RhinoCAM is specifically designed to somehow make full use of every core you can throw at it–most “creation” tasks are not actually parallelizable, but it would make more sense to ask the RhinoCAM folks about that–there’s no computer you can buy at any price that will be massively faster, possibly not even noticeably faster. Upgrading to a decent modern video card was a big deal, but that was all the low-hanging fruit. It’s all single-digit differences from here. You need to look more at inefficiencies in your workflow.

What am I saying above? Of course, expecting anyone to actually have an answer is a bit…optimistic. We’re here to use Rhino (and make Rhino, and do support for Rhino), not benchmark it. A lot of people here are college students running it on some piece-o-crap laptop. Another chunk will be using corporate PCs they have no say over. And the PC hardware hobbyists interested in this minutiae never seem to be running the latest and greatest.

I’m asking a generalized question about how Rhino responds to hardware upgrades. So “not 40%” doesn’t provide any information to help understand how Rhino responds when it is given more resources.

So I guess I have faith that I stand a better chance finding someone here who can provide some insight, as opposed to finding meaningful advice elsewhere on the internet. Because the majority of the internet only seems to care about getting 300 FPS when playing video games…

And I’m not saying a person needs the latest and greatest, because Rhino is generally not very resource hungry. I’m saying that when I’m using RhinoCAM, I suddenly have a legitimate need for a faster computer. But I want to ask around before committing to anything.

And yes, RhinoCAM does multithread. It will generate multiple tool paths concurrently. It will utilize all cores when running simulations, but the % utilization will vary greatly depending on which tool path is being simulated. Sometimes it is near 100%, and sometimes it is more like 20%. I don’t know if that scales if I add more cores.

Faster cores should make everything faster…unless there is something else slowing the system down? That’s why I didn’t ask, “What’s the best CPU?”. I’m asking about the whole system. And I find it hard to believe that there is nobody out there who can give any insight.

I apologize in advance if this is a “Captain Obvious” question, but have you adjusted the “Maximum Display Interval” setting under the Simulation page in the preferences?

We don’t have these issues, but then again, there’s no way to compare what we use RhinoCAM for and what you do with it. How large are your models? How complex are they?

Well game performance is the closest analog.

Faster cores do matter, that’s a basic thing, but there are five thousand other bottlenecks in the system, and benchmarks are worthless. A “40% better” single-core claim is simply a lie; it holds for 4 seconds until the chip thermal throttles, or only on one specific task. An advantage smaller than that, you’re not gonna actually notice 99.8% of the time.

The insight to offer is that single-core PC performance improvements have slowed to a crawl over the last decade+. The last time I upgraded my computer, the old one was 4 or 5 years old, and the main thing I noticed was that the new one was quieter. You have a modern enough system that NOTHING you can do will massively speed it up, not enough to cure your problem. A $20K monster system will be nothing but a crushing disappointment; it’ll gain you like…5, 10%? Dan there has just said that your half-hour simulation times seem unreasonable, so you need to look at other avenues than just throwing hardware at the problem.

How those settings affect speed is not very obvious to me! I ran a couple tests just now to remind myself, but I could be missing something. I’m using Voxel, with specified spacing of 0.002". This runs faster for me than Polygon, but the detail level is about the same. And Voxel utilizes my new GPU for display, so viewing afterwards is better.

I ran two simulations with everything the same except the simulation mode slider. The slider at “min” took 7.5min, and max took 8min. That was with display interval of 100 (simulate by distance).

With mode slider at “Min”, I changed display interval to 20 (by distance), and the simulation took 9 min (+1.5min just reducing interval). I increased display interval to 1000, and it took 8 min (+0.5 min compared to 100 setting). The default setting is 200, so I tried that. I got 8 min, so the same as 1000.

My first test took 8 min with an interval of 100 and mode slider at max. So I repeated with interval of 200 (default), and it was 8.25min. So interval of 100 seems optimal for me.

These numbers are all based on this file, which is for a trim fixture. It is 18" x 30". I’m using ballnose and tapered ballnose cutters, so stepovers are pretty small. In this case I could have used a 0.5" endmill for the flat flanges to reduce machining time, but I’m not actually machining this one. I was having enough trouble running simulations on this one that I broke it into 2 separate files.

My largest molds will be 24"x 24", and for those I just have to walk away. Those are the ones that take 30 min or more. Not really sure how long, because I’m never there when they finally finish.

I used to believe this, and I think it’s true in general. (And I purpose-build my computers to be as quiet as possible. Sound-insulated cases exist. It was a revelation when I realized it!)

But there has been a big jump between the 11th gen and 13th gen Intel CPUs.

I had a 2013 15" macbook pro as my main computer for 7 years, and the only reason I upgraded was because the newly introduced (at the time) 16" macbook benchmarked 2x faster in multi-core, and single core was also significantly faster. The 2x performance increase is my rule of thumb for upgrading, and they had also given the “ESC” key back!

I thought I was set for years with the 11th Gen CPU, but the difference between the 11th Gen and the 13th Gen Intel CPUs is roughly the same % performance increase that previously took them 7 years to manage.

The anticipated 14th gen is rumored to be going back to the small incremental increases we’ve grown accustomed to. I suspect Intel was holding back, and Apple’s M1 chips forced their hand. I don’t think the last ~2 years are typical, but there has been a big increase in that time.

If you are willing to share one of your models I could test the simulation time on my computer and see if there’s a difference.

I realized there was a problem with my previous “tests”. I was switching between Polygon and Voxel before, but RhinoCAM seems to have been stuck on Polygon. Or it was doing something in between. So what I thought I figured out before is meaningless.

My results changed when I did a restart, and then they began to make more sense. I’m learning that RhinoCAM doesn’t function correctly if there are 2 instances of RhinoCAM open at the same time, and it seems that just closing Rhino and re-opening does not fix the issue. A complete restart is required, and then only use 1 instance of Rhino.

Settings I used. I just switched between Voxel and Polygon:

On Speaker Baffle file, Polygon was 10:20min, and Voxel was 2:16min
speaker baffle CAM.3dm (14.9 MB)

I did a second test on a different file. Unfortunately that file is just over the 20MB size limit. It is a smaller part, but it had a significant portion that I programmed with a projection pocketing tool path. In that file, Polygon took 2:45min, and Voxel took 5:55min. So the takeaway is that the performance of each simulation method varies by the type of tool path.

Takeaways:

Voxel absolutely kills it with parallel finishing tool paths, but is slow with projection pocketing tool paths.

Voxel uses 2 cores, needs a lot of VRAM, and heavily utilizes the 3D GPU cores to display simulation

Polygon is good at projection pocketing tool paths, but is very slow when doing parallel finishing tool paths.

Polygon uses all cores, and to display a simulation it requires minimal VRAM and low utilization of 3D GPU cores

(If your experience is different, let me know! Right now I’m not sure that what it’s doing is what it should be doing.)

Hi Darren,

If I set the simulation to Polygonal, and the max display interval to 1000, it takes almost exactly 4 minutes to simulate on my computer. If I change it to Voxel, it’s done in about 20 seconds, but the results are nasty. The results with Polygonal were very good, but it takes longer. This is to be expected, and is explained in the help file.

Something else to watch out for is the simulation accuracy slider. The numbers above are with the slider positioned to the far left (standard). If you slide it to the right, the simulation has the potential for better accuracy, but it will cost you in time. Moving just 2 notches to the right and the polygonal simulation takes 8.5 minutes. If you compare the results you will probably find that the difference doesn’t warrant the extra time. You will have to make that call.

Just to determine how far out (or close) we are to comparing apples to apples, here are my computer specs:

[image: screenshot of computer specs]

And my system info:

Rhino 7 SR33 2023-9-5 (Rhino 7, 7.33.23248.13001, Git hash:master @ 332dda7497b18e9e6f82f118da5cba0c448151a9)
License type: Commercial, build 2023-09-05
License details: Cloud Zoo

Windows 10 (10.0.19045 SR0.0) or greater (Physical RAM: 32Gb)

Computer platform: DESKTOP

Standard graphics configuration.
Primary display and OpenGL: NVIDIA Quadro P2000 (NVidia) Memory: 5GB, Driver date: 3-28-2023 (M-D-Y). OpenGL Ver: 4.6.0 NVIDIA 528.89
> Accelerated graphics device with 4 adapter port(s)
- Secondary monitor attached to adapter port #0
- Windows Main Display attached to adapter port #1
- Secondary monitor attached to adapter port #2

Secondary graphics devices.
Intel(R) HD Graphics 630 (Intel) Memory: 1GB, Driver date: 6-1-2021 (M-D-Y).
> Integrated graphics device with 3 adapter port(s)
- There are no monitors attached to this device!

OpenGL Settings
Safe mode: Off
Use accelerated hardware modes: On
Redraw scene when viewports are exposed: On
Graphics level being used: OpenGL 4.6 (primary GPU’s maximum)

Anti-alias mode: 8x
Mip Map Filtering: Linear
Anisotropic Filtering Mode: High

Vendor Name: NVIDIA Corporation
Render version: 4.6
Shading Language: 4.60 NVIDIA
Driver Date: 3-28-2023
Driver Version: 31.0.15.2889
Maximum Texture size: 32768 x 32768
Z-Buffer depth: 24 bits
Maximum Viewport size: 32768 x 32768
Total Video Memory: 5 GB

Rhino plugins that do not ship with Rhino
C:\Program Files\Rhino 7\Plug-ins\RhinoCAM 2023 for R7\RhinoArt1FileExporter For Rhino7.0.rhp “RhinoArt1FileExporter”
C:\Rhinoceros 7.0\Plug-ins\AV Plug-ins\AVCommands.rhp “AVCommands” 0.1.8651.19964
C:\Program Files\Rhino 7\Plug-ins\RhinoCAM 2023 for R7\RhinoCAM 2023 For Rhino7.0.rhp “RhinoCAM 2023 - The cutting edge CAM plug-in for Rhino 7.0 from MecSoft Corporation”
C:\Users\dan\AppData\Roaming\McNeel\Rhinoceros\packages\7.0\NVIDIADenoiser\0.4.3\NVIDIADenoiser.Windows.rhp “NVIDIADenoiser.Windows” 0.4.3.0

Rhino plugins that ship with Rhino
C:\Program Files\Rhino 7\Plug-ins\Commands.rhp “Commands” 7.33.23248.13001
C:\Program Files\Rhino 7\Plug-ins\WebBrowser.rhp “WebBrowser”
C:\Program Files\Rhino 7\Plug-ins\rdk.rhp “Renderer Development Kit”
C:\Program Files\Rhino 7\Plug-ins\RPC.rhp “RPC”
C:\Program Files\Rhino 7\Plug-ins\AnimationTools.rhp “AnimationTools”
C:\Program Files\Rhino 7\Plug-ins\RhinoRenderCycles.rhp “Rhino Render” 7.33.23248.13001
C:\Program Files\Rhino 7\Plug-ins\RhinoRender.rhp “Legacy Rhino Render”
C:\Program Files\Rhino 7\Plug-ins\rdk_etoui.rhp “RDK_EtoUI” 7.33.23248.13001
C:\Program Files\Rhino 7\Plug-ins\rdk_ui.rhp “Renderer Development Kit UI”
C:\Program Files\Rhino 7\Plug-ins\NamedSnapshots.rhp “Snapshots”
C:\Program Files\Rhino 7\Plug-ins\IronPython\RhinoDLR_Python.rhp “IronPython” 7.33.23248.13001
C:\Program Files\Rhino 7\Plug-ins\Calc.rhp “Calc”
C:\Program Files\Rhino 7\Plug-ins\RhinoCycles.rhp “RhinoCycles” 7.33.23248.13001
C:\Program Files\Rhino 7\Plug-ins\Toolbars\Toolbars.rhp “Toolbars” 7.33.23248.13001
C:\Program Files\Rhino 7\Plug-ins\3dxrhino.rhp “3Dconnexion 3D Mouse”
C:\Program Files\Rhino 7\Plug-ins\Displacement.rhp “Displacement”

If you are interested in saving time on the machine and improving the surface finish at the bottom of the 2 pockets I would suggest using an endmill for the vertical walls and the floor. You could use a containment curve to stop the planar path from dropping into the pockets. But that’s a discussion you didn’t ask for. :laughing:

I hope this helps.

Dan

And just to add here - if you don’t already know - if you are not interested in actually watching the simulation run, but just want to see the results, instead of pressing “Play”, press “Run to end”. The simulation will run internally without displaying on the screen and only update when it’s done. In my experience, it also saves some calc time, as it doesn’t need to constantly refresh the screen.


Thanks Dan. I haven’t experimented much with the accuracy slider with polygon method. I listened to a webinar by RhinoCAM where they were saying the “fine” setting doesn’t make much of a difference for simulation time. So I didn’t test that out.

I did run a test the same as what you said. Polygon, with “standard” accuracy and simulate by moves to 1000. So I think the same as you did. It took 1:20min on my system. Whether or not I need the finer accuracy is another question.

With Voxel, I had the “specified spacing” down to 0.002", which cleans it up pretty well, comparable to polygon. It does take longer than 20 seconds, but in this case it still finishes in 2 min.

As for the fillets in the pockets…this baffle is being skinned with carbon fiber, and the original idea was to make a plug that pressed the carbon down into the pocket. I decided that was a bad idea so never did it. But that’s why the fillets were in there. Also, on another unrelated topic, the surface as drawn needs a bit of Bondo and sanding, but it’s the best I could manage. I still haven’t figured out how to make it completely right. Tom from this site demonstrated that he has a couple proprietary scripts that could solve it…but I can’t have them!

Thanks. I did not know that. Having the simulation display as it runs doesn’t really work anyway.

I noticed that your accuracy slider in the image you posted is cranked all the way to the right. I think that’s your issue. Slide it to the left and try again. I mentioned the slider but I hadn’t paid enough attention to the image you posted to see that you have it set that way.

Also, you can get slight speed gains by hiding the toolpath as it simulates. I haven’t done a benchmark to compare the difference, but there seems to be a slight improvement.

Thanks Dan. I’ll have to do some testing to see how much accuracy I really need.

But concerning the hardware question, we have some data!

Simulations of the file uploaded above, polygon model, standard accuracy:

I don’t mean to dismiss the points being made thus far. Optimizing the process can be as good as more processing power, but optimizing would add to the benefit of more processing power. And I don’t see a downside to doing things faster?

I’m interested in getting more data points for other processors… I’m personally interested in 13th gen CPUs, but any CPU will add to the picture. Or other machines with the same CPU would be interesting to compare. I’m also still interested in DDR5 RAM, but it may be difficult to isolate RAM in this type of comparison.

I have an answer.

I upgraded my computer to an i9-14900k with 64GB DDR5. My previous setup was an i9-11900kf with 64GB DDR4.

I ran tests to find out how well it performed, and to see how I could get the most out of this CPU. I didn’t overclock the CPU, but I did change some BIOS settings to control how many cores it uses and to override some of the built-in settings.

Test 1: Stock settings. With completely stock settings, the performance was only slightly faster than my old i9-11900kf for running RhinoCAM simulations. This underwhelming result is because nearly all the processing appeared to be assigned to the E-cores, as if it thinks rendering is a background task? So it was effectively running on 16 cores at 4.4GHz. This saves energy, but reduces performance.

Test 2: To improve performance, I went into the BIOS and manually set the CPU frequency. And in Windows I went to “Power and sleep settings” and selected the “Ultimate Performance” plan. These two settings keep the clock speed at the maximum and prevent cores from being turned off, or “parked”.
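For anyone trying to reproduce the power-plan part of this: on Windows 10/11 the “Ultimate Performance” plan is hidden by default, but it can be exposed from an elevated prompt with the `powercfg` tool. A sketch (the long GUID is Microsoft’s published scheme ID for Ultimate Performance; `<GUID>` is a placeholder for whatever your machine prints):

```shell
# List the power schemes currently available on this machine
powercfg /list

# Duplicate the hidden "Ultimate Performance" scheme so it becomes selectable;
# this prints a new GUID for the duplicated plan
powercfg -duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61

# Activate the duplicated plan, substituting the GUID printed above
powercfg /setactive <GUID>
```

This only covers the Windows side; the fixed CPU frequency still has to be set in the BIOS, and the exact menu for that varies by motherboard vendor.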

The result was roughly 30% better than the i9-11900kf. In this case, the 8 P-cores only ran a single thread each, and the E-cores all ran normally. So 24 cores/ 24 threads total.

Test 3: I was frustrated that only half of the P-core capacity was being used. They have 2 threads per core, but only 1 was being used. So I went into the BIOS and disabled all the E-cores. I also manually increased the CPU frequency to 6.0GHz. (I did not attempt to go over 6.0GHz.)

With the E-cores disabled, RhinoCAM is running on the 8 P-cores (16 threads at 6.0GHz). Now it is 40% faster than the i9-11900kf…for RhinoCAM simulations.

(This configuration gets a lower score on Cinebench R23 benchmark, but performs better for Rhino. So something to keep in mind.)

DDR5:
I did get a motherboard with DDR5. I saw a 5% speed difference between enabling and disabling XMP (i.e. the factory memory overclock…), but I have no way of comparing DDR4 to DDR5 with the 14900k processor. But yeah, faster DDR5 is a real improvement over slower DDR5.

Benefits:
The biggest benefit is that now I can easily manipulate Polygon simulations in RhinoCAM. That was my entire goal. I need to be able to run a high detail simulation in a reasonable amount of time, and then manipulate it so I can see what result I am getting.

I did some CAD and CAM work since upgrading. My overall impression is that RhinoCAM is much more stable and easier to use now. In the past I would plan to spend a weekend machining, and instead I would spend the majority of the weekend just trying to finish the programming! Now a project that I expect to take 20 min…takes about 20 min! Before upgrading, a 20min project would end up taking 2+ hours.

Rhino 7 runs better too. It’s the little things, like object snaps being more responsive, making it easier to find the one I want. Everything is a little bit faster.