Considering an upgrade from a machine with an Intel Core i5-7300HQ (rated at 2100) to one with a Core i9-11950H (rated at 3300). The fastest rating is 3900 (units unknown - info from Passmark single-thread performance charts). I am upgrading from 12GB RAM to 32GB.
It appears as though I can expect just a third more speed when displaying and navigating on large layouts with multiple detail windows (this is when my work gets laggy - even when I hide many details). I am hoping for much more.
(Upgrading my video card as well, and going from 4GB to 12GB VRAM, but I am less concerned with rendering speed.)
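For what it's worth, here is the back-of-the-envelope arithmetic behind that estimate, as a quick sketch. It assumes the laggy display work is single-thread-bound and scales linearly with the Passmark single-thread score, which is far from guaranteed for a real layout workload:

```python
# Back-of-the-envelope: what the Passmark single-thread scores imply by
# themselves, assuming the laggy display work is single-thread-bound and
# scales linearly with the score (a rough assumption, not a measurement).

old_score = 2100   # i5-7300HQ, Passmark single-thread rating
new_score = 3300   # i9-11950H, Passmark single-thread rating

speedup = new_score / old_score         # ~1.57x raw single-thread throughput
time_saved = 1 - old_score / new_score  # ~0.36: roughly a third less time per operation

print(f"throughput ratio: {speedup:.2f}x")
print(f"time per operation: {time_saved:.0%} shorter")
```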
Thanks for your thoughts. From what I understand your comment is true and very important - but only when using the Render command (which I seldom do) and maybe when creating a print file. Otherwise, in Task Manager, my GPU shows no activity whatsoever when using Rhino, and my understanding is that this is the way it is: Rhino uses the onboard Intel GPU for everything else. This was a surprise, and I hope it's incorrect, but I keep hearing it from experienced users and at least one software developer.
I have no Windows laptop with a hybrid graphics setup (Intel integrated GPU + dedicated GPU), so I cannot confirm this, but I don't think this is the way Rhino is supposed to use your graphics card.
There should be a system setting (maybe power management?) that makes Rhino use your dedicated graphics card for display purposes too.
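If it helps: on Windows 10/11 that per-app choice lives under Settings > System > Display > Graphics, and it is stored in the registry under HKCU\Software\Microsoft\DirectX\UserGpuPreferences. Below is a minimal sketch for checking or forcing it from Python - the Rhino 7 install path is my assumption, so adjust it to your machine:

```python
# Sketch: read/set the Windows per-app GPU preference for Rhino.
# Assumes Windows 10/11 and a default Rhino 7 install path - adjust as needed.
# GpuPreference=1 -> power saving (integrated), 2 -> high performance (discrete).
import winreg
from typing import Optional

RHINO_EXE = r"C:\Program Files\Rhino 7\System\Rhino.exe"  # assumed path
KEY_PATH = r"Software\Microsoft\DirectX\UserGpuPreferences"

def get_preference(exe_path: str) -> Optional[str]:
    """Return the stored preference string for exe_path, or None if unset."""
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, exe_path)
            return value
    except FileNotFoundError:
        return None

def set_high_performance(exe_path: str) -> None:
    """Ask Windows to run exe_path on the high-performance (discrete) GPU."""
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, exe_path, 0, winreg.REG_SZ, "GpuPreference=2;")

if __name__ == "__main__":
    print("Current setting:", get_preference(RHINO_EXE))
    # Uncomment to force the discrete GPU for Rhino's display:
    # set_high_performance(RHINO_EXE)
```

The same thing can of course be done through the Settings dialog or the Nvidia Control Panel; the point is just that there is a per-application switch, and Rhino.exe is the program to point it at.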
It is no wonder you experience disappointing performance the way your system is set up.
Thanks. I will need to check at the office tomorrow. I think I troubleshot that years ago, but given the current lack of Rhino GPU activity, maybe not - or maybe unsuccessfully.
These lags are recent and connected with a particularly elaborate project. Up to now it hasn’t been an issue, hence this weekend’s investigation and my attempt to understand the benchmarks.
In rereading my notes I see that I have only heard other users speak of Rhino not using the GPU except for rendering. Contrary to my OP, a plugin developer did not say that, but merely pointed out that Rhino only uses a single core.
You’re totally misunderstanding. The GPU is used for the viewport displays. “Rendering” means outputting a single image to save; the Raytraced mode will use a CUDA GPU for that, but it’s a totally separate thing. If it’s not using your GPU for the realtime displays, then your laptop’s power management settings are misconfigured. Laptops just kind of suck. Please post the results of the SystemInfo command here.
Not necessarily. In laptops with both graphics integrated into the CPU and a discrete GPU, the discrete GPU is only used if the system is configured to force it to be used. The link above describes how to make sure the GPU will be used.
In desktop systems with both graphics integrated into the CPU and a discrete GPU, which graphics are used depends on where the monitor is plugged in. If it is plugged into the motherboard, the discrete GPU will not be used; the monitor needs to be plugged into the GPU itself.
I know about that. By “GPU” I meant either the discrete add-on GPU or the GPU integrated into the CPU; it’s still a GPU as far as Rhino is concerned, even if it’s part of the CPU.
"‘Rendering’ means outputting a single image to save, the Raytraced mode will use a CUDA GPU for that but it’s a totally separate thing."
Yes, I have understood that part all along - and I’m pretty sure that’s the only time my laptop has been using my Nvidia card in the three years I have been working in Rhino on it. It’s going to be interesting tomorrow morning checking this out…
I was surprised to see that Rhino does in fact use the Nvidia card when navigating in a viewport in Rendered display mode (i.e. in model space), sometimes as much as 31% if I have a clipping plane active. I never noticed this: I hadn’t checked Task Manager when in a viewport because I don’t experience any appreciable lag there.
Now if I activate a VisualARQ section instead of a clipping plane under the same conditions as above, Nvidia use drops to around 8% max and there’s a bit of lag.
On my slow layout pages, the laggiest of which has about 20 detail windows all with VA sections and level cuts active, there’s a problematic lag (it takes about two seconds just to select an annotation), but Nvidia use never goes above about 8%.
So it seems to me that this points to a disconnect between Rhino and the Nvidia card when VA is active (probably when the Nvidia would be most helpful). Unfortunately it’s not possible in Nvidia Control Panel to browse to a VA .exe file and set the card options for it. (The file browser can find no such file in either the VA app folder or the Rhino plugin folder.)
That doesn’t point to anything like that at all. Task Manager usage isn’t super helpful. VA isn’t “disconnecting from the Nvidia card”; that’s not how that works. It’s not implementing its own OpenGL stuff, it’s using Rhino’s. There are a bunch of steps in drawing stuff on the screen in response to input, some of which are on the video card and some are on the CPU. What it does indicate is that the bottleneck is not so much the video card but what VA is doing before anything gets sent to the video card. There are like 20 layers of abstraction between moving your mouse and the actual video hardware, and VA’s added another layer or two.
Thanks, Jim. In that case, wouldn’t the Nvidia card, after the lag from those 20 layers of abstraction, show activity like it does (albeit much more quickly) for the same navigational tasks without VA active? But with VA active, it never does.
It sounds like you are thinking in terms of each step being completed before the next begins. I don’t think that is the way it works. Think of a pipeline with a number of valves in series: partially closing some of the valves slows the entire flow.
The Nvidia card is probably displaying the graphics as fast as it receives the data. The differences you are seeing are probably due to how fast the data is created upstream.
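To put rough numbers on that analogy, here is a toy sketch of why a slow upstream stage shows up as low GPU usage rather than as the same GPU activity arriving after a delay. All stage names and millisecond figures below are invented for illustration, not measured from Rhino or VA:

```python
# Toy model of a frame's worth of work: the stages run in series, so the
# biggest stage dominates the frame time. A fast GPU stage then sits mostly
# idle whenever an upstream (CPU-side) stage is slow.
# Stage names and timings are invented for illustration only.

def frame_stats(stage_times_ms: dict[str, float]) -> None:
    frame_time = sum(stage_times_ms.values())           # total time to produce one frame
    gpu_busy = stage_times_ms["gpu_draw"] / frame_time  # fraction of that time the GPU is working
    print(f"frame time: {frame_time:.0f} ms "
          f"({1000 / frame_time:.1f} fps), GPU busy {gpu_busy:.0%}")

# Plain Rhino display: most of the frame is GPU drawing work.
frame_stats({"input/UI": 2, "cpu_prepare": 5, "gpu_draw": 10})

# Same drawing work with a heavy plugin stage upstream: the GPU finishes its
# part quickly and then waits, so Task Manager reports low GPU usage even
# though nothing about the GPU changed - the bottleneck moved to the CPU side.
frame_stats({"input/UI": 2, "plugin_sections": 180, "cpu_prepare": 5, "gpu_draw": 10})
```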
So unless there’s a way of accelerating VA (I doubt it, but I will check), I am back to upgrading to a more powerful CPU. Thank you all for the help in sorting this out.
This takes me back to the initial question - plugins aside:
"Considering an upgrade from a machine with Intel Core i5-7300HQ (rated at 2100) to one with Core i9 11950H (rated at 3300). The fastest rating available is 3900 (units unknown - info from Passmark single thread performance charts). I am upgrading from 12GB Ram to 32GB, and maybe even to 128.
It appears as though I can expect just 1/3rd more speed when displaying and navigating on large layouts with multiple detail windows (this is when my work gets laggy - even when I hide many details). I am hoping for much more."
It’s doubtful you’ll even see that. I did a major upgrade a little while back after using my previous PC for nearly 5 years, and the main consistent difference is that it’s quieter. The differences just aren’t that big in the real world these days, except for tasks that are basically designed like a benchmark, such as rendering.