Will Rhino 9 Overcome CPU Limits with Hybrid GPU Acceleration?

One of the persistent challenges in Rhino and Grasshopper workflows is the performance bottleneck when handling mesh-based parametric outputs, baking geometry from Grasshopper, or running computationally heavy components like Pipe, SubD, and tessellation. These processes are heavily CPU-bound and operate sequentially, resulting in long computation times for dense parametric models, iterative workflows, or geometry-heavy projects.

With modern GPUs like the RTX 50 series offering unparalleled parallel processing power, there’s a huge opportunity to accelerate these tasks. GPU acceleration could improve:

  • Baking geometry from Grasshopper to Rhino by offloading mesh generation to the GPU.
  • Running heavy components like Pipe, Boolean operations, or smoothing in Grasshopper more efficiently.
  • Real-time previews of parametric models in Grasshopper.
  • Faster exports by leveraging GPU-based decimation and re-triangulation.

As the architectural and computational design industries push the limits of hardware, Rhino’s reliance on single-threaded CPU workflows feels outdated. Is there any plan for Rhino 9 to introduce hybrid CPU-GPU computation, allowing users to better leverage modern hardware?

2 Likes

Rhino development is incremental and evolutionary. So, will it be more efficient some day? Yes. Will it happen in V9? I don’t think so.

Although what you mention is valid in theory, for Rhino as a multi-platform product it is going to be a challenging path.

Baking on the GPU would require supporting CUDA, OpenCL, and a CPU fallback all together, and I have no idea what to do with Apple Silicon.

Still CPU-based, but you can enable multithreaded meshing using this test command. You may run into display artifacts with it.

TestEnableMultithreadedMeshing

1 Like

They’re not really that impressive: their raw performance basically scales with the additional CUDA cores, with a big pile of AI BS on top. Whoop-de-doo.

Of course, while there’s a bunch of stuff that IS multi-threaded, and maybe more that possibly could be–and a subset of both of those that might actually increase performance after all the added overhead of managing it!–people seem to think that parallel processing is some kind of magic that can be applied to any problem. IT CAN’T. Most “content creation” tasks are linear in nature. You can’t make a baby in 1 month with 9 women. If these tasks could be parallelized, it would have been done already, as multi-core CPUs have been common this entire millennium. Everything is not going to be accelerated 8000X by your GPU, get over it. Whoever convinced you of that was trying to sell you something.
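For what it’s worth, the limit the baby analogy gestures at has a name: Amdahl’s law. If only a fraction p of a task is parallelizable, n workers can never speed it up beyond 1/((1−p)+p/n). A quick sketch (the fractions and core counts are illustrative, not Rhino measurements):

```python
def amdahl_speedup(p, n):
    """Maximum speedup with parallel fraction p on n workers (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of a task parallelizable, 16 cores give well under 16x:
print(round(amdahl_speedup(0.90, 16), 2))   # 6.4
# A mostly serial task (30% parallel) barely benefits at all:
print(round(amdahl_speedup(0.30, 16), 2))   # 1.39
```

The serial fraction dominates quickly, which is why adding cores (or a GPU) to a mostly sequential geometry pipeline buys less than the core count suggests.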

2 Likes

Your baby analogy is amusing, but it completely misrepresents the actual challenges and possibilities of modern hardware in computational design. Let’s be clear: nobody thinks every task is parallelizable, but dismissing hybrid workflows and GPU acceleration without understanding the specific bottlenecks in Rhino and Grasshopper is uninformed.


The Core Problem:

Rhino’s reliance on single-threaded CPU workflows is outdated, especially when solving computationally intensive Grasshopper components. For instance:

  • Components like Pipe, Boolean Difference, and SubD often process geometry sequentially, leading to long wait times when dealing with complex, repetitive algorithms.
  • When generating and baking dozens or hundreds of parametric meshes, Grasshopper components are forced to solve in a linear fashion, even though these are inherently parallelizable processes. GPUs are purpose-built for handling large datasets and repetitive calculations, but Rhino fails to utilize this potential.
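To make the “inherently parallelizable” claim concrete, here is a minimal sketch in plain Python: `bake_one` is a hypothetical stand-in workload, not a real RhinoCommon call. When each mesh’s work is independent, the same per-item function can be fanned out across workers and still return identical, ordered results.

```python
from concurrent.futures import ThreadPoolExecutor

def bake_one(mesh_id):
    # Hypothetical stand-in for an independent per-mesh step
    # (meshing, baking, piping). No shared mutable state, so
    # the calls are safe to run concurrently.
    return sum(i * i for i in range(100)) + mesh_id

mesh_ids = list(range(8))

# What a single-threaded solver does today:
sequential = [bake_one(m) for m in mesh_ids]

# The same independent work fanned out across 4 workers;
# Executor.map preserves input order, so results line up with mesh_ids.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(bake_one, mesh_ids))

assert parallel == sequential
```

In CPython, threads only help for I/O-bound or GIL-releasing work; a real CPU-bound version would swap in `ProcessPoolExecutor` (or GPU kernels), but the fan-out pattern is the same.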

Your analogy of “one mom with nine babies” misses the point entirely. The correct analogy is one mom with dozens of children, where each child needs attention at the same time. A GPU would let the mom handle multiple children simultaneously, but Rhino’s current architecture forces her to do it one at a time, wasting both time and resources.


Why Overhead Is No Excuse:

Yes, task-management overhead exists, but modern frameworks like CUDA and OpenCL have matured to the point where that overhead is manageable. Software platforms like Houdini and Blender already leverage GPUs for complex tasks like geometry manipulation, iterative calculations, and real-time previews.

  • Even Grasshopper’s solver could benefit from task parallelization, particularly when working with large datasets, iterative loops, or heavy geometry processing components.

This isn’t a hardware issue—it’s a software limitation. Claiming “it can’t be done” ignores how other platforms have successfully implemented hybrid workflows.


What’s at Stake:

Rhino’s outdated approach is holding back productivity for advanced users:

  • Baking multiple parametric models is far slower than it needs to be.
  • Grasshopper struggles with solving components that rely on heavy geometry generation because the CPU is the bottleneck.
  • Tasks like mesh subdivision, tessellation, or boolean operations are perfect candidates for GPU acceleration.

Hybrid workflows (CPU for sequential tasks, GPU for parallel ones) are no longer a futuristic idea—they’re standard in other industries. If Rhino doesn’t adapt, it risks being outpaced by tools that embrace modern hardware.

Rhino updates take 2-3 years. With Rhino 8 here, waiting for Rhino 9 to adopt hybrid CPU-GPU workflows will be too late. By then, RTX 60/70 GPUs will dominate, and relying on outdated single-threaded CPU processes…

Rhino has never been the most cutting-edge tool nor claimed to be a revolutionary software destined to reshape the future of 3D modeling. Instead, Rhino has earned the trust of its users through stability, consistency, and openness. The idea of rewriting the software in just two years with a team of fewer than 100 people is quite daunting to me. We’ve already faced the Apple Silicon dilemma on this forum for a while, and those were significant challenges for the developers to overcome.

You can check the Grasshopper 2 alpha for some multi-threading advances on the CPU, which will likely be part of Rhino 9.

For the rest, I’d prefer to let Autodesk deal with it; they have a nice marketing budget.

3 Likes

I don’t mean to be negative or dismissive of this idea, but we have previously utilized GPU-accelerated third-party tools like Flexhopper and TOPOs. I am confident that a community member could experiment with CUDA to create a Grasshopper extension called GPUbake, allowing us all to test it.
I’m very protective of McNeel’s time, because we appreciate them closing the most urgent YT issues.

1 Like

I asked a regular here about GPU processing. Not for geometry, but rendering.

I received a good, useful, and comprehensive discussion of the benefits and failures of conducting operations on a GPU. For example, graphics, games, and AI are very different beasts from processes that must come out correct every time.

You can see this in modern games and even render engines. They are permitted to run using stochastic processes and noisy data, and it is acceptable for them to return equally noisy (or noisier) data. GPUs are allowed to make mistakes; even in the sacred AI cash cow, they throw data errors all the time. They even throw silent errors, and it’s surprisingly common.

It’s popular to bash CPU processing, especially for CAD, because frankly, GPU is “cool” and fashionable now. And in many cases it is genuinely excellent.

But CPUs carry the responsibility of maintaining accurate and highly, highly reliable processes that are repeatable.

CPUs handle banking transactions and life-support machines; they run a lot of the world that has to keep repeating correctly, all of the time. GPUs are allowed to just inject random crap at the screen (see DLSS, AA, rendering engines…) and that is completely accepted because of what they are producing.

I wouldn’t want to get to a world where, on some random occasion, my geometry goes completely mad because the GPU decided to throw a silent data error. “Sorry sir, your wife’s breathing apparatus temporarily failed; the GPU threw an untraced and unforced error.” We’d likely face similar issues with a sketchy GPU geometry transform. I’d guess you’d have users rotating 50,000 curves and left dumbstruck as to why 10 of them got launched into what may as well be a different universe.
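There is a mundane version of this worry that doesn’t even require hardware faults: floating-point addition is not associative, so a parallel (tree-shaped) reduction can legitimately return a different answer than a serial loop over the same numbers. A tiny illustration in Python:

```python
vals = [1e16, 1.0, -1e16, 1.0]

# Serial left-to-right sum, like a single-threaded CPU loop:
serial = 0.0
for v in vals:
    serial += v          # the first 1.0 is absorbed into 1e16 by rounding

# Two-way split, like a parallel tree reduction on a GPU:
chunked = (vals[0] + vals[1]) + (vals[2] + vals[3])

print(serial)   # 1.0
print(chunked)  # 0.0 -- same inputs, different grouping, different answer
```

Neither answer is a hardware error; both are correctly rounded for their summation order. That is exactly why bit-for-bit repeatability across CPU and GPU code paths is hard to promise for geometry kernels.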

You see this with rendering too. There are many, many users out there still using CPUs to render, because they are reliable, and repeatable. There are still racks of Xeons and Epycs at places like Pixar because CPU remains a very good way of rendering; and sometimes the best way.

Even as recently as last year, you can find papers discussing GPU vs CPU implementations of geometry operations, because it’s just not an easy problem. “Sorry, if you’re subtracting a surface with an edge like that, you can only do it on the CPU.” We just can’t have that nonsense in fundamental operations like booleans and trimming; it’ll send us all mad.

Notice that even with the mighty… underwhelming RTX 50 series… in AI applications on rack mounted hardware, two Blackwell GPUs need an Arm CPU to sort out the statistical nonsense and whatever else they produce. It doesn’t just get returned to the user.

But seriously, if you have a good way of trimming NURBS in remotely the number of cases a CPU faces, reliably, I’m sure McNeel will be very receptive and thankful for it (as would all of us!).

4 Likes

As an example of a GPU rendering business model: V-Ray (V-Ray RT, now V-Ray GPU) has been utilizing CUDA for 15 years (since 2010). However, V-Ray GPU still lacks some functionality that is important to those who use CPUs. Fortunately for Chaos, model accuracy is not that critical for renderings.
V-Ray GPU Supported Features - V-Ray for 3ds Max - Global Site

2 Likes

FWIW Rhino has had GPU-accelerated rendering since Rhino 6 in the form of the Raytraced viewport, and in Rhino Render since Rhino 7.

3 Likes

What does that even mean? You’re just repeating buzzwords and sales pitches you heard somewhere, it’s obviously not based on first-hand experience with any of that stuff. Like you’re not aware which part of that word salad is just what all 3D tools do including Rhino.

No it doesn’t. There is a ten-figures-long wish list for Rhino, 90% of which center around “do this thing AT ALL,” not “do this on my GPU.” Keeping Rhino running well on piece-of-crap laptops is a far bigger issue than this.

And stop BOLDING everything, stop shouting geez.

1 Like

@ata.safarzadeh1990 You might want to take a step back and think about not trying to put a square peg in a round hole. You’re quite literally trying to use the wrong tool for working with heavy mesh data. You’d do better bringing in a 3D application that handles mesh data better and then converting the result to a CAD file format.

Look at 3D software like going to the gym: you’ve got the treadmill, free weights, bikes, etc., each used because it helps develop a different area of the body. Use the best software to leverage what it does best and complement it with others that do what it doesn’t. Waiting for Rhino to develop into something that may or may not happen is an exercise in frustration.

2 Likes

Yes, Rhino has had GPU rendering since Rhino 6, but that’s irrelevant because rendering and modeling are two completely different things. The real issue is Rhino’s outdated reliance on single-threaded CPU workflows for modeling, Grasshopper solving, baking, and handling complex parametric geometry.

The Bigger Problem: Rhino Wastes Multi-Core CPUs

  • Rhino still runs most tasks on a single CPU core, completely underutilizing high-end processors like Core i9s, which have multiple cores sitting idle while only one is maxed out.
  • Baking dozens of generated meshes is painfully slow, even though it’s an easily parallelizable task.

  1. You were asked to stop using bold face, so please stop that.
  2. It is not completely irrelevant. For modelling, yes; for rendering, no.
  3. Have you tried the test command suggested earlier in the thread?
  4. Have you tried the components in Grasshopper that have been adapted for multithreading?
4 Likes

…outdated… compared to what?
Do you know any other CAD that really is multithreaded for most of its calculations?
For example, Inventor (from a developer with a far higher budget than McNeel’s) is still single-threaded too… https://www.autodesk.com/support/technical/article/caas/sfdcarticles/sfdcarticles/Support-for-multi-core-processors.html

Rhino is pretty much open with all its scripting tools and languages, if you really need something to leverage all of your CPU’s threads, code it yourself.

1 Like

:popcorn::popcorn::popcorn::popcorn: on me this is some good popcorn

To say it’s outdated is not true. If you compare V5 to V8, there has been a big improvement in performance; perhaps you need perspective?

If you have tried different 3D CAD applications, they all have a good side and a bad side, and Rhino is no exception; some are also meant for other tasks.

Hope you will find the software you are comfortable with and does the job you require.

1 Like