I asked a regular here about GPU processing. Not for geometry, but for rendering.
I received a good, useful, and comprehensive discussion on the benefits and pitfalls of running operations on a GPU. Graphics, games, and AI, for example, are very different beasts from workloads that have to get the result correct every single time.
You can see this in modern games and even render engines. They are permitted to run stochastic processes on noisy data, and it is acceptable for them to return equally (or more) noisy data. GPUs are allowed to make mistakes, and even in the sacred AI cash cow they throw data errors all the time. They even throw silent errors, and it's surprisingly common.
It’s popular to bash CPU processing, especially for CAD, because frankly, GPU is “cool” and fashionable now. And in many cases it is genuinely excellent.
But CPUs carry the responsibility of maintaining accurate and highly, highly reliable processes that are repeatable.
CPUs handle banking transactions and life-support machines; they run a lot of the world that has to keep producing the same correct answer, all of the time. GPUs are allowed to just throw random crap at the screen (see DLSS, AA, rendering engines…) and that is completely accepted because of what they are producing.
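One concrete reason repeatability is hard on massively parallel hardware is that floating-point addition isn't associative, so any reduction whose grouping order can vary between runs (as GPU reductions often do) can't promise bit-identical results. A toy CPU-side sketch of the underlying effect (this is a plain illustration, not GPU code):

```python
# Floating-point addition is not associative: the same three
# numbers summed in two different groupings give different bits.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False: grouping order changed the last bit
```

Scale that last-bit wobble up through millions of parallel accumulations and you get runs that don't reproduce each other exactly, which is fine for pixels and tolerable for AI, but poisonous for geometry kernels that must be deterministic.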
I wouldn’t want to live in a world where, on some random occasion, my geometry goes completely mad because the GPU decided to throw a silent data error. “Sorry sir, your wife’s breathing apparatus temporarily failed; the GPU threw an untraced, unforced error.” We’d likely face similar issues with a sketchy GPU geometry transform. I imagine you’d have users rotating 50,000 curves and being left dumbstruck as to why 10 of them got launched into what may as well be a different universe.
You see this with rendering too. There are many, many users out there still rendering on CPUs, because they are reliable and repeatable. There are still racks of Xeons and Epycs at places like Pixar because the CPU remains a very good way of rendering, and sometimes the best way.
You can still find papers, as recently as last year, discussing GPU vs CPU implementations of geometry operations, because it’s just not an easy problem. “Sorry, if you’re subtracting a surface with an edge like that, you can only do it on the CPU.” We just can’t have that nonsense in fundamental operations like booleans and trimming; it’d send us all mad.
Notice that even with the mighty… underwhelming RTX 50 series… in AI applications on rack-mounted hardware, two Blackwell GPUs need an Arm CPU to sort out the statistical nonsense and whatever else they produce. It doesn’t just get returned to the user.
But seriously, if you have a good way of trimming NURBS in remotely the number of cases a CPU faces, reliably, I’m sure McNeel will be very receptive and thankful for it (as would all of us!).