The short answer is no. And it's not McNeel's fault. This rests solely on Apple, for killing OpenCL and CHOOSING not to let NVIDIA ship 3rd-party CUDA drivers. As a result, Blender had no choice but to drop CUDA from more recent Mac builds of Cycles.
Apple has effectively boxed off any approach for 3rd-party apps to get GPU-assisted rendering support.
Are there workarounds? Yeah, but you have to go back in time hardware-wise, not forward. You can (and I did this several years ago) run CUDA on older macOS builds (10.13.6 High Sierra is as far as you can go) and then grab a previous build of Blender that still supported CUDA. Then you can go eGPU (or, in my case, a small farm of three 2010 towers, each with two NVIDIA GTX 970s in 'em) and use that as a render farm. And even though the hardware is positively ancient, it out-renders a current-gen top-of-the-line Mac Pro ten ways from Sunday. CPU rendering will never be anywhere near as fast as GPU rendering: rendering is a massively parallel problem, and CPUs (even with 4-6 cores per proc) just don't scale. A single (relatively ancient) GTX 970 will still smoke the fastest multi-core Xeon in the rendering department.
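For reference, on those legacy builds you can force Cycles onto the GPUs from a startup script rather than clicking through preferences on every box. This is just a minimal sketch, assuming a 2.79-era Python API (newer Blenders renamed user_preferences to preferences); the filenames are placeholders:

```python
# Run inside an old (2.79-era) Blender that still has Mac CUDA support:
#   blender -b scene.blend -P enable_cuda.py -a
import bpy

# 2.79 API path; 2.8+ renamed user_preferences -> preferences
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()  # populate the device list before toggling

# Enable every CUDA device Blender can see (both GTX 970s per tower)
for dev in prefs.devices:
    dev.use = (dev.type == 'CUDA')

# Point the scene's Cycles settings at the GPU instead of the CPU
bpy.context.scene.cycles.device = 'GPU'
```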
So if you really need to render a lot (and I'm talking animations, not still frames here), this is the only viable approach that still keeps you Mac-based. Fortunately, the hardware is cheap because it's so old. You can pick up 2010 towers for $300-400 a pop, upgrade 'em to hex-core 3.46 GHz Xeons for another $100 or so, and dump in 96 GB of RAM for $50. Dunno what used 970s run these days; I bought mine new back when the 1080s were all the rage.
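There's no real farm manager in a setup like this; the crude way to spread an animation across the boxes is to hand each tower its own slice of the frame range. A rough sketch of that idea follows (the hostnames, paths, and frame counts are made up for illustration; the Blender CLI flags are the standard ones):

```python
#!/usr/bin/env python3
# Split an animation across N towers by frame range and kick off a
# background render on each over ssh. Hostnames, paths, and frame
# counts are placeholders, not from any real setup.
import subprocess

TOWERS = ["tower1.local", "tower2.local", "tower3.local"]
BLEND = "/renders/shot01.blend"
START, END = 1, 3600

chunk = (END - START + 1) // len(TOWERS)
for i, host in enumerate(TOWERS):
    s = START + i * chunk
    e = END if i == len(TOWERS) - 1 else s + chunk - 1
    cmd = (
        f"blender -b {BLEND} -E CYCLES "
        f"-o //frames/frame_##### -s {s} -e {e} -a"
    )
    # Fire and forget; each tower writes its own slice of frames
    subprocess.Popen(["ssh", host, cmd])
```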
How much faster?
I had to pull the 970s in one of the boxes to stuff a Radeon RX 580 in it just to run the current builds of Rhino, and upgrade it to 10.14.x to get Metal to run. Net result is that box is now CPU-only for rendering. A typical 4K frame fully rendered in Cycles on the NVIDIA GPUs rolls in at about a minute a frame. On CPU it's 7-9 minutes a frame.
Cycles Render of Rhino Model
The above animation (one minute of 4K at 60 FPS, so 3,600 frames) rendered in about 30 hours on my render "farm", now down to two GPU-equipped towers. If the third tower were still GPU-equipped, that would have been cut by a third. Rendered solely on CPU on a single current $20,000 Mac Pro, it would have been an almost two-week render. Thanks, Apple.
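The back-of-envelope math checks out against the per-frame numbers above; here's a quick sanity check, taking the frame count and timings as stated and rounding everything else:

```python
# Rough sanity check of the render times quoted above.
frames = 60 * 60            # 1 minute at 60 FPS = 3,600 frames
gpu_min_per_frame = 1.0     # ~1 min/frame per dual-970 tower, as stated
cpu_min_per_frame = 8.0     # midpoint of the 7-9 min/frame CPU range

# Two GPU-equipped towers chewing through frames in parallel
gpu_hours = frames * gpu_min_per_frame / 2 / 60
# One CPU-only box at the old Xeon's pace; a current many-core
# Mac Pro would shave this toward the ~2 weeks quoted above
cpu_days = frames * cpu_min_per_frame / 60 / 24

print(f"GPU farm: ~{gpu_hours:.0f} hours")      # ~30 hours
print(f"Single CPU box: ~{cpu_days:.0f} days")  # ~20 days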
I used to do fairly involved animations like that pretty regularly.
Rhino 7 is still fine on my 10-plus-year-old tower. It's gone as far as it can go OS-wise (10.13.x), but it's fine for actual modeling and pretty much anything I need to do in Rhino EXCEPT render full-on ray-traced stuff. For that I still export to Cycles (and the Rhino-to-Blender import plug-in makes that a lot less miserable these days; back when the only option was OBJ, with no plug-in, it was really tedious).
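Even in the OBJ days, at least the round trip could be scripted so you weren't re-clicking through the importer on every revision. A minimal sketch using Blender's stock OBJ importer (a 2.7x-3.x-era operator; newer Blenders replaced it with bpy.ops.wm.obj_import, and the file path here is a placeholder):

```python
# Pull a Rhino OBJ export into a fresh Cycles scene.
import bpy

# Clear out the default cube, camera, and light
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Import the Rhino export; path is a placeholder
bpy.ops.import_scene.obj(filepath="/exports/model.obj")

# Switch the scene over to Cycles for the ray-traced render
bpy.context.scene.render.engine = 'CYCLES'
```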
So yeah. That's sadly the reality we live in. Apple has chosen to do all these things mainly because they don't want any viable way of doing anything on their OS that doesn't involve buying into the BS that the latest and greatest Apple hardware is the best thing since sliced bread speed-wise, when the truth is that's a total lie. It wouldn't do to let you slap in a relatively inexpensive GPU and get several more years of useful service out of the core box.
And despite the crap hype, the "new" Apple offerings aren't really all that much faster than the stuff from a decade ago in day-to-day use. Maybe for certain stuff where Apple owns the entire code base (Final Cut), perhaps, but for mainstream non-Apple stuff? Nope. Not really.