It is a bit of a mutual problem here. A couple of years ago AMD chose to more or less match nvidia on price, except at the very top end (where there are few customers). Meanwhile nvidia arguably has the best API, and that is what matters to developers.
For large game studios, this is not so bad, as they can afford to have developers support all three APIs on offer out there (CUDA/OptiX, HIP, oneAPI/Embree; for nvidia, AMD, and Intel, respectively), and so they all get adopted. But smaller studios cannot do this, and the choice is reduced to two only: AMD and nvidia. Of the two, nvidia currently wins every time.
In production, the problem is only compounded. Outside of Autodesk, Dassault, and Siemens, it's probably very hard as a developer of CAD/CAM software to justify hiring even more developers to spread your work across three APIs.
Even Autodesk's Arnold, probably the most obvious example from the big players, only supports nvidia GPUs for GPU rendering, as far as I can see. The others don't really render as such (or it's just OpenGL-style viewport rendering).
Even Blender has had a hard time supporting much beyond nvidia until recently; it was only with Cycles in Blender 3.3+ that they really covered all three GPU vendor APIs, so you can use more or less any card you like, I think. Throw in Apple, and the problem compounds further.
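For what it's worth, once the backends exist, the choice in Blender really is just one enum in the Cycles preferences. A rough headless-script sketch (property names taken from the bpy API in the 3.x series; treat them as assumptions if you are on a different version):

```python
# Minimal sketch: select a Cycles GPU backend from a headless Blender script.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences

# One enum covers all the vendor APIs discussed above:
# 'CUDA' / 'OPTIX' (nvidia), 'HIP' (AMD), 'ONEAPI' (Intel), 'METAL' (Apple).
prefs.compute_device_type = "ONEAPI"

prefs.get_devices()  # refresh the device list for the chosen backend
for device in prefs.devices:
    device.use = device.type != "CPU"  # enable every GPU it found

bpy.context.scene.cycles.device = "GPU"
print([d.name for d in prefs.devices if d.use])
```

The point being that the hard part was never the user-facing switch; it was Blender writing and maintaining a separate backend behind each enum value.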
This then becomes a self-reinforcing feedback loop: the dominant force is nvidia, so production software with rendering capability targets nvidia and, if you are lucky, AMD at a push. People who want fast GPU rendering (at the expense of CPU “correctness”) then have no choice but to buy nvidia GPUs, which starts the cycle all over again. Developers of many production packages simply cannot afford to hire more people to develop outside of CUDA, or whatever Apple is using in any given year.
Neither AMD nor Intel can yet reach nvidia's GPU capability or sheer efficiency with either of their GPU architectures, and I'm not entirely sure AMD really cares about doing so.
Regardless, I hope someone can do something, either about the API lock-in (maybe that is better for developers), or towards a natural diversification of the GPU market (maybe bad for developers, but good for consumers).
I took a risk when I built my system and got an Intel GPU. It's very good for a first generation. But I won't lie to myself and pretend it competes with anything beyond a 4060 (Ti), and it comes with a market share of a few percent at best and a newish API. It will render in Blender and Chaos Vantage, but that's it. Utterly useless for probably 95%+ of production render users. Of course I am tempted to suck it up, get an nvidia GPU, and become part of the problem. But AMD and Intel both need to do much better, and probably work with developers, to break this semi-monopoly.