AMD Radeon RX 580 very slow. Config problem or compatibility?

So maybe a good option is to return the graphics card and exchange it for an NVIDIA model like the GTX 1060? It costs about the same right now. I'll wait for your answer. Thanks a lot for trying that one!
But it's strange how different our fps are, since we both have this card. Could it be the processor, the Ryzen 7 1700?

Let me first see what the results are on my system with the GTX 1060 when I get home in the evening.

My system has the newer Ryzen 7 2700X, so our systems are very similar. Maybe it has something to do with the lower VRAM of your card: yours has 4 GB, mine has 8 GB, and the model is rather large. Maybe you should first try to uninstall and reinstall the drivers. Also reinstall Rhino? I don't really know.

Subjectively, I have to say that the model behaves quite well on my system. The geometry looks to have been imported into Rhino from somewhere else, right? It is large but overall quite simple geometry. If you rebuilt this in Rhino, the whole model would behave much better / be more efficient, I think. I also see lots of missed opportunities where you could use blocks.

Your ~6 fps really seems too low for the Shaded display mode. Even if I maximize my viewport (2560x1440 monitor), I get around 20 fps in Shaded. Even if I turn on all the layers and max the viewport, I still get over 17 fps.
Also, all the curves make the viewport a bit slower. If I select all curves with _SelCrv and _Hide them, I go from 17 to 22 fps.
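If you'd rather script that than click, here's a minimal sketch, assuming Rhino's built-in Python editor with rhinoscriptsyntax (Testmaxspeed is a test command, so I run it through a macro):

```python
import rhinoscriptsyntax as rs

# Hide every curve in the document, then re-run the speed test.
curve_ids = rs.ObjectsByType(rs.filter.curve)
if curve_ids:
    rs.HideObjects(curve_ids)
rs.Command("_Testmaxspeed")  # regenerates the active viewport 100 times
```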


Yes, it's a dirty import from SketchUp. I know it would be much better modeled directly in Rhino, but it was already made in SketchUp. But it was good to find out that there is something wrong with the card, or the drivers, or Rhino ignoring my graphics card. There is a huge difference in fps from your card to mine.
I was looking at the Resource Monitor, and the GPU doesn't go above 3% when using Rhino.
So, no clue what's going on…

I think @nathanletwory has an RX 580 as well. I assume he'll also get around 20 fps (with those two layers turned on)? Maybe he has a clue why your system performs so much worse. Your driver seems to be one that should be fine. Maybe some other program running in the background is utilizing your GPU. A web app? YouTube?

Also try Holomark 2. I guess your results there will also reflect the problem. Here are my results with the RX 580.

The Windows Resource Monitor isn't very good. For more detailed monitoring of your GPU, you should use GPU-Z or MSI Afterburner.

Also got an RX 580 recently. Raw compute performance is good, but OpenGL is very bad. Line AA is abysmal and general performance in Rhino is bad (V5 and V6)… It is my first card from AMD in a decade. I was hoping they had gotten their act together with OpenCL / OpenGL, but they didn't.

Not much anyone can do, I guess… apart from paying $100/€100 more and getting the NVIDIA equivalent.

Antialiasing is often said to be better on NVIDIA. I personally can't spot any difference in AA quality between my systems with NVIDIA and AMD cards in them. Though, to be fair, I have to add that the two systems are not in the same location and use monitors with different pixel densities, so the comparison is very subjective.

Regarding OpenCL, do you mean OpenCL vs. CUDA?
I think this is just a matter of support/optimization. For OpenCL, the developers of a program have to do difficult and time-consuming low-level work; CUDA, on the other hand, is much simpler for developers. I think it's like C++ compared to JavaScript with jQuery. So I guess that's why most prefer to support CUDA and care less about decent OpenCL optimization. AMD would really need to come up with some open-source CUDA equivalent on top of OpenCL to make life easier for devs. That's how the situation looks to me as a layman.

Hi @jlgrobe,

Join all the meshes on the “Edificos” and “Edificos 0” layers. Then select all the curves that are on the buildings and join them together.

The number of meshes and curves from your import is overkill when so much could be joined together into a single object. This results in almost 50,000 draw calls per rendered frame, which slows down Rhino.
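If doing that by hand is tedious, a rough sketch along these lines should work in Rhino's Python editor (assuming the layer names above; rhinoscriptsyntax ships with Rhino):

```python
import rhinoscriptsyntax as rs

for layer in ["Edificos", "Edificos 0"]:
    ids = rs.ObjectsByLayer(layer) or []
    meshes = [i for i in ids if rs.IsMesh(i)]
    if len(meshes) > 1:
        rs.JoinMeshes(meshes, delete_input=True)  # one mesh = one draw call
    curves = [i for i in ids if rs.IsCurve(i)]
    if len(curves) > 1:
        # JoinCurves only merges curves whose ends touch, but that still
        # collapses most of the building outlines into a few polycurves.
        rs.JoinCurves(curves, delete_input=True)
```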

-David


Nice, that brought my fps from around 20 up to around 100:


Which PCI slot is your card plugged into?
Have you messed with any of Rhino’s “Advanced” options settings…particularly the OpenGL options?

From the sounds of it, your card is either not running at full speed, or the OpenGL version being used by Rhino is bottlenecking the performance. And before anyone chimes in saying that he's already posted his SystemInfo… there are settings in the Advanced section that can override and/or impact the internal functionality of OpenGL in Rhino.

For starters I would confirm that your card is plugged into the 16x PCI-E slot on your motherboard.
Then I would check Rhino’s Options->Advanced settings and look for the setting named “Rhino.Options.OpenGL.MaxLevel” and see what it’s set to.

There is no way the model you provided should be performing that poorly with an RX 580… I'd expect something more like what @hitenter's results are showing.

-J


@jlgrobe

I would also download GPU-Z: https://www.techpowerup.com/download/techpowerup-gpu-z/

And check to make sure it shows everything as it should be… You can also confirm which bus the card is operating on… (see image)


This is not true. Almost all optimizations we make for display are not targeted toward any specific GPU (I actually can't think of a single GPU-specific optimization in our code). We do have some tweaks to try to work around some specific GPU driver bugs, but they don't really affect performance.


Last thing before the weekend: I just ran the TestMaxSpeed command on the file on my home system for a comparison with a GTX 1060. Both systems at 8x AA; these are the results:

Model unjoined, with both Edificios layers on:

RX 580:
wireframe: 25.60 fps
shaded: 21.70 fps
rendered: 6.07 fps
arctic: 6.07 fps

GTX 1060:
wireframe: 10.27 fps
shaded: 9.38 fps
rendered: 2.94 fps
arctic: 2.97 fps

Model joined, with both Edificios layers on:

RX 580:
wireframe: 98.52 fps
shaded: 96.99 fps
rendered: 32.66 fps
arctic: 32.16 fps

GTX 1060:
wireframe: 60.39 fps
shaded: 60.35 fps
rendered: 33.33 fps
arctic: 39.03 fps

Here are the screenshots from the system with the GTX 1060.

So the RX 580 is faster in most cases; there are just two cases where the GTX 1060 delivers slightly more fps. To be fair, that system only has a Core i5-6500 compared to the 2700X. Bottom line: a GTX 1060 won't make your models behave more smoothly in your viewport. Like @jeff mentioned, check your card with GPU-Z; there must be something wrong.
By the way, Arctic looks strange after the model is joined. True for both the RX 580 and the GTX 1060.


I would venture a guess here that you have your GTX 1060 configured for vsync, so performance gets capped by the driver automatically. This is something you can change in the GPU control panel.

I have only one slot on a micro-ATX Asus PRIME A320, which I'm almost sure is already 16x. But the specs page says only 3.0.

Going to try this OpenGL setting. Thanks!

YES! Just checked that, and you are right. I disabled vsync and here are the results for the scene with the joined geometry:

Display mode set to “Wireframe”.
Command: Testmaxspeed
Time to regen viewport 100 times = 0.67 seconds. (148.81 FPS)
Display mode set to “Shaded”.
Command: Testmaxspeed
Time to regen viewport 100 times = 0.73 seconds. (136.24 FPS)
Display mode set to “Rendered”.
Command: Testmaxspeed
Time to regen viewport 100 times = 2.88 seconds. (34.78 FPS)
Display mode set to “Arctic”.
Command: Testmaxspeed
Time to regen viewport 100 times = 2.50 seconds. (40.00 FPS)
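For reference: TestMaxSpeed regenerates the viewport 100 times, so the FPS it reports is just 100 divided by the elapsed time. A quick recomputation from the rounded times above:

```python
# TestMaxSpeed's FPS = 100 regens / elapsed seconds; the small differences
# from the log come from the command using the unrounded time internally.
for mode, elapsed in [("Wireframe", 0.67), ("Shaded", 0.73),
                      ("Rendered", 2.88), ("Arctic", 2.50)]:
    print("%s: %.2f FPS" % (mode, 100.0 / elapsed))
```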

But everything over 60 fps in the viewport isn't really necessary, IMO. I'll check on Monday whether the AMD driver maybe caps at 100.

The board and chipset are really the most low-end, cheapest you can go for on the AM4 platform. Yes, it has only one PCI-E x16 slot, and since all lanes are wired directly to the CPU and not the chipset, this is not the issue.
The VRM on that board is really weak; your Ryzen 7 1700 is probably cooking that poor 4+2 phase VRM. For any high-end Ryzen CPU with 6 cores or more you should at least get a B350/B450 board and make sure it has a true 6-phase VRM for the CPU. If you put load on your CPU for a longer time, your VRM will probably shut down.
But this is also not the cause of your low fps.

Also, PCIe 3.0 is the latest standard; 4.0 will only be introduced in consumer products this summer.

I would suggest reinstalling all drivers. Do you also have all the AMD chipset drivers installed, not just the GPU drivers?
Download them from here:
https://www.amd.com/de/support/chipsets/amd-socket-am4/a320


If you use -TestMaxSpeed, you'll see an option on the command line to disable Vsync as well… so if you want to keep it enabled globally, you can… and just disable it in Rhino if/when desired… just FYI.

-J

Then I would check Rhino’s Options->Advanced settings and look for the setting named “Rhino.Options.OpenGL.MaxLevel” and see what it’s set to.

It is set to 45… I think that's the default?

Finally, it was the motherboard. I changed it for a Gigabyte AB350 model, and the speed tests are now similar to yours. It also fixed some compatibility issues, like Grasshopper geometry not displaying in the viewport, blurred text, sluggish Revit, etc.


Happy to hear that. So the effort to swap the mobo was worth it. A new graphics card would have cost more, and whatever new card might well have had similar problems with the old mobo.

Make sure you update to the latest UEFI.
