Also interested in buying a workstation for CAD/CAM.
Budget: about €4k.
Working in Rhino, and in the future also CATIA; mostly shipbuilding.
On AA X8
All view settings set to None gave the highest result. Pretty good for a laptop. I am going to look into overclocking the 1080 and see if these numbers can cross the 100k threshold that some of the other users have crossed. I have the processor overclocked to 4.3 GHz.
Holomark 2 never finishes and gets stuck at the same spot every time; it just ends the test midway through. I can start it again, but it never generates a report because it never finishes. Andy posted about the exact same problem to this forum a while back, but I could not find any answer. It stops after GPU_13 and says:
Your system handled xx units before dropping below 15 fps.
Hm, can you supply some info on your system?
Windows version, Rhino version, language, plugins, CPU, memory, drivers (basically the stuff Holomark pulls from your system when it succeeds, plus everything else you think can be of help).
Hello. BTW, it was in a post by Ando on Aug 30, 2016, not Andy. The image he attached shows the exact same spot where mine stops. It doesn’t always stop at 1 car; sometimes it shows up to 3 cars. As he said, it just stops; it doesn’t seem to crash or anything, it just returns to the command line like it’s finished. Here are the system specs:
Windows 10 Pro 64Bit
Rhino 5 SR14 64-bit 5.14.522.8390
Nvidia Quadro P2000 v385.08
Cool! I don’t understand why I have such a low score with my workstation!
The Xeons have a history of being slow on the CPU_01 test, and they also score slower on GPU_01. I have no idea why, but it is one of the things that just makes the i-series always faster than the Xeons; it has been like this at least since 2011, when I first noticed it. My guess is it’s something related to the error checking and the extra stability for 24/7, 110%-load capacity of the Xeons. You don’t notice it in everyday real-life scenarios, though. Also note that Holomark does NOT recognize the real core clock speed, so it will not show if someone overclocks their i7.
The Xeons also tend to struggle with the meshing. Not on all builds, though, and if you look through this thread you’ll see that these questions pop up every now and then and nobody posts a fix. So as far as I know it is just Xeon related. BUT a BIOS update might fix it. I do not know though… (I have an older dual-Xeon myself as my home machine and it has similar “issues”.)
Thanks for the reply. So basically, for instance, a 4-core i-series CPU @ 3.0 GHz is faster than a 12-core Xeon (in Rhino)?
Yup, that’s why serious gamers don’t use Xeons. (And they don’t need that many cores, but historically Xeons didn’t have more cores than the i7s.)
The good thing with Xeons is the workload they can withstand, both short and long term. They are clocked to never peak above a max temperature, their failure rate is very low, and their lifespan is long. And that’s why they are not open to overclocking (unless you build the system from scratch).
Why they are slower on some tasks we do not know, and I don’t think McNeel has looked into it either.
Another thing with Holomark is the ReduceMesh test: that test loads the CPU to 100%, but it doesn’t run faster on 12 cores than on 4. So it has a bug of some sort. It is only one mesh, so my guess is that the bug is that it sends the task to all cores instead of just one and collects the result from the fastest one. But that is just a guess.
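To illustrate the kind of bug being guessed at here (a hypothetical Python sketch, not Holomark’s actual code; `reduce_mesh` is a made-up stand-in for a single-threaded meshing job): duplicating one job across every core and keeping the first result pegs all CPUs at 100% without finishing any sooner.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def reduce_mesh(iterations=1_500_000):
    """Hypothetical stand-in for a single-threaded mesh-reduction job."""
    acc = 0.0
    for i in range(iterations):
        acc += (i % 7) * 0.5
    return acc

def duplicated_across_cores(workers):
    """The suspected bug pattern: every worker runs the SAME whole job,
    and we simply keep one of the identical results. CPU load hits 100%
    on all cores, but wall-clock time is no better than one core."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(reduce_mesh) for _ in range(workers)]
        result = futures[0].result()  # all results are identical
        for f in futures:
            f.result()  # wait for the rest to finish
    return result, time.perf_counter() - start

if __name__ == "__main__":
    r1, t1 = duplicated_across_cores(1)
    r4, t4 = duplicated_across_cores(4)
    print(f"1 worker: {t1:.2f}s, 4 workers: {t4:.2f}s, same answer: {r1 == r4}")
```

On a real multi-core machine the two timings come out roughly equal, which matches the symptom described above: full CPU load but no scaling from 4 to 12 cores.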
Seems like our GTX 1080 is not delivering all the performance it should; our system has been beaten by a couple of laptops around here with 1060 graphics cards. Anyone know how to get Rhino to recognize the latest NVIDIA driver and quit using the generic Windows driver?
I just found a report by Simply Rhino in UK, dated 4 July 2017, titled
"Rhino v6 WIP NVIDIA Quadro Tests".
It compares a few of the Nvidia Quadro Pascal-series graphics cards released earlier this year against an older Quadro K4000. What caught my eye was that even the very cheap Quadro P400, released in Feb 2017 (depending on country, about 100 or 200 dollars/pesos/whatever), beat the uber-expensive K4000 in almost all tests; only in the shaded display test was it worse, and not by much.
So, since Rhino 6 will be faster at graphics than V5, can I draw the conclusion that, if you want the same performance that a top-dog workstation was delivering way back in 2016 with V5, you could now be using V6, get a fast i5 CPU, stuff it with RAM, and bung in a P400, all at about a tenth of the 2016-era top dog’s cost, and still win?
I’m not suggesting this as a cutting-edge commercial proposition, but in principle, would that be right?
I really doubt it.
My guess is far too many things have changed to draw any linear conclusions.
@John_Brock: Simply Rhino’s tests would tend to bear this out, though. Firstly, just on their baseline K4000 card, they compared V5 vs V6 using Rhino’s TestMaxSpeed command when rotating and regenerating a large model, finding at least an 8x speed improvement across a range of display modes (rendered, shaded, wireframe). So that’s an immediate software-based jump in performance, without introducing new hardware.
Then, testing only in V6 and running Holomark comparisons, they concluded that “The entry level P400 card scores higher than a high end K4000 card from three years ago.” When they tested the P1000 and then the P4000, there was of course a dramatic jump in performance.
It was my supposition, not theirs, that an i5 would be as good as an i7 CPU, if it is true that Rhino doesn’t use multithreading. But even with an i7 CPU, it does seem the V6 + P400 combo beats the V5 + K4000 combo.
Still, you’re probably right to be cautious, as a balance to me being provocative.
Updated score: by removing one of my two 1080 Tis, I was able to get Rhino to recognize the single 1080, and performance jumped by nearly 20k.
No idea if these are good or bad scores… would anybody have time to give some insight?
I initially ran the test because I always crashed in rendered mode (watchdog-violation BSOD), so I re-installed the latest Nvidia driver. After that the crashes stopped, but I still get a slight image freeze when I change viewports. I thought this test might give me some information on whether my graphics card or processor is not performing as it should, but it turns out I may need some help in reading it…
This is interesting, as I was looking into swapping out my AMD FirePro W7100 for the latest NVIDIA GTX 1080 Ti, only to see that my Holomark score is actually higher! I’m a little surprised.
My workstation is a custom build from early 2015:
ASRock X99M Extreme 4 motherboard with a Samsung XP941 512GB SSD PCIe M.2.
Custom Build for Rhino + VRay | Intel vs AMD?