Holomark 2 Released!

Hi there,
Apologies in advance if I’m asking obvious questions, but I’m just learning about Rhino and the kind of hardware one should use.

I’ve run Holomark 2 in Rhino 5, and the overall score seems to be lower for my configuration than for other computers with weaker video cards and processors. I used a benchmarking website to compare the hardware (www.cpubenchmark.net) - maybe it’s not the best way.

What could be the source of this?

Thanks for building the app. I think it’s very useful!

Holomark 2 v2.2.03

Total Score: 23978
Total Runtime: 213.15 sec

GPU scores: 16860
GPU_01 - 221.00 fps - Cube 4 tests
GPU_02 - 28.50 fps - UDT Shape
GPU_03 - 47.80 fps - Wireframe
GPU_04 - 32.50 fps - Shaded
GPU_05 - 25.10 fps - Rendered
GPU_06 - 24.70 fps - Block Rendered
GPU_07 - 11 units Nurbs @ 5 fps in Wireframe
GPU_08 - 7 units Nurbs @ 5 fps in Shaded
GPU_09 - 23 units Nurbs @ 5 fps in RenderSpeed
GPU_10 - 71.90 fps - RenderMesh Render
GPU_11 - 163.90 fps - RenderMesh RenderSpeed
GPU_12 - 90.10 fps - JoinedMesh Render
GPU_13 - 227.30 fps - JoinedMesh RenderSpeed
GPU_14 - 5 units mesh @ 15 fps in Shaded
GPU_15 - 6 units mesh @ 15 fps in Render
GPU_16 - 92 units mesh @ 15 fps in RenderSpeed
GPU_17 - 63.30 fps - mesh in Rendered Studio
GPU_18 - 8.50 fps - Nurbs in Rendered Studio
GPU_19 - 21.40 fps - Block Illustration
GPU_20 - 38.50 fps - 2D single
GPU_21 - 5.70 fps - 2D massive (20x)

CPU scores: 7118
CPU_01 - 12.81 sec - Booleans and Contours
CPU_02 - 24.38 sec - Twist and Taper (UDT)
CPU_03 - 7.52 sec - Meshing Mini
CPU_04 - 0.18 sec - Extract Render Mesh
CPU_05 - 0.07 sec - Join Render Mesh
CPU_06 - 16.38 sec - Reduce Mesh
CPU_07 - 2.27 sec - Calculating Technical display
CPU_08 - 6.63 sec - Making Silhouettes


AMD Radeon™ R9 390 Series - 4095.0 MB
DriverVersion: 21.19.519.2

AMD FX™-9370 Eight-Core Processor
NumberOfCores: 4 NumberOfLogicalProcessors: 8
MaxClockSpeed: 4.4 GHz

TotalPhysicalMemory: 32.0 GB

Microsoft Windows 10 Enterprise

  • None - 64-bit

Rhino 5 SR9 64-bit

Hi, first try disabling any plugins not shipped with Rhino.

Hello @thompsonburry

I would like to know how you managed to get your benchmark scores with the GTX 1080. I have a GTX 1080 Ti and my highest score is 67102. I also can’t handle files around 1 GB in size. I have even tried the Rhino 6 WIP, which should be using the newest OpenGL, and the performance is still subpar… I should note that I don’t see any of these performance issues in other software like 3ds Max, or in any of the other synthetic benchmarks I have run (CompuBench, Cinebench, and 3DMark Fire Strike), where my card performs as expected, with about a 30% gain over the 1080. If I didn’t use Rhino so much I wouldn’t care, but it is one of my main tools at work. My processor is overclocked to 4.3 GHz and my card is the Founder’s Edition, but even then it should run about 30% faster than the 1080, so it doesn’t make sense. Did you turn off AA? Any tweaks in the Nvidia Control Panel? I would really like to discuss this further.

Hey @podzi
I’m excited to see someone with a 1080ti as I have been wondering about the performance.
Your CPU mark seems about right. I had my CPU overclocked to 4.6 GHz and my 1080 overclocked to 2075 MHz.
The overclocking helped a bit but my greatest improvement came from disabling several plugins that were
severely crippling my performance. Before disabling these I was getting scores lower than yours.
Hope this helps, let me know if you figure anything out.

Just ran the test again with the same overclocks (CPU at 4.6, GPU at 2075) and AA at 8x, to make sure it wasn’t anything else.

Hi @thompsonburry, can you please tell me what those plugins were?
And did you apply any custom settings in the Nvidia drivers to get such good results at 8x AA?
(I have a 1070 that I can run some tests on tomorrow.)

Something isn’t right… My installation of Rhino is new, so there are no plugins to disable. Can you run a test on the newest Rhino, Rhino 5 SR13? Also, which 1080 card do you have, a Founder’s Edition or a reseller model? There’s a new card driver that came out today; I am going to try it and see if it helps.


Is there any way you can post your 1070 Holomark results?

I saw a 1060 laptop getting scores similar to mine… even besting a 6-core Broadwell-E… Rhino is not multithreaded at all, is it?

@thompsonburry What model GTX 1080 and brand? Thanks!

I have a Gigabyte G1 Gaming 1080 http://www.gigabyte.com/Graphics-Card/GV-N1080G1-GAMING-8GD#kf
I turned off all my overclocks and ran the test again; these are my scores at 8x AA.

Again, the above score is with my CPU and GPU at stock. That being said, there is for sure something funky happening with your setup.
My Rhino says I have Rhino 5 SR12.50810, but it says it is up to date and there are no updates available. I have the educational version.
Your 1080 Ti should be outscoring my 1080 by 30% or so. I know this is annoying, but have you tried reinstalling Rhino?
The largest difference between our two scores is in test GPU_16; I don’t understand why yours only got to 108 units. If you look above you can see a post by jimc, who had a 6800K and a 1080 and scored around 88768. You should be scoring at least that high.

@Holo I believe there was an expired plugin for Clayoo and another expired version of Scan&Solve, as well as a cracked version of a rendering package. Clayoo was especially terrible on my setup; I’m not sure why, but it constantly made my files perform horribly. Once those were removed, my Holomark score increased by 40,000–50,000 points. No custom settings on my 1080, just a heavy overclock.

FWIW, there is an SR13 that was released on 2016-09-13. For the vast majority of users SR13 is not needed, so it doesn’t trigger automatic updates. It fixes a few things related to Flamingo and to running evaluation versions.


Is there any chance you still have the Rhino installation file for your version, if you downloaded it through the internet? I would like to try Rhino 5 SR12 before I make any hardware changes on my computer.

Also, could you tell us your machine specs? Motherboard brand? Did you use the included Gigabyte overclock utility for the video card? Did you do a manual overclock, or use the automated EZ overclock suite in the BIOS to get your processor overclock? The reason I ask is that I have an Asus X99 II motherboard, and the other person experiencing problems, @lyall, has the same motherboard and processor as mine. Still, if the motherboard were interfering with the OpenGL performance of the new Pascal cards, I would expect those performance degradations to also show up in other software, and I just don’t see any evidence of that happening. My scores are in line with other 1080 Ti benchmarks, and I have even set up a high-poly 3ds Max sample scene (over 20,000,000 polygons) to test viewport performance; the 1080 Ti outscores the 1080 according to this website… I used the P-47 scene and duplicated it over 600 times, and I am still getting viewport performance of over 69 fps in shaded-and-edged mode, versus 49 fps for the 1080 card.

Could you run the CompuBench and Cinebench OpenGL benchmarks on your computer? Let me provide you with the links.



If someone has the installation file for Rhino 5 SR12 and would be willing to provide it, that would be much appreciated.


My scores are above the mid-50s in high-quality shaded-plus-edged mode, versus the mid-40s that Puget Systems’ benchmark shows for the 1080 at default quality.

This isn’t a problem with the card or the driver… I’ll be posting my CompuBench and Cinebench benchmarks too; if you could run those as well, that would be swell.

For my own amusement I quickly set up a Ryzen 1700 system with an Nvidia 1080 and Rhino. The CPU is running at 3.6 GHz with the memory at 2133 MHz. I ran a quick Holomark benchmark.

It wouldn’t surprise me to see a big jump in performance with a BIOS update and faster RAM speeds.


So I guess I am not the only one afflicted by low GPU performance on the newer Pascal cards. That score is way too low for a 1080, I am beginning to think. Would you mind running Holomark with the following Nvidia driver?


This is either an issue with Rhino 5 SR13 or an issue with how Rhino works with GPUs in systems whose processors have more than 4 cores. Please, McNeel, do something about this.

I invite you to take a look at user @thompsonburry, who also has a 1080 and easily scores in the 90,000-point range in Holomark.

I just did a fresh install of Windows and Rhino, and used the Nvidia driver mentioned above.

This was the best score I could get on the 1700 with the 1080.

Normally I have this card in an i7-4790 computer; below are the scores I normally get. On this run I even had V-Ray 3 for Rhino installed.


I was personally more interested in the CPU scores than the GPU scores. Most of the frame rates feel plenty high to me.

When AMD and Gigabyte come out with new BIOS versions/drivers I will test again. Performance isn’t poor, but it’s clearly not as fast as the Intel system for Rhino tasks.


Thanks so much for testing out that driver… I guess it didn’t make much difference. Have you tried disabling anti-aliasing from within Rhino and enabling it instead from the Nvidia Control Panel? That makes my benchmark results increase substantially, but I don’t think it actually applies any anti-aliasing unless it is enabled directly within Rhino.

I think you could also overclock your Ryzen 1700X to 3.9 or 4.0 GHz fairly easily and without much of a voltage increase. That should bring your single-core performance, which is what the CPU scores measure, up a good notch.


Turning off AA in Rhino and forcing it from the Nvidia Control Panel jumps the score up to 70k, but AA is clearly not being applied.

I’ll redo this test in a few months once the RAM issues are resolved, and I’ll try to overclock higher at that point as well.

I guess that brings up a good question for McNeel: why is AA slowing some cards down so much?

Hi Matt,
this is a GeForce vs. Quadro thing done by Nvidia to keep pro users on the more expensive (and arguably more stable) Quadro range. Both AA and double-sided polygons are crippled on GeForce cards under OpenGL. One might say this is a “bad” thing, but the same is done in software all the time (you download the full version and the serial key unlocks different features, so you pay for what you need, but it is all there, just out of reach).

I have found articles (you might even find them linked here in the forum if you search) where programmers made workarounds so those areas are handled in shaders instead, at a small workload cost, and there is no way for the hardware to detect that. So it would be up to McNeel to put in the time and effort to build a solid solution for that.
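To illustrate the kind of shader-side workaround those articles describe (this is only a rough sketch in legacy GLSL with a simple headlight-style diffuse term, not McNeel’s actual code), two-sided lighting can be emulated in the fragment shader by flipping the normal with the built-in `gl_FrontFacing` flag, so the driver’s throttled two-sided lighting path is never used:

```glsl
#version 120
varying vec3 vNormal;   // interpolated eye-space normal from the vertex shader

void main()
{
    // gl_FrontFacing tells us which side of the polygon we are shading,
    // so we flip the normal ourselves instead of asking the driver for
    // fixed-function two-sided lighting.
    vec3 n = normalize(gl_FrontFacing ? vNormal : -vNormal);

    // Simple diffuse term with a light pointing along the view axis.
    float diff = max(dot(n, vec3(0.0, 0.0, 1.0)), 0.0);
    gl_FragColor = vec4(gl_Color.rgb * diff, gl_Color.a);
}
```

Because the flip happens in shader code, there is no two-sided state for the driver to detect and slow down; the cost is just one conditional per fragment.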

Here’s one article I just googled. I can spend some time helping to find more if there is interest.