I have tested Holomark on a dual Xeon X5650 that I just purchased, and the Reduce Mesh test in Holomark baffled me: it ran at half the speed of my i7, AND it used 100% of all 24 threads…
I also tested it on an i7-4930K. Edit: it is 3x slower than my old i7 950.
No Rhino plugins are installed on either machine.
I suspect you get a lot of inter-CPU communication on the Xeon system, since it uses two physical CPUs while the i7 systems each use a single one. The bus between the CPUs is slower than on-chip communication.
This seems to be supported by the fact that turning off hyper-threading actually speeds up your benchmark. I wonder what happens if you turn off one Xeon and turn hyper-threading back on.
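If switching a socket off in the BIOS turns out to be awkward, one software-side test is to pin the Rhino process to the cores of a single socket and rerun the benchmark. Here is a rough sketch of that, run from an external CPython with the third-party psutil package (Rhino's own script editor won't have it); the core range 0–11 for the first socket is an assumption, so check your machine's actual numbering first.

```python
import psutil

# Assumption: logical CPUs 0-11 belong to the first socket on a dual
# X5650 with hyper-threading on; adjust for your own topology.
first_socket = list(range(12))

# Find the running Rhino process and restrict its scheduling to one socket.
for proc in psutil.process_iter(['name']):
    name = proc.info['name'] or ""
    if name.lower().startswith("rhino"):
        proc.cpu_affinity(first_socket)
        print(proc.pid, "pinned to CPUs:", proc.cpu_affinity())
```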
That could be the case. I don't know if I can turn one off in the BIOS, but I can look next week.
BUT it doesn't explain why that new six-core i7 is even slower than my old four-core i7, or do you see it differently?
I just saw this thread. All 8 cores (16 threads) of the Xeon jump up to 100% during the Reduce Mesh section of that test. I will try turning off hyper-threading later today and see what effect that has.
I am more than happy to test some other fixes if you have any.
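If it helps, a quick way to confirm whether hyper-threading is actually off after the BIOS change is to compare the logical and physical core counts; again a small sketch for an external Python with psutil installed:

```python
import psutil

logical = psutil.cpu_count(logical=True)    # hardware threads the OS sees
physical = psutil.cpu_count(logical=False)  # actual physical cores

print("logical:", logical, "physical:", physical)
print("hyper-threading appears to be", "ON" if logical > physical else "OFF")
```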
I'm not able to repeat this with a hyper-threaded CPU since we don't have one at the office here in Finland. I have noticed, however, that the v5 ReduceMesh command is severely bottlenecked by a repeated UI-related function call. This has been fixed in v6, and testing shows that the v6 command is not only 3x faster out of the gate, but also gets faster as the number of cores is increased (at least going from 1 to 4 cores).
@pascal, could you or someone at your office try ReduceMesh with a hyper-threaded CPU, and compare it to the "TestReduceMesh" command from the Commands.rhp plug-in in v6? What are the differences, and how does it scale?
Holo, do you have access to Serengeti? If so, please try the TestReduceMesh command and tell me what results you get.
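So that we are all timing the same thing, here is a rough Rhino Python sketch of how the reduction step itself can be timed. It calls RhinoCommon's Mesh.Reduce directly (I'm assuming the target-face-count / allowDistortion / accuracy / normalizeSize overload), so it bypasses the command UI and won't show the v5 command's UI bottleneck, but it gives repeatable numbers for comparing machines.

```python
import time
import rhinoscriptsyntax as rs

# Pick a mesh, duplicate it so the original stays untouched, and time
# a reduction to half the face count.  A sketch only, not the Holomark code.
mesh_id = rs.GetObject("Select a mesh to reduce", rs.filter.mesh)
mesh = rs.coercemesh(mesh_id)

work = mesh.DuplicateMesh()
target = work.Faces.Count // 2

start = time.time()
ok = work.Reduce(target, True, 10, True)   # assumed RhinoCommon overload
elapsed = time.time() - start

print("Reduce succeeded:", ok)
print("Faces: {0} -> {1} in {2:.2f} s".format(mesh.Faces.Count, work.Faces.Count, elapsed))
```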
@pascal, never mind testing hyper-threading in general, but if you have access to Xeons, give them a spin with v5 and v6 ReduceMesh (TestReduceMesh in v6). We do have hyper-threaded CPUs here.
@jesterKing, thanks for the links. Definitely try fiddling with the environment variables and see if they help.
@Holo, I'm getting wildly different timings compared to yours, but one thing we can agree on is that WIP ReduceMesh is the slowest. That's OK, since we're going to use TestReduceMesh instead. The 4x improvement I mentioned seems to be between WIP ReduceMesh and WIP TestReduceMesh, at least on the computers in our office.