Does Rhino 5 use all 6 cores on 6-core processors?

Does Rhino 5 use all six cores on six-core processors? For example, on the Intel X79 chipset? Is it worth getting a six-core processor for Rhino?

No
Rhino is not a “multi-threaded” application. It does split off a few minor processes to other cores, but nothing major. That’s because modeling is a serial process: modeling has to be done ‘in order’. Consider the example of a box with filleted edges in a shaded display. The render mesh needed for the shaded display can’t be generated until after the edges of the box are filleted, and the fillets themselves can’t be made until after the box itself is created. First the box is made, then the edges are filleted, then the render mesh is created. You can’t put the box creation in one thread, the filleting in a second, and the mesh generation in a third, and run all three processes at the same time.
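
As a side note, here is a minimal pure-Python sketch of that dependency chain. The function names are made-up stand-ins, not Rhino’s actual API: each step needs the previous step’s result as its input, so the steps can’t run on separate cores at the same time.

```python
def make_box(width, depth, height):
    # stand-in for box creation: returns some geometry description
    return {"type": "box", "size": (width, depth, height)}

def fillet_edges(solid, radius):
    # cannot start until make_box() has returned its result
    return {"type": "filleted", "base": solid, "radius": radius}

def build_render_mesh(solid):
    # cannot start until fillet_edges() has returned its result
    return {"type": "mesh", "source": solid}

box = make_box(10, 10, 10)          # step 1
rounded = fillet_edges(box, 1.0)    # step 2 depends on step 1
mesh = build_render_mesh(rounded)   # step 3 depends on step 2
print(mesh)
```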

Some tasks in computer work can be multi-threaded. Rendering is a good example. Since an image is just an array of pixels being generated, it can be broken into four quadrants, and each processor can work on one quadrant independently.
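
For contrast with the serial sketch above, here is a hedged sketch of that idea in plain Python. The shade() function is a trivial stand-in for a real ray tracer, and the quadrant split is only illustrative: each quadrant of pixels is independent of the others, so separate worker processes can render them at the same time.

```python
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 640, 480

def shade(x, y):
    # stand-in per-pixel computation; a real renderer would trace rays here
    return (x * 255 // WIDTH, y * 255 // HEIGHT, 128)

def render_region(bounds):
    x0, y0, x1, y1 = bounds
    return [(x, y, shade(x, y)) for y in range(y0, y1) for x in range(x0, x1)]

if __name__ == "__main__":
    # split the image into four quadrants and render each in its own process
    quadrants = [
        (0, 0, WIDTH // 2, HEIGHT // 2),
        (WIDTH // 2, 0, WIDTH, HEIGHT // 2),
        (0, HEIGHT // 2, WIDTH // 2, HEIGHT),
        (WIDTH // 2, HEIGHT // 2, WIDTH, HEIGHT),
    ]
    with ProcessPoolExecutor(max_workers=4) as pool:
        regions = list(pool.map(render_region, quadrants))
    pixels = [px for region in regions for px in region]
    print(len(pixels), "pixels rendered")  # 640 * 480 = 307200
```

No quadrant has to wait for another, which is exactly why rendering scales well with core count.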

Thanks for the reply.

I understand a good video card such as the NVIDIA® GeForce® GTX 780 Ti 3GB is very beneficial for Rhino 5 performance. Is this true?

A fast video card does not help Rhino calculate Booleans or generate meshes at all; Rhino doesn’t use the GPU for any geometry calculation. What a good GPU does is let you Zoom/Pan/Rotate the view more quickly, so it feels snappier. Rotating a big model in a shaded display mode will be smoother and less chunky, and will drop out less geometry when you spin it around. If you use a lot of textures and bitmaps, a GPU with lots of VRAM will be more responsive than one with less VRAM, but again, this has nothing to do with actual geometry calculations at all.
Another common misconception concerns rendering: your graphics card is not used for rendering. There are a few GPU tools now, like Neon, the real-time ray-traced viewport display mode, that benefit from a faster GPU, but that’s about it.

I hope some of the other GPU experts chime in with their opinions too.

Oh, OK. Thanks for the quick response. I have one more question, then I’m through for now. I have noticed Rhino is extremely memory-thirsty. What exactly is Rhino doing with the memory?

Holomark2 used all six cores on my computer for quite a few of its tests. It is, of course, primarily a display tester, but the CPU has to be used to support the rendering.

Also: I think that John’s explanation was over-simplified. While it should be obvious to most that, even with just one processor, Rhino can’t do anything at all with objects that haven’t yet been created, there are still opportunities, not realized in Rhino 5, where some of the elementary geometry operations could benefit from multiprocessing. Drawing a line with a start and end point, or even a multi-control-point wavy line, will never benefit from multiprocessing for the reasons John stated, but there are many things that would, especially when manipulating very large objects.

Choosing and designing the algorithms and coding them takes time (months and years, not hours and days), and multiprocessing CPUs with enough cores to make it worthwhile are only now showing up at affordable prices and with enough support software to begin the work. So now some of the geometry tasks can be tackled. I would be very surprised and disappointed if V6 didn’t multiprocess several of the more computation-intense geometry manipulation commands. At least, based on what the amazing Rhino developers have done in the past, I don’t think they are just sitting around patting one another on the back and admiring their past work.

Because multiprocessing requires some setup overhead, even tasks that would benefit from it on large objects might actually be slowed down on small objects. This means that every command that might use it must first determine whether it would be worthwhile: more overhead. This check could be done quickly compared to the MP setup but, to be done as quickly as possible, may very well require some changes to the object representation in Rhino - a major overhaul. (I’m speculating here, not being an MP programmer.)
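
To make the overhead point concrete, here is a speculative sketch in plain Python. The threshold value and the work function are made up purely for illustration, not anything Rhino actually does: a command checks the size of the job first and only pays the cost of spinning up worker processes when the job is big enough to be worth it.

```python
from concurrent.futures import ProcessPoolExecutor

PARALLEL_THRESHOLD = 50_000  # made-up cutoff; below this, stay single-threaded

def process(items):
    # stand-in for some per-element geometry work
    return [x * x for x in items]

def process_maybe_parallel(items, workers=4):
    if len(items) < PARALLEL_THRESHOLD:
        # small job: the pool's startup cost would outweigh any speedup
        return process(items)
    step = (len(items) + workers - 1) // workers
    chunks = [items[i:i + step] for i in range(0, len(items), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [x for chunk in pool.map(process, chunks) for x in chunk]

if __name__ == "__main__":
    print(len(process_maybe_parallel(list(range(1_000)))))    # stays serial
    print(len(process_maybe_parallel(list(range(200_000)))))  # goes parallel
```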

So my take on the answer to your question is that while four cores seem to be common and affordable these days, and advantageous in general computer use, there is probably no big advantage in spending the extra money for 6, 8, 10, or 12 cores to use with Rhino 5. Maybe it will be a good idea for the computer you buy after the one you buy today.

All true.
I’ve been in Tech support for 25 years.
Sometimes a little inaccuracy can save tons of explanation.

We are working on ways to use multi-threading when possible, and it will help in some specific situations to a limited degree. It will never live up to users’ expectations, though.

For the people who don’t understand the serial vs. parallel processing example, I have a backup example:

The normal gestation period of a human baby is 9 months.
You can’t wire up 9 women for parallel processing and get a full term baby in 1 month.
After that, they usually “get it”.

Some scientists may disagree with you, but I understand you.

Sure it will. After the users adjust their expectations to reality. :smile:

“The Mythical Man-Month” - Fred Brooks

Yes, I understand. Thanks for the reply. Computers are just getting better and better, and more affordable.

Sort of.
For many years, computer chips were getting faster and faster AND less expensive.
Chips have hit a technological performance limit, and manufacturers have tried to pass off multiple cores as being the same thing. They’re not. Some applications and processes can benefit hugely from multiple cores, while others can’t. The industry has left the education up to the users, and marketing efforts have not helped.

If you do a lot of rendering, and I do, then I would also add:

  • During a rendering, every CPU and core (including hyper-threading)
    will be used, including those on a network.
  • The new V-Ray 2.0 has an RT/Real-Time render feature, where it can
    also use the video GPU.
  • These RT previews can then be saved out.

Good explanation, John, but you could multi-thread a single operation; Rhino just hasn’t done it, AFAIK.

The MOI mesher, for example, uses all your available cores to mesh a model. It’s unbelievably fast on a multi-core machine.

…Someday?

G

Yes, I think I heard that meshing is one of the candidates Steve is looking at for multi-threading. It would probably be done in conjunction with writing a new mesher instead of trying to modify the current one.

no need, you can always do this…

I think that’s what we’ve been doing, except there should be a few Band-aids on the poor creature’s legs!

No, you cannot:
1+1 cannot be multi-threaded.

However, meshing is not a single operation per se.

If I have to mesh a single rectangular surface, it is probably not efficient to divide it up into multiple operations.

Yet, once that single surface has irregularly trimmed edges, it might be efficient to distribute the refinement of the edges over multiple cores.

When there is a polysurface, the base mesh creation for each sub-surface can be distributed over the cores, and the refinement of the joined edges can again be distributed.

If you have a whole scene with multiple completely separate objects, then meshing each object can be distributed over the cores.
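
Here is a hedged sketch of that last case in plain Python (mesh_object() is a made-up stand-in, not Rhino’s or anyone’s actual mesher): because the objects are completely independent, whole objects can simply be farmed out to a pool of worker processes.

```python
from multiprocessing import Pool

def mesh_object(obj):
    # stand-in mesher: pretend each surface in the object yields 100 triangles
    return {"name": obj["name"], "triangles": obj["surfaces"] * 100}

if __name__ == "__main__":
    # a pretend scene of 32 separate objects with varying surface counts
    scene = [{"name": "object_%d" % i, "surfaces": (i % 8) + 1} for i in range(32)]
    with Pool() as pool:
        meshes = pool.map(mesh_object, scene)
    print(sum(m["triangles"] for m in meshes), "triangles in total")
```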

I guess that deciding which parts of meshing to distribute, and how to prioritize multi-threading over single-threading, will never be optimal in all situations. There will always be bottlenecks and special situations where choosing multi-threading slows the process down instead of speeding it up.

Disclaimer: I’m no expert but merely trying to think out loud based on my understanding and interpretations.

-Willem

John, perfect way of explaining multi-threading!

Will it be possible for the redraw to get a little faster? Recently I bought a better video card (an NVIDIA GTX 680) and saw no performance increase. I also tried Rhino on my friend’s new Mac Pro (with the ATI cards) and saw no difference. Some of the time I have so much stuff in a project (that I don’t want to turn off in the layers at the time) that I slow down to almost a crawl (which I’m used to).

Would this be something that could be fixed in the future, unlike the CPU problem?

It might be helpful to have an option where the program automatically switches the display mode to wireframe when you move the camera and then switches it back to whatever display setting you previously had, in situations like this. It could be an option you set in advance, for files that are particularly heavy… or globally.