Does Rhino 5 use all six cores on six-core processors? For example, on the Intel X79 chipset? Is it worth getting a six-core processor for Rhino?
No
Rhino is not a "multi-threaded" application. It does split off a few minor processes to other cores, but nothing major. That's because modeling is a serial process. Modeling has to be done "in order". Consider the example of a box with filleted edges in a shaded display. The render mesh needed for the shaded display can't be generated until after the edges of the box are filleted, and the fillets themselves can't be made until after the box itself is created. First the box is made, then the edges are filleted, then the render mesh is created. You can't put the box creation in one thread, the filleting in a second, and the mesh generation in a third and run all three processes at the same time.
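The dependency chain above can be sketched like this (the function names and data shapes are invented for illustration and have nothing to do with Rhino's actual internals):

```python
# Each step consumes the previous step's output, so none of the three
# can start before the one above it has finished.

def make_box(width, height, depth):
    return {"type": "box", "size": (width, height, depth)}

def fillet_edges(solid, radius):
    # Can only run once the box exists.
    return {"type": "filleted", "base": solid, "radius": radius}

def make_render_mesh(solid):
    # Can only run once the fillets exist.
    return {"type": "mesh", "source": solid}

box = make_box(10, 5, 2)
filleted = fillet_edges(box, 0.5)
mesh = make_render_mesh(filleted)
```

No scheduler can overlap these three calls, because each one's input is the previous one's output.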
Some tasks in computer work can be multi-threaded. Rendering is a good example. Since an array of pixels is being generated into an image, the image can be broken into 4 quadrants, and each processor can work on one quadrant independently.
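A toy sketch of that quadrant decomposition, with a made-up per-pixel shading function (threads are used here only to keep the example self-contained; a real renderer would use processes or native threads for actual CPU speedup):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH = HEIGHT = 64  # toy image size

def shade(x, y):
    # Stand-in for a per-pixel shading computation.
    return (x * 31 + y * 17) % 256

def render_region(bounds):
    x0, y0, x1, y1 = bounds
    return [(x, y, shade(x, y)) for y in range(y0, y1) for x in range(x0, x1)]

# Split the image into 4 quadrants. Each pixel depends only on its own
# coordinates, so the quadrants can be rendered independently.
hw, hh = WIDTH // 2, HEIGHT // 2
quadrants = [(0, 0, hw, hh), (hw, 0, WIDTH, hh),
             (0, hh, hw, HEIGHT), (hw, hh, WIDTH, HEIGHT)]

with ThreadPoolExecutor(max_workers=4) as pool:
    pixels = [px for region in pool.map(render_region, quadrants)
              for px in region]
```

The key contrast with the box/fillet/mesh example: here no pixel needs another pixel's result, so the work divides cleanly.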
Thanks for the reply.
I understand a good video card such as an NVIDIA® GeForce® GTX 780 Ti 3GB is very beneficial for Rhino 5 performance. Is this true?
A fast video card does not help Rhino calculate Booleans or generate meshes at all. Rhino doesn't use the GPU for any geometry calculation. What a good GPU does is let you Zoom/Pan/Rotate the view more quickly. It feels snappier. Rotating a big model in a shaded display mode will be smoother, less chunky, and will drop out less geometry when spinning it around. If you use a lot of textures and bitmaps, a GPU with lots of VRAM will be more responsive than one with less VRAM, but again, this has nothing to do with actual geometry calculations at all.
Another common misconception is about rendering. Your graphics card is not used for rendering. There are a few GPU tools now, like Neon, the real-time ray-traced viewport display mode, that are helped by a faster GPU, but that's about it.
I hope some of the other GPU experts chime in with their opinions too.
Oh ok. Thanks for the quick response. I have one more question, then I'm through for now. I have noticed Rhino is extremely memory-hungry. What exactly is Rhino doing with the memory?
Holomark2 used all six cores on my computer for quite a few of its tests. This is, of course, primarily a display tester, but the CPU has to be used to support the rendering.
Also: I think that John's explanation was over-simplified. While it should be obvious to most that even with just one processor Rhino can't do anything at all with objects that haven't yet been created, there are still opportunities, not realized in Rhino 5, where some of the elementary geometry operations could benefit from multiprocessing. Drawing a line with a start and end point, or even a multi-control-point wavy line, will never benefit from multiprocessing for the reasons John stated, but there are many things that would, especially when manipulating very large objects. Choosing and designing these algorithms and coding them takes time (months and years, not hours and days), and multiprocessing CPUs with enough cores to make it worthwhile are only now showing up at affordable prices and with enough supporting software to begin the work. So now some of the geometry tasks can be tackled. I would be very surprised and disappointed if V6 didn't multiprocess several of the more computation-intense geometry manipulation commands. At least, based on what the amazing Rhino developers have done in the past, I don't think they are just sitting around patting one another on the back and admiring their past work.
Because multiprocessing requires some setup overhead, even tasks that would benefit from it on large objects might actually be slowed on small objects. This means that every command that might use it must first determine whether it would be worthwhile: more overhead. This check could be done quickly compared to the MP setup, but to be done as quickly as possible it may very well require some changes to the object representation in Rhino - a major overhaul. (I'm speculating here, not being an MP programmer.)
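The "is it worthwhile?" check might look something like this in outline (the threshold value, function names, and the per-point operation are all invented for illustration; real cutoffs would have to be measured):

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical cutoff below which the parallel setup overhead
# outweighs any gain. The number is illustrative, not measured.
PARALLEL_THRESHOLD = 100_000

def transform(points):
    # Stand-in for a per-point geometry operation.
    return [(x * 2.0, y * 2.0, z * 2.0) for x, y, z in points]

def transform_all(points, workers=4):
    # Small jobs: skip the setup overhead and just run serially.
    if len(points) < PARALLEL_THRESHOLD:
        return transform(points)
    # Large jobs: split into chunks and farm them out to workers.
    chunk = len(points) // workers + 1
    parts = [points[i:i + chunk] for i in range(0, len(points), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(transform, parts)
    return [p for part in results for p in part]
```

The point is exactly the poster's: every such command pays for the size check itself, and picking good thresholds per command is real engineering work.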
So my take on the answer to your question is that while 4 processors seem to be common and affordable these days, and advantageous in general computer use, there is probably no big advantage to spending the extra money on 6, 8, 10, or 12 cores to use with Rhino 5. Maybe it will be a good idea for the computer you buy after the one you buy today.
All true.
I've been in Tech support for 25 years.
Sometimes a little inaccuracy can save tons of explanation.
We are working on ways to use multi-threading when possible and it will help in some specific situations to some limited degree. It will never deliver the expectations of users.
For the people that don't understand the serial vs. parallel processing example, I have a backup example:
The normal gestation period of a human baby is 9 months.
You can't wire up 9 women for parallel processing and get a full-term baby in 1 month.
After that, they usually "get it".
Some scientists may disagree with you. But I understand you.
Sure it will. After the users adjust their expectations to reality.
"The Mythical Man-Month" - Fred Brooks
Yes I understand. Thanks for the reply. Computers are just getting better and better, and more affordable.
Sort of.
For many years, computer chips were getting faster and faster AND less expensive.
They've hit a technological performance limit and have tried to pass off multiple cores as being the same thing. It's not. Some applications and processes can benefit hugely from multiple cores while others can't. The industry has left the education up to the users. Marketing efforts have not helped.
If you do a lot of rendering, and I do, then I would also add:
- During a rendering, every CPU and core (including hyper-threading) will be used, including those on a network.
- The new V-Ray 2.0 has an RT/Real-Time render feature, where it can also use the video GPU.
- These RT previews can then be saved out.
Good explanation John, but you could multi-thread a single operation; Rhino just hasn't done it, AFAIK.
The MOI mesher, for example, uses all your available cores to mesh a model. It's unbelievably fast on a multi-core machine.
…Someday?
G
Yes, I think I heard that meshing is one of the candidates Steve is looking at for multi-threading. It would probably be done in conjunction with writing a new mesher instead of trying to modify the current one.
no need, you can always do this…
I think that's what we've been doing, except there should be a few Band-aids on the poor creature's legs!
No you cannot:
1+1 cannot be multithreaded
However; meshing is not a single operation per se.
If I have to mesh a single rectangular surface, it is probably not efficient to divide it up into multiple operations.
Yet, once that single surface has irregularly trimmed edges, it might be efficient to distribute the refinement of the edges over multiple cores.
When there is a polysurface, each sub-surface can get its base mesh creation distributed over cores, and the refinement of joined edges will again be distributed.
If you have a whole scene with multiple completely separate objects then meshing each object can be distributed over the cores.
I guess that making decisions on what part of meshing you distribute, and how you prioritize multi-threading over single-threading, will never be optimal in all situations. There will always be bottlenecks and special situations where choosing multi-threading slows down the process instead of speeding it up.
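The easiest of the levels described above, distributing whole separate objects over cores with a serial fallback when there's nothing to split, might look like this in outline (all names and the "refinement rule" are invented; this is a sketch of the idea, not anyone's mesher):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins: an "object" is a list of surfaces. Meshing one object
# is independent of the others, so objects can be farmed out to cores.
def mesh_surface(surface):
    return {"faces": surface["spans"] * 2}   # invented refinement rule

def mesh_object(obj):
    return [mesh_surface(s) for s in obj]

def mesh_scene(objects, workers=4):
    if len(objects) < 2:
        # Nothing to distribute: a single object goes down the
        # serial path, avoiding the pool setup overhead entirely.
        return [mesh_object(o) for o in objects]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(mesh_object, objects))
```

Going a level finer (per-surface, or per-edge refinement) would need the same kind of worthwhile-or-not decision at each level, which is exactly where the bottlenecks Willem mentions come from.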
Disclaimer: I'm no expert but merely trying to think out loud based on my understanding and interpretations.
-Willem
John, perfect way of explaining multi-threading!
Will it be possible for the redraw to get a little faster? Recently I bought a better video card (NVIDIA GTX 680) and I saw no performance increase. I also tried Rhino on my friend's new Mac Pro (with the ATI cards) and saw no difference. Some of the time I have so much stuff in a project (that I don't want to shut off in the layers at the time) that I slow down to almost a crawl (which I'm used to).
Would this be something that could be fixed in the future unlike the CPU problem?
It might be helpful to have an option where the program automatically turns the display mode to wireframe when you move the camera and then turns it back to whatever display setting you previously had, in situations like this. It could be an option you set in advance, for files that are particularly heavy… or global.