Maximum # of CPU cores and dual CPU config. in Rhino 8


Ultimately all you can and should do is think about what you want the CPU to do, and go from there.

My suggestion of an AMD 7700X or 9700X was purely a balance between a CPU that is overkill for Rhino, and one which will allow you to do some CPU rendering, should you ever want or need to (choice of renderer, poor GPU rendering, or lack of GPU support).

But Rhino as a modeller (not Cycles, nor graphically) will, for most users, run fine on a modern quad-core CPU.


If it makes my blend edges faster, then I would pick a faster CPU, is what I was thinking. But yeah, you are probably right and I am overthinking it; my i7-3930K is over a decade old, and anything of today would be faster at this point.

Thanks for the answer.

Hello Jim,

Yes, thank you; and the proof is in the “doing,” per the posted example where I BooleanUnion a 1000-object array and it takes roughly 15 seconds on a years-old 4-core Xeon with a 3.1 GHz base clock.

Thank you,

Andy

I would not say never. When you run into a case where you have to render a 75 GB scene and your GPU can handle only 48 GB, but you have, say, 256 GB of RAM, you may very well want to render on the CPU.


Hello Jacob,

Thank you. While I have received some quite credible indication that in Rhino 8 McNeel has eliminated the previous system requirement that only up to 63 CPU cores [I guess either total or per CPU] were supported.
However, I what have not seen, is what the [if any] “official from McNeel” maximum CPU core count is? The reason I am asking is similar to your Keyshot example where apps that can take advantage of many cores will be utilized, at times concurrent with Rhio running. Have you please seen an “official” statement regarding this? Has anyone?
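For context, the old 63-core figure lines up with Windows processor groups, which historically capped a single process at 64 logical processors per group unless the application was group-aware. A minimal sketch, assuming Python 3 on Windows (the Win32 calls are real; the interpretation of the numbers is up to you), to see what the OS actually reports:

```python
# Minimal sketch (assumption: Python 3 on Windows). Reports how many
# logical processors the OS exposes and whether they span multiple
# processor groups -- the mechanism behind the old 64-per-group ceiling.
import ctypes
import os

kernel32 = ctypes.windll.kernel32
ALL_PROCESSOR_GROUPS = 0xFFFF  # sentinel value from the Win32 docs

print("os.cpu_count():", os.cpu_count())
print("processor groups:", kernel32.GetActiveProcessorGroupCount())
print("logical processors (all groups):",
      kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS))
```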

Thank you,

Andy

There is at least one Rhino/bella user I know of who uses a 64-core Threadripper 7980X (64C/128T).

I think, given you are coming from a quad-core Xeon several generations old, the jump to any modern dual-socket setup is going to be so large that any further differential you would notice above [2 x 24 cores] would be hard to even measure.

Ultimately, if you have the money, you can get an AMD Threadripper PRO 7995WX with 96 cores and 192 threads. There is then no need for a large dual-socket, server-class board when you can have a motherboard that is about E-ATX (workstation) sized. In an almost literal sense, Rhino will be using at most 2-3% of such a CPU for most of its work, on average.

FYI, I often use bella render limited to 20 or 22 of my 24 threads on my Intel i7-13700K, keep Rhino open at the same time, and it functions fine.
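The same headroom idea applies to any CPU-bound batch work you script yourself: cap the worker count a couple below the logical-core count so the UI stays responsive. A generic sketch under that assumption; `heavy_task` is a placeholder, not anything bella-specific:

```python
# Generic sketch: run CPU-bound jobs while leaving ~2 logical cores
# free for interactive work. The task below is a stand-in placeholder.
import os
from concurrent.futures import ProcessPoolExecutor

def heavy_task(n):
    # Stand-in for one CPU-bound job (e.g., a render bucket or a mesh pass).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Keep roughly two logical cores free so the UI stays responsive.
    workers = max(1, (os.cpu_count() or 4) - 2)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(heavy_task, [1_000_000] * workers))
    print("ran {} tasks on {} workers".format(len(results), workers))
```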

There is simply no reason today to buy a computer where this concern matters for Rhino. If you need that much CPU rendering and have that kind of budget, you could build a small render farm for less, if CPU render farms are still a thing people do.

I don’t see a file with that example. That’s a case where workflow matters more than hardware: every additional object in a Boolean operation slows down the whole thing, so a CPU 5% faster than some other one you could have picked is not going to matter against that exponential growth.

It seems like you have been agonizing over this choice for a very long time, while what you have is so old that literally anything picked at random from Best Buy would be better. Just get something and move on.

Hello Jim,

I could post the file, but it really is so simple [especially compared to some of the work of yours I have seen]. I just made an array of rectangles and had them intersect/overlap sufficiently that I was pretty certain the BooleanUnion would work [the file tolerance was only 0.001 mm]. I did consider that one reason the BooleanUnion might fail could be the large number of objects. I could recreate the file and post it if that would help; a sketch of roughly that test appears below.
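A minimal RhinoPython sketch of that kind of test; the counts, sizes, and overlap here are illustrative stand-ins, not the original file’s values:

```python
# Rough recreation of the test: a 40 x 25 grid of overlapping boxes
# (1000 objects), then a single timed BooleanUnion over all of them.
# All dimensions are illustrative assumptions.
import time
import rhinoscriptsyntax as rs

size, overlap = 10.0, 2.0   # each box 10 units wide, overlapping by 2
step = size - overlap
boxes = []
for i in range(40):          # 40 * 25 = 1000 boxes
    for j in range(25):
        x, y = i * step, j * step
        corners = [(x, y, 0), (x + size, y, 0),
                   (x + size, y + size, 0), (x, y + size, 0),
                   (x, y, size), (x + size, y, size),
                   (x + size, y + size, size), (x, y + size, size)]
        boxes.append(rs.AddBox(corners))

t0 = time.time()
rs.BooleanUnion(boxes)
print("BooleanUnion of 1000 boxes: {:.1f} s".format(time.time() - t0))
```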

I apologize if my first post in this thread did not articulate clearly the central nature of my inquiry. Currently [and I suspect in the future] the apps being used are “all over the map” with respect to their resource use. It is really quite remarkable. Some are multi-core/multi-thread; some are GPU intensive but, given an Nvidia card, not necessarily heavy users of CUDA cores like Cycles; rather they use dedicated GPU memory and no CUDA cores, and/or the Graphics_1, Compute_1, or 3D engines, or some combination of them. System RAM use likewise varies markedly by app: while a starting-to-get-larger Rhino model can use 50 GB of system RAM, another app with a modest 30-second video loaded can use just as much.

But as I believe was indicated, Rhino will run just fine given that it is allocated a core/thread with a decent CPU clock speed, sufficient RAM, and [perhaps not many] viewports in Raytraced mode, with ample GPU power [assuming GPU rendering].
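One way to put numbers on that “all over the map” observation is to sample per-process CPU and RAM while a typical workload runs. A minimal sketch, assuming the third-party psutil package is installed; the process-name fragments are hypothetical examples, not actual executable names:

```python
# Minimal sketch (assumptions: psutil installed; name fragments below
# are hypothetical examples). Prints CPU% and RAM of matching processes.
import psutil

watch = ("rhino", "bella")  # hypothetical name fragments to match
for p in psutil.process_iter(["name", "memory_info"]):
    name = (p.info["name"] or "").lower()
    if not any(w in name for w in watch):
        continue
    try:
        cpu = p.cpu_percent(interval=1.0)   # sample over one second
        rss_gb = p.info["memory_info"].rss / 1e9
        print("{}: {:.0f}% CPU, {:.1f} GB RAM".format(p.info["name"], cpu, rss_gb))
    except psutil.Error:
        pass  # process exited or access denied; skip it
```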

As we appreciate, the term “multitasking” can factually apply to a computer workflow, while, contrariwise, it is pure and extremely effective marketing when applied to a human being. The human mind can only give attention to one task [operation] at a time. The illusion of doing many tasks concurrently, when scrutinized closely, reveals a masquerade of apparent concurrency when what is actually happening is sequential. The magic “sleight of hand” is the ability to toggle rapidly between a multitude of tasks, thus giving the perception of many tasks done concomitantly.

The “rub”: a computer workflow can be organized so that a number of compute-intensive tasks are assigned to the computer to be executed at the same time, given a sufficient quantity of the [now] usual wide spectrum of specialized computing resources. Multitasking can then truly happen for the human in partnership with the computer. The computer proceeds with [let us say] four compute-intensive tasks. Even as these tasks are in process, the human continues, or starts, a new computer-based project. The computer still has ample reserve, a wide spectrum of currently unused specialized resources, allowing the human to proceed without lag in the workflow on this project.

Human multitasking is now occurring, as five independent tasks proceed concomitantly. The fifth task might be modeling, and [in a four-viewport configuration] three of the viewports may be in Raytraced mode and need to be rotated, with the expectation of good-to-excellent real-time raytraced rendering, thus giving the modeler good feedback about the look of the design so the modeler can work as effectively and efficiently as possible, including feedback that strongly suggests the wise next step is to scrap the design altogether. Or, as one or more of the four in-process tasks complete, the modeler finishes up a model and is ready to send it down the workflow pipeline to a compute-intensive processing stage.

Hopefully, I did better at explaining the nature of the inquiry [I did try] and the relevance of wanting to know how many cores Rhino 8 supports per CPU, and in total given a dual-CPU set-up; and how it may function in a dual-CPU set-up where one may be able to achieve a better balance of clock speed and core count. [In all of this, throughput has for sure not been “forgotten”.]

Tom, your observation is correct. I have not only been agonizing over these matters for a very long time but have also been developmentally stifled by them. Additionally, I have been politely and constructively criticized that the amount of pooled GPU capability I have suggested is needed is a marked overestimation of the actual need.

However, as I am deeply appreciative of all the help I have benefited from [and along the way I try to help others in small ways, and imagine eventually being able to help others more], I have also received quite credible feedback that the system needs and concerns delineated are justified.

So while quite credible folks have correctly assessed my personality as not being all that patient, on the one hand, this trait is balanced on the other by this one: if a barrier exists along one path that will eventually lessen or dissipate, I do what I can along the impeded path and spend the difference on other doable activities, which for me are often tasks I have been ignoring but that indeed need to get done.

Thank you,

Andy

As far as I am aware, there is no limit on core count to run Rhino. I have seen it run on a 128-core machine without problems.

Hello Nathan,

Thank you. Also, just to confirm: for “who knows” what reason, a 128-core count would not prevent opening Rhino files made in versions prior to Rhino 8? Particularly files made in Rhino 7 and Rhino 6.

Thank you,

Andy

If Rhino starts, all functionality should be okay, including opening/upgrading old files. I’m running Rhino (6, 7, 8) on a 16-core/32-thread machine and it runs flawlessly.

Opening older files has no bearing on Rhino’s ability to run on high-core-count hardware.

Going from a [2 x 4]-core E3-1220v6 Intel Xeon to 128 cores of anything modern will likely be between 1000% and “comical%” improvement for all-core workloads. (The core count alone is a 16x jump, roughly 1500%, before any per-core gains.)

Hello Tay and Nathan,

Thank you so much. Appreciated. As is likely apparent, I prefer to have the longest hardware life cycle possible. This works out to be cost-effective, as the indirect overhead costs from purchase to set-up and testing are minimized. These indirect costs are essentially fixed, so the longer the hardware lasts, the lower the e.g. 20-year cost is.

Thank you,

Andy

Hello David,

Thank you. As is also likely apparent, this is the first time I am working through a new computer system where the apps have such a diverse set of hardware needs to function at a high level, plus a doubly difficult system-configuration puzzle if the expected workflow is for multiple compute-intensive apps to run concurrently. The expectation is that this type of organizing work goes from “as best as one can around the fringes” to much more of a “best practices” way to organize doing the project work.

Thank you,

Andrew

p.s. Built into this is an opportunity, indeed an encouragement, to experiment, take risks, and not be afraid at all to have failures in the pursuit of innovation, etc.