Greetings to all members of the forum. I would like to express my gratitude for the abundance of valuable information that has been shared here, which has significantly contributed to my understanding of the software. As a professional architect, I have encountered challenges while working on a project that involves a complex geometry with many faces, twisting seamlessly in a non-planar environment. In my search for solutions, I came across an old YouTube video discussing the Xeon Phi CPU and its ability to offload certain calculations, freeing up the main CPU for other activities. While investing in a higher-end CPU would be the most effective option, it would require a costly upgrade of several components in my PC. Thus, I would like to inquire:
Is it possible to configure Rhino to offload calculations to another CPU in the PCI-Express slot?
If so, have any forum members experienced this setup?
Lastly, what level of impact does this configuration have on performance?
Thank you for taking the time to review my inquiry, and I look forward to engaging with the community in the future.
I have a machine with a Xeon Phi card, and while it is possible to give it work to do, it is in no way simple. The card runs its own operating system, a flavor of Linux, and you have to compile your code specifically to run on it, since it is a different architecture (similar to how Apple Silicon is a different architecture and requires its own builds).
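To give a sense of what "giving it work to do" involved, here is a rough sketch of the explicit offload style Intel's compiler supported for these cards (written from memory, so treat the exact pragma syntax as approximate; with any other compiler, or with no card installed, the pragma is simply ignored and the loop runs on the host):

```cpp
#include <cstdio>

int main()
{
    const int n = 1000000;
    double *x = new double[n];
    for (int i = 0; i < n; ++i)
        x[i] = 0.001 * i;

    double result = 0.0;

    // Intel's legacy offload pragmas: the compiler builds this block a second
    // time for the coprocessor's instruction set, copies x over the PCIe bus,
    // runs the block on the card, and copies result back. None of this happens
    // automatically for code that was not written this way.
    #pragma offload target(mic) in(x : length(n)) out(result)
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+ : s)
        for (int i = 0; i < n; ++i)
            s += x[i] * x[i];
        result = s;
    }

    std::printf("sum of squares = %f\n", result);
    delete[] x;
    return 0;
}
```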
Beyond this, it is not just that you need to compile your code for the architecture; you need a problem that is amenable to being coded to exploit the particular type of parallelism (vector) the card provides. I was able to get a renderer to compile for it, which involved a lot of work getting the renderer's third-party libraries to compile as well, but without changing how the renderer itself worked (and it was well able to use all the cores), it performed on the card roughly on par with the Core i7-4771 running the host machine.
In other words, the code was not aware of the degree and type of parallelism the card provides, so it could not leverage it; it simply ran the way you would expect on a 57-core machine with non-hyperthreaded cores derived from the original P54C Pentium design, running at barely over 1 GHz.
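To make that concrete: the card only pays off when the hot loops are explicitly data-parallel, wide enough to fill the 512-bit vector units, and threaded across all the cores. A generic sketch using OpenMP (purely for illustration, not taken from that renderer):

```cpp
#include <cstdio>
#include <vector>

int main()
{
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;

    // A loop like this can be spread across all ~60 cores and packed into the
    // 512-bit vector units (16 floats per instruction on Knights Corner).
    // Code whose inner loops do not look like this just sees a pile of slow,
    // in-order, ~1 GHz cores.
    #pragma omp parallel for simd
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    std::printf("y[0] = %f\n", y[0]);
    return 0;
}
```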
Besides all this, Xeon Phi is now a long-deprecated technology, even the later standalone Knights Landing CPU versions that could be installed on the motherboard as the sole CPU (being a hardware junkie, I have one of those as well).
Thank you for your response. It appears that the suggested workflow may require a significant amount of effort and may not be as efficient as one would prefer. However, I can see that investing in a higher-performing CPU or even two of them could potentially improve workflow efficiency.
Additionally, we could consider utilizing a separate processing machine to handle final outputs, as described in your previous message. I believe these solutions could help us achieve a more continuous workflow and ultimately lead to better results. Are there any newer alternatives to the PCIe CPUs that you have come across? Forgive my ignorance, but would the higher-end server-side Nvidia cards be a possibility?
Rhino itself has no built-in capability for offloading calculations to extra computational hardware, be it CPUs on a PCIe board or GPUs (not counting Raytraced/Rhino Render, which use GPU acceleration).
If, however, you are developing a plug-in and have control over its computation engine, you should be able to write code that accesses and utilizes such hardware, as long as you realize that this is something you will have to do yourself; it is not provided by Rhino.
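To make that concrete, here is a minimal sketch of the kind of thing such a compute engine might contain if it used OpenCL to push work to a GPU. The kernel, the buffer size, and the "scale some coordinates" task are invented purely for illustration, and error handling is trimmed; none of this touches the Rhino SDK, and the plug-in would feed the results back through the normal SDK calls afterwards.

```cpp
// Generic OpenCL host code a plug-in's own compute engine might use.
// Link against the OpenCL library; nothing here is Rhino-specific.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char *kSource = R"(
__kernel void scale(__global float *pts, float factor)
{
    size_t i = get_global_id(0);
    pts[i] *= factor;
})";

int main()
{
    // Take the first platform and GPU device the driver reports.
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // Some coordinate data the compute engine wants to transform on the GPU.
    std::vector<float> coords(1024, 1.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                coords.size() * sizeof(float), coords.data(), &err);

    // Build the kernel from source and run it once over every element.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "scale", &err);

    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);

    size_t global = coords.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);

    // Copy the results back to the host for the plug-in to use.
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, coords.size() * sizeof(float),
                        coords.data(), 0, nullptr, nullptr);
    std::printf("coords[0] = %f\n", coords[0]);

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```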