That is generally what happens, but we’re talking about laptops with Intel’s archaic garbage integrated video here… And more generally, when you make more use of the hardware, you are more at the mercy of the hardware drivers to keep things stable. Having to keep things more up to date is the price you pay.
Certainly. If it happens in Rhino, in my code, and I am able to reproduce it, then I will be able to fix it, and I will.
With respect, while you may be, I’m not: my potato uses Nvidia graphics with Rhino.
For performance reasons I usually use Raytraced in a smaller floating viewport. I have a toolbar with buttons for various sizes of floating viewports, so I can always open one quickly when needed for Raytraced and close it when I go back to designing. Like this:
here is the macro:
-NewFloatingViewport _Projection _Perspective _Enter
-ViewportProperties _Size 900 600 _Enter
-SetDisplayMode _Mode _Wireframe _Enter
Thought this might help in this context. Oh, and obviously for weak mobile GPUs I would try an even smaller resolution.
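If you prefer one scripted button over several macro buttons, the macro above can be generated from a small Python helper. This is just a sketch: the helper name is mine, and `rs.Command` is the standard rhinoscriptsyntax entry point for running a command string inside Rhino.

```python
def floating_viewport_macro(width, height, mode="_Wireframe"):
    """Return a Rhino command macro (the same three commands as above)
    that opens a floating perspective viewport of the given pixel size
    and display mode."""
    return (
        "-NewFloatingViewport _Projection _Perspective _Enter "
        "-ViewportProperties _Size {w} {h} _Enter "
        "-SetDisplayMode _Mode {m} _Enter"
    ).format(w=width, h=height, m=mode)

# Inside Rhino you would run it like this (assumption, untested outside Rhino):
#   import rhinoscriptsyntax as rs
#   rs.Command(floating_viewport_macro(900, 600))
print(floating_viewport_macro(900, 600))
```

One function instead of a toolbar button per size also makes it easy to prompt for the size, or to pass `"_Raytraced"` as the mode once the viewport is open.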
Also, the floating viewport method is important for me because this way I get the aspect ratio that I intend to render at later. The maximised viewport that I work in never has the aspect ratio that I want the renderings to have.
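The arithmetic for sizing such a floating viewport to match a target render aspect ratio is simple; here is a sketch under my own naming (this is plain math, not a Rhino API):

```python
def viewport_size(render_w, render_h, max_w):
    """Return (width, height) for a floating viewport that keeps the
    target render's aspect ratio, with the width capped at max_w so the
    viewport stays small enough for interactive Raytraced work."""
    w = min(render_w, max_w)
    h = round(w * render_h / render_w)
    return w, h

print(viewport_size(1920, 1080, 900))  # 16:9 target at 900 px wide → (900, 506)
```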
Yeah, that is so true! …and why we wished for Rhino to support a render safe frame in both height and width all the way back in the 90’s. The view should always fit the render output IF set to anything other than “viewport proportion” by the user. Coming from 3D Studio to Rhino, not finding this respect for the output dimensions was a surprise. And I wish for Raytraced to ONLY render that space too. So I too have a similar setup to force Rhino into behaving like a true rendering app. Maybe something for V8?
For folks having performance issues with large viewports, there’s a DPI scaling option for Cycles buried in the Advanced options. It allows you to have the viewport render at a smaller resolution and scale it up to a larger viewport size. It works in Rhino 6 but it doesn’t seem to work in 7 Beta right now.
I think for a lot of folks with 4K or other hi-res displays this scaling is really handy, even with a modern GPU. I’d rather quickly see a slightly blurry version of my render than wait longer for a sharp one. Also, the program in general stays more responsive.
I think it would be really handy for a lot of folks (especially laptop users!) to have that setting fixed, but also made available in an easy-to-discover place like View Settings/Raytraced/Other Settings.
Blender exposes this feature in a pretty obvious Performance tab and I use it all the time there. Unfortunately, Blender only allows whole-integer scaling factors. I do like being able to use 1.5 scaling in Rhino 6; it’s often the right compromise between speed and resolution.
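For context on what the scale factor buys you: the effective resolution Cycles renders at is just the viewport size divided by the DPI scale, and the result is stretched back up to fill the viewport. A quick illustrative sketch (the function name and numbers are mine, not from Rhino):

```python
def effective_resolution(viewport_w, viewport_h, dpi_scale):
    """Pixels actually rendered when the result is scaled up by
    dpi_scale to fill a viewport of the given size."""
    return round(viewport_w / dpi_scale), round(viewport_h / dpi_scale)

# A 4K viewport at the 1.5 scale mentioned above renders only
# 1/1.5² ≈ 44% of the pixels, hence the speedup:
print(effective_resolution(3840, 2160, 1.5))  # → (2560, 1440)
```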
This indeed works only in v6, not in v7. Hopefully I can reinstate it in the near-ish future.
The idea of making powerful video cards, also known as GPUs, to offload work from CPUs made sense before multi-core CPUs were common. It does not make sense now. If your CPU can do all the heavy work, why would you want to keep it idle and spend lots of money on a powerful GPU?
Here is a comparison of a powerful CPU (Intel Xeon Platinum 8180) with a powerful GPU (Nvidia V100 PCIe (Volta)): https://www.xcelerit.com/computing-benchmarks/insights/benchmarks-intel-xeon-scalable-processor-vs-nvidia-v100-gpu/
Intel Xeon Platinum 8180 price: https://www.amazon.com/dp/B0745HMV17
Nvidia V100 PCIe (Volta) price: https://www.amazon.com/dp/B07JVNHFFX
By the way, Roy Hirshkowitz (developer of Flamingo) is betting his rendering business on the superiority of multi-core CPUs.
You are joking, right?
Please tell me you are joking…
Powerful video cards and weak CPUs are useful for rich kids who play video games. They are not useful for CAD users.
Graphics cards should have been integral parts of CPUs in order to improve their graphics performance and to make them easily available for non-graphics tasks. They exist due to historical reasons only. I hope they are gone soon.
The benchmarks of theoretical performance listed there are absolutely irrelevant to real-world performance.
The only way to properly measure this is with actual renderings: showing the quality and speed results (rendered images, not numbers or bars) of a GPU solution vs. a CPU solution.
Also, judging a hardware technology by its association with a bad or inefficient render engine can add a lot more noise (no pun intended) to the comparison.
I can tell you from experience, including lots of render-speed testing, that we have stopped using any CPU-based rendering engine in the last couple of years. We only use the CPU on very large scenes where the memory limit of a GPU does not let us complete the rendering, and we do so at a huge speed sacrifice. For example with Vray, we can compare the speed of CPU vs. GPU on the exact same scene very easily.
More importantly, for most of our day-to-day work we don’t even render anymore, but rather use realtime GPU-based view captures with PBR materials. This is a whole different level of visual production, where we can get usable images in real time at about 20 FPS.
I’m not interested specifically in arguing with you and your usually outlandish theories. I just want to set the record straight for people who try to get work done, and separate facts from fiction.
There are two kinds of debaters: those who use insults, and those who use carefully verified facts. If a CPU-based renderer is very slow, it means that it utilizes only one core rather than all cores.
Hi @nathanletwory ,
Before they get completely lost in the noise, could you confirm that the incidents of Rhino crashing that I posted (with crash reports submitted) have been picked up at McNeel? 'Twould be nice to see something tangible come from this thread.
I have seen the crash reports, but have not been able to reproduce on my end yet. A little over a month ago I had made changes that should’ve prevented this from happening, but maybe I missed some corner case. At any rate it is down to timing, which makes it very hard for me to reproduce.
I have tried many different ways to reproduce it, including quickly toggling back and forth between Raytraced and other display modes, resizing the viewport, and tweaking materials a lot. On my end, the aforementioned fixes completely removed the crashes with stack traces like the ones you sent in.
I will always keep trying to reproduce these kinds of crashes, but they often go on the back burner when I can’t reliably reproduce them; there are still other issues to fix.
Rest assured, these types of crashes are not forgotten, and I do look into them.
Never doubted it! If you don’t have access to a Surface Book (i.e. the machine known to produce the problem) are there any tests I could run for you on mine to help?
I think the steps you already gave should help me. I have a rough idea of where the problem is, but to be sure I’d need it to happen on my own machine with all debug symbols and tools available.
If anything, maybe we could do a remote debug session when we find a time that fits both of us? I’ll ask more in a private message.