Rhino After 8 Years (long!)

Hi folks,

I’ve just installed Rhino for the first time in 8 years, after a long hiatus using many other things. This is feedback based on my experience thus far. I hope it’s helpful:

  1. Rhino 8’s visualization is FAST! …until you start applying certain types of Materials.

With modern GPU acceleration, I’m able to do things that just weren’t practical when I last used Rhino (or anywhere else, really, unless you had access to really high-end hardware). For the most part, I’m happily impressed by the speed at which it renders NURBS.

However… this all starts slowing rapidly when PBR Materials with any complex math (noise, fractals, etc.) start getting applied.

So long as I stick with simpler Materials or only use Arctic, things are fine. When Materials with things like Simplex Noise are applied, performance nosedives in Rendered view.

Arctic is great, though, for knowing what the current forms are, etc. Such an improvement over the old Solid render!

It feels like there should be some in-between: a mode that’s like Arctic (call it “Colored Arctic”, maybe?) but stores an average of the RGB values of the procedural Materials behind the scenes, so that we at least have a good sense of the color values as we’re iterating. Right now it feels like Materials should only be applied at the very end.
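The averaging I’m imagining could be precomputed once per Material. A toy Python sketch of the idea; the procedural function here is a made-up sine-based stand-in for a real noise like Simplex, not anything Rhino actually exposes:

```python
# Sketch of the "Colored Arctic" idea: average a procedural Material's
# RGB once up front, then shade with that flat color instead of
# evaluating the expensive noise per pixel.
import math

def procedural_color(u, v):
    """Hypothetical procedural texture: cheap sine-based pseudo-noise."""
    n = 0.5 + 0.5 * math.sin(12.9898 * u + 78.233 * v)
    return (0.8 * n, 0.4 + 0.3 * n, 0.2)  # reddish-orange ramp

def average_rgb(sample_fn, steps=32):
    """Average the texture over a UV grid to get one flat RGB."""
    r = g = b = 0.0
    for i in range(steps):
        for j in range(steps):
            cr, cg, cb = sample_fn(i / steps, j / steps)
            r, g, b = r + cr, g + cg, b + cb
    count = steps * steps
    return (r / count, g / count, b / count)

flat = average_rgb(procedural_color)
print(flat)  # one RGB triple, evaluated once instead of per pixel
```

The point is that the cost is paid once at assignment time, so the viewport shading stays as cheap as a plain single-color Material.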

I’ve done further testing, and if the PBR Materials are kept to one color, performance is quite acceptable even with a fairly complex model, even when adding in some translucent materials, etc. So this is clearly an issue w/ the performance of the noise, etc.

  2. Thus far, I have had far fewer “can’t do this” errors from Booleans. Not zero; there are a few situations I’ve hit where Rhino still can’t quite figure out what to do. But it’s night and day compared to how things were. This is a major improvement in feel; I don’t feel nearly as constrained by the Boolean solver. Same with Lofts, which I have yet to break!

  3. I absolutely adore PushPull for quick modification of surface details!

However, being a greedy end-user who just wants more… I wish it would auto-project onto the Surfaces, rather than requiring an intermediate step, with an option to project entirely through (i.e., both sides get the projection) or to project onto only the nearest hit. Having to do the projection onto the Surfaces and delete the Curves I don’t want seems like an unnecessary step.

I’d also really like a way to do PushPull, but create new capped solids when pulling “out” from the Surfaces, rather than cut the Surfaces of the existing solids. This would reduce model complexity somewhat and be easier to manage for larger work, where a PushPull step, once taken, may be a permanent change, difficult to undo later.

To put it another way: if I’m working on a design for something, and I love the basic geometry, I don’t want to have to commit forever to surface detailing merely because I did a PushPull. I want that base shape intact, uncut. I realize that I can probably do this, by storing the base form and using it for a final boolean on the modified form later… but again, this feels like an unnecessary interruption to workflow (and also means I have to plan far in advance, rather than just keep creating).

I’d also like a PushPull that can handle “domed” geometry, by creating a Loft ending in a Point. Think: small pebbly grip details on a shoe; small ornamental work on jewelry, or greebles on conceptual pieces, sci-fi art for 3D printing, etc.

  4. The UI has changed a lot. Some of it’s good; some of it has me typing a lot to find commands rather than using my mouse.

In general, I find myself both wanting to figure out how to tie all my commonly-used commands to the middle-click context menu… and not wanting to, because that means I’d have to figure out how to add commands to said menu.

When using Rhino, I tend to stick with an old-fashioned modeling workflow- Extrude, Revolve, Loft, the occasional Curve Array- followed by Booleans, sometimes with all of the intermediate steps w/ Curves and Polylines to get the form right.

This has always felt like a good process for quick iteration, and no longer feeling constrained by hardware is great.

But the process spans multiple commands spread around the UI, and for a few things I can’t for the life of me find the icon… so now I’m typing into the command line a lot. I’m sure at some point soon I’ll burn the hour or so it’ll take to tie everything to that hotbar, but I’ve had some horrible experiences with that kind of UI in the past (in one application, I lost all of it because it was tied to a project that was deleted, for no particular reason, and in several others I lost all that invested work when the software updated), which makes me a bit worried about committing to it.

  5. I dearly miss toon-style rendering like the old Penguin used to do.

This is something I’m sure a lot of people are missing (but not enough, apparently, to save Penguin, which makes me sad). The current rendering setting to draw polylines doesn’t allow exclusion of isocurves, for example, so there’s no way to achieve roughly the same look quickly, even if you mess w/ the PBR system to get something vaguely like the old Penguin blurred Blinn-Phong look.

Simply being able to exclude isocurves would be a big change, but being able to adjust how the edge curves are blended into the final result, via a color and a blend type (multiply, additive, etc.), would also be very helpful for line-drawing styles.
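The blending I have in mind is just a per-pixel composite of an edge-line layer over the render. A rough sketch, assuming normalized 0–1 RGB buffers; all the names here are made up for illustration:

```python
import numpy as np

def blend_edges(render, edge_mask, edge_color, mode="multiply"):
    """Composite an edge-line layer over a render.
    render:     HxWx3 float array in 0..1
    edge_mask:  HxW float array, 1.0 where an edge line is drawn
    edge_color: RGB triple for the lines
    """
    color = np.asarray(edge_color)
    if mode == "multiply":
        # darken toward the line color where the mask is set
        out = render * (1.0 - edge_mask[..., None] * (1.0 - color))
    elif mode == "additive":
        out = render + edge_mask[..., None] * color
    else:
        raise ValueError(mode)
    return np.clip(out, 0.0, 1.0)

render = np.full((4, 4, 3), 0.5)          # flat gray "render"
mask = np.zeros((4, 4)); mask[1, 1] = 1.0  # one edge pixel
toon = blend_edges(render, mask, (0.0, 0.0, 0.0), mode="multiply")
```

With black lines and multiply you get the classic ink-over-shading toon look; additive with a light color gives a glowing-wireframe style instead.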

  6. The post-FX system in Rendering feels like it’s not quite polished; clearly, post-FX are supposed to have user-accessible settings, but the main ones, like the denoiser, don’t.

I presume there are ways to alter the denoising / AA / etc. in the main rendering settings, but it’s a bit clunky having to go different places in the UI to get things done and iterate.

Bloom feels very all-or-nothing, with no in-Material control over how it’ll respond, other than the Intensity of the light transmission.

The separate renders for things like Glow are neat.

The Normals Channel either doesn’t do what I’m expecting (a view of the normals of the geometry as rendered, for use as a normalmap elsewhere) or it’s borked; all I see is a gray render.

The Depth Channel view doesn’t have any parameters to adjust it into something suitable for transformation into a normalmap, either (or objects must be scaled to enormous sizes to see it).
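For context, the depth-to-normalmap transformation I’d do on the outside is just screen-space gradients over the depth image. A minimal sketch; the `strength` knob is exactly the kind of parameter the Depth Channel currently lacks:

```python
import numpy as np

def depth_to_normalmap(depth, strength=1.0):
    """Convert a depth image (HxW, values 0..1) into a tangent-space
    normal map encoded as 0..255 RGB, via screen-space gradients."""
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    # Scale the gradients by `strength`, keep z = 1, then normalize
    nx, ny = -dz_dx * strength, -dz_dy * strength
    nz = np.ones_like(depth, dtype=np.float64)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap -1..1 to the usual 0..255 normal-map encoding
    return np.rint((n * 0.5 + 0.5) * 255).astype(np.uint8)

flat = depth_to_normalmap(np.zeros((8, 8)))
# A flat depth field maps to the uniform (128, 128, 255) "blue" normal
```

Without a strength (and ideally near/far range) control on the channel itself, the only lever is scaling the model, which is what I meant above.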

  7. Bringing in new images to use for Environments feels harder than it needs to be, and the application doesn’t ship with a truly neutral one that has no directionality or angle to the light. It also wasn’t immediately clear whether Environments can be tilted, rotated, etc. for quick adjustment (taking a neutral overhead and turning it into a neutral 45-degree spot, say). Studio B and C are close, but not quite perfect, and my current use case really needs a simple, old-fashioned overhead light that is truly neutral, plus some edge lighting to keep the sides of things from being crushed to absolute black.

The Sun light remains somewhat over-complex for this kind of use case, too, and I’m a bit loath to set up a multi-light array of Point lights to get a decent result, but I’ll do what I have to.

The only supported path for Environments requires an HDR image, with only an Intensity value that can be adjusted easily. I don’t normally work with HDR images and struggled to understand how to produce one. After a bit of struggle-bus, I managed to build an HDR image in Photoshop that I thought might work, but it still didn’t give me what I was really looking for: tight, bright highlights and a simple overhead light without directional bias.

I want something like the Curves operation in Photoshop, where I can boost / crush white and blacks with a custom curve if I want more brightness or deeper shadows.
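Something like this, in other words, where the curve is just a handful of control points remapping input levels to output levels (the control-point values here are made up):

```python
import numpy as np

def apply_curve(image, xs, ys):
    """Remap pixel values through a piecewise-linear curve,
    like a simplified Photoshop Curves adjustment.
    xs, ys: control points mapping input level -> output level."""
    return np.interp(image, xs, ys)

hdr = np.linspace(0.0, 1.0, 5)  # stand-in for an environment image
# Crush shadows below 0.25, boost highlights above 0.75 (an "S" curve)
curved = apply_curve(hdr, [0.0, 0.25, 0.75, 1.0], [0.0, 0.1, 0.9, 1.0])
```

A monotonic curve like this applied to the Environment’s luminance would cover both the “deeper shadows” and “tighter highlights” cases in one control.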

In general, I’m having difficulty with shiny materials in the PBR workflow; for mattes and low-reflectivity sheens it’s great, but high-gloss, it’s less easy to get the visual result I want.

I presume I’ll eventually figure all of this out, of course, but after a few days of use, this is one of the little areas of friction.

  8. I’d really like edge-detection Materials, to allow mixing of two results according to how close we are to the Surface’s edge.

For a lot of things where we’re trying to do a little basic weathering, chipping, or albedo changes to enhance realism, this is one of the primary considerations, and it seems like it might be (relatively) easy to do. Ideally, one would mix two Materials based on edge proximity, so that, for example, you could do Paint mixed with a Metal in one Material.

This would take the image quality from, “hey, that’s raytraced” to “hey, I almost don’t feel the need to drag it into another application, unwrap it, and paint it” for a lot of things where you want a little sense of real-world behaviors in materials, but you do not want to invest huge amounts of time into building a skinned model or you simply have time constraints that make this impractical.
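The mix itself would just be a lerp driven by an edge-proximity mask. A toy sketch of the paint-over-metal case; every name here is hypothetical, standing in for whatever the renderer could expose:

```python
import numpy as np

def mix_by_edge(paint_rgb, metal_rgb, edge_distance, falloff=0.02):
    """Lerp two material colors by proximity to a Surface edge.
    edge_distance: per-pixel distance to the nearest edge (model units)
    falloff: distance over which the metal shows through the paint"""
    t = np.clip(edge_distance / falloff, 0.0, 1.0)[..., None]
    # t = 0 right at the edge (bare metal), t = 1 away from it (paint)
    return t * np.asarray(paint_rgb) + (1.0 - t) * np.asarray(metal_rgb)

dist = np.array([[0.0, 0.01, 0.05]])  # hypothetical per-pixel distances
chipped = mix_by_edge((0.8, 0.1, 0.1), (0.7, 0.7, 0.75), dist)
# red paint that wears through to steel at the edges
```

In a real Material you’d lerp the full PBR parameter sets (roughness, metallic, etc.) the same way, not just albedo, but the principle is identical.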

  9. I presume that there’s no way to export the final results of the Materials directly out to a mapped, textured, PBR-ready workflow, for final adjustment? The current documentation has an example of UV adjustments (and that looks fabulous) but I haven’t seen a way to export a Material out as a texture.

  10. The Mesh tools, which were barely in their infancy when I last used Rhino, are pretty amazing. I loved being able to do NURBS Booleans on a mesh to fix up an STL for somebody in 10 minutes.

Anyhow, all of the above is minor stuff and I understand that some of it’s specific to what I’m doing, etc. No major crashes or showstoppers have been encountered.

IDK how many 3D software environments I’ve used over the years, but… a lot… and they all have their foibles and learning-curve problems. I just thought you folks might find this feedback useful.

In general, Rhino’s biggest strength (to me, at any rate) remains the tools I missed elsewhere: fast development of forms for mechanical design, tight CAD-style control over shapes, scales, mirroring etc., and powerful booleans, coupled with classic four-view workflows.

Rhino’s far and away superior to a lot of stuff I’ve used in these areas and the speed of visualization is really impressive; I can mainly just concentrate on making stuff, which (after 8 years of not using it) only took me a couple of days to get back into flow.


Wow- thanks for this. A lot to unpack here.

I’m going to do my best to respond to what I can- others will undoubtedly chime in-

Can you send us some of the materials you are seeing slowing down?

Please send us any fails so we can analyze them and see if it’s a modeling problem or if the Booleans can be tuned up.

Tools like Inset, or using auto CPlane and drawing directly onto a face, can help PushPull be more direct. Have you tried them? They’re new in v8.

Can you clarify this workflow? Maybe share a model or some screen captures? Does Extrude (the dot on the Gumball) do what you want? It works a little differently than PushPull.

Can you share the tools you are having trouble finding? Do you use aliases or keyboard shortcuts? Do you have a custom pop-up menu on your MMB?

Have you checked out Flair? It was trialed in the v8 WIP, and will come back in the v9 WIP-

Grab the Intel or NVIDIA denoisers from the Package Manager. IMO they work better than the post effect version.

Images would be helpful here to see your results.

Have you played with HDR Light Studio? It’s an application that allows you to make custom HDRI environment maps.

Can you expand on that? How so?

The settings in the image itself allow for finer control- the Gamma, Saturation, and Multiplier values may get you what you need-

Can you send examples? Files or images will help.

We do not have that capability as of yet; @nathanletwory would have to jump in about its technical possibilities.

Right-click the material and choose Save to File- does that not do what you need?

@piac did the heavy lifting here, and I agree- they are night and day better. Glad you are enjoying them. Please send us any fails so we can continue to improve them.

We really appreciate your feedback and the time you took to share it.

It may be worth splitting these issues out into individual posts going forward so we can drill down on the specifics of each issue without it getting lost in a super long thread. That way we can attach the relevant info to a tracking item for each feature or request and get it to the relevant dev resource without them having to wade through stuff that does not pertain to the specific issue.

Be well, and thanks for your feedback-


Depending on what version you’re on, the below-mentioned features are somewhat broken. I’m working on these things in what will become Rhino 8.6.

Make sure that in the Rendering panel, under Render Channels, you select the entry Custom channels and check Normals in the list.

Depth channel is half working in Rhino 8.4 and earlier, especially when no ground plane is used and camera rays hit the background.

I am currently working on fixing that in Rhino 8.6 - RH-80330 Saving depth channel should not alter information put into it.

It’ll be possible in the future. For Rhino 9 I hope we’ll be able to expose all the shader nodes of Cycles (which Raytraced and Rhino Render are now based on) and provide a node-based shader editor, along with all the features necessary for things like pointiness detection (essentially edge proximity) to do this type of blending.

The built-in denoiser is exactly the same as the Intel denoiser that used to be available for Rhino 7 - it is actually no longer possible to have the Intel denoiser separately installed in Rhino 8.

Neither denoiser has settings to further control the process as far as I know.

I’m sure I’ll come up with more answers, just wanted to get these out here before going to sleep.


We are aware of this, and it’s on the list of things we need to improve. Thanks for the thorough feedback!



First, thank you, folks. I feel very welcomed. I’m sorry it’s taken so long to respond to this; I’ve been too busy using Rhino to make myself write a reply. I’ll try and keep this brief.

I will definitely try this out!

Minor things I’ve encountered since I wrote this:

  1. I want direct control over the AO solution, in terms of samples and the depth exponent. I wrote an SSAO implementation (for realtime) a few years ago. While I certainly don’t know how Arctic’s specific implementation works, generally we’re talking about sampling depth around a given fragment’s location in screen-space, using an exponent to determine how “close” surrounding areas are, the depth difference, etc.

By changing these two variables, one can typically adjust how far AO “spreads” and how quickly it darkens. This is massively useful for altering the final look. AO might not be “raytraced accurate” in a formal sense, but it’s quite useful for producing a stylized image that pops.

I like the AO and am using it for the art I’m producing, but I’d really like better control over it, even if that costs more render time. Adding sample-count control to the shader is fairly trivial unless it’s using some particularly expensive or convoluted Gaussian distribution for falloff (my implementation used a linear falloff with a simple multiplier based on relative fragment positions, which looks almost as good and is way faster, as it’s minor trig, if that’s helpful), and the exponent is trivial.
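To illustrate what I mean, here’s a deliberately crude screen-space sketch of that approach; `samples` and `exponent` are the two knobs I’m asking for. This is a toy, not a claim about how Arctic actually works:

```python
import numpy as np

rng = np.random.default_rng(0)

def ssao(depth, samples=16, radius=3, exponent=1.0):
    """Toy screen-space AO: for each fragment, compare its depth with
    random nearby samples and accumulate occlusion with a linear
    falloff. `samples` and `exponent` are the two tuning knobs."""
    occlusion = np.zeros_like(depth, dtype=np.float64)
    for _ in range(samples):
        dy, dx = rng.integers(-radius, radius + 1, size=2)
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        diff = depth - shifted  # positive where the neighbor is closer
        # linear falloff: nearer occluders contribute more
        occlusion += np.clip(diff, 0.0, 1.0)
    ao = 1.0 - (occlusion / samples) ** exponent
    return np.clip(ao, 0.0, 1.0)

step = np.zeros((8, 8)); step[:, 4:] = 0.5  # a ledge in the depth image
ao = ssao(step, samples=16, radius=3, exponent=1.0)
```

More samples smooth the result and reduce haloing; a higher exponent tightens the darkening toward creases. Those are the behaviors I’d like exposed.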

Due to limitations on the sample sizes, there are halo artifacts in the AO, clearly visible here in a couple of spots:

Don’t get me wrong, I’m getting around all of this without any major problems, but it’s something that could be fixed with some control over the AO’s behavior.

I understand that it may be impractical to add this kind of control to Arctic, but a Post Effect version surely could, as it’s just run-and-done. Right now I’m making stuff like this via ViewCaptureToFile and compositing in Photoshop, then doing some very fast, sloppy post to get roughly where I want:

  2. I’m still having some problems achieving the stylized look I want for the production art. Final results are getting shrunk to tiny pixel scales (this is all for a game I’m programming; probably not a typical use case for Rhino, lol).

With AO on, I’m getting close to the look I need; with a bit of work with Curves in Photoshop, I’m getting somewhere close to the contrast levels I want. I’m still having some issues with making contrasts feel nice and sharp, though. What I’m trying to get is a look like this:


What I’m getting is this:


I suspect I either need to adjust the render settings (less gamma, maybe? more overhead light? lowered Roughness?) or to try combining different downsample methods to get closer to this stylized look. Here’s a version of the final intended use where I’ve blended nearest-neighbor with Bicubic Sharper, after doing a Curves operation on the AO to get the desired levels of relative shadow (but without any other post):


It’s closer to what I want, but I think I still have a lot of work to do with understanding the Materials to get the look I’m trying to achieve here. I can’t imagine that anybody else is using Rhino for such a strange purpose, but now you know why I’m focused on these areas of the rendering system; I’d love to have enough control that I’m merely doing a final stage to mix downsampled results, rather than a more elaborate process that I’ll have to build as an Action in Photoshop to automate the major steps of post. But if that’s what I have to do, I’ve done weirder things…
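For the curious, the blend step itself is tiny. A numpy sketch of what I’m doing; a simple box filter stands in for Photoshop’s Bicubic Sharper here, which has no exact open equivalent:

```python
import numpy as np

def stylized_downsample(img, factor, mix=0.5):
    """Blend a nearest-neighbor downsample (crisp, aliased) with a
    box-filtered one (smooth) for pixel-scale art.
    img: HxWx3 float array in 0..1, H and W divisible by factor."""
    crisp = img[::factor, ::factor]  # nearest neighbor via striding
    h, w, c = img.shape
    smooth = img.reshape(h // factor, factor,
                         w // factor, factor, c).mean(axis=(1, 3))
    # mix = 0 is fully crisp, mix = 1 is fully smooth
    return (1.0 - mix) * crisp + mix * smooth

art = np.random.default_rng(1).random((64, 64, 3))  # stand-in render
tiny = stylized_downsample(art, factor=8, mix=0.4)   # -> 8x8x3
```

Splitting the difference keeps single-pixel detail from the nearest-neighbor pass while the filtered pass tames the worst of the aliasing.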

  3. I’m sorry… this is probably massively stupid… but… uh… can you have a fifth window in Rhino now?

I’ve set up custom Views for flipping back and forth in one view between Perspective and Right View, but when you save that, it apparently doesn’t save the render setting, which means I have to go from Arctic to Wireframe. Meanwhile, the traditional fourth view used for Perspective is Locked, because it’s for final production, needs to keep the Camera and Target exactly where I need them and I don’t want to accidentally screw it up, lol.

Should I just set that View with a saved version when I’m sure I’ve got the View dialed in, so that I can switch back to Perspective while working on things, or can I add a fifth View for doing quick turntable?

Sorry, this is truly a First World Problem: I want more Views to avoid time-wasting steps in the UI; I just want to work on the art and forget as much as possible about how I’m doing it.

Eek. When a View is Locked, you cannot change from it to a saved View. Double-clicking on another View doesn’t work. That’s interesting.

It does work as a method for moving between Views rapidly otherwise. That at least solves the “juggling too many Views” problem, but it requires that the production View be set up pretty early, or re-saved if changes are necessary.

Took a look at HDR Light Studio. I think I can get roughly what I need, in terms of a HDRI lighting that’s top-down and has the lighting where I want it- largely overhead. I’ve tried the other Environments that ship with Rhino, like the Automotive Studio; that ring-light is actually pretty cool and was close to the result I wanted, but the bluish tint (which I presume is some industry-standards thing) isn’t what I want.

$220 to make a single overhead light setup, though (and that’s their “indie” price)… ouch! I’m still mulling over whether to do that or set up a Point Light array instead, to get the lighting I want, or maybe even building an Emissive object to see if that can do what I want. I’ll probably try the latter ideas first, just to save the bucks, as I think I’m going to buy Bongo.

You could also just add a rectangular light to do that, but indeed a plane with an emissive material should do fine too. The emissive plane can’t be hidden from camera rays, though; rectangular lights are hidden from camera rays.

Thanks! I’ll try that and see if it’ll work. Last time I used Rhino, I stayed away from Rectangle Lights because of their high CPU costs. I presume that’s a non-issue now!

I’m not quite sure why I’m seeing so much lighting on the sides of the “tires” here (feels like that should be darker, as the environment lighting is off and the four side panel lights are turned down to .1 Intensity).

I may try mixing in the Automotive Environment with a low setting and see if I can get the best of both worlds.

Oh, this is odd. Using Rectangular lights somehow alters how Arctic works. I presume it’s because the light quads are getting written to the depth buffer. I’ll try and move the light well-above the Camera. This approach is largely working as intended though!


[EDIT] Tried the above.

So long as the Rectangle Lights are turned on, this issue persists, even after scaling them up to a very large size (well above the overhead camera; they should’ve been clipped). From the looks of the results, this somehow broke the exponentiation of depth, producing the stair-steps. It’s getting rounded, or quantized incorrectly… or something.

If you’d be interested in seeing a file to reproduce this, I’ll post it; this certainly isn’t anything massively proprietary, and surely this isn’t a shader bug you see every day, lol. Sorry, you can probably expect me to break things in odd ways like this.

[EDIT2] This occurs even if the Rectangular Lights are turned off in the Lighting control panel. If they’re Hidden, it’s cured, but ofc that breaks the Raytraced / Rendered view, lol. There’s definitely something odd going on there.