Grasshopper but not Rhino users?

So basically you just want to connect boxes with wires, you don’t actually want any geometry.

That is also my understanding.

with the capability to add my own .net assemblies [dlls]

Sure, why not. They should just spin that off into some startup, and continue to concentrate on their core business, which is generating geometry (by various means and interfaces) for making things.

The idea was that instead of the hook to RhinoCommon, they might as well have an IO library that can get any library (engine) for calculating geometry and make use of the power of visual programming in order to provide parametric modeling to any 3d modeling tool. Or to extend the capabilities of 3d software that already have parametric modeling but which is not powerful enough or not friendly enough for non-programmers.
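To make that idea concrete, here is a minimal, purely hypothetical Python sketch of such an engine-agnostic layer. None of these names exist in any real API; the point is only that components would code against an abstract geometry interface, and any backend (RhinoCommon, an open-source kernel, whatever) could plug in behind it:

```python
# Hypothetical sketch only -- all class and method names are invented.
from abc import ABC, abstractmethod

class GeometryEngine(ABC):
    """Minimal contract a visual-programming front end would code against."""

    @abstractmethod
    def make_point(self, x, y, z): ...

    @abstractmethod
    def distance(self, p, q): ...

class PurePythonEngine(GeometryEngine):
    """A stand-in backend with no CAD kernel behind it."""

    def make_point(self, x, y, z):
        return (x, y, z)

    def distance(self, p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# A "component" only sees the interface, never the concrete engine:
def distance_component(engine, a, b):
    return engine.distance(engine.make_point(*a), engine.make_point(*b))

print(distance_component(PurePythonEngine(), (0, 0, 0), (3, 4, 0)))  # 5.0
```

Swapping `PurePythonEngine` for a wrapper around any other kernel would leave the component untouched, which is the whole appeal of the idea.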

I always find this attitude amusing, that everyone who uses Grasshopper is expected to understand the internals of Rhino and write their own code solutions using Rhino APIs and the SDK, before having the temerity to request basic features (like bake-to-layer, for example).

People routinely use all kinds of products they don’t understand, like cars, appliances, consumer electronics and software tools like Photoshop/GIMP, video editors, spreadsheets, etc. Choices are made based on whether they feel empowered and satisfied or helpless and frustrated by the experience. Their focus is on accomplishing a task at hand without distraction. Programming isn’t for everyone, and there’s nothing wrong with that.


Dynamo Studio is a stand-alone version that still ships with the DesignScript geometry kernel (i.e. ProtoGeometry), is .NET-based, and has an IronPython scripting component:

But also, it is still Dynamo :grimacing:


I also only use Grasshopper when designing jewelry, because the Rhino interface looks like Windows 98 to me. In Rhino I rarely create more than a curve or some points, and sometimes other reference geometry, to use in GH.

Would a modern desktop application use drop-down menus, or a technical taxonomy instead of a friendly one? I don’t think so, because there are better user-interface paradigms that relate user behavior to Rhino in a less distant, more intuitively fluid way. You don’t want to draw a curve, you want to draw something that needs a curve, so I wouldn’t intuitively go to Curve > Free-Form > Interpolate Points, but to Create > default curve. And to customize the views I wouldn’t go to Tools > Options > View > Display Mode > Shaded, but to View > Shaded. As for the command line, why not draw the command parameters in a nice way instead of using plain text? And it wouldn’t force users into another UI; it would give them the option to use one or the other. Then comes the possibility for developers to create Workspaces, a kind of skin with its own taxonomy, with support for developing Rhino interfaces.

Even GH has ended up feeling slow to me, because it could be controlled with your fingers on a touch screen, with a taxonomy based on user experience, letting you use Grasshopper and Rhino in tablet mode while lying down :stuck_out_tongue_closed_eyes:


He is controlling a laptop using the Remote Desktop Control Chrome application. What a surprise, thanks for sharing that, I didn’t know it! However, the app did not allow me to use the keyboard or access the phone’s storage.

I just made this screenshot from my phone to test it, but I had to post this response from the laptop.


I don’t like the idea of “remote desktopping”, but if it fits your workflow you can use the TeamViewer mobile app. Back when I was using it, there was no problem with the keyboard.

Yes, VNC sucks! It might be good for some cases of tech support or maybe some weird kind of server maintenance (probably Windows servers :wink: ), but Grasshopper on a smartphone through VNC, that seems desperate. Also you should relax on vacation!

Yes, but you do use the Rhino geometry so created to produce the masters for your jewelry pieces for sale (via wax milling, 3DP or whatever), no?

Nope, a complete misinterpretation of what I said. I’m talking about a basic understanding here of what Rhino geometry is (NURBS curves, surfaces, meshes etc.) and how these things work together to create a 2D/3D CAD model. Like with basic non-GH Rhino where one of the most frequently asked questions is “Why did my Boolean operation fail?” People are in the dark because they don’t understand the manual procedures and requirements behind the higher-level Boolean operation (i.e. intersections, trimming, joining, surface normals, etc.).
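As a toy illustration of that point (purely hypothetical, with 1D intervals standing in for closed solids), the “union” below fails exactly when the intersection step has nothing to work with, which is the usual reason a Rhino Boolean fails:

```python
def boolean_union_1d(a, b):
    """Toy 1D analogue of a solid Boolean union.

    a, b: (start, end) closed intervals standing in for closed solids.
    Mirrors the manual pipeline: find the intersection first; if there is
    none (disjoint solids), the "Boolean" fails, just as in Rhino when
    the intersection curves cannot be computed.
    """
    (a0, a1), (b0, b1) = sorted(a), sorted(b)
    # Step 1: "intersect" -- find the overlapping region.
    lo, hi = max(a0, b0), min(a1, b1)
    if lo > hi:
        return None  # disjoint: the later steps (trim, join) have nothing to work with
    # Steps 2-3: "trim and join" -- keep the outer boundaries, discard the inner ones.
    return (min(a0, b0), max(a1, b1))

print(boolean_union_1d((0, 5), (3, 8)))  # (0, 8)
print(boolean_union_1d((0, 2), (5, 8)))  # None: the union "fails"
```

Obviously real Booleans on NURBS breps are enormously harder (intersection curves, trimming, joining, normals), but the failure mode has the same shape: no valid intersection, no Boolean.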

I think you are completely underestimating the work required for that part… But again why not, as long as it gets spun off into some speculative operation that does not affect the continued development of Rhino (McNeel’s core business).

Mitch, you have to realize something, as it seems you either can’t or simply don’t want to accept it. A lot of people (myself included) bought Rhino mainly because of its Grasshopper and Python integration. There are free and open-source geometry calculation solutions out there, but they do not have Grasshopper (some have Python integration), and because of that they are not used much. Even without the Rhino engine behind it, people will still use Grasshopper; arguably even more people, as it will be lighter, and since it will be the main process it will not depend on Rhino’s stability. And they will have the choice of which 3d modeling tool to hook up with GH.

not at all

I’ve no idea about the numbers, but I’ve seen several users launching Grasshopper to draw a simple mesh face. The good thing is that this makes them feel like a programmer :wink:

In my job I don’t prepare models for manufacturing; I make definitions to upload to ShapeDiver and the rest is automated. I also work with batch/procedural design and have to do everything, like saving files, from code using RunScript(). But of course, at some point I use the Rhino interface, I mean, the parts that are not the viewport. But I don’t design with it.

In my previous comment I forgot about the toolbars, which are a convenient replacement for the drop-down menus, so I didn’t give the Rhino interface a fair review. Anyway, the feeling or experience they give me is that of using old software. It’s the same with the other Rhino-based plugins (jewelry ones, for example). In this kind of environment (3d modelling), you need a smart command search based on knowledge graphs, rather than displayed command lists. The first branches of that graph are the basic actions (create, modify, transform, view…) depending on the current selection, and after choosing one, you go through a search tree based on the working context until you reach the desired command. If you can get a person with no experience in this software to make a snowman without having to watch a tutorial on their first contact, then it is a good interface. GH has the same problem, btw.

That’s… not strictly speaking true. If Rhino were designed for people who’ve never used 3D before to make snowmen, maybe, but that’s not the use case that matters most; it’s not a website. The question is more how long it takes to learn to do the serious stuff than how easy it is for a clueless newbie who isn’t even trying to do more than hack out some sort of blob, which may be nice for the sales pitch but isn’t relevant to actually using a tool for hours a day.

As for the user interface of Rhino, I actually prefer the v3 interface to this stupid ribbon/tab one.

About the learning curve: I find Blender more difficult for a newcomer than Rhino, especially if the Rhino newcomer is coming from AutoCAD (the old, ribbon-less, beautiful, practical interface).

But that’s just me. It is funny how software tends to copy the UX from one another without realizing that the UX is in fact deteriorating.

I question the effectiveness of the first-contact experience in the Rhino interface because it is something every user goes through, and it is important that it is simple too (like modeling webpages), because it structures the understanding of how to use the software better. That does not exclude any other feature. All tools are leaves of the same tree.

It would be easier to learn the serious stuff if, from the first minute, you already knew that, for example, when you select a surface, the interface shows you that you can edit, explode, transform, or analyze it (use it in whatever way is convenient for each object in the viewport), with all the tools Rhino has available there. Instead of a tabbed ribbon of tools, which you have to learn and which does not offer a fast experience.

The user experience can be represented as graphs that relate which things can be used with which things. Think of all the individual uses as different sub-graphs of actions and operations, and think of this space as the overlap of all these individual instances. How similar do you think they are? Well, at least you know for sure, as a developer, that if the context is “select polysurface”, you may need edit, explode, transform, and analyze tools (all things involving polysurfaces). If the context is another, you know which tools it is convenient to suggest. You don’t need to predict the user’s behavior, nor to deploy dozens of tools, but to adapt the interface to the context of each action, to get a better interface than the “current” one.
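A minimal sketch of that idea in Python (all names invented for illustration): map the current selection context to the tool categories worth suggesting, instead of showing every toolbar at once:

```python
# Hypothetical context -> tool-category mapping; the categories and
# object types are invented examples, not any real Rhino taxonomy.
TOOLS_BY_CONTEXT = {
    "curve":       ["edit", "transform", "analyze", "loft", "extrude"],
    "surface":     ["edit", "explode", "transform", "analyze", "offset"],
    "polysurface": ["edit", "explode", "transform", "analyze", "boolean"],
    "mesh":        ["edit", "transform", "analyze", "reduce"],
}

def suggest_tools(selection_type):
    """Return the tool categories relevant to the selected object type."""
    # With nothing (or something unknown) selected, suggest creation tools.
    return TOOLS_BY_CONTEXT.get(selection_type, ["create"])

print(suggest_tools("polysurface"))  # ['edit', 'explode', 'transform', 'analyze', 'boolean']
print(suggest_tools(None))           # ['create']
```

In a real implementation the flat dictionary would become the search tree described above, with each chosen branch narrowing the next set of suggestions.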

Uh huh, except that’s “mystery-meat” Interface Design, and selecting a surface and popping up everything you can do with it would show you…MOST OF THE TOOLS. And what of plugins? What about operations where the workflow doesn’t really suit that?

I don’t know why you’re arguing about the toolbars; of course they’re terrible. Rhino has had too many tools for the concept to be usable since version 1 (“ribbons” make no difference). I use maybe one or two of them occasionally, if I really try to keep my hands off the keyboard.