I was thinking maybe AI, as an example, could be used for rebar detailing. It's exactly the kind of thing where AI could learn from repeating patterns in reinforcement and could replace tedious manual (parametric) modelling of rebars. I think it's a closed problem, well within the realm of AI applicability. You just need to teach it from millions of examples and then combine it with strict rules, or a definition of reinforcement amounts per surface per direction, and in one click you could get your element reinforced.
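To make the "strict rules" half concrete: something as simple as this (plain Python; names and limits are illustrative, not an AI) could turn a required reinforcement amount per direction into a bar spacing, and a learned layout would then have to satisfy a check like it:

```python
# Not an AI - just a sketch of the deterministic rule: given a required
# reinforcement area per metre in one direction (e.g. 785 mm2/m) and a bar
# diameter, return a spacing. All names and limits here are illustrative.
import math

def bar_spacing(required_area_mm2_per_m, bar_diameter_mm, max_spacing_mm=300.0):
    """Spacing (mm) so that bars of the given diameter supply the required area."""
    bar_area = math.pi * bar_diameter_mm ** 2 / 4.0          # area of one bar
    spacing = 1000.0 * bar_area / required_area_mm2_per_m    # bars per metre -> spacing
    return min(spacing, max_spacing_mm)

print(bar_spacing(785.0, 12.0))   # ~144 mm centres for 12 mm bars
```

The learning part would sit on top, proposing layouts that rules like this validate or correct per surface and per direction.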
I meant exactly what I said, in a general way: "AI assisted". And I don't think this can even be disputed, given that loads of today's software already implements AI-assisted processes in its functionality. As for you, and for me as well, it really doesn't matter whether I like it or not.
To tell you the truth, I never thought about this until today. I suppose my main gripe with GH is that, for most of what I do, execution in the node-network environment becomes too slow, hence I generally bypass the UI by creating a script or a complete component where I parallelize all the heavy computation. Also, some mathematical/algebraic and data-sorting operations are very cumbersome to set up with a network of components.
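For example, in a GhPython scripting component the heavy per-item work can be pushed onto multiple threads with ghpythonlib.parallel, roughly like this (a minimal sketch; `mesh` and `points` stand for component inputs, and the exact behaviour of `parallel.run` may vary between Rhino versions):

```python
# Minimal sketch for a GhPython component: 'mesh' and 'points' are assumed
# component inputs; ghpythonlib.parallel runs the function over the list on
# several threads, which is where most of the speed-up comes from.
import ghpythonlib.parallel as parallel

def closest_distance(pt):
    # the "heavy" per-point work: pull the point to the mesh and measure
    return pt.DistanceTo(mesh.ClosestPoint(pt))

a = parallel.run(closest_distance, points, False)   # component output
```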
Regarding the AI bit, for me the scripting components could head this way. Writing a script is for the most part a mechanical task that could be largely automated. Frankly, I don’t see why I have to type each letter.
One thing that could improve GH2 is integration with the WolframAlpha API. If it's not too much bother.
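In the meantime it can already be called from a scripting component; a rough sketch, assuming the WolframAlpha "Short Answers" endpoint and an AppID (check their developer docs, I may be misremembering the URL):

```python
# Rough sketch for a GhPython component (IronPython 2.7): send a query to the
# WolframAlpha "Short Answers" API. The endpoint, parameter names and the need
# for an AppID are assumptions - verify against the official docs.
import urllib
import urllib2

appid = "YOUR-APPID-HERE"                  # hypothetical placeholder
query = "integrate sin(x)^2 dx"
url = "http://api.wolframalpha.com/v1/result?appid={}&i={}".format(
    appid, urllib.quote_plus(query))

answer = urllib2.urlopen(url).read()       # plain-text answer, if the call succeeds
print(answer)
```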
I'm sure AI can be used for different kinds of optimizations, including rebars. But there's a "no-man's-land" involved here as well, which is often overlooked. Let me give you an example (but I can't give you all the details due to business secrets etc).
I once designed a huge transport logistics system for an end customer. We were evaluating alternative route-optimization tools and strategies, and we found that the best (relevant) example in use on the market was a case in which a huge food chain had achieved some 11% more efficient distribution with the best-in-class optimization tool on the market. After knowing this, I did an analysis of the end customer's internal business concepts and found that the way they approached transport planning, manually, gave approximately 30% more efficient routes than if they'd integrated the tool in their planning process.
They of course didn't "manually" optimize the final routes in the way a software would (based on existing bookings or even forecasts); they had a different approach altogether. A smarter way of looking at the business as a whole. Their view of the basic problem was "optimal by design", so to speak. No squeezing, just a smarter approach to begin with.
The point is: that very "smarter approach" was not a (smarter) combination of existing data which could be optimized by any kind of algorithm or pre-existing pattern. But this company didn't know this. They had never even thought about the possibility (that they were smarter than optimization algorithms). When I explained why they were so successful (as they actually were), we decided not to integrate any optimization tools in the core system (only external tools allowed, for special cases).
Not understanding the basic problem makes you blind to the potential it hides. To some extent automated tools and algorithms can squeeze more out of what you have, but innovative thinking may serve you better, sometimes much better. In this case 30% better than the best route-optimization tool on the market.
All this goes in line with what I mentioned earlier about #1. understanding, then #2. picking the best tool (or approach). Do not presuppose that any tool will replace innovative, entirely new ways of solving a problem (which isn't essentially only an optimization of the way you already approach the problem*).
// Rolf
- Addendum: Look at the earlier posted table again, and spot what is optimization (of that which is already there) and what’s truly novel (creative):
Yes of course, GH1 is like writing with one finger. That’s why I develop Axon. If Axon were intelligent instead of frequency-based, it could be much faster.
Then let’s also remove all vector components because the average user coming from Rhino has no idea about vectors. You believe that because you think it is more complicated than it really is. Perhaps you don’t know what it is useful for and that is why you think it is excessive.
How can you question the usefulness of something you don’t seem to know about?
No dude, do you know how Discourse works? Do you know how the browser works? Do you know how the hidden Rhino core works? Do you think you need to know how QuadRemesher works to be able to use it? That argument doesn't work; the user has to understand what goes in and what comes out, not how it works inside. Another debate is whether it is convenient to know.
Besides that, we are talking about AI here, not just ML. Besides, they are not such black boxes as you think. Besides, you are generalising the user as a person with no criteria of their own, who is not able to validate the result that a program gives them. If you consider that the average user is able to recognise a good result, because they are a professional or know their design field, then that argument doesn't work either.
That video shows you that a parameter space can be encoded in features instead of just measurements or options, and interpolated between them, instead of having to adjust each parameter one by one to visually arrive at another style of design. Maybe you are in a more technical design field where the only thing that matters is the measurements, but in more artistic design, clients usually don't really know what they want, so tools like this are tremendously useful to bias their choice towards understandable attributes. Have you tried any online art product configurator that offers a nice and effective UX? There isn't one, because it's annoying to have to adjust measurements one by one; that's why all configurators end up being template selectors or using a custom GUI to make for a good UX. And you say that feature modelling is the perfect example of why not to use AI? You're throwing dirt on something you don't understand, dude. And then you argue that the designer must understand what he is doing?
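To be concrete about what "interpolating between features" means, here's a minimal sketch (plain Python/NumPy, with made-up feature names; the decode step would be a trained model or a GH definition fed with the blended vector):

```python
# Minimal sketch: two designs described by (made-up) feature vectors, and one
# slider t that blends between them instead of editing each parameter by hand.
import numpy as np

chair_a = np.array([0.9, 0.1, 0.3])   # e.g. [ornate, slender, recline] - invented features
chair_b = np.array([0.2, 0.8, 0.7])

def blend(a, b, t):
    """Linear interpolation in feature space, t in [0, 1]."""
    return (1.0 - t) * a + t * b

for t in np.linspace(0.0, 1.0, 5):
    print(t, blend(chair_a, chair_b, t))   # feed each vector to the generator/decoder
```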
Your scepticism and seeing beyond the hype is fine, but you take it to the point of trying to obscure what you can't see, and that's not right, because just as I respect you, many others here will respect your opinion, and it's not right that I have to justify what's already out there by fighting against prejudices or countercurrent attitudes. 3D data processing is undergoing a revolution of innovation because everything is being rethought using ML; that's not an opinion but the sheer number of papers coming out, and consequently it will influence design tools and therefore design. If this is not already happening in design, outside research projects of very limited application, it is not because there are no algorithms to use, but because there is not the data to make them learn; we need to write design in terms that machines understand, and planting this unjustified rejection only takes us further away from getting there. Although I hope that whoever has the decision-making power at McNeel is better informed.
… I think you are denying the applications that don't help you, in particular, to win the beer night I bet you some time ago about AI applied to design in the next few years.
Perhaps you're conflating innovation / creative design (true novelty) with pattern matching (which needs to be "learned") and optimization, limited to that which already exists.
// Rolf
Lmao, what is going on here?
My thoughts/ideas fit better with what Tom is saying, but every counter-argument of yours, Dani, seems totally on point!
Nobody should be too rigid in their opinions.
You both are saying very interesting things…
Sadly… this thread clearly has a dedicated seat which is not yet being used
I don't know, I wouldn't say I know much about Neural Networks and Genetic Algorithms. But I'm quite sure I understand the mechanics of them. I even implemented a genetic algorithm by myself, because Galapagos was limiting me when calling it from code, and I was curious to find out if I could reproduce it without reading too many scientific papers. And I succeeded, kind of (I never finished it completely).
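Not my implementation, but the core loop is roughly this kind of thing (a minimal sketch of selection, crossover and mutation with a placeholder fitness function):

```python
# Minimal sketch of a genetic algorithm of the Galapagos kind: keep a
# population of parameter vectors, score them, breed and mutate the best.
# The fitness function is a placeholder.
import random

def fitness(genome):
    return -abs(sum(genome) - 10.0)        # toy objective: genes should sum to 10

def evolve(pop_size=50, genome_len=5, generations=200, mutation=0.1):
    pop = [[random.uniform(0.0, 5.0) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                              # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]                              # crossover
            children.append([g + random.gauss(0.0, mutation) for g in child])  # mutation
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```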
BTW, my workmate is even a mathematician with a master's thesis about optimisation, and he's also very cautious when it comes to the fields of application. E.g. he used to work at Continental optimising wheel production, and they could use it only for very specific tasks. Basically this matches what @Rolf was saying… in case I got this right.
The point is, I consider myself a user who has reached quite an advanced level when it comes to Grasshopper. I just think of the average Joe starting with GH; I don't know if this person is able to make use of AI in general. The comparison to the vector components is kind of difficult, because understanding what a vector is and how we move in space is not that difficult, and it's kind of fundamental knowledge in this domain. Many people learn this fast, because they can visualize it on demand within GH.
This is basically also true when we talk about technology in general. Of course most people don't have to know how a browser or how Discourse works. But at the same time, a browser itself is not going to promise to optimize your work, and if you believe every comment in this forum, then no mercy for you at all
Sort of, but you know I've worked within an automotive design studio for years. And although I could think of applying this sort of thing to very limited use-cases, I was just pointing to the fact that a designer usually has totally different problems to solve. Especially those guys who do early designing: they spend most of their time sketching. This is also what they love to do. I don't think they are lacking ideas and need auto-generated variations, optimised or not. For them an AI would rather make sense when it comes to better vectorizing hand drawings. Furthermore, even when we talk about parameterization with Grasshopper, one of my first experiences in this field was that, for most of the work, there is a guy doing things much better in one day in a totally manual process. Of course there are ambitions to parameterize and optimize parts of a vehicle. But reality shows that if you work in a domain with many professionals, they just outperform you in almost any aspect: speed and quality.
We'll see, sometimes I'm just not right. I generally accept losing.
Thank you for the clarification. I just googled Simulink - indeed it looks like quite an interesting software.
I gotta say, I imagine directors at Kodak having a non-stop brainstorm much like the one going on here, and yet…
They were busy optimizing what they had…
Creativity, on the other hand, brings true novelty.
// Rolf
For me, designing is organizing editing processes to try to create a solution to something. We do this when modeling, from any interface. And for me, AI is not only about optimizing or learning patterns, but also about things, not strings. Design, for me, is a search.
We have geometry to represent shapes and Grasshopper definitions to represent editing processes, but we don't have a language for the real objects. There are attempts like schema.org and other product standards, but nothing really close to design as a modeler can understand it. Design knowledge is the exclusive domain of human beings, for now. An object such as a chair can be understood in many ways. The most obvious is to understand it by its formal and material components. But another, more interesting way is to understand all the possible chairs at the same time, encoding each of their representations as a vector within a latent space that embeds the design space in fewer dimensions. Grasshopper 2 could have variational autoencoders to represent all the solutions of your chair generator in an embedding, which can be edited, measured or parameterized as distributions mapping the different solutions of your chairs, allowing not only a futuristic GUI (as in the feature-modelling video), but also modeling in other dimensions, designing the data with more freedom, operating with distributions of things. For example, bake the 6 chairs that maximize the separation of their most important features to offer your client the greatest variability of your algorithm with a fixed number of samples, using a k-furthest-neighbors algorithm that GH2 could have as well.
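The "bake the 6 most different chairs" bit doesn't even need deep learning; a greedy farthest-point sampling over the feature vectors already does it (a sketch, with random vectors standing in for the encoder output):

```python
# Sketch of the "6 most different chairs" idea: greedy farthest-point sampling
# over feature/latent vectors. Random vectors stand in for the encoder output.
import numpy as np

def farthest_point_sample(vectors, k):
    """Indices of k vectors that are (approximately) maximally spread out."""
    vectors = np.asarray(vectors, dtype=float)
    chosen = [0]                                                   # arbitrary start
    dists = np.linalg.norm(vectors - vectors[0], axis=1)
    while len(chosen) < k:
        idx = int(np.argmax(dists))                                # farthest from the chosen set
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(vectors - vectors[idx], axis=1))
    return chosen

chairs = np.random.rand(200, 8)            # 200 candidate chairs, 8 latent features each
print(farthest_point_sample(chairs, 6))    # indices of the 6 chairs to bake
```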
Supervised learning is actually quite explicit, because you have to make the input and/or output data explicit to make it work. That is why it will take time to see real applications in design and other areas where machines do not understand the content. GANs, for example, are a generative model that simulates statistically acceptable content, more powerful than what can be done in GH, because the complexity of an explicit algorithm grows exponentially the more attributes are parameterized; it is a combinatorial problem. For example, in an AI-based chair generator you could train using the GH definition (which also returns features as labels) as the discriminator, and use an encoder to convert a list of aesthetic chair labels into the definition's input vector. This already exists using images, and I am convinced that graphic design galleries/repositories will be mixed with synthetic designs made with GANs. When the process of generating chairs is explicit (with hundreds of GH chair definitions), then a GAN can even learn how to create (or help create) definitions of chairs, using graph-based machine learning. You can also create images of the results to learn how to categorize images of chairs in a representation that your AI chair generator understands, and thus create a converter from images of chairs to 3D chairs. OK, you have to be Google to have almost all the chairs, but as a chair producer you have a particular distribution in which you can measure and infer value, to optimize your chair production or recommend the chairs closest to the customer's taste to avoid designing a similar chair.
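For reference, the bare mechanics of a GAN look roughly like this (a hedged sketch assuming PyTorch is available, working on parameter vectors rather than images, and not the GH-definition-as-discriminator setup described above):

```python
# Minimal GAN sketch: the generator learns to emit "chair parameter" vectors
# that the discriminator cannot tell apart from real ones. Real data here is
# just a random placeholder.
import torch
import torch.nn as nn

LATENT, PARAMS = 16, 8            # size of noise vector and of a chair parameter vector

generator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, PARAMS))
discriminator = nn.Sequential(nn.Linear(PARAMS, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_chairs = torch.rand(512, PARAMS)     # placeholder for real chair parameter data

for step in range(1000):
    real = real_chairs[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, LATENT))

    # discriminator step: real -> 1, fake -> 0
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: try to make the discriminator output 1 for fakes
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```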
Are these examples of deep learning applied to design worthwhile for the designer? Well, if there are tools that make it easy, of course, because then it has a low implementation cost; if not, it depends on whether you can justify its development, which is another issue. I've only used genetic algorithms a couple of times at work, but graph and agent theory a lot in my designs, which are AI applications worth mentioning. But those deep learning examples from before do solve problems that I face with product configurators (the design interface that the client customizes). They also allow incredibly cool functionalities, and another way of understanding design that is still to be explored; that's why it's worth GH including some of this, especially if it wants to return to hosting innovation in computational design.
There are a few themes in your question, but I will only follow up on the ones related to Rhino and Grasshopper 2. You might enjoy reading this deep-dive product analysis I did on Rhino last year: Killer Product - A Rhino3D Product Analysis. Some of your questions are answered there.
In terms of Grasshopper 2, following my article which got the attention of McNeel staff, I had an email exchange with David Rutten about the status of Grasshopper 2 and what to expect. I proposed that I write a follow-up article about GH2 but never got a complete yes/no answer. So I’ve asked David again if I can release material from our exchange. Many, many people have questions about GH2 and what is going on which leads to both excitement and frustration simultaneously.
Hopefully I can post something in the coming weeks.
Darrel
I look forward to greater integration of 'metadata' workflows, improvements in trees, and the integration of lessons learned from the vast experience that GH1 has provided David and his team.
The best is yet to come.
That article is also a killer, elaborate piece. Well analyzed, and I can agree 100%.
I especially agree that Rhino is relying too much on third-party developers in some areas, and that it should be more of a leader in pushing new, crucial strategic features such as a plugin warehouse, a 3D model warehouse, etc. These are aspects which should not be left to the courtesy of anyone other than McNeel itself.
In some other thread, some users agreed that there is currently no clear grand vision of where Rhino should be and where it should go.
Well, I guess if you get these features on board, I wouldn't say this is bad. Is it useful? I don't know; everybody should decide for him-/herself. If these algorithms help you in designing, why should I refuse this? For me it's rather a no, but this is also because of my personal experience and my experience with the GH world in general. Very often I see this tendency to overcomplicate. People spend days if not weeks solving a GH puzzle just because they can, where sometimes 1 h of doing it the old way would have solved it. They give so much meaning to a tool which initially started out under the name 'Explicit History' that sometimes I get the feeling that problems are created to justify its usage.
Regarding GH2, ask 20 people, get 20 different answers… I would like to see 'less management, more modelling'.
…!!
Last-minute raw idea:
Maybe … going back to that?
Empty Rhino file, Grasshopper "spying" in the background.
The user designs in Rhino, "the old way", but every command generates a "mirror" replica of the construction in Grasshopper.
Every command/input is put into Grasshopper.
By the time the object is done (better if it's not too complicated), Grasshopper already has a definition built.
If some step is broken or missed, this "mimic function" should still place a component on the canvas (left>right position related to time), so manual fixing is possible/needed, but all the components are already there, ready to be properly connected.
… anyone can expand this idea at this point…
Design in Rhino, Grasshopper 2 mimics you.
Design once, repeat with Grasshopper 2.
Random thought.
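The "spying" part at least seems feasible today; a rough sketch of a command listener in Python inside Rhino (the event and property names are from RhinoCommon as I remember them, so treat them as assumptions):

```python
# Rough sketch of the "spying" half only: record which commands the user runs,
# which a mimicking GH2 could later map to components. Event and property
# names are assumptions - check the RhinoCommon SDK docs.
import Rhino

command_history = []

def on_end_command(sender, e):
    # called by Rhino after each command finishes
    command_history.append(e.CommandEnglishName)
    print("recorded so far: {}".format(command_history))

Rhino.Commands.Command.EndCommand += on_end_command
```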
Edit:
This made me remember something that GH has always needed:
more coherence between Rhino <> Grasshopper.
And, as Grasshopper has no "click" input, like for selecting edges etc., we need the "edge" object (which contains "normal" and face-tangency data)
Exactly what I proposed a while back: to be able to reference sub-objects by clicking. I am glad other people think the same way.
Although standard GH components don't provide "clicking", they do provide automation if you use Deconstruct components (Brep, Mesh or whatever) and, from the outputs, take your pick among the sub-objects.
I think manual selection defies the idea of automation, which I think Grasshopper is all about. Manual editing, on the other hand, is well supported on the Rhino side of things.
// Rolf
From GH we have good access to all kinds of geometries.
All but edges.
To get a surface edge's normal/tangency direction, you must divide the curve, for each point find the surface's closest point, evaluate the surface, pray that it doesn't give wrong normals (which happens often and forces me to use a small circle + intersection), and build a plane.
In Rhino, edges are directly something more complex. Probably there are many methods underneath.
But they are not exposed in Grasshopper.
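For reference, the workaround described above, written out as a GhPython sketch (the 'face' and 'edge' input names are mine, and yes, the normal may still need flipping):

```python
# GhPython sketch of the workaround above: 'face' (a BrepFace) and 'edge'
# (a curve) are assumed component inputs; output 'a' is a list of planes.
import Rhino.Geometry as rg

count = 10
params = edge.DivideByCount(count, True)         # parameters along the edge

frames = []
for t in params:
    pt = edge.PointAt(t)
    ok, u, v = face.ClosestPoint(pt)             # pull the point onto the surface
    if not ok:
        continue
    normal = face.NormalAt(u, v)                 # surface normal - may need flipping
    tangent = edge.TangentAt(t)
    # plane with x along the edge and z (roughly) along the surface normal
    frames.append(rg.Plane(pt, tangent, rg.Vector3d.CrossProduct(normal, tangent)))

a = frames
```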