The AI hype train has been rolling for some time now, but I have not yet seen a convincing tool or technique that integrates AI with 3D modelling, largely due to the lack of communication strategies between the model and the LLM.
With the recent development of MCP (Model Context Protocol), it becomes relatively simple to connect AI models to different data sources and tools, in our case Rhino. This allows us to have two-way communication between a Rhino 3d model and the AI.
Over the last weekend I built a Rhino plugin, RhinoMCP, as a proof of concept that allows AI to directly control Rhino and assist with modelling.
I made two demos to show its potential.
By providing different "tools", similar to API endpoints exposed by RhinoMCP, any AI model can understand which functions to call to get different types of jobs done. In the demo, these include get_document_info, create_object, modify_object, get_selected_objects, etc.
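To make the tool idea concrete, here is a minimal, self-contained sketch of how an MCP-style tool registry could dispatch an LLM's tool call to Rhino-side operations. This is an illustration only, not the actual RhinoMCP code: the `DOCUMENT` dict, the tool bodies, and the JSON message shape are all assumptions; a real server would call into Rhino's API instead.

```python
# Hypothetical sketch of MCP-style tool dispatch (not the real RhinoMCP code).
import json

# Fake in-memory "document" standing in for a live Rhino model.
DOCUMENT = {"name": "demo.3dm", "objects": []}

def get_document_info():
    """Return basic info about the open document."""
    return {"name": DOCUMENT["name"], "object_count": len(DOCUMENT["objects"])}

def create_object(kind, params):
    """Create an object; a real server would call Rhino's API here."""
    obj = {"id": len(DOCUMENT["objects"]), "kind": kind, "params": params}
    DOCUMENT["objects"].append(obj)
    return obj

# Registry the AI model "sees": tool name -> callable.
TOOLS = {"get_document_info": get_document_info, "create_object": create_object}

def handle_tool_call(message):
    """Dispatch a JSON tool-call message, the way an MCP server routes requests."""
    call = json.loads(message)
    return TOOLS[call["tool"]](*call.get("args", []))

# Example: the model asks to create a sphere, then queries the document.
handle_tool_call('{"tool": "create_object", "args": ["sphere", {"radius": 2.0}]}')
print(handle_tool_call('{"tool": "get_document_info"}'))
# -> {'name': 'demo.3dm', 'object_count': 1}
```

The point is that the AI never touches geometry directly; it only chooses a tool name and arguments, and the server does the rest, which is what makes the two-way communication tractable.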
Even as a minimum viable product, I already see great potential for AI to contribute to 3D modelling. I fully understand that the geometric complexity, parametric relationships and precision constraints in engineering models work against AI in various ways, so I don't expect it to create a full building from one sentence.
As I haven't actually been modelling for some time, I would like your ideas on how to pivot the development of the tool to make it more practical. In day-to-day modelling, at what moments do you think an AI assistant would make sense? What I can think of:
metadata management: read/write different attributes
filters: e.g. select parts that have attribute xx with value xx
combining tool chains / scripts: e.g. run my script1, then select the created objects and run script2
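The "filters" idea above can be sketched in a few lines. This is a toy illustration with made-up objects and attribute names; in Rhino the lookup would go through object attributes / user text rather than plain dicts.

```python
# Toy sketch of attribute-based selection (made-up data, not Rhino's API).
objects = [
    {"id": 1, "attrs": {"material": "steel", "level": "L1"}},
    {"id": 2, "attrs": {"material": "glass", "level": "L2"}},
    {"id": 3, "attrs": {"material": "steel", "level": "L2"}},
]

def select_by_attribute(objs, key, value):
    """Return ids of objects whose attribute `key` equals `value`."""
    return [o["id"] for o in objs if o["attrs"].get(key) == value]

print(select_by_attribute(objects, "material", "steel"))  # -> [1, 3]
```

Exposed as an MCP tool, a query like "select all steel parts on level L2" becomes a single structured call the LLM can make reliably, instead of the model trying to reason about geometry.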
Hello Chen, I've watched your amazing demo and would like to discuss it in the context of urban design workflows.
In the urban design field we don't need high-precision geometry; we just need a quick building massing at the concept stage (floor plates, rough curtain walls).
We expect to input some planning indices (floor area ratio, density, land use type) and site boundaries, and then a massing model would appear immediately. We could also manually adjust those outcome models, and the planning indices would update accordingly (in reverse). The complex part is that each land use type expects different building types (a residential building naturally looks different from an office tower), and each land use type might be mixed with other functions.
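The forward direction of that index-to-massing link can be shown with a toy calculation. All numbers, the simple extruded-box massing, and the floor height are illustrative assumptions, not actual urban-design rules:

```python
# Toy example: derive a box massing from planning indices (all values
# and the simple extrusion model are illustrative assumptions).
def massing_from_indices(site_area, far, coverage, floor_height=4.0):
    """Derive floors and height for a simple extruded-footprint massing.

    site_area in m^2, far = floor area ratio, coverage = footprint / site.
    """
    total_floor_area = site_area * far   # gross floor area target
    footprint = site_area * coverage     # ground-floor footprint
    floors = round(total_floor_area / footprint)
    return {"footprint_m2": footprint,
            "floors": floors,
            "height_m": floors * floor_height}

m = massing_from_indices(site_area=10_000, far=3.0, coverage=0.4)
print(m)  # 4000 m^2 footprint, 8 floors (rounded up from 7.5), 32 m tall
```

The reverse direction (adjust the massing, recompute the indices) is just this arithmetic run backwards; the genuinely hard part, as the post says, is encoding the per-land-use building typologies.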
Please let me know if you are interested in the topic above. Our office has some research funding, so maybe we could work closely on it. I'll attach some links below to help you understand, thanks!
Hello Yijie, thanks for sharing your project. It looks quite interesting. I understand that what you need is more of a design-generation tool rather than a general modelling tool, which is what I would like to focus on at the moment. However, based on the codebase we already have, you should have a clear path to implement something that fits your requirements. Looking forward to your further development!
Hi, thank you so much for sharing this tool and showing the demo. This looks truly amazing.
I am currently looking into how I could create a generative design with AI using pre-existing families from Revit: exporting them into Rhino and then doing the "generative design" part there, also via an MCP interface. (I think Rhino would be perfect for it, and Revit is not the right software.)
Basically having an area of walls in plan view, and using the pre-existing families to design something for the walls.
What I am not sure about yet is whether LLMs are enough to help with this, or whether my own model needs to be created to make that happen - still in the research phase for this.
If you have any thoughts, please let me know!
I will test your tool and try to give you feedback and ideas.
BR, Christoph
This was such a cool project! We have a Grasshopper AI tool that uses ChatGPT-like intelligence but specifically for Rhino Grasshopper, and it's called Raven!
This was ahead of its time @jessen.ch, have you kept working on this at all?
I wonder whether MCP is going to continue or be replaced by local agents that just use command-line tools. I guess the 2026 version of this project would be to give Claude Code the rhinocode CLI… so I tried exactly this.
The rhinocode CLI isn't that LLM-friendly, since its --help doesn't really list commands or anything about valid scripts. But Claude Code / Opus is generally aware enough of how to write Rhino scripts that it does OK out of the box. Here's asking it to model a rhino:
How did it know I love spheres? And inverted horns! Genius. Unfortunately the CLI doesn't get any info about whether the script ran, and can't see what it made, so it has no idea about this masterpiece at all.
I've come across a similar tool while experimenting with an AI agent for Rhino (Neospline - https://neospline.ai/), which tries to actually operate inside the model rather than sit outside generating geometry.
And now I am waiting to try it out in practice, once you're dealing with messy, real project files instead of ideal setups.
Would be great to hear how people here see these tools fitting into their workflow - helpful assistants, or still a bit too unpredictable?