Local MCP Server to drive Rhino.Inside.Revit workflows with AI?

Hi everyone,

I’ve been experimenting with the new Model Context Protocol (MCP) standard that allows LLM clients (like Claude Desktop) to connect to local tools and data.

I have a specific workflow in mind and wanted to get the community’s thoughts on the best architecture to achieve it using Rhino.Inside.Revit.

The Concept: I want to run a local MCP server that acts as a bridge between an LLM and the active Revit/Rhino.Inside session.

The Workflow:

  1. Prompt: I ask the AI (e.g., via Claude Desktop): “Select all Windows in the active Revit view and transfer their geometry to the Rhino file.”

  2. MCP Action: The MCP server interprets this intent and triggers a function.

  3. Execution: That function calls the Revit API (via Rhino.Inside) to select the elements, converts the geometry using RhinoCommon, and bakes it into the active Rhino doc.
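To make the three steps concrete, here is a minimal sketch of the request-handling pipeline, with plain-Python stubs standing in for the Revit API and RhinoCommon calls. All of the function names (`select_windows`, `convert_to_rhino`, `bake_to_doc`, the `transfer_windows` tool name) and the JSON message shape are made up for illustration, not part of any real API:

```python
import json

def select_windows(view="active"):
    # Stub: a real implementation would use a FilteredElementCollector
    # on the Windows category in the active view via the Revit API.
    return [{"id": 101, "category": "Windows"},
            {"id": 102, "category": "Windows"}]

def convert_to_rhino(elements):
    # Stub: a real implementation would run each element's geometry
    # through Rhino.Inside.Revit's Revit-to-Rhino geometry converters.
    return [f"brep_{e['id']}" for e in elements]

def bake_to_doc(breps):
    # Stub: a real implementation would add the converted objects
    # to the active RhinoDoc. Here we just report how many were baked.
    return len(breps)

def handle_request(raw):
    """Entry point an MCP tool handler would call: JSON in, JSON out."""
    req = json.loads(raw)
    if req.get("tool") == "transfer_windows":
        elements = select_windows(req.get("view", "active"))
        breps = convert_to_rhino(elements)
        baked = bake_to_doc(breps)
        return json.dumps({"status": "ok", "baked": baked})
    return json.dumps({"status": "error", "message": "unknown tool"})

print(handle_request('{"tool": "transfer_windows"}'))
```

The point of the sketch is the seam: the MCP server only ever sees `handle_request`, so the same structure works whether the Revit calls live in-process or behind IPC.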

The Technical Question: Since Revit and Rhino are desktop applications (not easy-to-query web APIs), what is the best way to “expose” the R.I.R instance to a local MCP server?

  • Option A: Should the MCP server run inside the Revit process (e.g., as a C# Add-in or Python script) listening for requests?

  • Option B: Should the MCP server be an external Python process that communicates with Revit via IPC or a local socket?
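For Option B, the IPC layer can be as simple as newline-delimited JSON over a local TCP socket. Below is a runnable sketch of that handshake: `revit_side_listener` stands in for a listener that would actually live inside the Revit process (started by an add-in or a Rhino.Inside script), and `mcp_side_call` is what an MCP tool handler in the external Python process would do. The port number and message shape are arbitrary assumptions:

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 5577  # assumed local port

# Bind before starting the client so there is no accept/connect race.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)

def revit_side_listener():
    """Would run inside the Revit process; handles one request here."""
    conn, _ = srv.accept()
    with conn:
        req = json.loads(conn.makefile("r").readline())
        # A real add-in must marshal this onto Revit's UI thread
        # (e.g. via an ExternalEvent) before touching the Revit API;
        # this stub just echoes the tool name back.
        reply = {"status": "ok", "echo": req.get("tool")}
        conn.sendall((json.dumps(reply) + "\n").encode())

def mcp_side_call(tool, **params):
    """What an MCP tool handler in the external process would do."""
    with socket.create_connection((HOST, PORT)) as c:
        msg = {"tool": tool, "params": params}
        c.sendall((json.dumps(msg) + "\n").encode())
        return json.loads(c.makefile("r").readline())

t = threading.Thread(target=revit_side_listener)
t.start()
resp = mcp_side_call("transfer_windows", view="active")
t.join()
srv.close()
print(resp)
```

One caveat worth stressing for either option: the Revit API is not thread-safe, so whatever transport you choose, the listener has to hand the work off to Revit's UI thread rather than calling the API directly from the socket thread.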

Has anyone here experimented with hooking an external AI agent or local server directly into the R.I.R context to trigger Grasshopper definitions or scripts?

Any pointers on where to start with the architecture would be appreciated!

Thanks.

@mahmoud.ramdane We are currently working on an LLM tool that uses GPT to create Grasshopper workflows.

What you are aiming for can be done using Option A.
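To sketch what Option A could look like in practice (purely illustrative, with an assumed port and route, and a stub where the real Revit work would go): the add-in starts a tiny HTTP endpoint on a background thread inside the Revit process, and the external MCP server forwards tool calls to it.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RevitBridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        req = json.loads(body)
        # In a real add-in, the request must be queued onto Revit's
        # UI thread (ExternalEvent / Idling) before any API call.
        # Here we just acknowledge the tool name.
        out = json.dumps({"status": "ok", "tool": req.get("tool")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):
        pass  # keep the console quiet

# The add-in would start this when Revit loads.
server = HTTPServer(("127.0.0.1", 5578), RevitBridgeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# What the MCP server's tool handler would send:
req = urllib.request.Request(
    "http://127.0.0.1:5578/",
    data=json.dumps({"tool": "transfer_windows"}).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
print(resp)
```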


First impressions - please keep in mind this comment is written after one 15-minute session.

First, big kudos for tackling MCP and Rhino - thank you. This is a monster: so many tools that I had to create a Notion reference db/page just to navigate them.

So far I have not had a natural-language modeling experience yet; perhaps that was not your intent and I am missing the point. In a perfect world I could say "make a saltbox house massing, 20' x 40' x 30' tall" and get a simple closed polysurface. But even simple stuff seems hard. For example, when I ask for a 10' x 10' x 5' box, it does not take into account that MCP conveys bare numbers, not units: if my Rhino file is in inch units, I get a 10 x 10 x 5 inch box. The experience is "I have to figure it out" - figure out where it might get tripped up beforehand and then tell the LLM exactly what I want. It's a little bit like driving my mouse by typing or speaking; I have lost the fluidity.

Do you have any tutorial videos? I look forward to seeing how this develops.