Rhino Inside in Blender

from this:

It seems like it should be possible. Is anyone working on this?

Nathan gave it a quick shot, but had some problems with Rhino’s OpenGL initialization stepping on Blender’s OpenGL state. I think this just got set aside until he gets out from under his remaining list of Mac V6 items to finish off.


Hi @nathanletwory,

Does it have a github repo?

But as @stevebaer mentioned, I haven't been able to get it working just yet, and it isn't the highest item on my to-do list either (which is what @stevebaer also said). I hope to get to work on it before summer.


It may also require us to make some changes to Rhino itself, so I don't think there is much that people besides myself and Nathan can really do to get this working yet.


@stevebaer, @nathanletwory,

How does that work? Do you need to create a wrapper for each Blender command that returns its RhinoCommon counterpart, or a single wrapper applied with decorators?

similar to this:

def wrapper(func):
    # wrap func: print a message, call it, and double its result
    def foo(*args, **kwargs):
        print("This is the result of the wrapped function.")
        return func(*args, **kwargs) * 2
    return foo

@wrapper  # decorated
def myfoo(a, b):
    return a + b

No, more like the import_3dm code and the sample code on the page for rhinoinside for CPython.

I was thinking about the AutoCAD+Grasshopper real-time link, where whatever you create in Blender would be visible in Rhino. How would that be done without a wrapper? I'm still very far from understanding rhino-inside :frowning:

See my (what seems to be a monologue) thread here:

In CPython, and by extension Blender, the point is to be able to access the geometry functions that Rhino provides through its RhinoCommon API. Rhino Inside is the glue to do this.

All the computation and geometry types stay in Rhino, but you can now use the RhinoCommon API to utilize that in Blender. You could for instance extend the existing Blender mesh data type with properties that are useful to feed into Rhino, then get back the new mesh based on the geometry calculated by Rhino and commit that to the Blender mesh instance you’re working on.

You also get to access intersection code, surface evaluation, etc that doesn’t necessarily exist in Blender, especially for those geometry types that are not native to Blender.
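In plain CPython that looks roughly like the sample on the rhinoinside package page. A minimal sketch (the curve calls are just ordinary RhinoCommon usage, nothing Blender-specific):

import rhinoinside
rhinoinside.load()  # load Rhino into this CPython process (64-bit CPython on Windows)
import System
import Rhino

# build a control point curve with RhinoCommon and ask Rhino for its length
pts = System.Collections.Generic.List[Rhino.Geometry.Point3d]()
pts.Add(Rhino.Geometry.Point3d(0, 0, 0))
pts.Add(Rhino.Geometry.Point3d(1, 0, 0))
pts.Add(Rhino.Geometry.Point3d(1.5, 2, 0))
crv = Rhino.Geometry.Curve.CreateControlPointCurve(pts, 3)
print(crv.GetLength())

Inside Blender the same RhinoCommon calls apply; you just feed them data taken from Blender's own types.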


Using Rhino commands directly in the Blender console instead of linking the corresponding types?

That’s cool, could save a lot of coding :slight_smile:

But what if the application doesn't have a Python engine embedded? Do you use wrappers then?

How would you use intersections on Blender's native mesh type? How would you extend the type to be usable in RhinoCommon?

Wrappers you'll use wherever you feel you need them. Rhino Inside itself is essentially a wrapper; the Python package needs rhino3dm, which is the CPython wrapper for RhinoCommon types.

Convert the Blender mesh to one that Rhino understands. If the Blender mesh object has a subdivision surface modifier, one could convert it to a Rhino SubD object.

Blender has several NURBS object types; those could be converted to the format Rhino wants.

Results would then be converted back to something Blender understands: a Bezier curve that represents the intersection curve given by Rhino, etc.

The extension would be the ability to save the properties necessary to make the right decisions (it is trivial to add new properties to Blender data types that are understood by Blender's sDNA format).
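For example, something like this with the standard bpy.props mechanism (the property name here is made up):

import bpy

# hypothetical flag marking a Blender mesh for conversion to a Rhino SubD
bpy.types.Mesh.rhino_use_subd = bpy.props.BoolProperty(
    name="Convert to Rhino SubD",
    default=False,
)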

Making Rhino understand Blender data and vice versa is mostly just conversion - transform from one format to the other.
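Roughly like this, as a sketch only (it assumes Rhino Inside has already been loaded so the Rhino namespace is importable; the function name is made up):

import Rhino  # available once Rhino Inside has been loaded

def blender_mesh_to_rhino(obj):
    """Convert a Blender mesh object to a Rhino.Geometry.Mesh."""
    data = obj.data
    rmesh = Rhino.Geometry.Mesh()
    for v in data.vertices:
        rmesh.Vertices.Add(v.co.x, v.co.y, v.co.z)
    for poly in data.polygons:
        idx = list(poly.vertices)
        if len(idx) == 3:
            rmesh.Faces.AddFace(idx[0], idx[1], idx[2])
        elif len(idx) == 4:
            rmesh.Faces.AddFace(idx[0], idx[1], idx[2], idx[3])
        # ngons would need to be triangulated first
    rmesh.Normals.ComputeNormals()
    rmesh.Compact()
    return rmesh

Going the other way is the same idea in reverse: read vertices and faces from the Rhino mesh and hand them to Blender, which is what the import_3dm code further down does.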

Is this how it is supposed to work?

This would be entered in the Blender console (hypothetically; I don't have Blender installed here):

# at start: import rhino-inside and load it
import rhinoscriptsyntax as rs

def add_mesh():
    # build a simple 3x3 vertex grid and eight triangular faces in Rhino
    vertices = []
    vertices.append((0.0, 0.0, 0.0))
    vertices.append((5.0, 0.0, 0.0))
    vertices.append((10.0, 0.0, 0.0))
    vertices.append((0.0, 5.0, 0.0))
    vertices.append((5.0, 5.0, 0.0))
    vertices.append((10.0, 5.0, 0.0))
    vertices.append((0.0, 10.0, 0.0))
    vertices.append((5.0, 10.0, 0.0))
    vertices.append((10.0, 10.0, 0.0))
    faceVertices = []
    faceVertices.append((0, 1, 4, 4))
    faceVertices.append((2, 4, 1, 1))
    faceVertices.append((0, 4, 3, 3))
    faceVertices.append((2, 5, 4, 4))
    faceVertices.append((3, 4, 6, 6))
    faceVertices.append((5, 8, 4, 4))
    faceVertices.append((6, 4, 7, 7))
    faceVertices.append((8, 7, 4, 4))
    # return the identifier so the caller can pass it to add_blender_mesh
    return rs.AddMesh(vertices, faceVertices)

def add_blender_mesh(rhinomesh=None):
    import bpy, bmesh
    obj = bpy.context.object
    bm = bmesh.from_edit_mesh(obj.data)
    if rhinomesh is None:
        rhobj = rs.GetObject("Select mesh", rs.filter.mesh)
    else:
        rhobj = rhinomesh
    # copy the Rhino mesh vertices into the BMesh
    new_verts = []
    vertices = rs.MeshVertices(rhobj)
    if vertices:
        for vert in vertices:
            new_verts.append(bm.verts.new((vert.X, vert.Y, vert.Z)))
    # copy the Rhino faces; drop the repeated index Rhino uses for triangles
    for face in rs.MeshFaceVertices(rhobj):
        bm.faces.new([new_verts[i] for i in dict.fromkeys(face)])
    bmesh.update_edit_mesh(obj.data)


if __name__ == "__main__":
    rhinomesh = add_mesh()
    add_blender_mesh(rhinomesh)

This won’t work from inside Blender I think.

You'll be using the RhinoCommon API directly.

Converting goes like so:

import rhino3dm as r3d
from . import utils


def add_object(context, name, origname, id, verts, faces, layer, rhinomat):
    """
    Add a new object with given mesh data, link to
    collection given by layer
    """
    mesh = context.blend_data.meshes.new(name=name)
    mesh.from_pydata(verts, [], faces)
    mesh.materials.append(rhinomat)
    ob = utils.get_iddata(context.blend_data.objects, id, origname, mesh)
    # Rhino data is all in world space, so add object at 0,0,0
    ob.location = (0.0, 0.0, 0.0)
    try:
        layer.objects.link(ob)
    except Exception:
        pass


def import_render_mesh(og, context, n, Name, Id, layer, rhinomat):
    # concatenate all meshes from all (brep) faces,
    # adjust vertex indices for faces accordingly
    # first get all render meshes
    if og.ObjectType == r3d.ObjectType.Extrusion:
        msh = [og.GetMesh(r3d.MeshType.Any)]
    elif og.ObjectType == r3d.ObjectType.Mesh:
        msh = [og]
    elif og.ObjectType == r3d.ObjectType.Brep:
        msh = [og.Faces[f].GetMesh(r3d.MeshType.Any) for f in range(len(og.Faces)) if type(og.Faces[f])!=list]
    fidx = 0
    faces = []
    vertices = []
    # now add all faces and vertices to the main lists
    for m in msh:
        if not m:
            continue
        faces.extend([list(map(lambda x: x + fidx, m.Faces[f])) for f in range(len(m.Faces))])
        fidx = fidx + len(m.Vertices)
        vertices.extend([(m.Vertices[v].X, m.Vertices[v].Y, m.Vertices[v].Z) for v in range(len(m.Vertices))])
    # done, now add object to blender
    add_object(context, n, Name, Id, vertices, faces, layer, rhinomat)

(from the repository)


I think I understood the approach.

I have never scripted Blender before and I don't know the API. Perhaps this is a good opportunity to take a look. :wink: Making some fluid simulations on Rhino objects is a tempting challenge.

But I fear this is not something to be done using rhino-inside; I need Blender capabilities inside the Rhino viewport.

You can let Blender do the fluid sim. The frames are written to disk as (compressed) OBJ files. You could write a plug-in that loads and displays those objects, swapping them based on a timer so it looks like It Just Works inside Rhino.
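Just to sketch the idea in Python (names and the timer hookup are up to you; in a real plug-in you'd do the same thing in C#):

import rhinoscriptsyntax as rs

def show_frame(obj_path, previous_ids=None):
    # delete the previous frame's objects, then import the next OBJ frame
    if previous_ids:
        rs.DeleteObjects(previous_ids)
    rs.Command('-_Import "{}" _Enter'.format(obj_path), echo=False)
    # remember what was just imported so the next call can replace it
    return rs.LastCreatedObjects()

Call that from a timer with the next frame's file each tick, keeping the returned identifiers around for the following call.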

:frowning: I have to do this using C# :woozy_face:

Don’t worry, it’ll only hone your skills.

In case you feel like reading the fluid simulator code used in Blender (it's GPL, so don't read it if you feel you then can't write your own fluid sim code, but you know, IANAL):

That means if I use any piece of the code, whatever I develop has to be open source, right?

Essentially, but… (you know the acronym)