How to retrieve correct UV coordinates


Recently I was implementing a geometry export plugin for Rhino 7. Everything works fine except retrieving UV coordinates. The values I'm getting don't correspond to what I see in the Rhino editor.

I'm retrieving UV coordinates via the TextureCoordinates property of the Mesh class. At first I thought I had missed some texture transform like uvwTransform on Texture, but that transform doesn't contain the right values. Maybe I missed something and I need to multiply these UVs by some transform or scale.

For example, here is how it looks:
I assigned a texture to a primitive cube and could see two repeats of my texture on the top face of that cube, so it seems like the UV coordinates should lie in a range such as (0, 2) or (1, 3), etc.
But the UV coordinates I retrieve from the API look like: bottom-left point = (0, 0), top-right point = (0.23, 0.71)

That's frustrating, and I can't figure out what I'm doing wrong. Maybe the coordinates are shuffled somehow, but that seems unlikely.

If someone has worked with this and knows the solution, please help me.

P.S. At some point I thought this bug came from Rhino, so I checked all the Rhino 7 change logs, but I still didn't find anything related to my issue. Or maybe this bug just hasn't been reported… Could that be true?

If getting UV coordinates were broken, our rendering would be broken. It works just fine; at least Raytraced and Rhino Render get the data correctly.

Here is how Raytraced/Rhino Render get this data (essentially using the render mesh of an object):


Note that the uvwTransform of a texture is an extra transform to apply to the UV coordinates in order to sample the texture correctly.
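As an illustration of that extra transform, here is a minimal plain-Python sketch. It assumes the uvwTransform behaves like a 4x4 affine matrix applied to a (u, v, w) triple before sampling; the function name is made up for illustration, not a Rhino API.

```python
# Minimal sketch: applying a texture's uvwTransform to raw UV coordinates
# before sampling. Assumes a 4x4 row-major affine matrix; illustrative
# names only, not RhinoCommon.

def apply_uvw_transform(matrix, uvw):
    """Transform a (u, v, w) triple by a 4x4 affine matrix."""
    u, v, w = uvw
    return tuple(
        matrix[row][0] * u + matrix[row][1] * v + matrix[row][2] * w + matrix[row][3]
        for row in range(3)
    )

# A transform that tiles the texture twice in U and twice in V
# (diagonal scale of 2, no offset):
tiling = [
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

print(apply_uvw_transform(tiling, (0.5, 0.25, 0.0)))  # -> (1.0, 0.5, 0.0)
```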

Thanks for your answer.

I'm extracting UV coordinates exactly as you showed in the code, but the values are still wrong.

Now I've noticed that the UV coordinates are incorrect only if the texture mapping is set to WCS/OCS, like here

I'm thinking that I should probably take this OCS transform into account for the UVs, shouldn't I?

When using WCS/OCS (both types) you should not use UV coordinates. Instead you use the vertex locations.

In Rhino, geometry is always in world space (with the exception of geometry in block instances, where you still have to apply the block instance transformation), so WCS (world coordinate system) means using the vertex locations.

OCS means that if you have an OCS frame applied to your object, then you transform the vertex locations with that OCS frame to get the final UV coordinates to use.
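That transformation can be sketched in plain Python. This is illustrative only, not RhinoCommon: the OCS frame is represented here as an origin plus orthonormal axis vectors, and for WCS the frame is simply the world identity frame.

```python
# Illustrative sketch (not RhinoCommon): for WCS mapping the texture
# coordinate is simply the world-space vertex position; for OCS it is
# the vertex position expressed in the object's OCS plane frame.

def world_to_frame(origin, xaxis, yaxis, zaxis, point):
    """Express a world-space point in the coordinate frame given by an
    origin and three orthonormal axis vectors (the OCS frame)."""
    d = [point[i] - origin[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    return (dot(d, xaxis), dot(d, yaxis), dot(d, zaxis))

# WCS: identity frame -> the coordinates are the vertex position itself.
assert world_to_frame((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
                      (3.0, 4.0, 5.0)) == (3.0, 4.0, 5.0)

# OCS: a frame moved to (1, 1, 0) -> coordinates become relative to it.
print(world_to_frame((1.0, 1.0, 0.0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
                     (3.0, 4.0, 5.0)))  # -> (2.0, 3.0, 5.0)
```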

So when a texture is set to use WCS/OCS you don't use the UV coordinates. You also don't use UV coordinates when Screen mapping is used.

UV coordinates are used when a texture mapping is set to a Mapping Channel.

For a 1x1x1 box set between (0,0,0) and (1,1,1), the default surface parametrization gives the UV coordinates. You can see what this looks like with _UVEditor:

Okay, I get your point.

But still, how can I calculate the UVs? I still need to export all the geometry, and ideally every texture should be mapped correctly after exporting.

So, as I understand it, I should use the vertex positions to calculate the UVs, right?

P.S. Thanks so much for trying to help me!

For WCS/OCS you don't. If your target software understands "generated" texture coordinates, then you need to carry information on the texture that it needs to be projected like that. In Cycles (which is used by Raytraced/Rhino Render) this is the "Generated" output of the texture coordinate node: Texture Coordinate Node — Blender Manual

So the two questions to ask are: what are you exporting to, and what does your target support?

So the main problem is that the target software that takes the exported data (I can't say the name of this application) can't produce "generated" texture coordinates. So all I can do is calculate the UVs during export. For that, as I understand it, I have to know how these "generated" UVs work and how to calculate them manually, is that right?

For the WCS/OCS box case this is how we do it:

For the WCS/OCS version the code starts here:


Thanks a lot!

It works perfectly!

I have only one minor question. I'm a rookie at C++, and I'm writing my Rhino plugin in C#, so can you explain a little bit more about the kernel_tex_fetch method? And ocs_frame is a kind of transform of the mesh or another object, am I right?

The kernel_tex_fetch method is just a way to get at the data. This is inside the Cycles renderer, where data is encoded in textures so that it can all also work on the GPU. ocs_frame is indeed the transform for OCS. I'm not sure where else it is available; I get to it in the ChangeQueue, where OcsTransform is a property on a MeshInstance. (RhinoCycles/ChangeDatabase.cs at 08a2123cc493c09034605276083e74674c9be753 · mcneel/RhinoCycles · GitHub).

In both WCS/OCS and WCS/OCS (box style), the point being shaded on the mesh is transformed with that OCS frame (transform). In the WCS/OCS (box style) case there is some further code to determine which side of the box the point is on, so that it can be mapped properly.
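The box-side selection can be sketched like this. It is a simplified illustration of the idea behind wcs_box_coord, not the actual Cycles code; the axis pairing and sign handling in the real implementation differ.

```python
# Simplified sketch of box-style WCS mapping (cf. wcs_box_coord in
# Cycles): after the point has been transformed by the OCS frame, pick
# the box side from the dominant component of the shading normal and
# use the remaining two coordinates as UV. The real implementation
# also handles signs/orientation, which is omitted here.

def box_uv(point, normal):
    """Pick the box side from the dominant normal axis and return the
    other two coordinates of the (OCS-transformed) point as UV."""
    ax = max(range(3), key=lambda i: abs(normal[i]))
    if ax == 0:      # +/-X side: use (y, z)
        return (point[1], point[2])
    elif ax == 1:    # +/-Y side: use (x, z)
        return (point[0], point[2])
    else:            # +/-Z side: use (x, y)
        return (point[0], point[1])

# Point on the top face of a unit box, normal pointing up -> UV from (x, y).
print(box_uv((0.25, 0.75, 1.0), (0.0, 0.0, 1.0)))  # -> (0.25, 0.75)
```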

Yeah, I saw that OcsTransform property, but I couldn't find any way to get it or the MeshInstance collection. Anyway, I tested the UV calculation with the approach you suggested from the wcs_box_coord method, and it works exactly as I want, so I guess my current issue is solved now. I wouldn't have solved it without your explanations and code references. Thank you so much!

As mentioned, MeshInstance is part of the ChangeQueue mechanism, which is primarily intended for integrating render engines with Rhino. But it can also be used to write an exporter. If you are exporting to a mesh format, this may actually be a good way to do it, since the ChangeQueue mechanism will handle many of the intricacies for you, like handling block instances and so on.

I suppose the OCS frame should actually also be available through RhinoObject, but it currently doesn't appear to be. I logged an issue for this: RH-64491 Provide access to OCS transform outside of ChangeQueue.

That's awesome! It would be great if they provided an API for getting the OCS transform from RhinoObject.
So I will look forward to this issue being resolved. Thanks again.

Hi @mamarchenko ,

it took a while for me to get back to this issue.

After some further investigation I realized that it is actually already possible to retrieve the OCS mapping.

The OCS mapping is actually just a planar mapping on mapping channel 100000 (one hundred thousand). The following Python code shows how to query objects for the OCS mapping:

import scriptcontext as sc

for ob in sc.doc.Objects:
    tm = ob.GetTextureMapping(100000)  # OCS mapping is always on this channel
    if tm:
        print "object", ob.Id, "has OCS mapping:"
        print "\t", tm.MappingType
        res = tm.TryGetMappingPlane()  # the mapping is based on a plane
        if res[0]:
            plane = res[1]
            print "\t", plane
    else:
        print "object", ob.Id, "has no OCS mapping"
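Once you have the mapping plane, computing a vertex's UV from it is conceptually a projection onto that plane. Here is a plain-Python sketch of the idea (not the RhinoCommon TextureMapping API; the function name and frame representation are illustrative only):

```python
# Conceptual sketch (not the RhinoCommon API): evaluating a planar
# mapping for a vertex amounts to projecting the point onto the
# mapping plane and reading off the in-plane coordinates.

def planar_uv(origin, xaxis, yaxis, point):
    """UV of a world-space point under a planar mapping given by the
    plane's origin and orthonormal x/y axis vectors."""
    d = [point[i] - origin[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    return (dot(d, xaxis), dot(d, yaxis))

# A plane at the world origin aligned with the world axes simply drops z:
print(planar_uv((0.0, 0.0, 0.0), (1, 0, 0), (0, 1, 0), (0.25, 0.75, 1.0)))
# -> (0.25, 0.75)
```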