I’m using openNURBS to write an importer for Rhino files which extracts both NURBS surfaces and triangle meshes. Ideally, these will look identical in the end product (depending on the quality of the tessellation). Geometrically, I’m good, but I’m having trouble maintaining texture mapping for NURBS objects.
For meshes it’s typically pretty simple: the m_T array on an ON_Mesh supplies me with UV coordinates for each vertex which accurately represent the texture’s position on the object.
For NURBS objects, I don’t really have this same information. Of course, a NURBS object does trivially have UV coordinates on its surface that I can use to display a texture, but these do not correspond to the active texture the way the mesh’s UVs do.
For instance: I make a 10x10 square on the XY plane, set the material to “By Object” and add a Color texture. By default it maps to the surface stretched so that the texture exactly fills the square. So the UV coordinates in the m_T array will be something like (0,0), (1,0), (0,1), (1,1). That is, UV values lie between 0 and 1 in both directions.
Now, on the NURBS surface the UV points lie between 0 and 10 in both directions, which of course means that the texture is repeated 10 times in both directions. This corresponds to the m_S array on the ON_Mesh, incidentally.
If I change the texture’s mapping (mapping type, rotation, scale, etc.), the m_T array changes to correctly reflect the new mapping, while the NURBS domain remains unchanged.
Of course, I’m not expecting the parametrization of the surface to change due to the texture. I’m just looking for some information on how the texture is deformed and hopefully some way of mapping the NURBS domain to “texture space”.
I’m not even sure what form this would take, so I’m afraid this question might be a little broad. Maybe I’m approaching this in a completely backwards way! Any input on the topic would be most helpful.
Thanks!
- Sean