There are no objects close to the origin so this shouldn’t happen. Is it a bug? The mesh itself handles just fine at this location. IMO it’s important that Rhino handles this, with so many architects and landscape architects as customers. Sometimes we just need to work with real world locations. Could double precision be added to texture handling too?
@Holo I’ve logged this bug: RH-68449 Texture distortion increases when object moves further away from origin
If you’re seeing this with objects that have planar, spherical, box or cylindrical mapping there is a workaround that is not in any way ideal. You can convert your primitive mapping into a custom mesh mapping by running _TestRenderMeshMapping. This makes objects heavier, so using it on thousands of meshes might cause too big an increase in file size and reduce performance.
The fix has been re-scheduled to the next Rhino version.
This bug has to do with the texture mappings that are handled by shaders in the display. So switching to Surface or Custom mapping fixes the issue. There’s actually a new workaround available in Rhino 8 that uses lightweight Surface mapping instead of the heavier Custom mapping:
I recently added a command that sets up mesh object surface parameters from another texture mapping. You can run SetMeshSurfaceParameters on those meshes and then switch their texture mapping type to Surface.
By “Custom mapping” I meant the mapping that is called “Custom”. Planar mapping uses a plane as the mapping primitive and Custom mapping uses a mesh that can be of any shape.
For historical reasons mesh objects in Rhino can reliably store per-vertex texture coordinates only in the vertex surface parameters. Surface parameters were originally added to store surface evaluation data for render meshes created from NURBS surfaces. Therefore they are called surface parameters. Surface mapping is a texture mapping that takes the surface parameters and creates texture coordinates - and this happens per-vertex without any additional mapping primitives.
I can help you set up Surface mapping programmatically as well. Are you using RhinoCommon or the Rhino C++ SDK? There should be a way to do that for Rhino 7 as well.
I believe it was postponed for a reason. @DavidEranen would probably know more.
This is an unfortunate side-effect of doing the primitive mapping projections in the shader. The precision is simply not the same as when we calculate per-vertex mappings, because the shader uses floats (32-bit) while the per-vertex mapping does the projection calculation using doubles (64-bit).
I don’t see any quick-fix to this problem, except for programmatically setting the UVs on the object and removing the planar mapping, which should be fairly simple.
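To put a number on that difference: at coordinates around 1 500 000 a 32-bit float can only resolve steps of 0.125 units, which is plenty to make a texture visibly swim. A quick, purely illustrative check in plain Python, round-tripping a coordinate through single precision with the struct module:

import struct

# Round-trip a world coordinate through a 32-bit float to see the shader's resolution
x = 1500000.3
x32 = struct.unpack('f', struct.pack('f', x))[0]
print(x32)       # 1500000.25 - representable values are 0.125 units apart at this magnitude
print(x - x32)   # ~0.05 units of error in a single coordinate

A double at the same magnitude still resolves steps of roughly 2e-10 units, which is why the per-vertex (CPU) path doesn’t show the problem.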
Hmm… so what you are saying is that you could actually programmatically replace Rhino’s planar mapping function with a “Custom mapping” object that behind the scenes uses a rectangular mesh to map the texture? And that way trick Rhino into using double-precision floating point?
Wouldn’t that solve these issues for us, so we wouldn’t have to do this workaround manually? Or does double-precision floating point take so much more time and resources to handle that it would cause issues in other scenarios?
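Something like this is what I imagine, just as a sketch: build a flat rectangle mesh over the object’s bounding box, give it 0–1 texture coordinates and use it as the mapping primitive. I’m assuming here that RhinoCommon exposes TextureMapping.CreateCustomMeshMapping and that mapping channel 1 is the one in use:

import Rhino
import rhinoscriptsyntax as rs

def apply_custom_planar_mesh_mapping(mesh_id, channel=1):
    bbox = rs.BoundingBox(mesh_id)
    # Rectangle mesh in the world XY plane covering the object, corners 0-3 of the bounding box
    map_mesh = Rhino.Geometry.Mesh()
    uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    for corner, (u, v) in zip(bbox[:4], uvs):
        map_mesh.Vertices.Add(corner.X, corner.Y, bbox[0].Z)
        map_mesh.TextureCoordinates.Add(u, v)
    map_mesh.Faces.AddFace(0, 1, 2, 3)
    map_mesh.Normals.ComputeNormals()
    # Use the rectangle as a custom mesh mapping primitive on the given channel
    mapping = Rhino.Geometry.TextureMapping.CreateCustomMeshMapping(map_mesh)
    obj = rs.coercerhinoobject(mesh_id, True, True)
    return obj.SetTextureMapping(channel, mapping) != 0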
Oh, and since you are talking about future updates:
You really need to stop calculating things in world space and start doing it in local spaces. This should be done on a system level so everything will be affected.
Take a look at this. These two patches are made from the same curve:
The left patch is calculated far from the origin; the right is calculated close to the origin by moving the curve by a fixed vector and then moving the result back with the inverted vector.
The results SHOULD be the same, but they are affected by floating point precision, which frankly should not be an issue at all.
XYZ = 1 500 000, 500 000, 0
I understand that X = 1 500 000 is already 7 digits so there is not much room for decimals, but there is no reason for doing the calculations all the way out there, is there?
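For reference, this is roughly how the move-compute-move-back trick can be scripted today; a sketch only, using Brep.CreatePatch on a translated copy of the curve, with arbitrary span counts:

import Rhino
import rhinoscriptsyntax as rs
import scriptcontext as sc

def patch_near_origin(curve_id, u_spans=10, v_spans=10):
    curve = rs.coercecurve(curve_id, -1, True)
    offset = Rhino.Geometry.Vector3d(curve.PointAtStart)   # translation that brings the curve near the origin
    near_origin = curve.DuplicateCurve()
    near_origin.Translate(-offset)
    tol = sc.doc.ModelAbsoluteTolerance
    brep = Rhino.Geometry.Brep.CreatePatch([near_origin], u_spans, v_spans, tol)
    if not brep:
        return None
    brep.Translate(offset)                                  # move the result back to the real-world location
    return sc.doc.Objects.AddBrep(brep)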
@DavidEranen and @Jussi_Aaltonen I just made a remesher that uses TextureCoordinates instead as that seems easier to do, but when I use sc.doc.Objects.Replace(mesh_id, newMesh) these are discarded, so I started adding the data I needed instead. Is it possible to use sc.doc.Objects.Replace without losing the texture coordinates? Or is Replace ONLY for geometry?
import rhinoscriptsyntax as rs
import Rhino
import scriptcontext as sc

def planarMapping(mesh_id):
    rs.EnableRedraw(False)
    ### MAKE NEW MESH
    mesh = rs.coercemesh(mesh_id)
    newMesh = Rhino.Geometry.Mesh()
    for vertex in mesh.Vertices:
        newMesh.Vertices.Add(vertex)
    for face in mesh.Faces:
        newMesh.Faces.AddFace(face)
    ### GET BOUNDING BOX
    bbox = rs.BoundingBox(mesh)
    length = bbox[1][0] - bbox[0][0]
    height = bbox[3][1] - bbox[0][1]
    ### CALCULATE TEXTURE COORDINATES
    for vertex in mesh.Vertices:
        u = (vertex.X - bbox[0][0]) / length
        v = (vertex.Y - bbox[0][1]) / height
        newMesh.TextureCoordinates.Add(u, v)
    ### FIX MESH
    newMesh.RebuildNormals()
    newMesh.Faces.CullDegenerateFaces()
    newMesh.Vertices.CullUnused()
    newMesh.Vertices.CombineIdentical(True, True)
    ### REPLACE MESH
    newMesh_id = sc.doc.Objects.AddMesh(newMesh)
    layer = rs.ObjectLayer(mesh_id)
    rs.ObjectLayer(newMesh_id, layer)
    materialsource = rs.ObjectMaterialSource(mesh_id)
    rs.ObjectMaterialSource(newMesh_id, materialsource)
    material = rs.ObjectMaterialIndex(mesh_id)
    rs.ObjectMaterialIndex(newMesh_id, material)
    color = rs.ObjectColor(mesh_id)
    rs.ObjectColor(newMesh_id, color)
    ### DELETE OLD MESH
    rs.DeleteObject(mesh_id)
    rs.SelectObject(newMesh_id)
    rs.EnableRedraw(True)

mesh_id = rs.GetObject("Select Mesh to planarmap", rs.filter.mesh, preselect=True)
if mesh_id:
    planarMapping(mesh_id)
@Holo Here’s a C# sample showing how to set up surface parameters and surface parameter mapping:
bool SetSurfaceParametersAndSurfaceMapping(RhinoDoc doc, Guid objectId, int mappingChannel)
{
  if (doc.Objects.FindId(objectId) is MeshObject rhinoObject)
  {
    Mesh newMesh = rhinoObject.MeshGeometry.DuplicateMesh();
    if (null != newMesh)
    {
      // Get the texture mapping and object transform from the given channel
      TextureMapping mapping = rhinoObject.GetTextureMapping(mappingChannel, out Transform objectTransform);
      if (null == mapping)
      {
        // There is no texture mapping on the given channel, nothing can be done.
        return false;
      }
      else
      {
        // Apply texture mapping to the new mesh - this sets up the texture coordinates
        newMesh.SetTextureCoordinates(mapping, objectTransform, false);
        // Set surface parameters from the texture coordinates
        newMesh.SetSurfaceParametersFromTextureCoordinates();
        // Replace mesh object geometry with the one that has surface parameters set according to the texture mapping
        if (!doc.Objects.Replace(objectId, newMesh))
        {
          // Something wrong
          return false;
        }
        // Find the modified mesh object
        if (doc.Objects.FindId(objectId) is MeshObject modifiedRhinoObject)
        {
          if (0 == modifiedRhinoObject.SetTextureMapping(mappingChannel, TextureMapping.CreateSurfaceParameterMapping(), Transform.Identity))
          {
            // For some reason the mapping could not be set on the object
            return false;
          }
        }
        else
        {
          // Something wrong
          return false;
        }
      }
    }
  }
  else
  {
    // No object found with the given id.
    return false;
  }
  return true;
}
Hm, I don’t really understand that. It seems like a good approach though, so what doesn’t work consistently?
I download 3D height data from one server and aerial images from another, and then map them onto the meshes based on georeferenced info. Planar mapping was an OK approach, but it has the danger of being altered by the user, so if there is a better way of mapping the vertices then that would be interesting.
I thought the unwrapping of a mesh changes it to use texture coordinates for the vertices, isn’t that right? Is there something else I should do instead? (Is that what you tried to explain to me in the post above?)
The ON_Mesh::m_T array that is exposed as Mesh.TextureCoordinates in RhinoCommon has not been used the way it was meant to be. This may have started happening already in Rhino 4, where texture mappings were introduced and texture coordinates were no longer always based on the NURBS surface parameters. It’s still a very common assumption that “correct texture coordinates” exist for all objects and are always available in ON_Mesh::m_T. However objects may use multiple texture mappings, and materials can specify other projections as well.
The principle these days is that a texture mapping defines the vertex texture coordinates. By default all objects use surface parameter mapping, where texture coordinates are computed per-vertex from the so-called surface parameters. Unwrapped and UV-edited objects use a custom mesh primitive mapping, where texture coordinates are projected from a mesh that is stored in the texture mapping. Special types of texture mappings are projections like WCS or environment.
If you’re working on mesh objects and want a lightweight way to apply your own texture mapping, then the best option is surface mapping.
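Since the script in this thread is Python, here is roughly the C# recipe above translated to RhinoCommon from Python; a sketch only, where mapping channel 1 and the XY planar projection over the bounding box are my assumptions based on the georeferencing description:

import Rhino
import rhinoscriptsyntax as rs
import scriptcontext as sc

def set_planar_surface_mapping(mesh_id, channel=1):
    new_mesh = rs.coercemesh(mesh_id, True).DuplicateMesh()
    bbox = new_mesh.GetBoundingBox(True)
    # Planar mapping primitive covering the mesh footprint in world XY
    plane = Rhino.Geometry.Plane(bbox.Min, Rhino.Geometry.Vector3d.ZAxis)
    dx = Rhino.Geometry.Interval(0.0, bbox.Max.X - bbox.Min.X)
    dy = Rhino.Geometry.Interval(0.0, bbox.Max.Y - bbox.Min.Y)
    dz = Rhino.Geometry.Interval(0.0, max(bbox.Max.Z - bbox.Min.Z, 1.0))
    mapping = Rhino.Geometry.TextureMapping.CreatePlaneMapping(plane, dx, dy, dz)
    # Bake the projection into per-vertex texture coordinates (computed in doubles),
    # then copy them into the surface parameters, as in the C# sample
    new_mesh.SetTextureCoordinates(mapping, Rhino.Geometry.Transform.Identity, False)
    new_mesh.SetSurfaceParametersFromTextureCoordinates()
    if not sc.doc.Objects.Replace(mesh_id, new_mesh):
        return False
    # Switch the object to Surface (surface parameter) mapping on the same channel
    modified_obj = sc.doc.Objects.FindId(mesh_id)
    surface_mapping = Rhino.Geometry.TextureMapping.CreateSurfaceParameterMapping()
    return modified_obj.SetTextureMapping(channel, surface_mapping, Rhino.Geometry.Transform.Identity) != 0

After this the object should render the same near or far from the origin, because the UVs are stored per vertex in doubles instead of being projected in the shader.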