Hi everyone,
I’m working on a project where I need to project images (JPGs) onto walls (Rhino surfaces) and then “unwrap” them to export as high-resolution bitmaps for printing.
Right now, my approach in GHPython is:
- Take a surface (the wall).
- Define a projection plane (the projector position/orientation).
- For each pixel in the output image (sized from the physical dimensions of the wall × DPI):
  - Sample a 3D point on the surface (`Surface.PointAt`).
  - Project it onto the projector plane.
  - Normalize U,V to [0,1].
  - Look up the corresponding pixel in the source texture.
  - Write it into a new bitmap (using `LockBits` for performance).
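For reference, the per-pixel mapping above boils down to this plain vector math (a minimal sketch: the RhinoCommon calls like `Surface.PointAt` are left out, and all function and parameter names here are illustrative, not from my actual script):

```python
# Per-pixel mapping sketch: 3D surface point -> projector-plane UV -> source pixel.
# Plain tuples stand in for Rhino's Point3d/Vector3d; names are illustrative.

def project_to_plane_uv(pt, origin, x_axis, y_axis):
    """Return (u, v) of pt projected onto the plane spanned by x_axis/y_axis."""
    d = [pt[i] - origin[i] for i in range(3)]
    u = sum(d[i] * x_axis[i] for i in range(3))
    v = sum(d[i] * y_axis[i] for i in range(3))
    return u, v

def normalize_uv(u, v, plane_width, plane_height):
    """Map plane coordinates to [0, 1] given the projector frame's extents."""
    return u / plane_width, v / plane_height

def uv_to_pixel(u, v, img_w, img_h):
    """Clamp and convert normalized UV to integer source-pixel indices."""
    col = min(max(int(u * (img_w - 1)), 0), img_w - 1)
    row = min(max(int(v * (img_h - 1)), 0), img_h - 1)
    return col, row
```

Doing exactly this, once per output pixel, is where all the time goes.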
This works, but it is very slow. Even at 20 DPI, a wall of ~3.4 m × 1.5 m gives a bitmap of roughly 3 million pixels, and iterating point by point in Python takes minutes to run. For higher resolutions it would be completely impractical.
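One direction I've considered (pending better answers): if the surface points are sampled once into an array, the projection and texture lookup can be done in bulk with NumPy instead of a Python loop. A hedged sketch, assuming `points` and `src` are already plain arrays; note NumPy isn't available in the classic IronPython GHPython component, so this would mean Rhino 8's CPython or an external process:

```python
import numpy as np

def bake_projection(points, origin, x_axis, y_axis, extents, src):
    """Vectorized projector mapping (illustrative names throughout).

    points:  (H, W, 3) float array of 3D samples on the wall,
             e.g. precomputed once via Surface.PointAt.
    origin, x_axis, y_axis: projector plane frame as length-3 arrays.
    extents: (plane_width, plane_height) of the projector frame.
    src:     (h, w, 3) uint8 source image.
    Returns an (H, W, 3) baked bitmap.
    """
    d = points - origin                        # (H, W, 3) offsets from plane origin
    u = (d @ x_axis) / extents[0]              # (H, W) normalized plane coords
    v = (d @ y_axis) / extents[1]
    h, w = src.shape[:2]
    cols = np.clip((u * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip((v * (h - 1)).astype(int), 0, h - 1)
    return src[rows, cols]                     # fancy indexing does the lookup
```

The per-pixel Python overhead disappears; only the initial `Surface.PointAt` sampling would still loop.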
My questions:
- Is there a simpler way in Rhino/Grasshopper to bake the projected texture into a flat image directly, without looping pixel by pixel?
- Does RhinoCommon or the Rhino Render SDK expose something similar to "Bake Texture" (as in rendering engines)?
- Or would it be better to mesh the wall and rasterize triangles instead of sampling the NURBS surface at every pixel?
- Any tips on using existing Rhino commands or .NET APIs to speed this up?
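To clarify what I mean by the mesh/rasterize question: with per-vertex UVs on a mesh of the wall, each output pixel inside a triangle would get its source-texture UV by barycentric interpolation instead of a NURBS evaluation. A minimal 2D sketch of that interpolation step (names are made up, and the triangle-filling loop itself is omitted):

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p in triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w0, w1, 1.0 - w0 - w1

def interpolate_uv(p, tri_xy, tri_uv):
    """Blend the three per-vertex UVs at pixel position p using its weights."""
    w0, w1, w2 = barycentric(p, *tri_xy)
    u = w0 * tri_uv[0][0] + w1 * tri_uv[1][0] + w2 * tri_uv[2][0]
    v = w0 * tri_uv[0][1] + w1 * tri_uv[1][1] + w2 * tri_uv[2][1]
    return u, v
```

Since this is all additions and multiplies per pixel, I'd expect it to be much cheaper than calling `Surface.PointAt` everywhere; I just don't know if Rhino already does this somewhere.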
I’d really appreciate any suggestions. Right now my method works but is too slow for production use. If there is a more “native Rhino” way to export projected textures, that would be perfect.
Thanks!