I’d like to know if there’s a logical way for Grasshopper to understand which faces (of meshes) in a model, and which objects, are visible and which ones are occluded (by other meshes).
The goal is to have an input model made out of many parts, and create a definition that extracts and splits away all the mesh faces that are not visible directly from outside the model.
So if I have a model like this one:
I want to cull away all the faces that will not be visible in a rendering at any possible angle, like this:
Is there a way to achieve this computationally?
Rhino file attached for reference. I also duplicated only the desired output, for clarity:
removing_hidden_mesh_faces_Q_gf_200918.3dm (3.6 MB)
Per mesh face sounds pretty difficult, but if I had to solve this I’d probably start with the OpenGL depth buffer. Draw the scene once into a z-buffer, then check whether the corners of each face lie on or behind the buffer.
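To make the idea concrete, here’s a toy, view-specific sketch of that depth-buffer test in plain Python (not the Rhino or OpenGL API; the axis-aligned faces, resolution, and scene extents are made-up values for illustration):

```python
# Toy orthographic software z-buffer. Faces are axis-aligned squares
# given as (xmin, ymin, xmax, ymax, z), viewed along -Z, so a smaller
# z value is closer to the camera.

RES = 32          # buffer resolution (assumed)
EXTENT = 10.0     # scene spans 0..EXTENT in x and y (assumed)
EPS = 1e-6

def rasterize(faces):
    """Fill a RES x RES depth buffer with the nearest z per pixel."""
    zbuf = [[float("inf")] * RES for _ in range(RES)]
    for (x0, y0, x1, y1, z) in faces:
        for j in range(RES):
            for i in range(RES):
                px = (i + 0.5) / RES * EXTENT
                py = (j + 0.5) / RES * EXTENT
                if x0 <= px <= x1 and y0 <= py <= y1:
                    zbuf[j][i] = min(zbuf[j][i], z)
    return zbuf

def face_visible(zbuf, face):
    """A face counts as visible if any pixel it covers stores a depth
    no nearer than the face itself (i.e. the face is on the buffer)."""
    x0, y0, x1, y1, z = face
    for j in range(RES):
        for i in range(RES):
            px = (i + 0.5) / RES * EXTENT
            py = (j + 0.5) / RES * EXTENT
            if x0 <= px <= x1 and y0 <= py <= y1:
                if z <= zbuf[j][i] + EPS:
                    return True
    return False

faces = [
    (1, 1, 9, 9, 5.0),   # large near face
    (3, 3, 6, 6, 8.0),   # small face entirely behind it
]
zbuf = rasterize(faces)
print(face_visible(zbuf, faces[0]))  # True: it is the nearest surface
print(face_visible(zbuf, faces[1]))  # False: occluded at every pixel
```

A real implementation would read the GPU depth buffer back instead of rasterizing in Python, but the test per face corner is the same comparison.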
As for visible objects, use a similar approach. Draw each object in a specific solid colour, then afterwards see which colours are present in the image.
These aren’t exactly accurate methods, but maybe good enough? They’ll be much faster than performing gazillions of mesh ray intersections.
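The per-object colour trick can be sketched the same way (again plain Python, not actual OpenGL; the object boxes and resolution are illustrative assumptions):

```python
# Each object is "drawn" in its own ID; the nearest object wins each
# pixel, and whichever IDs survive in the buffer are visible.

RES = 16          # buffer resolution (assumed)
EXTENT = 10.0     # scene extent in x and y (assumed)

def id_buffer(objects):
    """objects: dict id -> (xmin, ymin, xmax, ymax, z), viewed along -Z."""
    buf = [[None] * RES for _ in range(RES)]
    depth = [[float("inf")] * RES for _ in range(RES)]
    for oid, (x0, y0, x1, y1, z) in objects.items():
        for j in range(RES):
            for i in range(RES):
                px = (i + 0.5) / RES * EXTENT
                py = (j + 0.5) / RES * EXTENT
                if x0 <= px <= x1 and y0 <= py <= y1 and z < depth[j][i]:
                    depth[j][i] = z
                    buf[j][i] = oid
    return buf

def visible_ids(buf):
    """Collect the set of object IDs present anywhere in the buffer."""
    return {oid for row in buf for oid in row if oid is not None}

objects = {
    "shell":  (0, 0, 10, 10, 2.0),   # near plate covering the view
    "hidden": (2, 2, 8, 8, 6.0),     # entirely behind the shell
}
print(visible_ids(id_buffer(objects)))  # {'shell'}
```

On the GPU you would draw each object in a flat unique colour, read the pixels back, and collect the distinct colours; the logic above is the CPU equivalent.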
Your z-buffer approach seems to be view-specific, correct? I’m asking for a solution that will work from any view. In fact, this is only to streamline our optimization of meshes for real-time use (AR and VR), where we have to deal with polygon budgets like we did in static rendering two decades ago!
I tried an approach similar to the ‘gazillion rays’ one, using voxels. I can get points to touch only the outer surfaces:
but I’m missing small crevices like part splits with small reveals/gaps:
Even increasing the initial UV density of the sphere launching the closest points, I don’t seem to get there.
We want to see now if we can start with this, and then add some logic to say: keep all the polygon faces that are closest to each projected point (closest points from the sphere to the voxels), PLUS all faces that are closer than some distance X from those ‘closest faces to the closest points’. The challenge will be seeing if we can define that X value. It will be a combination of part wall thickness and minimum mesh edge length (of both our input mesh and the UV sphere). We might even need to add the voxel density setting to this logic too.
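That selection logic can be sketched in a few lines (plain Python; face centroids stand in for mesh faces, and the sample points and X tolerance are assumed values for illustration, not data from the attached file):

```python
import math

def keep_faces(face_centers, sample_points, x_tol):
    """Keep the face nearest each projected sample point, plus every
    face whose centre lies within x_tol of one of those nearest faces."""
    nearest = set()
    for p in sample_points:
        nearest.add(min(range(len(face_centers)),
                        key=lambda k: math.dist(face_centers[k], p)))
    kept = set(nearest)
    for k, c in enumerate(face_centers):
        if any(math.dist(c, face_centers[n]) <= x_tol for n in nearest):
            kept.add(k)
    return kept

face_centers = [(0, 0, 0), (0.3, 0, 0), (5, 0, 0)]   # third face is deep inside
samples = [(-1, 0, 0)]                               # one point projected from outside
print(sorted(keep_faces(face_centers, samples, x_tol=0.5)))  # [0, 1]
```

The hard part, as noted above, is choosing `x_tol`; with a value smaller than the reveal/gap width, faces inside those crevices would still be dropped.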
Another completely different approach would be to start with some kind of shrinkwrapping, like what @DanielPiker has been sharing lately.
My voxel approach with internalized input data is here in case anyone wants to play too:
voxel_projection_outer_surfaces_01_gf_200918.gh (1.6 MB)
I assumed “visible” implied looking at. And looking at implies a place to look from.
Sounds like instead you just want to check if something is contained within another shape? Or do you want to find out whether any ray coming in from infinity can hit an object without colliding with another object first?
Yes, sorry for the confusion. As a first pass, I want to identify and omit all objects that are not visible from any possible angle, as if you were to spin the model around in a viewport and look.
Then do the same for any internal/hidden faces of the visible parts.
Take a look at these examples in web viewer here:
Now, imagine if we had to start from a complete product design/engineering assembly that has full part design details, internal ribs, bosses for screws, actual screws, internal components, etc.
The total polygon budget for these applications is pretty low so we want to cull everything we can before we start mesh reductions (and sacrificing surface quality).
Yeah makes sense now. In this case I’d probably switch to a voxel approach too, despite it possibly giving you false positives. Or maybe create a bunch of slices and try to solve it in 2d, but that might give you false negatives.
Lastly, drawing the shape not once but many times into an OpenGL buffer and testing for pixel colours is still an option, just way more involved than I initially thought.
This may be a feature worth adding to Rhino directly, we may be able to get a pretty decent performance using some clever shaders.
@stevebaer what do you think? Could we use opengl to test for occlusion of some meshes by other meshes for a bunch of viewing angles?
We have what is called a “selection buffer” that draws the id of objects into OpenGL’s color buffer. It could be used from a bunch of view angles quickly to eliminate completely occluded objects, but it won’t slice up objects and only give you sets of faces.
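A toy version of that multi-angle idea (plain Python with axis-aligned boxes standing in for meshes; the box data, resolution, and six-view setup are illustrative assumptions, not Rhino’s selection-buffer implementation):

```python
RES = 16  # per-view buffer resolution (assumed)

def visible_from(objects, axis, sign):
    """IDs visible in an orthographic view along +/- the given axis.
    objects: dict id -> (bmin, bmax) with 3-tuples of box corners."""
    u, v = [a for a in (0, 1, 2) if a != axis]
    lo = min(b[0][a] for b in objects.values() for a in (u, v))
    hi = max(b[1][a] for b in objects.values() for a in (u, v))
    seen = set()
    for j in range(RES):
        for i in range(RES):
            pu = lo + (i + 0.5) / RES * (hi - lo)
            pv = lo + (j + 0.5) / RES * (hi - lo)
            hit, hit_d = None, float("inf")
            for oid, (bmin, bmax) in objects.items():
                if bmin[u] <= pu <= bmax[u] and bmin[v] <= pv <= bmax[v]:
                    # sort key that is smaller for the surface nearer the camera
                    d = -bmax[axis] if sign > 0 else bmin[axis]
                    if d < hit_d:
                        hit_d, hit = d, oid
            if hit is not None:
                seen.add(hit)
    return seen

def survivors(objects):
    """Union of IDs seen from the six axis directions; the rest are culled."""
    seen = set()
    for axis in (0, 1, 2):
        for sign in (1, -1):
            seen |= visible_from(objects, axis, sign)
    return seen

objects = {
    "outer": ((0, 0, 0), (10, 10, 10)),   # box enclosing everything
    "inner": ((3, 3, 3), (7, 7, 7)),      # fully inside, never visible
}
print(survivors(objects))  # {'outer'}
```

Six axis views are of course not exhaustive; a real pass would sample many more directions (and, as noted, this culls whole objects, not individual faces).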
If this is for simplified assets downstream, there probably are other tools out there that would do a better job of processing meshes for optimizations.
It might be a useful feature to have in solar and shade calculations, or other data which relies on visuals. I’ve got 2d isovist components in grasshopper already, perhaps it makes sense to add 3d ones in the future.
I’ll try and remember this potential solution for when that becomes relevant.
The spokeshave in your example disassembles as it spins - that seems like it would add huge complexity to the elimination process. Can you live without that disassembly element?