Hi guys,
I wonder if anyone can replicate the following behaviour. I’m having trouble finding the first ray-mesh intersection on the surface of a round hole. The intended behaviour is reflection, but as you can see below, eventually all rays end up inside the solid. This does not happen when the hole is made up of planar faces.
What is going on:
-> Meshes are created from Breps and seem to be correct (no gaps);
-> Different meshing parameters make no difference to the result;
-> Rays are created along a specified direction;
-> Intersection with each mesh is calculated;
-> If a ray hits multiple meshes, only the closest intersection is considered;
-> Normals are checked at each intersection and flipped if necessary;
-> A new ray is drawn at the closest intersection with the reflection direction;
-> Process iterates;
-> Rays are counted on the other side of the hole.
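In stripped-down form the loop is roughly this (a minimal sketch with simplified names; the real code is spread across classes and also handles multiple geometries and non-specular interactions):

```csharp
using Rhino.Geometry;
using Rhino.Geometry.Intersect;

// Minimal sketch: bounce one ray around a set of meshes and record its path.
static Polyline TraceRay(Mesh[] meshes, Point3d origin, Vector3d direction, int maxBounces)
{
    var path = new Polyline();
    path.Add(origin);

    for (int bounce = 0; bounce < maxBounces; bounce++)
    {
        var ray = new Ray3d(origin, direction);

        // Closest hit among all meshes (ignore t ~ 0, i.e. the face we are standing on).
        double bestT = double.MaxValue;
        Mesh bestMesh = null;
        int bestFace = -1;

        foreach (var mesh in meshes)
        {
            int[] faceIds;
            double t = Intersection.MeshRay(mesh, ray, out faceIds);
            if (t > 1e-8 && t < bestT)
            {
                bestT = t;
                bestMesh = mesh;
                bestFace = faceIds[0];
            }
        }

        if (bestMesh == null) break; // ray escaped, stop iterating

        Point3d hit = ray.PointAt(bestT);
        path.Add(hit);

        // Face normal at the hit, flipped towards the incoming ray if necessary
        // (assumes mesh.FaceNormals.ComputeFaceNormals() was called after meshing).
        Vector3f fn = bestMesh.FaceNormals[bestFace];
        var normal = new Vector3d(fn.X, fn.Y, fn.Z);
        if (normal * direction > 0.0) normal = -normal;

        // Specular reflection: d' = d - 2 (d . n) n
        direction = direction - 2.0 * (direction * normal) * normal;
        origin = hit;
    }

    return path;
}
```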
While debugging, I realised that when a ray goes through the mesh, the code is not actually finding a hit on the round mesh, so I take it there must be a problem computing the intersections. Furthermore, this mesh “transparency” happens seemingly at random: some rays reflect, others go through, but once a ray finds its way inside the solid, it doesn’t come out.
It appears that this effect occurs with any kind of curved surface; the round-hole case is just where it is most noticeable.
Hi Riccardo,
Yes, I’m working in C#. The code is long and spread across different classes, so I’d have to make it palatable before anyone could read through it. I’ve been meaning to isolate the case so I can work on it detached from the code base, so maybe at that point I’ll be able to share it here.
To my knowledge there is no triangulate method in the API; if that’s the case I’ll write one up.
Hi Peter,
My main objective was to find out if anyone had experienced this odd behaviour, to determine whether it’s a bug in my code or an API bug, but I’ll take all the help I can get.
Regarding your final remark, yes, thanks, I’m dealing with t=0 intersections without shifting the points.
Sometimes odd stuff happens (avoid quads anyway).
I’m far from base … but I found some screenshots on that matter. The input GeometryBase could be a Brep (and if toMesh is true then a Mesh is made, otherwise the Brep is used - hope dies last) or a Mesh. If the Brep has BrepFaces with inner loops … well … you know what happens, don’t you? (See the reflections related to the test Brep below.)
Note: in real life (the Mesh case) you should check the pt per reflection a bit more: if it is “near” an edge (say dist < tol) then both the adjacent normals are invited to the party.
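Something like this, say (just a sketch; tol and the blending rule are your call):

```csharp
using Rhino.Geometry;

// Sketch: normal used for the reflection at a hit point. If the point is "near"
// a mesh edge (dist < tol), blend the normals of the faces adjacent to that edge.
static Vector3d NormalForReflection(Mesh mesh, Point3d hit, int faceIndex, double tol)
{
    Vector3f fn = mesh.FaceNormals[faceIndex];
    var normal = new Vector3d(fn.X, fn.Y, fn.Z);

    foreach (int ei in mesh.TopologyEdges.GetEdgesForFace(faceIndex))
    {
        if (mesh.TopologyEdges.EdgeLine(ei).DistanceTo(hit, true) >= tol) continue;

        // Near an edge: both adjacent normals are invited to the party.
        normal = Vector3d.Zero;
        foreach (int fi in mesh.TopologyEdges.GetConnectedFaces(ei))
        {
            Vector3f n = mesh.FaceNormals[fi];
            normal += new Vector3d(n.X, n.Y, n.Z);
        }
        break;
    }

    normal.Unitize();
    return normal;
}
```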
Writing a basic loop to reflect a ray on a mesh should be simple. I’ll put something up this evening…
Also, to @PeterFotiadis: wouldn’t using Breps generally be better? Meshes always carry a finite amount of information, NURBS “don’t” …
Even an object meshed at 1° (a high number of faces) will give 0.5° errors on reflections.
As you can see from the last image (block with a hole, as a Brep), the reflection doesn’t take trimming info into account (that’s the reason the hole reflects the ray as if it were a face with no hole).
And speaking of accuracy: are you at NASA? If not, why bother?
The “Brep Closest Point” component and Curve-Brep intersection in Grasshopper do take trimming info into account…
In C# that is impossible? That would be sad!
No
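For instance (just a sketch; origin, direction and brep are placeholders), a Curve-Brep intersection respects the trims, so a ray modelled as a long line segment won’t hit the material removed by the hole:

```csharp
// Model the ray as a long line segment and intersect it with the (trimmed) Brep.
var rayCurve = new LineCurve(new Line(origin, origin + direction * 1e6));

Curve[] overlapCurves;
Point3d[] hits;
bool found = Rhino.Geometry.Intersect.Intersection.CurveBrep(
    rayCurve, brep, 0.001, out overlapCurves, out hits);
```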
Dunno, I always like to aim for “perfect” results at the start and downgrade later if it’s not working or too complex…
The ConvertQuadsToTriangles… haha ok why not just Triangulate!? I’ll try and see if this is enough to fix it.
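If I read the docs right, the call is just this, run on each mesh before shooting rays:

```csharp
mesh.Faces.ConvertQuadsToTriangles(); // force an all-triangle mesh
mesh.FaceNormals.ComputeFaceNormals();
mesh.Compact();
```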
I’m using Brep input but convert right away to a mesh, because computing Ray-Brep intersections is at least 10x slower than Ray-Mesh. When we scale from 10 to 100k rays, it slows down beyond a usable level, and not all interactions are as simple as specular reflection.
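The conversion itself is just something like this, done once up front (a sketch; brep stands for each input solid):

```csharp
// One joined mesh per input Brep; the meshing parameters barely matter here (see below).
var mesh = new Mesh();
foreach (var part in Mesh.CreateFromBrep(brep, MeshingParameters.Default))
    mesh.Append(part);
```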
I can tell you that using a mesh at the lowest density level changes the results by only about 3% compared to the highest-density mesh. This is because, for meshes hit with many rays, the results average out. If you’re looking to find the path of one particular ray, then it’s a different story. So averaging normals near an edge can be needed … or not.
What about downgrading from the first step? (That could shorten the work, you know.)
Anyway, do the following (Breps): for a given step in the loop, compute the RayShoot for one reflection, get the ccx point, get the BrepFace and test for inclusion. If there’s thin air, advance the point a bit and redefine the ray from that point (using the same direction), etc. Kinda:
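(A bare-bones sketch of that idea; names, tol and maxTries are placeholders:)

```csharp
using Rhino.Geometry;
using Rhino.Geometry.Intersect;

// RayShoot ignores trims, so each hit is validated against the trimmed face;
// if it lands in "thin air" (a hole), advance a bit and shoot again, same direction.
static Point3d? NextValidHit(Brep brep, ref Ray3d ray, double tol, int maxTries)
{
    var geometry = new GeometryBase[] { brep };

    for (int i = 0; i < maxTries; i++)
    {
        Point3d[] hits = Intersection.RayShoot(ray, geometry, 1);
        if (hits == null || hits.Length == 0) return null; // nothing hit at all

        Point3d hit = hits[0];

        Point3d cp; ComponentIndex ci; double u, v; Vector3d n;
        bool onBrep = brep.ClosestPoint(hit, out cp, out ci, out u, out v, tol, out n);

        if (onBrep &&
            ci.ComponentIndexType == ComponentIndexType.BrepFace &&
            brep.Faces[ci.Index].IsPointOnFace(u, v) != PointFaceRelation.Exterior)
            return hit; // real material: reflect here

        // Thin air: step past the phantom hit and try again.
        ray = new Ray3d(hit + ray.Direction * (10.0 * tol), ray.Direction);
    }
    return null;
}
```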
Thanks Peter, I finally solved it. It had something to do with what you mentioned here, and that kept me thinking even though I thought it was all taken care of.
It looks like one of the conditions I used to deal with the order of surfaces/meshes in the case of coincident boundaries (one Brep fitting inside the other) was excluding the mesh from the computation, because parameter t did indeed become 0 when no coincident boundary was detected. My bad!
Wouldn’t using a rendering engine be a better alternative?
I don’t know what your target is, but I remember earlier versions of V-Ray let you display only samples and ray collisions…
I’m curious about what you are doing now.
Given the opportunity, here’s a challenge for you: do this (on Breps). By this I mean compute reflections AND refractions (use Snell’s law, materials for the solids, etc.) on Brep lists.
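For the refraction side, the vector form of Snell’s law is all you really need (a sketch; d and n are unit vectors, eta = n1/n2):

```csharp
using System;
using Rhino.Geometry;

// Refracted direction per Snell's law, or null on total internal reflection.
// d: unit incident direction, n: unit normal pointing against d, eta = n1 / n2.
static Vector3d? Refract(Vector3d d, Vector3d n, double eta)
{
    double cosI = -(d * n);                        // cosine of the incidence angle
    double sin2T = eta * eta * (1.0 - cosI * cosI);
    if (sin2T > 1.0) return null;                  // total internal reflection

    return eta * d + (eta * cosI - Math.Sqrt(1.0 - sin2T)) * n;
}
```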