I'm sure Mesh display could be optimized

When you have the bad luck of receiving a large DWG file exported from SketchUp, you end up spending hours exploding blocks and joining meshes, because all the mesh faces are individual.
This makes me wonder why Rhino is so slow at displaying lots of meshes when it can display the exact same amount of mesh faces super fast if they are joined into a smaller collection of meshes.
I remember getting a reply along the lines of “If each face is a mesh, there has to be a complete list of properties for each face and that takes up memory, bla, bla…”.
I would agree if each face had separate properties, but in fact, most of the time there are huge collections of faces with the exact same properties.
They could just be bunched up in memory for as long as they have the same properties.
I feel that it’s just a matter of being more clever in the way of managing mesh collections.

It should be doable to have a reference on a MeshFace to a mesh containing common property values, but whether it would provide any real runtime gains isn’t certain. There are a couple of problems involved with two-way references in compound objects, and some of them could make Meshes worse to deal with than they are today, regarding both memory consumption and performance:

  • Faces are members of a Mesh which the MeshFace itself doesn’t know anything about. This saves the Face one property, and thus several bytes of memory per face (which means less memory in most cases). Adding a property referencing the owning Mesh would therefore ALWAYS require more memory for any Mesh with one or more Faces.

  • Any “common Mesh” would not have the same Index lists for Faces and Vertices unless each Face were an exact copy of all the other Faces & Vertices in the “common Mesh” (meaning all Faces would have the same location), since it is the Mesh that holds the Index lists of such members.

  • Mesh Faces are referenced by Index from their owning Mesh (see the paragraph above). Many Mesh algorithms manipulate the mesh (Faces, Vertices and its Edges) only by modifying the Indexes (their positions in a list), so the algorithm never touches the Face, Vertex (and/or Edge) itself. But if these Mesh members needed to know which Mesh they belong to, many such existing algorithms would stop functioning (a two-way relation must then be maintained/ensured in the code), and if code were added to ensure that each item gets its “parent mesh” updated, the algorithms would become slower. That is apart from each member ALWAYS requiring more memory (the reference to the owning Mesh). In summary: more memory ALWAYS required, and often slower Mesh processing.

  • “Fake” (common) Meshes would be yet another reference to keep track of, even in the case of a standard Mesh as meshes are structured today.
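The memory point above can be sketched with back-of-the-envelope numbers. This is not Rhino’s actual layout, just an illustration, with face count and pointer size as assumptions: a per-face back-reference costs one pointer per face whether or not it is ever used.

```python
# Hypothetical layout, NOT Rhino's internals: estimate the extra memory
# a per-face back-reference to the owning Mesh would always cost.

FACES = 1_000_000          # assumed mesh size
POINTER_BYTES = 8          # one 64-bit reference per face

# Index-based layout: the Mesh owns flat index lists; each (quad) face
# stores only its 4 vertex indices (4 x 4-byte ints).
index_based = FACES * 4 * 4

# Same layout plus a back-reference from each face to its owner.
with_backref = index_based + FACES * POINTER_BYTES

extra = with_backref - index_based
print(f"extra memory for back-references: {extra / 1024 / 1024:.0f} MiB")
# → extra memory for back-references: 8 MiB
```

The point being: the overhead scales linearly with face count and is paid even when no algorithm ever follows the reference.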

There are probably more problems that I haven’t thought of while writing this.

But perhaps a clever solution could address the problem in the import stage instead, at least to some extent, perhaps via an option telling the import algorithm to “guess” which faces should go together (into a genuine Mesh). That could reduce, although not entirely eliminate, the need for manual work after import.
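One way such a “guess” could work is to treat each incoming single-face mesh as a node and join faces into one mesh whenever they share a (rounded) vertex position. A minimal union-find sketch of that heuristic, assuming faces arrive as raw coordinate tuples (this is an illustration, not an actual Rhino import option):

```python
# Hypothetical import-time heuristic: group single-face meshes into
# connected components based on shared (rounded) vertex positions.

from collections import defaultdict

def group_faces(faces, tol=1e-6):
    """faces: list of faces, each a list of (x, y, z) points.
    Returns lists of face indices that form connected groups."""
    parent = list(range(len(faces)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(a, b):
        parent[find(a)] = find(b)

    seen = {}  # rounded vertex key -> first face index that used it
    for fi, face in enumerate(faces):
        for v in face:
            key = tuple(round(c / tol) for c in v)
            if key in seen:
                union(fi, seen[key])
            else:
                seen[key] = fi

    groups = defaultdict(list)
    for fi in range(len(faces)):
        groups[find(fi)].append(fi)
    return list(groups.values())

# Two touching triangles and one far-away triangle -> two groups.
tris = [
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(1, 0, 0), (1, 1, 0), (0, 1, 0)],   # shares two vertices with tri 0
    [(9, 9, 0), (10, 9, 0), (9, 10, 0)], # isolated
]
print(group_faces(tris))  # → [[0, 1], [2]]
```

A real importer would also compare material/color properties before merging, but the connectivity test alone already collapses face soups dramatically.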

In any case, adding two-way references to a Mesh with potentially millions of members (F, V, E) would be the last thing I would consider due to the permanent need for more memory if doing so.

// Rolf

  • MeshFace currently doesn’t seem to have any attribute referencing its owning Mesh, meaning that only the Mesh knows about the MeshFace.

Wow. Some people are really good at pointing out why solutions can’t be found.
Meanwhile, most mesh-oriented software manages collections of meshes orders of magnitude larger with ease.

Go figure.

Did I say that? I think I pointed out potential problems with the solution you suggested (a common Mesh).

// Rolf

There are always “potential problems”, but those who figure out stuff are those who look for solutions.
And the fact of the matter is that some folks out there have found them, but McNeel not yet.

The same can be said for shelling, interactive 2D projections, fillets, booleans,…

What exactly are you referring to? I explained why a “common Mesh” could make the memory problem permanent, instead of, as now, occurring only when all Faces are discrete.

Can you elaborate on the exact benefit you would gain with your suggestion, and how it would avoid making the memory problem permanent?

// Rolf

“Looking” for a solution doesn’t imply that you found one.

When you “looked for a solution” I pointed out “potential” problems with that specific solution. Potential, as in, “perhaps there’s another approach to this which you didn’t mention and which I didn’t think of.”

But you turned that into something negative instead of realizing that you need to “keep looking”.

That, exactly that, is how you solve real problems (avoiding the bad ones, that is), instead of piling them up.

// Rolf

Or you start at the other end and say: I HAVE TO SOLVE THIS, so how can I think differently than anybody else? And then you end up with real-time mesh editing like ZBrush, or real-time fluid simulation à la the game a teenager made for a school assignment because he didn’t know it was impossible to do that in real time :wink: Or a large forest of variations of instanced trees, moving in the wind, that you can gun down, while the sun generates god rays in the mist while travelling the sky into a sunset that is different every time due to real-time generated clouds. You need madness in the ambition and THEN stand on the shoulders of your brothers and foremothers, harvesting their knowledge.

I am struggling with the exact same thing as Olivier. I have a “simple” building that I meshed at low settings to keep Rhino from working on massive amounts of solids, but it is still sluggish as heck, on a new i7 and a GTX 2070. This is just 29,000 meshes and should fly, but Rhino isn’t optimized for that amount of objects yet. Hopefully soon though.

There are a zillion ways to address the problem. I commented on a specific approach, with its “potential” problem.

Using words like “potential” reflects this approach: 1. Avoid predictable problems, or at least give them a weight or cost value. 2. Accept them or avoid them. 3. Depending on the urgency of the need, look for other approaches. 4. Accept reality when no solution can be found that doesn’t make other problems worse.

Memory competes with performance in Meshes (I assume you know this) simply because more (pre-computed/aggregate) properties enable faster processing, but at the cost of more memory. (This is one of the reasons there are so many varieties of Mesh structures out there.)
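The trade-off can be made concrete with a toy example. This is an illustration, not Rhino’s implementation: one mesh recomputes face normals on every query (low memory, slow), the other precomputes and stores them (fast queries, three extra floats per face, forever).

```python
# Toy illustration (not Rhino's code) of the memory-vs-speed trade-off
# in mesh structures: cached face normals are faster but cost memory.

def normal(a, b, c):
    # Unnormalized face normal: cross product of two edge vectors.
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

class LeanMesh:
    """Low memory: recompute the normal on every query."""
    def __init__(self, verts, faces):
        self.verts, self.faces = verts, faces
    def face_normal(self, i):
        a, b, c = (self.verts[j] for j in self.faces[i][:3])
        return normal(a, b, c)

class CachedMesh(LeanMesh):
    """Fast queries: 3 extra floats stored per face, permanently."""
    def __init__(self, verts, faces):
        super().__init__(verts, faces)
        self.normals = [normal(*(verts[j] for j in f[:3])) for f in faces]
    def face_normal(self, i):
        return self.normals[i]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
faces = [(0, 1, 2)]
print(CachedMesh(verts, faces).face_normal(0))  # → (0, 0, 1)
```

Every pre-computed property (normals, edge lists, bounding volumes) is this same bargain repeated.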

I have not seen a “very good” solution that solves BOTH performance and low memory consumption. (What is “good” depends on what you need the most: with XX million faces, “good” may mean easy on memory, while others would think “good” means processing speed.)

There are approaches that try to solve both by making the Meshes “smart” in how they represent the original Mesh (like automagically reducing faces on larger areas while increasing “density” near edges and curvatures, etc.), but this is a different kind of problem than the one discussed here.

So, how would one automatically reduce memory for zillions of discrete Faces when you don’t know how they relate to each other (meaning no optimization of the Mesh structure can be done without knowing connectivity)?

To this problem I suggested that perhaps that automagic stuff can be inferred from the data while it is being imported, by “guessing” which faces could go together (while more info about the original model/data is at hand during import). But without this context, no intelligent optimization can be done (not even in principle).
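Recovering connectivity from a face soup is essentially “welding”: discrete faces each carry their own vertex copies, and merging coincident vertices into a shared list produces the index-based structure every further optimization needs. A hedged sketch (illustrative only, not Rhino’s internal Join):

```python
# Sketch of vertex welding: turn a soup of discrete faces (each with
# its own vertex copies) into one shared vertex list plus index lists.

def weld(face_soup, tol=1e-6):
    """face_soup: list of faces, each a list of (x, y, z) points.
    Returns (shared_vertices, faces_as_index_lists)."""
    verts, index_of, faces = [], {}, []
    for face in face_soup:
        idx = []
        for p in face:
            key = tuple(round(c / tol) for c in p)  # tolerance bucket
            if key not in index_of:
                index_of[key] = len(verts)
                verts.append(p)
            idx.append(index_of[key])
        faces.append(idx)
    return verts, faces

soup = [
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(1, 0, 0), (1, 1, 0), (0, 1, 0)],
]
verts, faces = weld(soup)
print(len(verts), faces)  # → 4 [[0, 1, 2], [1, 3, 2]]
```

Four shared vertices instead of six copies, and the index lists now encode exactly the connectivity that was missing from the soup.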

Others have solved the problem? Well, which problem? All of the problems I have mentioned? Not a chance. You have to make compromises. As for Rhino meshes, they were not designed for Mesh modelling (we all know that). They have been optimized for display, but not for processing (if I understood it right). Daniel Piker’s Plankton (a HalfEdge Mesh) addresses this problem (easier & faster algorithms). And so on. Only by knowing which problems to avoid, and what is meant by “good”, can you tailor your Mesh to do what you want in the way you want.
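To show what a half-edge structure buys you, here is a minimal sketch in the spirit of Plankton (not its actual API): each directed edge stores its twin, so adjacency queries like “which face is across this edge?” become O(1) lookups instead of searches.

```python
# Minimal half-edge mesh sketch (illustrative, not Plankton's API):
# every face edge becomes a directed half-edge linked to its twin.

class HalfEdge:
    __slots__ = ("origin", "face", "twin", "next")
    def __init__(self, origin, face):
        self.origin, self.face = origin, face
        self.twin = self.next = None

def build(faces):
    """faces: lists of vertex indices (consistent winding).
    Returns the list of all half-edges, twins wired up."""
    edges, by_pair = [], {}
    for fi, f in enumerate(faces):
        ring = [HalfEdge(v, fi) for v in f]
        for i, he in enumerate(ring):
            he.next = ring[(i + 1) % len(ring)]
            by_pair[(he.origin, he.next.origin)] = he
        edges.extend(ring)
    for (a, b), he in by_pair.items():
        he.twin = by_pair.get((b, a))  # None on a boundary edge
    return edges

faces = [[0, 1, 2], [2, 1, 3]]  # two triangles sharing edge (1, 2)
edges = build(faces)
shared = next(h for h in edges if (h.origin, h.next.origin) == (1, 2))
print(shared.face, shared.twin.face)  # → 0 1
```

The price is the usual one: four references per half-edge, i.e. more memory than a flat index list, in exchange for much easier traversal algorithms.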

== Comparison ==

I had links to three major types of Meshes which were compared on different features and benefits (ease of use, performance, memory consumption), but I can’t seem to find the three links right now. They would illustrate quite well what I mean. One of them is OpenMesh, which excels at powerful, smart algorithms, but there are mesh structures which provide faster processing, and so on.

What I am trying to say is that different requirements will by necessity require different mesh structures. And Rhino has a very wide variety of use cases which will be in conflict in this regard. Architects will need huge meshes while mechanical engineers may want super fast mesh processing (for analysis and stuff).

If all of us had 128 GB RAM, then one huge constraint would not have to be regarded when looking for the “best” Mesh structure (whatever “best” means to … whom).

I’m all for superfast huge meshes (because I need both). But I don’t know of any alternative to Rhino that has it both ways, without losing the benefits of Rhino.

// Rolf

I didn’t intend to argue with you Rolf, I just agree with Olivier’s initial frustration :slight_smile:

I fully agree that there are conflicts, with sacrifices to be made; for me those are fast meshes vs editable meshes. The file I’m working on now has 30,000 objects, but once they are joined based on color and layer there are only 33, and THOSE FLY, while the initial 30,000 are so slow that just selecting them makes Rhino think for 5 seconds, updating one viewport every two seconds.
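The join-by-color-and-layer workflow described above is just a grouping pass. A trivial sketch of the bucketing step, with made-up object records for illustration:

```python
# Illustrative only: bucket objects by (layer, color) so each bucket
# can be joined into a single mesh. Records are invented for the demo.

from collections import defaultdict

objects = [
    {"id": 1, "layer": "walls", "color": "white"},
    {"id": 2, "layer": "walls", "color": "white"},
    {"id": 3, "layer": "glass", "color": "blue"},
]

buckets = defaultdict(list)
for obj in objects:
    buckets[(obj["layer"], obj["color"])].append(obj["id"])

print(len(buckets), "joined meshes instead of", len(objects))
# → 2 joined meshes instead of 3
```

Scripting this (e.g. with rhinoscriptsyntax) is one way to turn hours of manual joining into one pass.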

For me an option to tag a file as STATIC vs DYNAMIC would be great.

(Sorry would love to read and write more, but have to spend a few hours on updating the file I am talking about and it is tedious work…)

That’s one very interesting idea, which perhaps could be applied to any existing Mesh (and hopefully result in better speed). Very interesting.

// Rolf
