Select similar geometries and assign them into the same layer

typical select 2mcneel.gh (18.3 MB)

Hi all,
I'm working on finding how many typical (repeated) parts there are in a model and grouping them into layers with different colors.
I follow these steps:

  1. Find each part's volume (if two parts have the same volume, hopefully they are the same part; I'm not sure whether there is a better way to be 100% certain they are exactly the same geometry?)
  2. Create set of volumes
  3. Create set of colors
  4. Use Elefront to assign layers to the objects and bake them.
    However, there are some points I'm not sure about.
    After checking the lists, I found that at branch {0;6} the list lengths differ:
  • Create Set from volumes: 626 geometries go into the set, but the output shows 710 indices (why?)
  • Create Set from colors: both counts are 626 (which makes sense), but the result still seems wrong, since this branch (this typical part) should not be 1.
    Any explanation to make this clear would be appreciated.
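Steps 1–4 amount to keying parts by volume. A minimal Python sketch of that grouping idea (hypothetical `group_by_volume` helper; the rounding is there to absorb floating-point noise in the volume computation):

```python
from collections import defaultdict

def group_by_volume(volumes, decimals=6):
    """Group part indices by rounded volume (hypothetical helper).

    Rounding guards against floating-point noise: parts whose volumes
    agree to `decimals` places land in the same set."""
    sets = defaultdict(list)
    for i, v in enumerate(volumes):
        sets[round(v, decimals)].append(i)
    return list(sets.values())

# Invented numbers: three parts share a volume up to noise, one is distinct.
volumes = [2.4200001e-5, 2.42e-5, 3.43e-5, 2.4199999e-5]
print(group_by_volume(volumes, decimals=9))   # -> [[0, 1, 3], [2]]
```

Each inner list plays the role of one set member; its length is the count of that typical part.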

Dear Loc
An 18 MB .gh is way too big by any standard. The problem: the volumes are far too small (on the order of 1e-10), so the equality test does weird things. I would suggest you take this approach (see image). Hope this helps!
Best,
Mr A.
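Mr A.'s point can be seen with plain numbers: at magnitudes around 1e-10, floating-point noise is the same order as the differences that matter, so an exact equality test (effectively what a set-creation component does on raw floats) may split mathematically equal volumes. A minimal Python sketch with invented values:

```python
import math

# Two volumes that are mathematically equal, computed along different
# arithmetic paths; the rounding error survives the scaling to 1e-10.
a = (0.1 + 0.2) * 1e-10   # carries the classic 0.1 + 0.2 rounding error
b = 0.3 * 1e-10

print(a == b)                              # exact equality: may be False
print(math.isclose(a, b, rel_tol=1e-9))   # tolerant comparison: True
```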


Thank you Mr. A. I tried your way, but it does not work correctly either.
I've attached a simplified .gh containing objects that have the same volume/area but are nevertheless different.
If you have any other way to classify identical objects into sets, that would be great!

typical select (simplified).gh (47.9 KB)

@kts.nguyenhoangloc are you trying to do this for all generic forms, or only the ones you have created? Is there a larger goal behind placing them into sets? Mind explaining what exactly you are after by creating a set of (say) similar geometries?

Yes, I'm trying to make this .gh work for all forms. The main purpose is to select similar geometries, put them in a set, and assign a color/layer to each set.
Say I have a model with steel pipes and connection plates: find the similar pipes/plates and assign them to the same layer.

What are the conditions/rules for these similarities (length, surface area, type of connections, cats, dogs, etc.)? Say the holes inside your connection plates are placed differently: does that make any difference? That would be a whole new script: Make2D for all connectors, then looping inside it to verify the dimensions of all curves along with the position of the connector hole (this, in the real world, would ensure exactly similar geometries).

By "similar" I mean exactly the same shape/volume/area… like finding duplicated objects in a Rhino model, except they don't need to sit in the same place.

  • If we copy an object 10 times, then whatever we do to those copies afterwards (move them elsewhere, rotate them, but not mirror them), they should still belong to the same set.
    What I can think of now is using volume and area (it works when the plates have different shapes), but I still need another value to fix the remaining issue (e.g. plates with different hole positions)…
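The limitation described above can be made concrete. A hypothetical Python "fingerprint" that rounds several rotation/translation-invariant measures (volume, area, sorted edge lengths; all numbers invented) still cannot separate two plates that differ only in hole position:

```python
def fingerprint(volume, area, edge_lengths, decimals=6):
    # A rotation/translation-invariant signature: rounding makes the
    # comparison noise-tolerant, sorting makes edge order irrelevant.
    return (round(volume, decimals),
            round(area, decimals),
            tuple(sorted(round(e, decimals) for e in edge_lengths)))

# Two plates: same stock, same hole size, different hole position.
plate_center_hole = fingerprint(0.002, 0.46, [0.3, 0.3, 0.2, 0.2, 0.05])
plate_corner_hole = fingerprint(0.002, 0.46, [0.3, 0.3, 0.2, 0.2, 0.05])

print(plate_center_hole == plate_corner_hole)   # -> True: indistinguishable
```

So a position-sensitive measure must be added to the signature; global measures alone cannot resolve this case.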

It looks to me like you've done everything correctly (though I haven't bothered to figure out why you have two MIndex components instead of just one); you are running into a rounding-precision issue caused by the extremely small volume values: 0.0000242 to 0.0000343.

This is evident when you examine the panel labeled “set” (below) and see duplicate values.

The purple group offers a switch between “raw” (as it was) and “fixed”, which multiplies all volume values by ten thousand and rounds to ‘rDecimal’ places. As you can see, the ‘total’ changes from ‘raw’ = 710 to ‘fixed’ = 626, the number of parts input:

MIndex_2019Oct6a
MIndex_2019Oct6a.gh (4.7 MB)

Oh, and by the way, by changing the ‘rDecimal’ slider value you can reduce the number of set members:

  • ‘rDecimal’ = 5, number in set = 24
  • ‘rDecimal’ = 4, number in set = 18
  • ‘rDecimal’ = 3, number in set = 13
  • ‘rDecimal’ = 2, number in set = 9

Rather arbitrary, eh? Instead of volume, you could compare surface area, the sum of edge lengths, or other criteria that better capture distinct differences? I just tried a few of those ideas and there really isn't much difference among any of them. Chasing wild geese today…
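The shrinking set counts can be reproduced with any noisy data: coarser rounding merges neighbouring values into one set member. A small Python sketch with invented volumes (random illustration, not the file's data):

```python
import random

# Illustrative data only: 626 random "volumes" in the observed range.
random.seed(6)
volumes = [random.uniform(0.242, 0.343) for _ in range(626)]

# Fewer decimals => coarser buckets => smaller set (cf. the rDecimal slider).
for decimals in (5, 4, 3, 2):
    print(decimals, len({round(v, decimals) for v in volumes}))
```

At 2 decimals only about a dozen distinct values survive, which is why the set count collapses as the slider drops.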


Sweep1_2019Oct6b.gh (15.7 KB)

P.S. Note that since I don’t have EleFront installed, the GH file is MUCH SMALLER!


Duplicate thread alert. Adding links so the work done in this thread won’t be entirely wasted.

Starting new threads on the same topic/question makes it likely that others will waste time duplicating earlier efforts. Bad form brah. :man_facepalming:

In this particular case you cannot rely on any topological test as there is no topological difference between the hole-in-the-middle and hole-in-the-corner shapes. You will have to rely on geometric differences such as distances or lengths. The volume, area and edge-lengths of both cases are also identical.

Sadly there isn’t a native way to create a 3d-convex hull, otherwise you could test the distance from the hull volume centroid to the shape volume centroid. This distance will be zero (or very nearly) in the hole-in-the-middle case, but non-zero for off-centre holes.

Instead, you may now test the distances between the volume and area centroids. The area centroid will be biased towards the hole as there’s more surface there, whereas the volume centroid will be biased away from the hole.
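A 2D analogue of this test, as a hedged Python sketch (a rectangular plate with one circular hole, all centroids in closed form): the material (area) centroid is pushed away from the hole, the boundary (perimeter) centroid is pulled towards it, so their distance is zero only when the hole is centred:

```python
from math import pi

def plate_signature(w, h, hole_x, hole_y, r):
    """Hypothetical helper: distance between the area centroid and the
    perimeter centroid of a w-by-h plate with one circular hole."""
    rect_a, hole_a = w * h, pi * r * r
    # Area centroid of the material: rectangle minus disk.
    ax = (rect_a * w / 2 - hole_a * hole_x) / (rect_a - hole_a)
    ay = (rect_a * h / 2 - hole_a * hole_y) / (rect_a - hole_a)
    # Perimeter centroid: outer rectangle plus hole circumference.
    rect_p, hole_p = 2 * (w + h), 2 * pi * r
    px = (rect_p * w / 2 + hole_p * hole_x) / (rect_p + hole_p)
    py = (rect_p * h / 2 + hole_p * hole_y) / (rect_p + hole_p)
    return ((ax - px) ** 2 + (ay - py) ** 2) ** 0.5

print(plate_signature(4, 2, 2.0, 1.0, 0.3))   # centred hole  -> 0.0
print(plate_signature(4, 2, 3.0, 1.5, 0.3))   # off-centre hole -> non-zero
```

In 3D the same reasoning applies with the volume centroid versus the surface-area centroid, which is what the suggestion above uses.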


Adapted my version ‘b’ above to your suggested algorithm (white group) as I understand it?
As written, the set size still seems to have a granularity subject to decimal place rounding?

Standby… error… please ignore MIndex_2019Oct6c.gh, sorry about that.

Good granularity this time (version ‘d’ below), better separation between set values?


MIndex_2019Oct6d.gh (4.7 MB)

@kts.nguyenhoangloc, how are the “part numbers” lost in the process? The ability to assign persistent attributes (“Part #”, “Class”, “Subclass”, etc.) is a better way to do this, eh? R6?


Thank you all, I think I found a way by using Sphere Fit on the centroids of the exploded breps.


In yesterday's version 'd' above, I got an Area centroid for each brep by averaging the area centroids of each of its faces, like this:

Now I've tried using the entire brep as input to Area instead of each face separately, and I get very different results (version '7a' below):

The set values from version ‘d’ are an order of magnitude larger!? Meaning the distance between volume and area centroids is larger. I wonder why?

This all looks like an exercise in exaggerating minor differences (anomalies?) rather than lists of parts with clear and distinct differences. Trying to make mountains out of molehills…

Well … this (the general case) can't be solved with components: it's a classic hard flat clustering task, querying Lists of some custom type that contains the properties you want to query (from vertices to volumes and from edges to memory usage … or anything else imaginable/suitable).

It's difficult to explain what all the above means if you are not familiar with coding … but I'll give you a small example: assume that all the "normal" comparisons are used (vertices, edges, faces, areas, volumes, object connectivity (the likes of vv, ve, vf …), memory usage, etc.) and you arrive at some Breps (in the attached demo each has just one face, for clarity) with BrepFaces that are "almost equal" … but in fact are not. Have some fun with the attached to see what I mean.
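A toy version of that hard flat clustering, sketched in Python under stated assumptions (the real demo is C# over RhinoCommon types; `Part`, its fields, and all the data here are invented). Comparisons are ordered cheap-to-expensive and short-circuit, so expensive geometric checks only run when everything cheaper already matches:

```python
from dataclasses import dataclass

@dataclass
class Part:
    faces: int
    edges: int
    volume: float
    area: float

def same(a, b, tol=1e-6):
    # Cheap integer comparisons first; numeric tolerances afterwards.
    if a.faces != b.faces or a.edges != b.edges:
        return False
    if abs(a.volume - b.volume) > tol:
        return False
    return abs(a.area - b.area) <= tol

def cluster(parts, tol=1e-6):
    # Hard flat clustering: each part joins the first cluster whose
    # representative passes every equality check, else starts its own.
    clusters = []
    for p in parts:
        for c in clusters:
            if same(c[0], p, tol):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

parts = [Part(6, 12, 1.0, 6.0), Part(6, 12, 1.0 + 1e-9, 6.0),
         Part(6, 12, 2.0, 10.0), Part(5, 9, 0.5, 4.0)]
print([len(c) for c in cluster(parts)])   # -> [2, 1, 1]
```

In the real thing the final, expensive test would be the iso-curve comparison on faces; the structure stays the same.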

Clusters_BrepFaceEquality_V1.3dm (232.3 KB) Clusters_BrepFaceEquality_V1.gh (121.2 KB)

Let me know if you want a full C# demo of object-equality clustering:


It's impressive. Yes, I'd like a full script if possible, please.

I'm away from the practice right now, so here's the update on that abstract single-BrepFace demo clustering (it's rather a lot faster).

Clusters_BrepFaceEquality_V1A.3dm (443.6 KB) Clusters_BrepFaceEquality_V1A.gh (126.8 KB)

That said, the iso-equality check on faces should be used as the last resort in a given query, since it is expensive time-wise. So in real life we use an aggregate (user-controlled) collection of equality checks, ordered according to their expected impact on the elapsed time.

BTW: do you have mixed GeometryBase stuff or only breps? And how many are there (on average)?

Thank you Peter, it works great with a single BrepFace.
I have only breps, no other GeometryBase types: about 650 pieces in total.

Hmm … on my usual deliberately slow i5, we get 15 milliseconds for 50 simple BrepFaces (at a minimum iso resolution of 6) … meaning that 700 breps (which in the worst case are "equal" in everything else and each have, say, 20 "almost identical" faces) make a 14,000-face task => 4000+ milliseconds => out of the question by any means (my personal limit for waiting on a stupid computer to answer is 300 milliseconds).
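The 4000+ ms figure is just linear extrapolation of the measured per-face cost (a back-of-envelope sketch):

```python
# Linear scaling of Peter's measured timing, assuming per-face cost is constant.
ms_per_face = 15 / 50        # 15 ms for 50 faces => 0.3 ms per face
faces = 700 * 20             # 700 breps x ~20 "almost identical" faces each
total_ms = ms_per_face * faces
print(total_ms)              # -> 4200.0, an order of magnitude over a 300 ms budget
```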

This brings us towards parallel (//) stuff, plus some fine cigars (a must), plus vodka (ditto). All that because a solution is nothing: a fast solution is everything.

BTW: Post here the full case (meaning: NOT simplified) in R5 format.

BTW: I'll test elapsed times on the worst-case scenario (using only iso checks: that's the definition of pessimism, but better safe than sorry) on some breps that look "almost identical" soon: that could outline the general strategy for this puzzle.

Moral: May The Force (the dark option) … blah, blah

Here is all the breps that I need to put in sets: https://mega.nz/#!0ip2XApS!KBCiyKsLbPdFxDOVTHiiMgkgdbqQ-3bmejb-YxmBDS8
Thank you for your time

Update:

Still out in the wild (it’s windsurf time - purely in the Name of Science).

Good news: well … in theory it's very easy (but I couldn't care less).

Bad news: for 500 "almost identical" test Breps, building the custom-type List (in full extent/depth/detail: better safe than sorry) that tells you everything (?) about equality (???) takes a million years. Then comes the clustering thingy, but that part is very fast.

Moral: we need to slash the Elapsed thingy by 400% (min).

On the other hand, if we skip the Area/Volume calcs we get 1/70th of the elapsed time (wait: if so … then why bother with any "detailed" criterion other than Iso Curves???).

Moral: hope dies last.

BTW: this reverse-engineering thing is 100% pointless … but as a coding performance challenge it is a good thing. I mean, in real life this situation never happens, since you should control every clone object (via Instance Definitions) … blah, blah.