[python] searching for a layer name using wildcard doesn't seem to work

Is this something new or it was always like this?

here’s my code:

import rhinoscriptsyntax as rs
import time

def hide_layers(layer_list=None):
    if layer_list is None:
        layers_to_hide = ["*fault"]
        layer_list = layers_to_hide
    for layer in layer_list:
        rs.LayerVisible(layer, False)  # hide each matching layer

if __name__=="__main__":
    ts = time.time()
    hide_layers()
    te = time.time()
    print "Elapsed time is {:.2f}".format(te-ts)

it seems to have worked before…

Was that using a filter rather than a selection?

No selection, so I guess yes. I don’t know the difference, but I don’t need to select the layers, just to pick them by name and use them.

I created the sub-layers as the name of their parent+some_string. And I wanted to search for *some_string. I could not.

Then I named them exactly the same but then this happened:

This one?


Thanks @Dancergraham,

I never thought I’d have to go so deep in RhinoCommon’s api to get out “on the other side” into MS C# api documentation :crazy_face:

Maybe I missed something here, but if you have a list of all the layer names, why can’t you just iterate through the list and search for the string fragment with python’s native “in” (contains) or “endswith” or “startswith” functions?
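For what it's worth, that approach needs nothing beyond plain Python string operations. A minimal sketch (the layer names here are invented for illustration; in Rhino they would come from rs.LayerNames()):

```python
# Hypothetical layer names for illustration; in Rhino you would get
# the real list with rs.LayerNames().
layer_names = ["Walls", "North_fault", "Roof", "South_fault"]

# substring match, equivalent to searching "*fault*"
contains = [name for name in layer_names if "fault" in name]

# suffix match, equivalent to searching "*fault"
ends_with = [name for name in layer_names if name.endswith("fault")]

print(contains)   # -> ['North_fault', 'South_fault']
print(ends_with)  # -> ['North_fault', 'South_fault']
```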

Hi @Helvetosaur,

I did not know about startswith and endswith, but they are not really Python methods, are they?

I wanted to search for a layer name with * (asterisk).
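As an aside, Python's standard-library fnmatch module implements exactly this shell-style asterisk wildcard, so a pattern like "*fault" can be used as-is against a plain list of names (the layer names below are invented for illustration):

```python
import fnmatch

# Invented layer names for illustration; in Rhino the list would
# come from rs.LayerNames().
layer_names = ["Parent::North_fault", "Parent::Roof", "Other::South_fault"]

# fnmatch.filter applies shell-style wildcards: "*" matches any sequence
matches = fnmatch.filter(layer_names, "*fault")
print(matches)  # -> ['Parent::North_fault', 'Other::South_fault']
```

Note that fnmatch.filter is case-insensitive on some platforms; fnmatch.fnmatchcase can be used for a strictly case-sensitive match.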

I don’t know if there’s any difference in performance, but picking a layer directly by name and looping through all layer names searching for a substring do have different complexities.

I have 750 layers in my smaller models.


I don’t think checking 750 entries will take very long.

I see, thanks my bad

I want it less than a second :stuck_out_tongue_winking_eye:
Optimization is what I seek, always.

My current solution is to rename all sub-layers the same and fire operations based on their different parent layers. That seems pretty quick.

Did you even try it? As far as I can tell, it takes no time at all…

To test:

import random,time

#build a sample list of 750 layer-like names
#(setup reconstructed -- the original list-building code was omitted)
choices=["fault","wall","beam","slab"]
main_list=[]
for i in range(750):
    main_list.append(random.choice(choices)+str(i))

to_find="fault"  #search term reconstructed -- original omitted
st=time.time()
matches=[phrase for phrase in main_list if to_find in phrase]
#matches=[phrase for phrase in main_list if phrase.startswith(to_find)]
print "{} matches found, elapsed time={:.10f}".format(len(matches),time.time()-st)

>>> 750 matches found, elapsed time=0.0000000000

Note the number of matches will change every run as they are generated randomly from the four choices - this is just an example.


That is definitely not the correct scenario, because you’re not testing actual layer names.

And no, I did not test it yet; my code works right now and I don’t want to mess it up.

Also, try adding at least one print per phrase and you’ll see the time expand significantly.

I would expect the native Rhino approach to be quicker for large numbers of layers.
FWIW, the pure Python approach would probably only take a significant amount of time above perhaps 100,000 or 1,000,000 elements, assuming a single search that returns a list of matches and does not print each failed/successful match.

What do you mean by this?



You have a file with 100,000+ layers? I think you’ll have other problems… :stuck_out_tongue_winking_eye:

I may have such cases, I am importing STEP into Rhino and then transferring the whole nested block structure into layer-sub-layer structure.

The model I’m using for prototyping has 900+ objects and 735 layers, and this is one of the smallest models I have. The shitty thing is that when exporting from 3dexperience, the name of the object (which contains important metadata) is assigned to the block in Rhino, not to the object. In order to get that info I need to create layers; otherwise, when I explode the block I get thousands of objects named “PartBody”.

Well for 100,000 searches it takes about 0.09 second here:

import random,time,copy

#create a list of 100K random 26 letter strings (by shuffling the alphabet)
alph=[item for item in "abcdefghijklmnopqrstuvwxyz"]
main_list=[]
for i in range(100000):
    random.shuffle(alph)
    main_list.append("".join(alph))

to_find="abcd"  #search term reconstructed -- original omitted
st=time.time()
#now do the search
matches=[phrase for phrase in main_list if to_find in phrase]
print "{} matches found, elapsed time={:.10f}".format(len(matches),time.time()-st)

>>>162 matches found, elapsed time=0.0910720825

How much time is “significant” relative to the task at hand? I’m willing to bet that with a hundred thousand layers, doing anything in that file will be slow, and the search time of 0.1 seconds is going to be the least of your worries.

For 1 million searches, it takes about 0.9 seconds, so it’s linear…


If there are 900 discrete objects (not blocks containing other objects) then you can’t have more than 900 layers that actually contain an object…

Generally speaking this is correct.

But if a polysurface is exploded and with each sub-surface a layer to be created :wink:

Another case (which is my case):
These objects are inside nested blocks. One block usually contains more than one object, but there are also blocks within blocks. A block may be just a container (becoming a grandparent layer) with no objects inside, hence the possibility of having more blocks than objects, and more layers than objects.