import rhinoscriptsyntax as rs
import time

def hide_layers(layer_list=None):
    # default: hide layers matching the "*fault" name pattern
    if layer_list is None:
        layer_list = ["*fault"]
    for layer in layer_list:
        rs.LayerVisible(layer, False)

if __name__ == "__main__":
    ts = time.time()
    hide_layers()
    te = time.time()
    print "Elapsed time is {:.2f}".format(te - ts)
Maybe I missed something here, but if you have a list of all the layer names, why can’t you just iterate through the list and search for the string fragment with Python’s native “in” (contains) operator, or the “endswith” or “startswith” string methods?
I did not know about startswith and endswith, but they are not really Python methods, are they?
I wanted to search for a layer name with * (asterisk).
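For what it’s worth, plain Python can also handle `*` patterns without relying on Rhino’s own wildcard matching: the standard-library `fnmatch` module does shell-style matching. A minimal sketch, with made-up layer names:

```python
import fnmatch

# hypothetical layer names, just for illustration
layer_names = ["basement_fault", "surface", "main_fault", "grid"]

# fnmatch.fnmatch handles shell-style wildcards, so "*fault"
# behaves like the asterisk pattern discussed above
matches = [name for name in layer_names if fnmatch.fnmatch(name, "*fault")]
print(matches)  # ['basement_fault', 'main_fault']
```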
I don’t know if there’s any difference in performance in practice, but picking a layer by its exact name and looping through all layer names searching for a substring do have different complexities.
Did you even try it? As far as I can tell, it takes no time at all…
To test:
import random, time

phrases = ["wheninthecourse", "nowisthetime", "thequickbrownfox", "jumpedoverthedog"]
main_list = []
for i in range(750):
    index = random.randint(0, 3)
    main_list.append(phrases[index])

st = time.time()
# search
to_find = "the"
matches = [phrase for phrase in main_list if to_find in phrase]
#matches = [phrase for phrase in main_list if phrase.startswith(to_find)]
print "{} matches found, elapsed time={:.10f}".format(len(matches), time.time() - st)
>>> 750 matches found, elapsed time=0.0000000000
Note the number of matches will change every run as they are generated randomly from the four choices - this is just an example.
I would expect the native Rhino approach to be quicker for large numbers of layers.
FWIW, the pure Python approach would probably only take a significant amount of time somewhere above 100,000 or maybe 1,000,000 elements, assuming a single search that returns a list of matches without printing each failed/successful match.
I may have such cases: I am importing STEP into Rhino and then transferring the whole nested block structure into a layer/sub-layer structure.
The model I’m using for prototyping has 900+ objects and 735 layers, and this is one of the smallest models I have. The shitty thing is that when exporting from 3DEXPERIENCE, the name of the object (which contains important metadata) is assigned to the block in Rhino, not to the object. In order to get that info I need to create layers; otherwise, when I explode the block, I get thousands of objects named “PartBody”.
Well for 100,000 searches it takes about 0.09 second here:
import random, time, copy

# create a list of 100K random 26-letter strings (by shuffling the alphabet)
alph = [item for item in "abcdefghijklmnopqrstuvwxyz"]
main_list = []
for i in range(100000):
    shuffled = copy.copy(alph)
    random.shuffle(shuffled)
    main_list.append("".join(shuffled))

# now do the search
st = time.time()
to_find = "abc"
matches = [phrase for phrase in main_list if to_find in phrase]
print "{} matches found, elapsed time={:.10f}".format(len(matches), time.time() - st)
>>> 162 matches found, elapsed time=0.0910720825
How much time is “significant” relative to the task at hand? I’m willing to bet that with a hundred thousand layers, doing anything in that file will be slow, and the search time of 0.1 seconds is going to be the least of your worries.
For 1 million searches, it takes about 0.9 seconds, so it’s linear…
But what if a polysurface is exploded and a layer has to be created for each sub-surface?
Another case (which is my case):
these objects are inside nested blocks. One block usually contains more than one object, but there are also blocks within blocks. A block may be just a container (becoming a grandparent layer) with no objects inside; hence the possibility of having more blocks than objects, and more layers than objects.
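That counting argument can be sketched with a plain nested dict standing in for the block tree (no rhinoscriptsyntax here, and all names are hypothetical): every block, including empty containers, yields a layer, while only the leaf objects count as geometry, so layers can outnumber objects.

```python
# A schematic stand-in for Rhino's nested block structure:
# dict keys are block names; list values hold child blocks (dicts)
# or object names (strings). Containers may hold no objects at all.
def collect_layers(block, parent=""):
    layers = []
    for name, children in block.items():
        path = parent + "::" + name if parent else name
        layers.append(path)  # every block becomes a (sub-)layer
        for child in children:
            if isinstance(child, dict):
                layers.extend(collect_layers(child, path))
    return layers

model = {"Assembly": [{"SubAssy": [{"Container": []}, "PartBody"]},
                      "PartBody"]}
print(collect_layers(model))
# ['Assembly', 'Assembly::SubAssy', 'Assembly::SubAssy::Container']
# 3 layers from blocks, but only 2 "PartBody" objects
```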