Moving lots of layers to be sublayers of a new layer seems to take forever

Hi guys, I just got a 2D DWG for a project with lots of data and tons of layers, and I wanted to put them all under a new “2D parent layer”. But selecting all layers (except the active one) and dragging them onto the new parent layer takes forever and Rhino freezes. (New computer with lots of RAM and the whole shebang.)

Do you know what causes this?

eh… the file has 17,531 layers… what can I say… architects… :wink:


For your information: opening the file is fast, and running Purge is quite fast too, which shows that there are only 83 empty layers. The file is 28 MB when saved as a Rhino file.
I can upload the file to a developer if you like, but it is confidential so I would like to send it directly.

I love layers, but that’s a freakishly high number. Or is that a joke? If not, who can find anything in there anyway?!

Sadly no joke…
I made a script to count the number of layers. And take a look at the size of the scroll handle… :expressionless:


(the image is cropped here, so click on it to see the full length of visible layers)

Well, here, the town planning administrations have their codified layers that are in all files - they are numbered and color coded and each layer represents a specific type of object. This is actually handy if you know the coding system, you only turn on the layers you need. There could be several hundred, but I’ve never seen thousands. Looking at the image, it seems to me like someone made one layer per object or something…


Could it be a typical list-problem? What happens if you work through the layer-list from the end instead of from the start?
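To illustrate the “work from the end” idea with plain (non-Rhino) Python: removing items while walking a list forward shifts every later element on each delete, while walking backward only ever touches the tail, so nothing in front of the cursor moves. A minimal sketch:

```python
def drop_matching_in_place(items, predicate):
    """Remove every item matching predicate, iterating from the end
    so earlier indices are never invalidated by a deletion."""
    for i in range(len(items) - 1, -1, -1):
        if predicate(items[i]):
            del items[i]
    return items
```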

Another candidate for problems could be the Undo mechanism. It can cause an “exponential” undo list to be built up while processing/modifying the layer list. If so, the file could be saved as a temp copy, the Undo mechanism turned off while processing the layers (if that is possible, I haven’t checked), and then you do your thing with the layers while you hide under the desk.

// Rolf


We have the same problem here. Our work-around was to write a script that moves all layers onto a new parent layer. Also, the undo function does break with so many layers.
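A minimal sketch of such a script, assuming rhinoscriptsyntax: `rs.LayerNames`, `rs.ParentLayer`, `rs.IsLayer`, `rs.AddLayer`, and `rs.EnableRedraw` are real rhinoscriptsyntax calls, but the `"2D parent"` layer name and the helper function are made up for illustration. The Rhino-independent logic is factored out so the reparenting rule can be exercised anywhere:

```python
def reparent_all(layer_names, parent, set_parent):
    """Move every top-level layer under `parent`, skipping the parent
    itself and anything already below it (full paths use "::")."""
    moved = 0
    for name in layer_names:
        # Sublayers ("A::B") follow their top-level parent automatically.
        if name == parent or "::" in name:
            continue
        set_parent(name, parent)
        moved += 1
    return moved

# Inside Rhino (hypothetical usage; rhinoscriptsyntax exists only there):
# import rhinoscriptsyntax as rs
# rs.EnableRedraw(False)                # avoid one redraw per layer move
# if not rs.IsLayer("2D parent"):
#     rs.AddLayer("2D parent")
# reparent_all(rs.LayerNames(), "2D parent", rs.ParentLayer)
# rs.EnableRedraw(True)
```

Disabling redraw while the loop runs is often the single biggest speedup for scripts like this, since each layer change can otherwise trigger UI updates.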


I love that!! A handy PRO approach I will use in the future! :smiley:
Working backwards through the list sounds interesting, but I am not sure how I should approach it.
That said, I made a simple script that sorts all objects into new layers based on object color. I only need the file as a reference, but I need to select objects quickly, so just having it as a referenced file isn’t handy due to all the layers.

Thanks, I did that too, but handling 17K layers is super slow, so maybe something isn’t working right within Rhino. (Selecting and hiding all objects is done in a blink, though.)

This may be an example of a situation that McNeel developers thought would never happen. We sure haven’t heard much here in the forum about files with this many layers. Could it be that this is an emerging trend? What kind of info is this, i.e.: why so many layers? Can your data supplier explain their motivation? This might be something that McNeel will need to address if it is starting to show up because Rhino is being used for more challenging projects than previously.

I recall once having the mentioned “exponential list growth” (undo) problem using a certain framework. It is very natural for a framework to record changes, one by one, as they happen, but it is also generally recognized that such exponential undo-lists can choke the system if many items in the list are modified in a for-loop for example.

The typical solution is to provide a manual “transaction” which the user can start before modifying the list, and then “commit” once the modifications are done. Between “StartTransaction” and “CommitTransaction” the system simply skips recording the individual modifications and instead records only the total “batch”, saving that single record on CommitTransaction.

It may well be that the McNeel developers have foreseen this phenomenon and provided a means to handle just that (McNeel would have to confirm, because I don’t have time to test it right now), but have a look at this command (BeginUndoRecord) and its complementary command (EndUndoRecord). To me it looks suspiciously like the same concept as the above-mentioned StartTransaction / CommitTransaction:

Perhaps this will resolve the performance problem by simply recording one (1) undo-action, doing away with the “exponential” undo-problem. Who knows…
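For what it’s worth, `RhinoDoc.BeginUndoRecord(description)` (which returns a serial number) and `RhinoDoc.EndUndoRecord(serial)` are documented RhinoCommon methods. A hedged sketch of wrapping a batch edit in one undo record, with the document object injected so the wrapper itself can be exercised outside Rhino:

```python
def with_one_undo_record(doc, description, edit):
    """Run edit(doc) inside a single undo record, so thousands of
    layer modifications collapse into one entry on the undo stack."""
    serial = doc.BeginUndoRecord(description)
    try:
        return edit(doc)
    finally:
        if serial > 0:  # 0 means another undo record was already active
            doc.EndUndoRecord(serial)

# Inside Rhino (hypothetical usage):
# import scriptcontext as sc
# with_one_undo_record(sc.doc, "Reparent layers",
#                      lambda doc: do_the_layer_moves())  # your batch edit
```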

// Rolf


Just for testing, is the file as slow with the layer pane (and/or the properties pane) hidden or closed?

There was a previous issue I had reported with huge numbers of objects selected for a script being very slow, but only when the properties pane was open. I don’t believe it ever got resolved in V5.