Hi, I have a few files that get extraordinarily large and I can’t seem to find out why.
I’m looking for a way to parse the length of serialized data in my 3dm files to identify large objects.
Something that could output a pie chart of textures, user data, saved block definitions, etc. Eventually also the size of content per layer, and so on.
I might give it a shot and script it myself - but I’m throwing it out there in case other people have done something similar, or, if enough people would like this, to pitch it as a feature request for V9.
If I’m to look at it myself, I was thinking of going through the Rhino.FileIO namespace.
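Something like this rough sketch is what I have in mind - here using the standalone rhino3dm Python package (which mirrors the Rhino.FileIO classes), and assuming the length of the base64 payload that Encode() returns is a fair proxy for each object’s serialized size:

```python
import collections
import rhino3dm

model = rhino3dm.File3dm.Read("large_model.3dm")  # hypothetical file name
bytes_per_layer = collections.Counter()

for obj in model.Objects:
    geo = obj.Geometry
    if geo is None:
        continue
    layer = model.Layers[obj.Attributes.LayerIndex]
    # The base64 payload from Encode() only approximates the on-disk size.
    bytes_per_layer[layer.Name] += len(geo.Encode().get("data", ""))

for name, size in bytes_per_layer.most_common():
    print(f"{name}: {size / 1024:.1f} kB (approx.)")
```

That would cover the per-layer breakdown; textures, block definitions, and user data would need their own tallies.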
I’m working on a utility to help with this for Rhino 9. It’s not quite ready for prime time but should be testable in the not-too-distant future. It’s a sort of Audit3dmFile (Graphically). There are loads of glaring bugs I’d like to fix before announcing this for broader testing, but you’re welcome to test it now on your files with TestAudit3dmFile. Please test with the latest RhinoWIP only.
I would really appreciate it if you could share your file with me. It would really help to have samples of “in-the-wild” 3dm files that necessitate such a tool. Can you privately share it here: Upload to McNeel? (If you can, please add dan@mcneel.com as the recipient address.)
I didn’t know about the current Audit3dmFile - it looks like many of the clues I was looking for are hidden there.
I’ll try out the command when I get the WIP installed - I’m also sending you my annoying real-life file through the upload system. It has a heavy mesh on my walls layer - so heavy that there’s quite a lag when I change clipping planes - but that’s for another thread.
If you’re curious, it’s a file I use for calculating daylight through some rules of thumb in a Grasshopper definition, and it will print PDF reports like this:
Yes, we sometimes use Audit3dmFile and it’s quite useful…but it’s also lacking if you just want to know where the bloat lies. The successor command will hopefully show you that without having to parse a bunch of text. As you’ll see when you test, I’m currently using a TreeMap (I’m a fan of WinDirStat), but I also think a radial TreeMap could work (I’m also a fan of DaisyDisk on Mac).
Thank you so much for the file and the background info! This is already tremendously helpful. Your file immediately exposes a bunch of flaws in my current approach, which is great. Exactly what I’m looking for…files with TONS of stuff:
I’m curious what conclusions you come to with vanilla Audit3dmFile and if they would be the same with TestAudit3dmFile. As it stands now, with your file, I don’t think it would be helpful…yet.
You mention my two favorite disk apps. Wow, it looks great so far - exactly what I was looking for.
A few ideas:
It might be nice to have “cleanup” buttons in the interface, such as “delete all user strings (save 943 kB)” or “delete plugin data (save 2 MB)”, materials, etc. (a bit like Purge, but allowing you to delete things that are in use).
You’ve grouped it nicely, but maybe it would be useful to see another distribution of the objects: perhaps instead of measuring bytes per object, it could be split into geometry, user strings, and plugin data, pooled across all objects (rough sketch below). It’s especially nice for developers to be able to double-check whether something is being added to objects by mistake.
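Roughly what I mean by pooling, as a sketch (rhino3dm again, with the same Encode() size proxy; I’m assuming Encode() is also available on ObjectAttributes, and lumping user strings in with the rest of the attribute data):

```python
import collections
import rhino3dm

model = rhino3dm.File3dm.Read("large_model.3dm")  # hypothetical file name
pools = collections.Counter()

for obj in model.Objects:
    if obj.Geometry is not None:
        pools["geometry"] += len(obj.Geometry.Encode().get("data", ""))
    # ObjectAttributes derives from CommonObject, so Encode() should work
    # here too; user strings travel inside this attribute archive.
    pools["attributes (incl. user strings)"] += len(
        obj.Attributes.Encode().get("data", ""))

for category, size in pools.most_common():
    print(f"{category}: {size / 1024:.1f} kB (approx.)")
```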
I’ll let you know when I get the WIP installed after current deadlines.
Yep, we are definitely on the same page here! I have some ideas and approaches to try. I think first I’d like to see if column sorting alone can solve the problem…with both count and size.
If not that, then some sort of option for grouping. Here’s an early mock-up: