My most-used workflow is to receive 3D data, mostly as STEP files (which originate from different CAD systems such as Solidworks, Catia, Think3, ProE), and import it into Rhino for Mac.
I then have to re-order items into a strict standard layer structure to prepare the data for work.
During import I often end up with block definitions in my Rhino file.
I do not like block definitions, as they have interfered with separating certain items onto different layers and have caught me out badly on a few occasions.
What is the process that triggers these block definitions?
Are they already defined within the STEP file data by the originating CAD system?
Are these blocks created during Rhino’s import?
Can I define a different default behavior, so that these are not created as block definitions but are instead handled differently (e.g. separated onto unique layers)?
These blocks now cause me a bit of extra work.
If only a handful of blocks are created, this is no big deal; I will just:
select all block items
explode block definition
delete the block definitions (unfortunately, as far as I can see, this can only be done manually, one at a time, in the block properties window (right pane, bottom)).
When I receive complex files I can end up with dozens and dozens of block definitions in a single file, which causes a large amount of redundant preparation work.
So far I have not used block definitions with 3D data (I do understand and see their use when working on 2D drawings, though, which I do not do in Rhino).
If I could entirely prevent blocks in 3D work with Rhino, it would be fantastic.
ExplodeBlock and Purge should help speed things up. I believe that in Rhino, blocks in STEP files come from ‘assemblies’ in the originating software - I cannot test that to verify right at the moment, though. I also can’t tell you for certain that the commands mentioned above are implemented in Rhino for Mac at the moment, but they are worth a try.
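If Python scripting is available in your build of Rhino for Mac, something along these lines could automate the explode-and-clean-up step in one go. This is a rough, untested sketch using the standard rhinoscriptsyntax module, not an official workflow:

```python
import rhinoscriptsyntax as rs

# First pass: explode every block instance in the document.
for name in rs.BlockNames():
    for instance_id in (rs.BlockInstances(name) or []):
        rs.ExplodeBlockInstance(instance_id)

# Second pass: delete the definitions that no longer have any instances,
# so they stop showing up in the block definition list.
for name in rs.BlockNames():
    if rs.BlockInstanceCount(name) == 0:
        rs.DeleteBlock(name)
```

Note that exploding a top-level instance of a nested assembly can leave new child instances behind, so on deeply nested files the script may need to be run more than once.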
Pascal, thanks a lot for the hint!
The “ExplodeBlock” command is what I currently use to explode all selected block instances (I simply use SelBlockInstance to select all items defined in blocks).
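For what it’s worth, those two commands could also be chained in a toolbar button or alias macro - a hypothetical example, assuming the standard English command names:

```
! _SelBlockInstance _ExplodeBlock
```

Purge could then be run afterwards to clean up what is left over.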
My main issue is with cleaning out all the block definitions listed in the block definition list (this may be an OCD thing, but I prefer to keep that list clean).
I will learn what the “purge” command can do for me.
A question about block behavior:
Can I “re-glue” blocks by using the leftover old block definitions that are still listed, after I have exploded all the blocks and moved the objects that were formerly inside them onto different layers?
If that is the case, I may just leave them there and re-use them later should I need them.
Hi Menos - yes, if the block definition is still there, Insert will add an instance of it to the file. Keeping block definitions around if you don’t intend to use them again just bloats the file though.
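If you prefer to do it by script rather than with the Insert command, re-inserting an instance of a leftover definition might look roughly like this (untested sketch; the definition name is just a made-up example):

```python
import rhinoscriptsyntax as rs

# "M4_screw" is a hypothetical definition name - substitute one from your file.
if rs.IsBlock("M4_screw"):
    # Place a new instance of the existing definition at the origin.
    rs.InsertBlock("M4_screw", (0, 0, 0))
```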
Hey Pascal, Ok I will experiment with this and learn how the objects behave.
Lots of the data I work with is technical components and tool sets.
When using Rhino on my MacBook Pro while traveling, performance is already an issue with some of the data.
Will, say, one block definition be as performance-taxing as having the entire component present in the file? Will these definitions hinder performance, or will they just enlarge the file size-wise (which would not be a big deal for me)?
Hi Menos - the block definition will be taking up RAM but should not, in itself, make any difference to performance unless you’re at the limit. Instances of the definition will have an effect, and the exploded version possibly more so, depending on the number of repeated components that were once defined by a single thing in the definition (e.g. many screws of the same exact type).
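If it helps to gauge how much repetition a file contains before exploding everything, a quick report of instance counts per definition could look something like this (again an untested rhinoscriptsyntax sketch):

```python
import rhinoscriptsyntax as rs

# Print how many instances each block definition has in the document.
# Definitions with many instances are where exploding will multiply
# geometry (and memory use) the most.
for name in rs.BlockNames(sort=True):
    count = rs.BlockInstanceCount(name)
    print("{}: {} instance(s)".format(name, count))
```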