Sure thing @kike,
For most workflows I do rely on baking, blocks, and manifesting the geometry into the Rhino document itself, often via dynamic baking.
However, when a model gets very large or complex, the file size can get out of hand and performance can suffer. Or I’m simply interested in interoperability with other advanced programs, where Grasshopper acts as the “geometry creation engine” and another program takes over with additional functionality once it knows what kind of geometry is being created.
-Because GH can handle lots of complex and interesting geometric relationships without having to commit geometry to the Rhino doc, I think it’s well suited to creating these geometry/metadata relationships.
-I’m interested in the ability for Grasshopper to run a script of all the “instructions” of geometry instantiation and generate the plane location/rotation and metadata for each “object”. For example, a chair (or 10,000 chairs) would be planes with different metadata such as “Type”, “Name”, and “Material”; 5,000 other planes might be tables, doors, windows, or even a palm tree.
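A minimal GhPython-style sketch of what I mean (the record layout and the `make_record` helper are hypothetical illustrations, not an established schema):

```python
import Rhino.Geometry as rg

# Hypothetical record layout: one plane + one metadata dict per "object".
def make_record(origin, obj_type, name, material):
    plane = rg.Plane(origin, rg.Vector3d.ZAxis)  # world-aligned frame at this point
    meta = {"Type": obj_type, "Name": name, "Material": material}
    return (plane, meta)

# 10,000 chairs: nothing heavy yet, just frames with metadata attached.
chairs = [make_record(rg.Point3d(i % 100, i // 100, 0.0), "Chair",
                      "Chair_{}".format(i), "Oak")
          for i in range(10000)]
```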
In this example, all of these objects are instantiated from a single point in 3D space in Rhino, and the metadata “Type” would drive certain algorithmic functions to handle its positioning.
“Tree” would snap the point to the site topography for instance.
“Chair” perhaps would orient to face the nearest “Table”.
Point being, the creation method of the point is the same regardless of the type it will become.
What varies is the metadata, the algorithms being run, and the results they produce; in this case, lots of planes with metadata attached.
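Roughly like this, in dispatch form (a sketch only: I’m assuming a site mesh `topo` and a list of already-placed table planes, and both helpers plus the `PLACEMENT_RULES` table are hypothetical names):

```python
import Rhino.Geometry as rg

def place_tree(plane, meta, topo, tables):
    # "Tree": drop the frame's origin onto the site topography mesh.
    snapped = topo.ClosestPoint(plane.Origin)
    return rg.Plane(snapped, rg.Vector3d.ZAxis)

def place_chair(plane, meta, topo, tables):
    # "Chair": yaw the frame so its X axis faces the nearest table.
    nearest = min(tables, key=lambda t: plane.Origin.DistanceTo(t.Origin))
    to_table = nearest.Origin - plane.Origin
    to_table.Z = 0.0  # stay upright; rotate in plan only
    yaxis = rg.Vector3d.CrossProduct(rg.Vector3d.ZAxis, to_table)
    return rg.Plane(plane.Origin, to_table, yaxis)

PLACEMENT_RULES = {"Tree": place_tree, "Chair": place_chair}

def place(plane, meta, topo, tables):
    # Same input point for every object; only the "Type" decides the rule.
    rule = PLACEMENT_RULES.get(meta["Type"])
    return rule(plane, meta, topo, tables) if rule else plane
```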
In this example, the Rhino model is essentially just a point cloud where the user can “Preview” the geometry but nothing is baked yet.
The workflow would then export all the planes and metadata as CSV, JSON, or another data format into a database, and also into a real-time engine such as Unreal Engine (UE), where the plane information and metadata would be used to spawn all of the unique objects at runtime, leveraging the power of Nanite geometry, Hierarchical Instanced LODs, Material Instancing, etc.
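The export step could be as simple as the following (the field names are just one hypothetical schema; UE or the database would need a matching importer on its side):

```python
import json

def export_records(records, path):
    # Each plane serializes as origin + two axes: enough to rebuild the
    # full location/rotation frame on the other side, plus its metadata.
    payload = []
    for plane, meta in records:
        payload.append({
            "origin": [plane.Origin.X, plane.Origin.Y, plane.Origin.Z],
            "xaxis":  [plane.XAxis.X, plane.XAxis.Y, plane.XAxis.Z],
            "yaxis":  [plane.YAxis.X, plane.YAxis.Y, plane.YAxis.Z],
            "meta":   meta,
        })
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)
```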
-For example, the Chair object type could now be manifested as a chair model of 2 million+ polygons and be instanced 5,000+ times without a performance hit.
-The Tree object type might get imported specifically into a procedural tree generator and then back into UE with its location and species metadata intact.
In this application, UE would handle the heavy lifting and advanced visualization of said geometry, and enable things like VR interaction with the objects.
Long story short: having data-rich “objects” that can be represented by geometrically simple proxies opens up lots of different ways to make use of the data/metadata of said objects.
Planes seem to be the lightest-weight “object” in GH that can carry metadata, location, and rotation information. I guess you could use Point3d for this, with the rotation and scale embedded in user text metadata as well, but since most (all?) 3D software requires a plane/XYZ-axis location/rotation to place an object, Plane seems well suited for this.
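That preference is easy to show in RhinoCommon (a sketch assuming a GhPython component; the numbers are arbitrary):

```python
import Rhino.Geometry as rg

plane = rg.Plane(rg.Point3d(5, 2, 0), rg.Vector3d.ZAxis)
plane.Rotate(0.5, rg.Vector3d.ZAxis)  # yaw the object 0.5 radians in place

# A plane IS a placement: the world-to-object transform falls out for free.
xform = rg.Transform.PlaneToPlane(rg.Plane.WorldXY, plane)

# A bare Point3d only carries the translation part; the rotation (and any
# scale) would have to live in user-text metadata alongside it.
```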