Just asking, have you succeeded in isolating the Data I/O issues into a minimal definition others can test? I’m a little bit interested here.
Yep, that’s what I wanted to propose. Create a new thread with a minimal example and maybe someone has fun solving it for you.
I really don’t like this statement. Almost everybody in the Western World has time. Even though I work full time and care for a child after work, I frequently find time for some reading and playing around.
Of course you have to invest a lot of time to get better at something. But essentially creating GH definitions and solving coding puzzles is not so different at all. Just that writing code gives you more freedom. In the end you might save time at work if you implement a tailor-made solution.
I don’t like this statement either, however I agree with osuire.
ps: it’s my first time using nested quotes!
All my data between serializing and deserializing is generated in Grasshopper, hence why I was wondering whether something about that brep could be blocking the read/write process when saving.
The data may seem fine going between components, as in my case, but when it comes to saving, there were issues with the data.
But essentially creating GH definitions and solving coding puzzles is not so different at all. Just that writing code gives you more freedom. In the end you might save time at work if you implement a tailor-made solution.
I agree of course. But if a feature exists, I’d rather use it than re-do it myself, especially if the one who made it is David.
If you have made something that works better than the OUTPUT/INPUT components, you might want to share it here instead of patronizing, since this is the “Managing large projects” thread.
There are indeed other (de)serialization solutions; I don’t know if you’ve tried any.
The data may seem fine going between components, as in my case, but when it comes to saving, there were issues with the data.
Christopher, did you send this example to David?
That was a custom Param, so I found the error in code and fixed it.
The bug meant the data worked in code when moving between components, but it could not be saved and restored when closing and reopening the file.
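For anyone who wants to isolate this kind of failure: a quick sanity check is to force the geometry through a save/load round trip outside the normal component flow. Below is a minimal sketch, assuming RhinoCommon is available (e.g. inside a GhPython component) and a Brep input named `brep`; the function name and temp path are just illustrative.

```python
# Minimal round-trip test: write the Brep to a temporary .3dm and read it
# back, mimicking what happens when a file is saved and reopened.
import os
import tempfile
import Rhino

def survives_roundtrip(brep):
    path = os.path.join(tempfile.gettempdir(), "serialize_test.3dm")
    f3dm = Rhino.FileIO.File3dm()
    f3dm.Objects.AddBrep(brep)
    if not f3dm.Write(path, 6):  # 6 = save as a Rhino 6 file
        return False
    reread = Rhino.FileIO.File3dm.Read(path)
    return reread is not None and reread.Objects.Count == 1
```

Note this only tests the Brep through the .3dm pipeline; Grasshopper saves params through its own GH_IO archive, so a custom Param’s Read/Write overrides still need their own test, but an invalid Brep usually fails here first.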
Why do you feel the need to add more irrelevancy here?
It’s not a size issue, of course. I just mentioned the size of my projects because Proterio seemed to claim he was passing lots of data through the INPUT/OUTPUT.
If you can READ, I specified:
my my my… someone needs a hug …
If you have made something that works better than the OUTPUT/INPUT components, you might want to share it here instead of patronizing, since this is the “Managing large projects” thread.
I was contributing here before, and sorry, this thread is too generic for your particular problem. There is not always a generic solution to something. The initial thread was about dealing with definitions that contain a lot of components, not big data! I don’t know if my solution fits your use case. This is why I’m asking you to define it properly in a new thread with a minimal example.
my my my… someone needs a hug …
You are right. I’m just kind of pissed off by the folks who come out of the blue to tell you everything is working fine on their end.
Been there )))) see my thread on Surfacing
my thread on Surfacing
Care to share the link?
I was contributing here before, and sorry, this thread is too generic for your particular problem
Well, you are right. Large projects and data flow are related but they are not exactly the same thing.
One could deal with large geometry and property data sets without using a generative workflow.
Yet, we are in the Grasshopper part of the forum, and it makes sense to make use of dataflows on large projects involving teams working in different locations.
So I guess anything related to the INPUT and OUTPUT paradigm is relevant here.
Of course, there are many ways to do this, and Frontinc even gave it a name: Building Information Generation, wisely presented as a more mature form of BIM.
I tend to dislike naming ordinary things with fancy names.
It makes me think everything is not really progressing.
Well, if you are referring to BIM, I can assure you that progress is laughable here in France!
Well, as one of the BIM managers in my company, I somewhat believe that BIM, the concept or the practice itself, isn’t specious. But the practice behind Building Information Generation is what the AEC industry has been doing for a long time, and it’s useless to invent the term to make your work sound cool.
This thread helped me tremendously, thank you for making it @gerryark. For me, using Telepathy, Human and Elefront has turned GH into a serious production tool; other tools have helped too, but those are the big three in my case.
I’ve only been using Grasshopper in professional production work for 2 years. Early on, organizing large data sets and then presenting them was an issue. SketchUp is still a dominant production tool in my immediate area, so I rely on this forum heavily. SketchUp and most other design software are so dependent on the concept of Blocks that transitioning over to GH has some real bumps in the road. Human/Elefront helped tremendously in this regard. In fact, I almost never use blocks when direct-modeling in Rhino, but almost always use them with GH.
Here is what helped me organize and present work in the 100,000 to 500,000 SF+ architecture range. I’m sure I’m just scratching the surface, but as a newcomer these things were easy to pick up and yet made a huge difference:
- Clusters. I wish I had started making clusters sooner - as in from day 2. Pretty much all my definitions have a one-off cluster, just because architectural problems so often call for a bespoke investigation. Kind of like Telepathy, it lets me have a “constant” of sorts in the document. With architecture, lots of times I’m taking an idea, then duplicating it, then mutating it… Clusters make these bespoke moves manageable.
- GH Groups (not Rhino Groups). I’ll group often when I don’t need the help of a data tree but just want to move/transform/etc. a bunch of things together and then ungroup them back to their lists. Data trees are great, I just don’t need them every time.
- Data Trees: @andheum has a great YouTube video on Data Trees. It was the turning point that helped me grapple with them, especially the importance of using Shift Paths. Later @Joseph_Oster had a great tip on using Flip Matrix to (sort of) shift items in a tree by X. (Shift list equivalent for tree branches - #4 by Joseph_Oster) So far I’ve needed Data Trees on every large project and most small ones (a minimal transpose sketch follows this list).
- Telepathy for organizing a definition almost as easily as writing Python code. It lets me simulate having “constants”, and name-spacing is like pseudo object-oriented programming. Renaming keys is like having a tiny piece of an IDE in there. It also makes picking up a project from 2 months ago easier, since the name-spaced keys are like documentation.
- Human for creating layers, attributes and blocks; Elefront for inserting and baking those blocks. The mix of the two is the easiest for me at this time. Human doesn’t force me to immediately bake the block in my file, and Elefront makes placing/baking very easy. Getting GH geometry into Rhino as blocks with attributes is what makes the definitions “real”, so they can be understood as windows/panels/walls etc. This means I can work with the GH canvas preview turned off and just let the baked blocks update in real time. Epic. (A plain-RhinoCommon sketch of the bake step follows this list.)
- Human for exporting NamedViews. This has been a life-saver for large presentations showing many options/iterations/views. All I have to do is put together a few template pages in InDesign and then replace their frames with the exported linked images. GH became a serious production tool in that moment; getting the ideas/designs off the screen and onto a PDF was a big deal.
- LayoutManager and FabTools for creating many layouts algorithmically. Quick fabrication drawings, diagrams, etc.
- A Python script by @dharman for batch exporting layouts as PDF. Doing this manually is borderline impossible with 100+ sheets unless time is of no concern. Setting up contours for export to laser cutter, batch printing is slow - #2 by RD3 (a rough RhinoCommon sketch follows this list).
- OpenNest for packing/laser cutting. This is such a time and materials saver that some things wouldn’t even have been considered worth discussing on small teams/tight timelines.
- Watching that Mobius Hotel video from Front and Zaha.
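Since the Data Trees item comes up so often, here is a minimal GhPython sketch of the Flip Matrix idea: transposing branches and items so that shifting items becomes ordinary branch shifting. The input name `tree` (set to Tree Access) is my assumption, not something from Joseph’s post, and it assumes a rectangular tree.

```python
# Transpose a (rectangular) data tree: item j of branch i becomes
# item i of branch {j} -- the Flip Matrix idea in code.
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

flipped = DataTree[object]()
for i in range(tree.BranchCount):
    for j, item in enumerate(tree.Branch(i)):
        flipped.Add(item, GH_Path(j))

a = flipped  # GhPython output
```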
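And since the Human/Elefront item is really just “geometry plus attributes lands in the Rhino document”, this is roughly what the bake step looks like in plain RhinoCommon - a simplified stand-in, not Elefront’s actual code; the name and key/value pair are invented for illustration.

```python
# Bake a brep into the Rhino document with a name and user-text attributes.
# Assumes GhPython, where scriptcontext.doc must be redirected to Rhino.
import Rhino
import scriptcontext as sc

attrs = Rhino.DocObjects.ObjectAttributes()
attrs.Name = "panel_A01"                # hypothetical object name
attrs.SetUserString("type", "window")   # hypothetical key/value pair

sc.doc = Rhino.RhinoDoc.ActiveDoc       # bake to Rhino, not the GH document
sc.doc.Objects.AddBrep(brep, attrs)
sc.doc.Views.Redraw()
sc.doc = ghdoc                          # restore GhPython's own document
```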
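Finally, for the batch-PDF item: @dharman’s linked script is the reference, but the core of the approach in Rhino 6+ looks roughly like this - iterate the page views and feed each one to RhinoCommon’s PDF writer. Run it from Rhino’s Python editor; the output path and dpi are placeholders, and it assumes layout sizes in inches.

```python
# Batch-export every layout (page view) in the document to a single PDF.
import System
import Rhino
import scriptcontext as sc

def export_layouts(path, dpi=300):
    pdf = Rhino.FileIO.FilePdf.Create()
    for page in sc.doc.Views.GetPageViews():
        w = int(page.PageWidth * dpi)   # PageWidth/Height in layout units
        h = int(page.PageHeight * dpi)
        settings = Rhino.Display.ViewCaptureSettings(
            page, System.Drawing.Size(w, h), dpi)
        pdf.AddPage(settings)
    pdf.Write(path)

export_layouts(r"C:\temp\sheets.pdf")   # placeholder path
```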
Maybe at some point I’ll go all-in on Elefront. But during early design phases, where GH is almost like another sketching tool, the extra step of baking the blocks into the actual Rhino document just kinda gets in the way. If (like Human) Elefront could save the block definition without baking it until the moment I actually want to place it, then I’d probably just use it for blocks + attributes. It is pretty convenient that I can define the block in Human and place/bake it in Elefront. This kind of explains it: Seeking some advice on my current human + elefront workflow
Hi Dharman,
The link to the paper you mentioned in your reply is broken. Any chance you still remember that paper, two years on?
Thanks!
You can think of each gh file as a node in your project graph, as this paper describes.