Wish: Data Dams should not pass on any data at startup

Currently, Data Dams act as relays when the .gh file is opened. In huge algorithms that makes GH non-responsive at startup, because the whole algorithm gets recalculated.

Is it possible to add an option to Data Dams so that they don’t pass anything at startup until the user presses the button?

I don’t know the original purpose of Data Dams, but I place them right before the bottlenecks in my GH definitions, so I need them not to trigger at startup. That way I don’t have to wait and can start working with just part of my algorithm, clicking them only when I need them.

Thanks.

I use “False Start Toggle” in a similar way. I think it’s part of the GH plugin Ladybug. It works like a regular toggle button, but is always set to False when you open your GH file.

Thanks Anne,

but installing an otherwise unnecessary plugin and waiting for it to load every time (Ladybug is for sun/shadow studies and the like, not useful for me) just for a single component seems kind of pointless.

I currently use a GhPython component with a toggle to do the trick, but I have to remember to set it back to False before I save the file :smiley:
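Something along these lines would do the trick (a rough sketch only, not the exact component; `data`, `release`, and `out` are placeholder parameter names on a GhPython component):

```python
# Minimal "manual data dam" sketch for a GhPython component.
# Assumed inputs: data (the upstream result), release (Boolean Toggle).
import scriptcontext as sc

key = "dam_" + str(ghenv.Component.InstanceGuid)  # unique key per component instance

if release:
    sc.sticky[key] = data       # capture the latest upstream value while released

out = sc.sticky.get(key, None)  # pass on the last captured value; empty in a fresh session
```

Since sticky only lives for the current Rhino session, the gate always starts empty after a restart, which is exactly the “don’t pass anything at startup” behaviour asked for above.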

Yeah, another problem is that the dams don’t store their recorded data. So you save a file, close it, reopen it, and it’s no longer the same file. If the dam serialised and deserialised its data, that would also solve your problem, wouldn’t it?

How about having the Data Dam detect the location and name of the .gh file and use that path and name (without the extension) to store its data? When you reopen the definition, it would look for that cache file and, if it isn’t there, recalculate.

It should be optional (toggleable), or a setting in Preferences :wink:

Perhaps I can serialize/deserialize using cPickle inside my GhPython script. I remember @AndersDeleuran mentioning that this is how he does it.

So far I hadn’t found a use for cPickle in GH. Perhaps this is it.
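A rough sketch of how that could look (assuming a GhPython component with `data` and `recompute` inputs, both placeholder names; cPickle only stores plain Python data such as numbers, strings, and coordinate lists, so RhinoCommon geometry would need converting first):

```python
# Sketch: cache an expensive result next to the .gh file with cPickle (IronPython 2 / GhPython).
# Assumed inputs: data (expensive upstream result), recompute (Boolean Toggle forcing a refresh).
import os
import cPickle as pickle

doc_path = ghenv.Component.OnPingDocument().FilePath            # empty until the .gh file is saved
cache = os.path.splitext(doc_path)[0] + ".pkl" if doc_path else None

if cache and os.path.exists(cache) and not recompute:
    with open(cache, "rb") as f:                                 # reuse the stored result
        out = pickle.load(f)
else:
    out = data                                                   # recalculate
    if cache and out is not None:
        with open(cache, "wb") as f:                             # store it for next time
            pickle.dump(out, f)
```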

Sounds like you need to break your definition into several smaller ones. Step 1.gh, step 2.gh, step 3.gh etc.

I used to use lots of Data Dams, but I now consider them a code smell.

For example, a definition might:

  • Reference some input geometry
  • Remesh with mesh machine
  • Constrain/rationalise with k2
  • Create detailed model
  • Create fabrication model

The frustrating way of doing this would be to chain everything together in one file, maybe with Data Dams between each part.

The more flexible (and collaborative) approach is to break it up.

You can save the result of each step by baking it into your Rhino file (Elefront is great for this), or you can use the built-in Data Input and Data Output components.

Each step automatically references the geometry or data result of the previous step. The nice thing about a modular approach like this is that you can completely change the logic in any of the steps, and as long as what you pass to the next step is still the same type of information, you don’t need to touch the other steps.


Hi @dharman,

Thanks for the suggestions.

Perhaps I wasn’t entirely accurate when I said “huge algorithms”. I put it that way because usually it’s the size that makes an algorithm processor-heavy.

In my case the definition isn’t big; it just has a couple of bottlenecks that really hit the processor hard, and I don’t need them that often.

Data Dams are a great concept; they just need a bit more functionality.

Breaking the definition into modules is only really useful if multiple users are working on it.

I’m curious, though: why would I use MeshMachine when I work only with NURBS surfaces?

K2, is that Kangaroo? Kangaroo has a narrow field of applications; in my field it’s worse than useless. It’s so unreliable that all I can use it for is making blooper fake-physics simulation videos. I could trash it more, but there are other threads on this forum where I do that. :stuck_out_tongue_winking_eye:
I don’t think Kangaroo deserves to be shipped along with Grasshopper. There are other plugins that are much more useful and reliable.
Not to mention it’s processor-heavy, which is exactly what I’m trying to avoid.

I’m at the initial design stage; I don’t need detailed or fabrication models. I do usually split my project files by design stage, even though it should be possible not to, so that everything stays in one place and is easier to troubleshoot.

The Data Input and Data Output components need to be extended to reduce the time it takes to update values. I’ve tested them before, and it’s better to use sticky. If you use them to store and retrieve data in different parts of the same GH definition, there’s a delay before the value converges.

As for Elefront, Kangaroo, MeshMachine or whatever, I really think users should be able to choose which components GH loads, not whole plugins. At one point I couldn’t find a component I needed among so many tabs I never use. Another thing is that I prefer using OOTB components.

Additionally, GH is not well enough integrated into Rhino; baking GH geometry and then retrieving it is currently a very poor solution.

There are multiple users working on it. There’s you today, you tomorrow, you next month…

No man ever edits the same algorithm twice, for it is not the same algorithm, and he’s not the same man.


:see_no_evil::hear_no_evil::speak_no_evil:
:shinto_shrine: Dalai Lama David Rutten :nepal:

[image: Sand_Mandala-2]


Breaking the definition into modules is only really useful if multiple users are working on it.

I’ve started breaking my definitions up lately and it’s definitely made things easier. What I do is, at the end of a logical chunk of the process, I use a Data component and internalize the data in it, then copy that into the start of the next definition. The nice thing is that if I need to edit something in step 3 of the process, I can just open that step, because the internalized data from the previous steps already exists and doesn’t need to be calculated again. It’s a bit more manual than other approaches, but it works reliably.

For me that breaks the workflow, and it’s difficult to troubleshoot or maintain if you come back to the project after a while, or if someone else tries to use it and works from outdated data in that Data component.

There are systems you can implement: for instance, date the data and name each step.

Unless McNeel implements a database-based Rhino to track all the links, I’m better off with a single file per project.

Go for it. It is after all your file :smiley:

Come on, Michael, a little support for my pushing McNeel to implement a database-based Rhino :stuck_out_tongue_winking_eye:

I don’t disagree with that. Just saying what people do now.


I wish we’d get some OOTB GH components to organize worksessions. That would be acceptable for the time being.

Another option is to save .gh files along with .3dm files in a new zipped project format, say .3dmx.

Btw, then you could reduce the size of the .3dm and store all graphical properties, materials, rendering scenes, etc. separately, just like .epub, .xlsx, .docx, .pptx, and .3dxml.

Devs, don’t forget to mention me when you implement that :wink:

Yes, I agree we really need that. It’s essential for working on large/complex projects. I’ve already requested more worksession support in RhinoCommon here. As a workaround I’m currently using a Python component that scripts the Worksession command.
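Roughly along these lines (a sketch only; the attach path and the `run` input are placeholders, and the exact Worksession option names may vary by Rhino version):

```python
# Sketch: drive the Worksession command from a GhPython component as a workaround
# for the missing RhinoCommon worksession API. Assumed input: run (Boolean Toggle).
import Rhino

if run:
    attach_path = r"C:\projects\hull_reference.3dm"    # hypothetical file to attach
    cmd = '-_Worksession _Attach "{}" _Enter'.format(attach_path)
    Rhino.RhinoApp.RunScript(cmd, False)                # False = don't echo to the command line
```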

I’d be really keen for Elefront’s baking with attributes and referencing with data structure to become OOTB functionality.

Have you tried Elefront?

I was giving a generic example of a workflow I’ve worked on, where a modular process really helps. I’m not familiar with exactly what kind of work you do.

in my field it’s worse than useless

Now I’m curious to know what your field is. My AEC-centered mindset finds lots of uses for Kangaroo. I recently used it to optimize asset tag legibility locations (~20k objects) on Revit drawings for an infrastructure project. I think it has very broad applications.

Honestly, I avoid the Data components too, mainly because they can’t take a file path as input.


Naval Architecture / Ship Design.