SERIOUS "temporary" bug question

Hi,

I dealt with a nasty bug last night that I’d like to report, or at least find an explanation for.

Probably wasting my/your time.

I’m working on this project.

I’m using Rhino 8.

I was working on it using one computer, saved it. All great.

I threw it on a memory stick to open it on another computer.

Same exact Rhino 8 SR.

Data gets f^%$@p up.

Object order, so meticulously sorted and kept a certain way, changes. Components working 100% fine go ‘red’.
Wires that were solid go dashed, and vice versa.

Aborted the mission, went back to computer 1, things looking normal.

Next morning (today), I opened it again on computer 2, and things are as they should be.

Simply…Why!!! Why freak someone out like that?
For real.


Probably wasting my/your time, ’cause I won’t share this file…


It’s nice and fast, but I can’t clean it up right now.

Not asking for any solutions, just asking if this is ‘normal’? Lol.

I’ve seen this behavior when I jump between the WIP and Rhino 8, but never before on the same SR across different computers.

That’s all.
Thanks.


I guess the real questions are: how do I avoid this in the future? What file-management skills do I need to learn? It’s only fair to ask rather than simply vent. Jokes aside, what file can I provide? The conundrum is that if I internalize data from the file open on computer 1, save a copy, then do the same on computer 2… what errors could you look for when each file now comes in with ‘different’ data structures internalized? How would you compare them? Or are we just at the mercy of fate here?

Why would a Grasshopper file opened in the WIP behave differently if no new components are added and no changes are made? Having to fall back to R7 sometimes is kind of sad and inconvenient.

I feel fortunate that, after saving the file again on computer 1 with no new changes and then reopening it the next day on both computers, the bug was ‘gone’. But what ‘temporary’ state causes this?

Had the data stayed mangled, my team would have been SEVERELY affected and delayed in their workflow. I can admit my own human errors without a problem, but leaving this one a mystery is unacceptable, don’t you think?

Thanks for reading, McNeel.

The description of your experience isn’t enough to reproduce it. Can you provide an example?


Sure… I assume you don’t want the entire file, so should I internalize the data twice, once from each PC?

Thanks

Please bypass any plugins if possible as well; even a minimal example should suffice.

Rhino SystemInfo on both machines would be good to see if there are any variables there. Thanks

I think we are missing a couple of pieces of information. GH itself should be deterministic, and I personally have never encountered this behaviour. Of course, when such a rare problem occurs, it is very hard to debug.

Anyway, what I wanted to contribute instead is that you are obviously not practicing any form of version control. Any work I do, I usually put under version control from the beginning. It is good practice to commit work at least on a daily basis. With a decent version control system and enough self-discipline, you can revert changes quickly. It also takes mental load off you in the end, because there is no need to worry about persistence anymore.
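Just as an illustration, with Git standing in for whatever system you pick (the project path is hypothetical, and it assumes Git is installed and configured with your name/email), a daily check-in can be as small as this, driven from Python:

```python
import subprocess
from datetime import date
from pathlib import Path

# Hypothetical project folder - adjust to your own setup.
PROJECT = Path(r"C:\Projects\MyGHProject")

def run_git(*args: str) -> None:
    """Run a git command inside the project folder, failing loudly on errors."""
    subprocess.run(["git", *args], cwd=PROJECT, check=True)

# One-time setup: turn the folder into a repository (safe to re-run).
run_git("init")

# Daily routine: commit everything that changed, with today's date as the message.
status = subprocess.run(["git", "status", "--porcelain"],
                        cwd=PROJECT, check=True, capture_output=True, text=True)
if status.stdout.strip():
    run_git("add", "-A")
    run_git("commit", "-m", f"Daily checkpoint {date.today().isoformat()}")
else:
    print("Nothing changed since the last checkpoint.")
```

Reverting is then a single command away, e.g. `git checkout <commit> -- definition.gh` to bring back yesterday’s copy of a file.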

On top of that, if you implement a form of a self-test, you can always validate that your are not altered your work by accident. Now in RH/GH this point is hard to apply. But usually if I commit my work, it automatically runs tests in the background informing me about any problems via Teams or Mail. This is usually done by creating a Continuous Integration (CI) pipeline. This can be quite an effort to setup, but once you done it, you can really benefit from this. I wrote it a bit abstract, not naming any concrete tools, because there is no golden way for the Rhino ecosystem yet. But at least there are nowadays many possibilities to get this form of assistance.
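Purely to make the self-test idea concrete (the file names are made up, and this obviously cannot validate Grasshopper logic itself): the CI could run something as small as a pytest file after every check-in, which at least catches a definition that went missing or was truncated during copying:

```python
# test_project_files.py - run automatically after each commit, e.g. via `pytest`.
from pathlib import Path

# Hypothetical deliverables of the project - adjust names and sizes to your own files.
EXPECTED_FILES = {
    "facade_definition.gh": 50_000,    # minimum plausible size in bytes
    "site_model.3dm": 1_000_000,
}

def test_files_exist_and_are_not_truncated():
    for name, min_size in EXPECTED_FILES.items():
        path = Path(name)
        assert path.is_file(), f"{name} is missing from the repository"
        assert path.stat().st_size >= min_size, f"{name} looks empty or truncated"
```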


:rofl:

:rofl::rofl:

Thanks @TomTom

I deserve this, then.
Of course it’s me, not Rhino/GH.

No, sorry - after reading this again I realized I forgot to mention an important point. I’m not saying it’s your fault, but you cannot rule out that you might have done something odd in between.

There are a couple of things you might have done accidentally without noticing. If you use version control, you get an exact copy of what you checked in, without the pain of using USB sticks etc.
And if you modified it, you see a diff instantly. This lowers the probability that you are comparing apples with oranges. I saw in your post history that you had issues with autosaving last week, and I suspect a connection to this topic. In any case, non-deterministic behavior can be caused by various problems, such as concurrent computation, localization issues (sorting in a different order), fancy third-party code, malicious behavior, etc.

But if you know that what’s on your side is exactly the same as what’s on the other machine, you know your manual intervention did not cause the problem, and that is valuable information. It’s really hard to investigate problems if you likely did ten unmentioned steps in between. Did you copy the exact file? Did you delete an object shortly after loading the file? (…)
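As a concrete way to answer the “did you copy the exact file?” question: hash the file on both machines and compare the digests. Identical hashes mean the bytes are identical, so any difference in behaviour has to come from the environment rather than the file. A minimal sketch (the path is just an example):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Run this on computer 1 and on computer 2, then compare the printed values.
print(sha256_of(Path(r"E:\project\definition.gh")))
```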

Can you explain in very simple terms what version controlling is, and what it would look like for me when working in Rhino?

Are you talking about controlling versions of my work, or something to do with the different versions of Rhino?


Apologies for being both facetious and bad at explaining

Understood, however that wasn’t the case. If copying it to an external SSD counts as an unmentioned step, then sure… The Rhino SRs were still the same, just different computers, both Windows, if that matters.

I literally worked on a file, copied it to an external drive, brought it back. Always saved.

Detailed summary:

  1. I worked on the original file using computer 1. Saved it.
  2. I copied the file to an external SSD and worked off of it on computer 2. Saved it.
  3. I brought the SSD back and opened the file on computer 1 again: the malicious, mysterious bug manifested.
     (I decided not to save and closed it; opened it again, the bug persisted; didn’t save and closed it again.)
  4. I freaked out, said “nevermind” and went back to computer 2 - the file was fine again - and kept working off the SSD. Saved it.
  5. Next day, I opened the file on both computers 1 and 2; it was fine.

I gave up on uploading anything - the file is HUGE - no plugins, but internalizing data from both computers where things were now ‘normal’ again seemed rather pointless. Not sure how to replicate the issue.

I guess a better example I can fish out is other Grasshopper files where operations behave differently in the WIP versus R8 versus R7, even when no changes are made.

Cheers


Version control means using a third-party application that stores your project folder inside a database. The folder is then monitored by that application, and as soon as you change a file you are out of sync and are basically creating a new version of your project.

Whenever you check a change in to this database, you get a new version of your project. These apps are usually smart enough to store only the diff and can therefore always show you which file has changed. Text-based files are even diffed by their content, so you see the difference in the text.
The big advantage is that you can dismiss these changes or go back and forth in history without losing your work.

Ideally this database is hosted in the cloud, stored on a local server, or at least on a drive other PCs have access to. Because of this, you can check out the project version on multiple PCs without the need to manually exchange the data. This eliminates sources of error and ensures everyone works on the same version. It also improves collaboration, because you can merge data from different users, etc. You also automate the tedious work of manually saving copies of your file. But dealing with conflicts is another topic and differs by the system you choose.

Examples of such apps are Git, SVN, Mercurial, Starteam, Windchill etc.

They work very differently, but they all have in common that they solve the same problem in a very generic way. It doesn’t matter if it’s a .3dm file or a .jpg, so all of them are suited to your work.
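To make the multi-PC point above concrete, with Git as the example (the remote URL below is a placeholder for wherever you host the repository - a cloud service or a server both machines can reach): computer 2 clones the project once and simply pulls afterwards, so the USB stick drops out of the workflow entirely. A rough sketch:

```python
import subprocess
from pathlib import Path

# Placeholder remote: a cloud host (GitHub, GitLab, ...) or a server share
# that both computers can reach.
REMOTE = "https://example.com/studio/my-gh-project.git"
LOCAL = Path(r"C:\Projects\my-gh-project")

if not LOCAL.exists():
    # First session on this machine: fetch the whole project and its history.
    subprocess.run(["git", "clone", REMOTE, str(LOCAL)], check=True)
else:
    # Every later session: bring the local copy up to date before working.
    subprocess.run(["git", "pull"], cwd=LOCAL, check=True)

# ... work in Rhino/Grasshopper, save, commit as usual, then `git push`
# so the other computer can pull exactly the same state tomorrow.
```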

There is one difference to note. Git is state of the art, but it is designed for software developers working with text files. There is something like Git LFS to store larger binary files, but in general, for CAD work, any of the others might work better. Git is also a bit harder to get into, but it is the most modern and most powerful system to choose.
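If you do go the Git route for .3dm/.gh work, the usual pattern (assuming the separate git-lfs extension is installed alongside Git) is to tell LFS which binary extensions to handle before the first commit. For example:

```python
import subprocess
from pathlib import Path

# Hypothetical, already-initialized repository folder.
REPO = Path(r"C:\Projects\my-gh-project")

def git(*args: str) -> None:
    subprocess.run(["git", *args], cwd=REPO, check=True)

# Enable the LFS hooks for this repository (requires git-lfs to be installed).
git("lfs", "install")

# Route the heavy binary formats through LFS instead of the normal Git history.
git("lfs", "track", "*.3dm")
git("lfs", "track", "*.gh")

# The tracking rules land in .gitattributes, which is committed like any other file.
git("add", ".gitattributes")
git("commit", "-m", "Track Rhino/Grasshopper binaries with Git LFS")
```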

I just want to mention that I very often notice accidents by having a look at my git diff, so it’s a great way of ruling out your own error.

Hope this was simple enough; just watch some videos and you’ll get a better idea…


That was great! Thank you very much.

Happy national holiday!