Mac vs Windows difference between results


We (@Unfold and I) wanted to test whether a few plugins also work in the Mac version.
The plugins don't use any external libraries besides RhinoCommon.

When Dries ran the unit tests for Megarachne, only 1 test failed, and we can't find the reason why we get different results on different machines. The file showing that one difference is here (8.6 KB)

My result:

Dries result:

It looks like the values are exactly the same, but the order is different.

We both have the same tolerance and unit settings.
If you have an idea what the reason could be, or if you could check which version of the file you get on your machine, I'd be really thankful!

If you want to know what's inside: it's one of the results of the A* algorithm. For some reason a different vertex is chosen as the first one to visit on Windows than on Mac. It doesn't affect the end result, and both visited points are equivalent, but it would still be good to know why this happens :slight_smile:
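One common cause of this kind of platform-dependent visit order is an unspecified tie-break: when two vertices have the same f-score, the order they come off the open set can depend on how the underlying sort or heap resolves ties on a given runtime. A minimal sketch (hypothetical, not Megarachne's actual code) of making the tie-break explicit:

```python
import heapq

def pop_order(entries):
    """entries: list of (f_score, vertex_id); returns vertex ids in pop order.

    Hypothetical illustration: equal f-scores fall back to an insertion
    counter, so the visit order no longer depends on how a particular
    platform's sort/heap happens to resolve ties.
    """
    heap = []
    for counter, (f, v) in enumerate(entries):
        # (f_score, counter, vertex): the counter is the deterministic
        # tie-breaker for equal scores.
        heapq.heappush(heap, (f, counter, v))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Two vertices with identical f-scores: 'a' was pushed first, so it is
# always popped first, on any machine.
print(pop_order([(1.0, "a"), (1.0, "b"), (0.5, "c")]))  # ['c', 'a', 'b']
```

Without the counter, equally-scored vertices can legitimately come out in either order, which matches the symptom described above.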


That's a pretty strange formatting mask. You'd usually only use zeroes (always output a digit here) or hashes (only output a digit here if it matters).

I don't see how that would cause this problem, though. Looking at the screenshot, are you sure the two index sliders are hooked up in the same order? You can't tell just by looking, but you can see the order from the i input's disconnect menu.

The formatting mask was only added for now, to make sure that these values are exactly the same, so there won't be any questions about rounding differences.

This is the same file; it's just opened on different machines. I doubt that Dries is trolling me and reordering the slider inputs each time. What we were actually testing was the list of visited vertices, and that test failed. On further examination it turned out that a few vertices that are equivalent in the sense of the A* sorting are flipped between the two computers. I just took one part of the whole test and showed one of the flipped pairs, so it's easier to understand. But I can also upload the file with all of the tests if you want to check it out.
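If the two orders really are equivalent for the algorithm, one hypothetical way to make such a test robust (a sketch, not the actual test suite) is to compare the visited-vertex lists as multisets rather than as sequences, so two runs that visit the same equivalent vertices in a different order still pass:

```python
from collections import Counter

def same_vertices(expected, actual):
    """True if both runs visited exactly the same vertices, ignoring order.

    Counter treats the lists as multisets, so duplicates still have to
    match in count, only the ordering is forgiven.
    """
    return Counter(expected) == Counter(actual)

print(same_vertices([3, 1, 2], [1, 2, 3]))  # True: same vertices, flipped order
print(same_vertices([3, 1, 2], [1, 2, 2]))  # False: different vertices
```

This only makes sense for the tied, equivalent vertices; where the order itself is part of the contract, the order should be made deterministic instead.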

Sure, I didn't mean to imply either of you is doing this on purpose, but it would explain the difference. So if there's a bug that causes the order of the wires to differ, it will have to be checked on both machines. I can't reliably check it myself, because it seems to behave differently on different computers.

Given that one of the files was opened on Mac, it's possible there are differences. Grasshopper uses a lot of core functions (such as random number generators), so unless those are implemented exactly the same on both Windows and Mac, there may well be differences in results.
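For context, whether a seeded generator reproduces across platforms depends on whether its algorithm is pinned by the spec. Python's `random` module, for example, fixes the algorithm (Mersenne Twister), so a given seed yields the same sequence on Windows, Mac, and Linux; a core library that does not pin its algorithm across runtimes can legitimately return different sequences on different platforms. A small illustration:

```python
import random

def first_samples(seed, n=3):
    """Return the first n samples from a freshly seeded generator.

    random.Random is specified as Mersenne Twister, so this function
    returns the same values on every platform for the same seed.
    """
    rng = random.Random(seed)
    return [round(rng.random(), 6) for _ in range(n)]

# Same seed, same sequence, on any OS.
print(first_samples(42))
```

This is why cross-platform test suites usually either pin the generator implementation or avoid asserting on raw random sequences.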

Definitely not trolling here :wink:

To be doubly sure I didn't do anything accidentally, I downloaded a fresh copy of @w.radaczynski's test file, ran it, and checked the order of the index values as you mentioned. The input is the same on both our machines, so the issue is not there.

Mine (Mac, Rhino7):

@w.radaczynski (Win):

So I guess it's happening somewhere in the core functions being different?
I checked another Mac with Rhino 6 and the result was the same as on my main Mac.
It would be good if we could trace it back.

A bit of backstory: I'm helping to test Wojciech's Brontosaurus plugin (and, as a side effect, some of his other plugins) for Mac compatibility, after he picked up my offer on this thread. I've grown used to the fact that so many 'Mac unsupported' plugins actually just seem to work. Wojciech was pretty sure his plugins wouldn't run on Mac, but off the shelf his test file reported 99.5% passed: only one of 201 tests failed (the one above). This test does point to a caveat in my assumption that a plugin is fine as long as it runs without throwing errors. If, due to differences in core functions, plugins give different results that aren't easily detected from the surface, I should tread with more caution. I hope we can get to the bottom of this so Wojciech can soon add that little Apple icon on Food4Rhino :slight_smile:

P.S. Kudos to Wojciech for crafting such an extensive test file that made it easy for me to run it and report back. If anyone else needs volunteers for testing, feel free to send me test suite definitions!