Rhino Grasshopper Run Error


Hi guys,

Been bothered by this issue for a while.
I am doing a view analysis on a site of around 40 km². (It's kind of BIG…)
I simplified the model a bit and that worked OK.
But the resolution of the model affects the accuracy of the analysis, so I increased the resolution, and that's when the problem popped up.
I guess the heavy computation is causing Rhino to crash?
Any ideas on whether I should:
1> upgrade my hardware?
2> change any settings in GH or RH5 that could solve the problem?

Thanks!
Best,
Lei

If you’re modelling in kilometres, your file units should be somewhere near there as well. It’s not big if 1 unit equals 1 kilometre, or even if 1 unit equals 1 metre. The problems start happening when you’re modelling in millimetres and your geometry is therefore huge.

I have no idea what the error is about though, it doesn’t look like there’s much information to be gained from it. I don’t even know what part of Rhino would publish such an unfriendly error message.

Can you share the files so we can try and reproduce the problem?

Hi David,

Thanks for the response.
Changing the units actually won't help, because the simulation is based on divisions of the current model. Even if the units are scaled up to kilometres, the division unit has to scale up too in order to maintain the same resolution, so the resolution grid stays the same and the computational load doesn't decrease.

I am trying to use a voxel logic to split the computation into separate parts, so maybe that will work.
Let's see what happens.

Thanks again.
Lei

Sometimes you don’t have a choice of course, but the point is that computers are not anywhere near as good at maths as people think. There are inaccuracies in floating-point mathematics, and those inaccuracies get more and more severe the bigger your numbers get. Basically, you should think of digital numbers as scientific notation with a fixed number of digits. The least significant digits usually cannot be trusted because they will have been corrupted by inaccurate arithmetic operations. Therefore, the bigger the number, the bigger the absolute errors. Rhino uses double-precision floating-point arithmetic, which provides ~16 significant decimal digits, and I typically only trust about 8 of them.
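That magnitude dependence is easy to see in plain Python (a standalone sketch using CPython 3.9+'s `math.ulp`, so it won't run inside GH's IronPython):

```python
import math

# The gap between adjacent representable doubles (the "ulp") grows with
# magnitude, so absolute precision gets worse the further you are from origin.
for x in (1.0, 1e3, 1e6, 1e9):
    print("ulp(%g) = %.3e" % (x, math.ulp(x)))

# At large magnitudes, small offsets vanish entirely:
print(1e9 + 1e-8 == 1e9)  # True: 1e-8 is below half the ulp of 1e9
```

This is why geometry modelled far from the origin (or in tiny units) loses millimetre-scale detail: the offsets simply fall below the spacing of representable numbers.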

Thanks David.
It's good to know that.
I will post the voxelised result later to show whether it makes a difference.

And also a question BTW:

For a Galapagos simulation, I have a gene pool ranging from 0 to 700, with a count of 375.
However, when the sliders shift there are always some repeated numbers, so the number of distinct values is only around 300-310, which means the result cannot reach my goal count.

I was wondering: is it possible to create a gene pool where every number that is chosen gets culled from the LIST, so that later slider selections can never repeat it? Can we do that in Python or by some other method?
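For what it's worth, that cull-from-the-list idea can be sketched in plain Python (a hypothetical standalone sketch, not an existing GH component): let each gene index into whatever values remain after the earlier picks, so duplicates become impossible by construction.

```python
def pick_unique(genes, pool):
    """Map genes to distinct members of pool: each gene indexes into
    the values left over after earlier picks, so nothing repeats."""
    remaining = list(pool)
    picks = []
    for g in genes:
        i = g % len(remaining)  # wrap the gene into the shrinking range
        picks.append(remaining.pop(i))
    return picks

# Repeated gene values still yield distinct picks:
print(pick_unique([5, 5, 5, 0], range(10)))  # -> [5, 6, 7, 0]
```

Note that the meaning of each gene still shifts whenever an earlier gene changes, which can make the fitness landscape harder for a solver to navigate.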

Thanks so much.
Best,
Lei

i’m not entirely sure this is even related to your problem of not arriving at a closer/perfect result but…

you can use more decimal places on the sliders, which will allow Galapagos to search in between the steps… (like-- it might not necessarily be repeating numbers… it’s just that those repeating values are the closest ones your slider is allowing)

stacked_sliders.gh (15.1 KB)


[and if i totally mis-guessed the problem you’re having, apologies for cluttering up the thread :wink: ]

You can remove duplicate values from collections using the Set components. Create Set will remove all duplicates, while Delete Consecutive will only remove duplicates if they are neighbours.
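The difference between the two behaviours, sketched in Python (plain lists standing in for GH data):

```python
values = [3, 3, 7, 3, 9, 9]

# "Create Set"-style: remove every duplicate, keeping first occurrences
create_set = list(dict.fromkeys(values))
print(create_set)      # -> [3, 7, 9]

# "Delete Consecutive"-style: drop a value only if it repeats its neighbour
no_consecutive = [v for i, v in enumerate(values) if i == 0 or v != values[i - 1]]
print(no_consecutive)  # -> [3, 7, 3, 9]
```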

However such an operation introduces a fair amount of discontinuity into the configuration space. Slider #41 used to be associated with property Banana, but now suddenly sliders #12 and #36 got the same value so #41 is shifted down one position and is suddenly controlling the amount of Kumquats. This makes it very difficult for Galapagos to understand what’s going on. Just a moment ago slider #41 was doing fine, and now suddenly it’s wildly off the mark.

I will say that 375 sliders is a massive amount to begin with. Sure, stochastic solvers like Galapagos are designed to navigate large configuration spaces, but there’s large and then there’s whoa


Hi Jeff,

Thanks for the reply.
The numbers put in are the index numbers of each item, so I'm not sure that adding decimals will solve the problem. You still have to take the int of those decimal numbers, so if anything there is an increasing chance of repeated numbers in the outcome, because the values become more similar once decimals are brought in.

BTW, I use Octopus. Saying earlier that I use Galapagos was misleading, sorry about that.

Thank you.
Lei

Hi David,

Thanks for the reply.
I am using Octopus for multi-objective optimization.
I understand that we can use ‘Create Set’ or ‘Delete Consecutive’ to remove the repeated integer indices.
But the problem I am concerned about is this: the goal is to evaluate 375 zones, so we need 375 integer indices, one for each. If we delete the repeated ones, the actual number of indices drops to maybe 300-310. So instead of evaluating 375 zones, we actually ONLY evaluate about 300.

That is why, as mentioned before, I'd like a script where, given 375 input ints, none of them is repeated, so the output is exactly 375 distinct ints, which would fit the goal perfectly.

Lei

If that is the only problem, then generate 500 integers, remove all duplicates (you’ll end up with 400~500 or so values), and finally remove the superfluous values.
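A rough sketch of that oversample-then-trim idea in Python (`random.randint` stands in here for the Gene Pool sliders; note that for a 0-700 range, 500 draws can occasionally leave fewer than 375 distinct values, so the headroom is worth checking):

```python
import random

target = 375
genes = [random.randint(0, 700) for _ in range(500)]  # oversampled gene pool
unique = list(dict.fromkeys(genes))                   # strip all duplicates
result = unique[:target]                              # drop surplus values
if len(result) < target:
    # For this range, 500 draws may not yield 375 distinct values;
    # enlarge the oversampled pool if that happens.
    print("only %d distinct values - oversample more" % len(result))
```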


Hi David,

That's exactly what I did.
But will it cause a mismatch between the input index ints (Gene Pool) and the outcome index ints (Create Set)? We have 500 sliders changing, but as a result only about 350 slider values end up valid. Will that affect the Octopus simulation's accuracy?