Fast divide curve based on graph mapper

Dear Grasshoppers,

I am trying to place roughly 100,000 points on a curve.
The distance between each point should be controllable with e.g. the Graph Mapper or, say, a sine function. One way to do this is to use the Evaluate Length component together with Range. Unfortunately, that takes a really long time (> 15 minutes). Divide Curve or Evaluate Curve only takes a fraction of that, but I can't figure out how to use them correctly.
thanks!


divide curve graph mapper.gh (37.6 KB)

Simply reparameterize the C input of your Evaluate Curve component (right-click C and select Reparameterize). This will make the curve's domain run from 0 to 1.

Eval Length will always be slower than Eval because the calculation to get a length is more computationally expensive (and also deals with tolerances), whereas Eval uses the curve's domain parameter space, which is already a property of the curve.

Also make sure you understand that Eval, which uses the domain, is not the same as Eval Length, but since you want random spacing I think the difference won't matter for you.
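To make the difference concrete, here is a minimal sketch of what the two evaluations do, written with RhinoCommon in a GhPython script; the interpolated test curve and the value 0.5 are placeholders, not taken from the uploaded definition:

```python
import Rhino.Geometry as rg

# placeholder curve; in the definition this would be the curve input
pts = [rg.Point3d(0, 0, 0), rg.Point3d(5, 8, 0), rg.Point3d(12, 9, 0), rg.Point3d(20, 0, 0)]
crv = rg.Curve.CreateInterpolatedCurve(pts, 3)

s = 0.5  # "halfway", in whichever space is used

# Evaluate Curve on a reparameterized curve: one cheap evaluation in parameter space
t_param = crv.Domain.ParameterAt(s)           # 0..1 -> native domain
pt_param = crv.PointAt(t_param)

# Evaluate Length (normalized): Rhino first has to find the parameter whose
# arc length equals s * total length, which takes an iterative length solve
ok, t_len = crv.NormalizedLengthParameter(s)  # out-parameter comes back as a tuple in IronPython
pt_len = crv.PointAt(t_len) if ok else None

# The two points generally differ, because parameter space is not proportional
# to arc length on most curves
print(pt_param, pt_len)
```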

P.S. On your Evaluate Length component there is no need for the remapping you are doing; just set Normalized to True and use the output from the Graph Mapper directly.

Thanks, Michael!

Does it make a difference where I reparameterize the curve? I am asking since the curve itself is already reparameterized (see screenshot).

I see, I misunderstood your setup and missed that you already reparameterized. Essentially you are trying to make the sine wave of parameters linear rather than have it go back over itself as points on the curve, correct?

Well, at least you know why one is slower than the other.

I am trying to space points along a curve based on a sine function, so the distribution gets denser and then sparser again. The distance between the unshifted points should be roughly the same (e.g. Divide Curve is sufficient; Divide Distance is better, but again too slow for 100k points).

Well, that's because Divide Curve is an easy calculation, while Divide Distance is heavier by nature. Often precision comes at the cost of speed. The reason Eval and Eval Length differ can be seen in these two articles, and it is important to know the difference in computational design (many use Eval when they should be using Eval Length):

https://ieatbugsforbreakfast.wordpress.com/2013/09/27/curve-parameter-space/ and https://ieatbugsforbreakfast.wordpress.com/2013/09/28/curve-parameters-an-analogy/

Eval is based on a curve's parameter space (its domain), Eval Length on distance. A curve's domain and its length have nothing to do with each other. Unfortunately, finding curve lengths at specific points costs what it costs; the only way I can see it getting faster is if someone multi-threads it, which wouldn't be so hard, but then that "faster" would depend on the cores of your computer.

Thanks again, Michael.

But I have to say I know all of this, and that's not the problem I am trying to solve. Probably I didn't explain it well enough (I'm also not a native speaker).

Divide Curve is sufficient in this case, and it gives curve parameters which result in "good enough" spaced points. Is it possible to shift them along the curve to make the distribution denser/sparser without using computationally intense operations like Evaluate Length?

Not really, I think, because shifting along the curve would be the same as Eval Length (you would be finding new lengths along the curve). It would probably be even slower to do that.

For what it's worth, if you take my suggestion of using normalized lengths rather than remapping the values to the curve length, the results are the same and it is almost twice as fast.
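In script terms, the normalized route looks roughly like this; a sketch only, where the sine shaping function stands in for whatever the Graph Mapper outputs and the curve is again a placeholder:

```python
import math
import Rhino.Geometry as rg

# placeholder curve
crv = rg.Curve.CreateInterpolatedCurve(
    [rg.Point3d(0, 0, 0), rg.Point3d(30, 40, 0), rg.Point3d(80, 10, 0), rg.Point3d(120, 60, 0)], 3)

n = 1000  # 100000 in the real case
points = []
for i in range(n):
    x = i / float(n - 1)                        # Range output, 0..1
    # stand-in for the Graph Mapper: a monotone sine shaping, denser at both ends
    s = x - 0.15 * math.sin(2.0 * math.pi * x)
    s = min(max(s, 0.0), 1.0)
    # the normalized value goes in directly -- no remapping to the absolute curve length
    ok, t = crv.NormalizedLengthParameter(s)
    if ok:
        points.append(crv.PointAt(t))
```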



This seems to be reasonably fast? 100,000 points in 1.6 seconds on an old laptop.

sinePts_2019Jun012a.gh (11.1 KB)



I thought the point was to use lengths?

Yeah, fast enough :wink: Thank you very much for this.
There is still a problem with the spacing of the points, though: at the beginning of the curve the overall distribution is tighter and towards the end it is looser. This is exactly the problem that arises because the curve parameter has no linear "translation" to geometric (Cartesian) space.

My thought was that one could maybe use the t-parameter output of Divide Curve and manipulate it somehow (e.g. subtract a value range) to get the desired effect.
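That non-linear "translation" between parameter and length is easy to see numerically; a small sketch (placeholder curve) that prints how much arc length each of ten equal parameter steps actually covers:

```python
import Rhino.Geometry as rg

crv = rg.Curve.CreateInterpolatedCurve(
    [rg.Point3d(0, 0, 0), rg.Point3d(5, 20, 0), rg.Point3d(40, 25, 0), rg.Point3d(45, 0, 0)], 3)

dom = crv.Domain
steps = 10
prev = 0.0
for i in range(1, steps + 1):
    u = i / float(steps)                                         # equal steps in parameter space
    partial = crv.GetLength(rg.Interval(dom.Min, dom.ParameterAt(u)))
    print("step %d covers %.3f units of length" % (i, partial - prev))
    prev = partial
```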


But then you would have to turn parameter space into distance space, and somehow measure the parameter space at specific locations and convert it to distance space, since it is not constant. This won't be faster than just using Eval Length.

Was it? I missed that point, sorry. I blame it on being stuck at R5 and unable to run even simple R6 models. I understand now from @Konrad’s reply. Should have taken a walk on the beach instead. :sunglasses:

First, a change to my previous post that, I believe, distributes points more correctly within a single sine cycle.

sinePts_2019Jun012b.gh (13.1 KB)

Next, I modified it to use Evaluate Length instead and see that it's painfully slow indeed. 10,000 points take 7.5 seconds. I'll update the benchmark for 100K later… Oh! 1.3 minutes. That's not bad for 100,000 points, eh?

sinePts_2019Jun012c.gh (16.2 KB) (deprecated, see below)

P.S. Updated… note “-2” instead of “-1”. Minor.
sinePts_2019Jun012c2.gh (14.7 KB)


Hm, I just wondered: are components calculated in parallel?


(8-core CPU)
Turns out: no…

They are not. GH + Rhino run on one core unless explicitly multi-threaded. If you look closely, the Divide Curve component has two little dots in the upper left; that means it is multi-threaded. Even then, components still run one after another.
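For what it's worth, the length lookups themselves can be farmed out to .NET tasks from a GhPython component. A hedged sketch: it assumes that calling NormalizedLengthParameter/PointAt on the same curve from several threads is safe (if in doubt, give each chunk its own crv.DuplicateCurve()), and the curve is a placeholder:

```python
import Rhino.Geometry as rg
from System.Threading.Tasks import Parallel

# placeholder curve
crv = rg.Curve.CreateInterpolatedCurve(
    [rg.Point3d(0, 0, 0), rg.Point3d(40, 30, 0), rg.Point3d(90, -10, 0), rg.Point3d(150, 20, 0)], 3)

n = 100000
svals = [i / float(n - 1) for i in range(n)]    # target normalized lengths, 0..1
result = [None] * n                             # pre-sized so each thread only writes its own slot

def worker(i):
    ok, t = crv.NormalizedLengthParameter(svals[i])
    if ok:
        result[i] = crv.PointAt(t)

# IronPython lets a Python callable stand in for the Action<int> body
Parallel.For(0, n, worker)
points = result
```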

So there is an edge case: since the curve is a spiral, there is a fixed relationship between parameter and distance space. In that case, parameter space^0.508 ≈ distance space (roughly). It calculates all 100,000 points in half a second.


divide curve graph mapper fast.gh (30.1 KB)
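A sketch of that shortcut in script form; instead of hard-coding 0.508, the exponent is fitted once from a coarse sample (a log-log slope), so the same trick could be tried on other curves where a single power law is a reasonable fit. The curve below is only a placeholder, and the result is only as good as the power-law assumption:

```python
import math
import Rhino.Geometry as rg

# placeholder curve; in the real definition this is the spiral
crv = rg.Curve.CreateInterpolatedCurve(
    [rg.Point3d(0, 0, 0), rg.Point3d(10, 15, 0), rg.Point3d(-20, 25, 0), rg.Point3d(35, -30, 0)], 3)

dom = crv.Domain
total_len = crv.GetLength()

# 1) estimate k in  normalized_length ~ normalized_parameter ** k
#    from ~20 exact length lookups (cheap, because it is only 20 of them)
xs, ys = [], []
samples = 20
for i in range(1, samples + 1):
    u = i / float(samples)
    partial = crv.GetLength(rg.Interval(dom.Min, dom.ParameterAt(u)))
    xs.append(math.log(u))
    ys.append(math.log(partial / total_len))
m = float(len(xs))
k = (m * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (m * sum(x * x for x in xs) - sum(xs) ** 2)

# 2) place many points: a target normalized length s becomes parameter s ** (1/k)
count = 100000
points = []
for i in range(count):
    s = i / float(count - 1)                 # the Graph Mapper values would go here instead
    u = s ** (1.0 / k) if s > 0.0 else 0.0
    points.append(crv.PointAt(dom.ParameterAt(u)))
```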

I still think it should be possible to generalize this a bit. Of course it won't be perfect, but maybe close enough.
Look at these graphs: shouldn't it be possible to get close to the "target graph" from the two other graphs?