I am trying to place roughly 100,000 points on a curve.
The distance between each point should be controllable, e.g. with the Graph Mapper or, say, a sine function. One way to do this is to use the Evaluate Length component together with Range. Unfortunately, it takes a really long time (> 15 minutes). Divide Curve or Evaluate Curve only takes a fraction of that, but I can't figure out how to use them correctly.
thanks!

Simply reparameterize the C input of your Evaluate Curve component (right-click C and select Reparameterize). This will make the curve's domain run from 0 to 1.

Eval Length will always be slower than Eval because calculating a length is more computationally expensive (and also deals with tolerances), whereas Eval uses the curve's parameter-space domain, which is already a property of the curve.
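To see why the two are not interchangeable, here is a minimal pure-Python sketch (no Rhino required, using a hypothetical quadratic Bézier with made-up control points): evaluating at evenly spaced *parameters* is cheap, but the resulting points are not evenly spaced along the curve, because parameter space and length space differ.

```python
import math

# Hypothetical quadratic Bezier control points, chosen so the
# parameterization is clearly non-uniform along the curve.
P0, P1, P2 = (0.0, 0.0), (0.0, 10.0), (10.0, 10.0)

def bezier(t):
    """Evaluate the curve at parameter t in [0, 1] (cheap: a few multiplies)."""
    a = (1 - t) ** 2
    b = 2 * (1 - t) * t
    c = t ** 2
    return (a * P0[0] + b * P1[0] + c * P2[0],
            a * P0[1] + b * P1[1] + c * P2[1])

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Evenly spaced parameters (what Eval does after reparameterizing)...
pts = [bezier(i / 10.0) for i in range(11)]

# ...give UNevenly spaced points: the chord lengths differ along the curve.
chords = [dist(pts[i], pts[i + 1]) for i in range(10)]
print(min(chords), max(chords))  # noticeably different -> domain != length
```

Eval Length effectively has to undo this distortion for every point, which is where the extra cost comes from.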

Also, make sure you understand that Eval (which uses the domain) is not the same as Eval Length, but since you want a random distribution I think the difference won't matter for you.

P.S. On your Eval Length component there is no need for the remapping you are doing; just set Normalized to True and use the output from the Graph Mapper directly.

I see I misunderstood your question, and missed that you had already reparameterized. Essentially you are trying to make the sine wave of parameters linear rather than doubling back over itself as points on the curve, correct?

Well, at least you know why one is slower than the other.

I am trying to space points along a curve based on a sine function, so the distribution gets denser and then less dense again. The distance between the unshifted points should be roughly the same (e.g. Divide Curve is sufficient; Divide Distance is better, but again too slow for 100k points).
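One way to express this denser/looser pattern is to build the step sizes from a sine and then take their normalized cumulative sum; the result is a list of normalized *length* values in [0, 1] that could be fed straight into Evaluate Length with Normalized set to True. A small pure-Python sketch (amplitude and cycle count are made-up values, not from the original definition):

```python
import math

N = 20       # number of points (100k in the real definition)
AMP = 0.5    # how strongly the sine modulates the spacing (assumed; must be < 1)
CYCLES = 2   # sine cycles along the curve (assumed)

# Step sizes modulated by a sine: smaller steps where the sine is high
# give a denser cluster of points there, larger steps a looser one.
steps = [1.0 - AMP * math.sin(2 * math.pi * CYCLES * i / (N - 1))
         for i in range(N - 1)]

# Cumulative sum normalized to [0, 1]: these are normalized length
# values, ready for Evaluate Length with Normalized = True.
total = sum(steps)
t_norm = [0.0]
acc = 0.0
for s in steps:
    acc += s
    t_norm.append(acc / total)

print(t_norm[0], t_norm[-1])  # runs from 0.0 to 1.0, strictly increasing
```

Keeping AMP below 1 guarantees every step is positive, so the sequence never doubles back on itself along the curve.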

Well, because Divide Curve is an easy calculation; Divide Distance is heavier by nature. Precision often comes at the cost of speed. The reason Eval and Eval Length differ can be seen in these two articles, and it is important to know the difference in computational design (many people use Eval when they should be using Eval Length).

Eval is based on a curve's parameter-space domain; Eval Length on distance. A curve's domain and its length have nothing to do with each other. Unfortunately, finding curve lengths at specific points costs what it costs; the only way I can see it getting faster is if someone multi-threads it, which wouldn't be so hard, but that speed-up would then depend on the cores of your computer.

But I have to say I know all of this, and that's not the problem I am trying to solve. Probably I didn't explain it well enough (also not a native speaker).

Divide Curve is in this case sufficient, and it gives curve parameters which result in "good enough" spaced points. Is it possible to shift them along the curve to make the distribution denser/less dense without using computationally intense operations like Evaluate Length?

Not really, I think, because shifting along the curve would be the same as Eval Length (you would be finding new lengths along the curve). It would probably be even slower to do it that way.

For what it's worth, if you take my suggestion of using normalized lengths rather than remapping the values to the curve length, the results are the same and it is almost twice as fast.

Yeah, fast enough. Thank you very much for this.
There is still a problem in the spacing of the points, though: at the beginning of the curve the overall distribution is tighter, and at the end it is looser. This is exactly the problem that arises because the curve parameter has no linear "translation" to geometric (Cartesian) space.

My thought was that one could maybe use the t-parameter output of Divide Curve and manipulate it somehow (e.g. subtract a value range) to get the desired effect.


But you would then have to turn parameter space into distance space, somehow measuring the parameter space at specific locations and converting to distance space, since the relationship is not constant. This won't be faster than just using Eval Length.
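One common way to make that parameter-to-distance conversion cheap for many points is a precomputed arc-length lookup table: sample the curve densely once, accumulate chord lengths, then invert the table by binary search. This is a generic technique sketched here in pure Python on a made-up example curve (y = x² parameterized by x), not what the Eval Length component actually does internally:

```python
import bisect
import math

# Example curve: y = x^2 on x in [0, 1], parameterized by x (assumed).
def curve(t):
    return (t, t * t)

# 1) Precompute a cumulative chord-length table at M dense samples (once).
M = 1000
samples = [curve(i / M) for i in range(M + 1)]
cum = [0.0]
for i in range(M):
    dx = samples[i + 1][0] - samples[i][0]
    dy = samples[i + 1][1] - samples[i][1]
    cum.append(cum[-1] + math.hypot(dx, dy))
total = cum[-1]

# 2) Invert the table: normalized length s in [0, 1] -> parameter t,
#    via binary search plus linear interpolation. O(log M) per point
#    instead of a fresh length computation for every point.
def length_to_param(s):
    target = s * total
    i = bisect.bisect_left(cum, target)
    if i <= 0:
        return 0.0
    if i > M:
        return 1.0
    frac = (target - cum[i - 1]) / (cum[i] - cum[i - 1])
    return (i - 1 + frac) / M

# The halfway point BY LENGTH is not at t = 0.5 for this curve:
print(length_to_param(0.5))
```

The table is built once, so the per-point cost stays tiny even for 100k points; accuracy depends on the sample count M.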

Was it? I missed that point, sorry. I blame it on being stuck on R5 and unable to run even simple R6 models. I understand now from @Konrad's reply. I should have taken a walk on the beach instead.

First, a change to my previous post that, I believe, distributes points more correctly within a single sine cycle.

Next, I modified it to use Evaluate Length instead, and I see that it's painfully slow indeed: 10,000 points takes 7.5 seconds. I'll update the benchmark for 100k later… Oh! 1.3 minutes. That's not bad for 100,000 points, eh?

sinePts_2019Jun012c.gh (16.2 KB) (deprecated, see below)

They are not. GH + Rhino run on one core unless explicitly multi-threaded. If you look, the Divide Curve component has two little dots at the upper left; that means it is multi-threaded. Even then, components run one at a time.

So there is an edge case: since the curve is a spiral, there is a fixed relationship between parameter space and distance space. In this case it is parameter_space^0.508 ≈ distance space. It calculates all 100,000 points in half a second.
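If that measured power law holds, it can be inverted directly: for a target normalized distance s, the parameter is s^(1/0.508), which is just one exponentiation per point. A minimal sketch, assuming the 0.508 exponent from the post applies to this particular spiral only:

```python
N = 10       # number of points (100k in the real definition)
EXP = 0.508  # empirically measured exponent for this spiral (from the post)

# Invert parameter^EXP ~= normalized distance: for evenly spaced
# normalized distances s, the parameter is s**(1/EXP).
# No per-point length evaluation needed.
params = [(i / (N - 1)) ** (1.0 / EXP) for i in range(N)]

print(params[0], params[-1])  # runs from 0.0 to 1.0
```

Since 1/0.508 > 1, the parameters bunch up near 0 and spread out toward 1, which compensates for the spiral's non-uniform parameterization; for any other curve the exponent (or the whole model) would have to be re-measured.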

I still think it should be possible to generalize this a bit. Of course it won't be perfect, but maybe close enough.
Look at these graphs: shouldn't it be possible to get close to the "target graph" from the two other graphs?