I’m trying to connect all of the points from the first layer of my collection of points to the second, third, and fourth layer. Something like the green line in this image.

My goal is to have a line going through one point in each of the four layers. No two lines are identical, but multiple lines can pass through the same point. Each line must therefore go through exactly 4 vertices: one per layer.

Well … assume that you have nl “layers” (i.e. points grouped by their Z ± delta value), and that layer i has Ni pts … then the number of lines is:

N0 * N1 * … * N(nl-1). This could be a very big number of lines, yielding a chaotic Graph. For instance, for 200, 300, 400 and 500 pts in 4 “layers” you’d make 12 billion lines (4 nested loops).
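The count above is just the product of the layer sizes. A minimal Python sketch (the function name is made up for illustration):

```python
import math

def line_count(layer_sizes):
    # Total one-point-per-layer polylines when every combination is
    # connected: N0 * N1 * ... * N(nl-1).
    return math.prod(layer_sizes)

# The example from the post: 200, 300, 400, 500 pts in 4 layers.
print(line_count([200, 300, 400, 500]))  # 12000000000 (12 billion)
```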

Or are there some additional rules? For instance: proximity rules, min/max slope per segment, min/max angle between consecutive segments, valence per layer point (that makes some sense), average line length, min/max segment length, etc.
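Two of those rules (proximity and minimum slope) could be sketched as a per-segment filter like this — the threshold values are made-up numbers, not anything from the posts above:

```python
import math

def segment_ok(p, q, max_dist=50.0, min_slope=0.1):
    # p, q are (x, y, z) tuples; reject segments that are too long
    # (proximity rule) or too flat (min slope per segment).
    if math.dist(p, q) > max_dist:
        return False
    dx, dy, dz = (q[i] - p[i] for i in range(3))
    horiz = math.hypot(dx, dy)
    slope = abs(dz) / horiz if horiz else float("inf")
    return slope >= min_slope

print(segment_ok((0, 0, 0), (3, 4, 5)))    # True: short and steep enough
print(segment_ok((0, 0, 0), (100, 0, 5)))  # False: violates proximity
```

Candidate connections that fail the filter would simply be skipped instead of becoming segments.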

This is a pretty random solution: it just connects random points from each layer.
It doesn’t guarantee that every point is a vertex of a polyline, but it tells you how many points you’re using per layer.

If you’re not happy with a given result → go to the next seed, and if you’re looking for more connections just increase the number of polylines (and pray to god )

My problem is that I want to connect each of the white lines to each of the blue lines to each of the green lines. Something like this image but for all of the points in each layer.

However, when I used the Join Curves command, Rhino crashed. If I extract a subset of those lines and then run Join Curves, it joins random instances of curves, often making a V-shape like this image, which is not what I’m looking for.

The first definition (existing stuff written for other purposes) creates random points without a “uniform” distribution (like David’s thing), meaning their creation is real-time fast. It then finds clusters of points whose Z is within a given interval (± delta). Clusters are sampled in a DataTree (and the point indices in another). Plus, a List (ZIDX) is made with the path indices ordered by the point Z coordinate. This ZIDX thingy is a must to have on hand (since everything is random … etc etc) when we do the connections in step 2.
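The Z-clustering plus ZIDX idea could look like this in plain Python (a rough sketch of the DataTree logic, not the original definition):

```python
def cluster_by_z(points, delta=0.5):
    # Greedily assign each (x, y, z) point to the first cluster whose
    # seed Z is within +/- delta; otherwise start a new cluster.
    clusters = []
    for pt in points:
        for c in clusters:
            if abs(pt[2] - c[0][2]) <= delta:
                c.append(pt)
                break
        else:
            clusters.append([pt])
    # ZIDX: cluster indices ordered by each cluster's Z coordinate,
    # needed because the input order is random.
    zidx = sorted(range(len(clusters)), key=lambda i: clusters[i][0][2])
    return clusters, zidx

pts = [(0, 0, 2.0), (1, 1, 0.1), (2, 0, 2.2), (0, 1, 0.0)]
clusters, zidx = cluster_by_z(pts, delta=0.5)
print(len(clusters))  # 2
print(zidx)           # [1, 0]  (cluster 1 has the lower Z)
```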

Of course, for these random points one could use various LOL ways: like clouds “around” random spots … or some push/pull mode using attractors, blah, blah.

The second definition (just a couple of lines - no big deal) does the connections and - most importantly - the VV (vertex-to-vertex) connectivity, with 2 modes (combine == true means all-to-all, per pair of clusters). That said, a Graph without VV connectivity is kinda like a man without a Ducati.
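The combine == true mode (all-to-all per pair of clusters, plus the VV connectivity) might be sketched like so — a Python stand-in, not the actual couple of lines:

```python
from collections import defaultdict
from itertools import product

def connect_all(cluster_a, cluster_b):
    # All-to-all edges between two clusters, plus a vertex-to-vertex
    # adjacency map (the "VV connectivity").
    edges = list(product(cluster_a, cluster_b))
    vv = defaultdict(set)
    for a, b in edges:
        vv[a].add(b)
        vv[b].add(a)
    return edges, vv

a = [(0, 0, 0), (1, 0, 0)]
b = [(0, 0, 1), (1, 0, 1), (2, 0, 1)]
edges, vv = connect_all(a, b)
print(len(edges))          # 6 = 2 * 3
print(len(vv[(0, 0, 0)]))  # 3 neighbours in the next layer
```

With the VV map in hand, questions like “which vertices does this point reach?” become a single dictionary lookup.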

Notice that there’s chaos (in some sort of rational manner) without that Cray.

Nobody could be bothered to count the clustered points (not a big number, in fact), while the … er … hmm … Graph Chaos is assured. Obviously the N of “layers” is random … meaning good-quality (LOL) Chaos, what else?

This is pretty close to what I’m looking for. The lines are in the “right” direction, and what I need to figure out now is how to use all the points in D.

It uses a first Jitter to randomize the duplicated data, and a second Jitter to randomize the connection points.
This will only work if most of the points are on the “Point D” parameter.

What’s the point? Why are your points so far from the origin?

I started to go in this direction but stopped when I figured the total number of polylines needed (3175 * 2032 * 8112) is 52 BILLION?! Can that possibly be correct?

400,000 permutations touch 8 points on the bottom layer.
One million permutations touch 20 points on the bottom layer and take less than one minute; 18 seconds of that is ColorJ, which is strictly cosmetic and could be eliminated.

Apologies. The points came from my research project where I’ve mapped out all of the places that my subjects have visited. The layers are different categories of places (Shops, restaurants, parks, bus/subway stops). I then overlaid that on top of a CAD mapper file of the site. I’ve also changed the Cplane a few times. What I then plan to do with that is to make a surface to interpolate the combinations. Think of it as a bounding box “shrink-wrapping” the whole thing.

According to my rough math, I should get about 25 million combinations of lines. I’ll have to take a closer look at your definition and report back.

The CrossRef component reports 25,755,600 combinations (permutations?). It’s hard to believe that many polylines are useful in any way. Ditch the ColorJ component; it’s slow and useless.

I guess it’s obvious that I’m using SubSet to avoid 25.8 million results.

If so, a similar Graph has no meaning at all - just a stupidly BIG combo of connections that nobody can understand, let alone interpret. Don’t follow that route: it’s just a naive/pointless/amateurish trip into a very big rabbit hole.

Node/node connectivity (as exposed in V1 above) is what carries the meaning in this case: subject i visited shop j … then park k, etc etc. From this you could extract statistical “habits” and the like, node valence evaluation (i.e. ordered popularity) … etc etc. Obviously the N of “layers” should be a user-configurable variable (not just 4).
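Node valence (ordered popularity) falls straight out of the edge list; a minimal sketch with hypothetical (subject, place) pairs:

```python
from collections import Counter

def node_valence(edges):
    # Degree of every node in an undirected edge list, sorted from
    # most to least connected ("ordered popularity").
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg.most_common()

visits = [("s1", "shop_A"), ("s1", "park_B"), ("s2", "shop_A")]
print(node_valence(visits))
# shop_A has valence 2: the most "popular" place in this toy sample
```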

For similar problems - in real life - we use heuristics/genetics as well (why bother dealing with zillions of connections?) and various other freaky things like Steiner Graphs etc etc. These are the core of a generic MOO (Multi-Objective Optimization) approach.

For AEC/Building Design, techniques “like these” allow us to design, say, a hospital where the sum of all movements is the statistical minimum (that’s a chimera … but anyway). For Urban stuff … well … we can outline a vast variety of “average” habits and then pass/sell the data to some Company in order to mastermind some promotion policy and the like. Or pass the data to some politician/party.

But in order to do that (100% impossible without code) you need to define suitable Classes (with the right Properties) and then issue parallel queries (PLINQ) searching for this, that or the other.
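PLINQ is a C# facility, but the “classes with the right properties, then query them” idea translates to any language; here is a hedged Python analogue with entirely made-up field names and data:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Visit:
    # Minimal record type standing in for the "suitable Classes"
    # mentioned above; fields are illustrative only.
    subject: str
    category: str   # shop / restaurant / park / stop
    place: str

data = [
    Visit("s1", "shop", "A"), Visit("s1", "park", "B"),
    Visit("s2", "shop", "A"), Visit("s2", "stop", "C"),
]

# A LINQ-style query in Python: which subjects visited shop "A"?
subjects = sorted({v.subject for v in data
                   if v.category == "shop" and v.place == "A"})
print(subjects)  # ['s1', 's2']
```

In C# the same query would be a `data.AsParallel().Where(...)` chain; the structure of the records is what matters, not the language.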

Ugly news: this is far removed from what you can do, given that you are not on the coding path.

Tip: find a friend who knows a thing or two about coding (the more the better) and ask them for a brief/medium/long description of data mining, fuzzy sets, etc.