Looks like an answer to me! Joseph shows how you can create point clusters of desired sizes by grouping/clustering the closest points together. This should also work for bigger point collections.

Yeah, but it creates dead corners in some situations.
Maybe I have to do it in Python, but maybe it can be done faster with just the regular components.

I was also thinking of closest points, but it sometimes creates dead corners.

Do you want to compact the points together, which would mean that the distances between clustered points would change/shrink, or do you want to cluster the points in such a way that useful ground plans would emerge, like shown in your diagrams? For the latter, would you be gunning for square, rectangular, or polygonal groupings, basically straight walls?

My guess would be that either way, you’ll have to introduce clearer parameters!!
For instance, the diagrams above show where exterior circulation meets/enters into the building. These points could be where the “clustering” starts. The same role could be played by the interior circulation.
Another parameter could be your spatial program. What kind of spaces or rooms (e.g. café, bicycle storage, etc.) do you want to distribute/generate? How big does each space at least need to be in terms of its area? These areas could define how big a cluster gets.

With these simple parameters or even more, it’s surely going to be easier to come up with a strategy! But is this what you want?

Like this. And it does not have to be blocks of rectangles, circles, or triangles. What I am trying to prevent is functions that form into strokes. The yellow stroke is very stretched out, which makes the space very difficult to design an interior for.
I have the sizes of the spaces and translated them into amounts of points. @p1r4t3b0y

But now it is about having a good strategy for the formation of the spaces: forms that are connected to the sizes of the spaces, shaped not as strokes but as more usable, less stretched areas.
That is my problem, and I am not able to solve it.
Parameters, for example external and internal circulation, but then it circulates?
I was thinking more of something like oil stains. But maybe that is impossible.

If I understand you correctly, you don’t want too narrow, long point clusters with small-ish areas?
After the cluster evaluation, which seems to work already (?), you could re-evaluate the clusters, identify the unwanted ones, and, for instance, redistribute their points equally among the neighbouring clusters.
You could identify the long, narrow clusters by getting the bounding boxes of the clusters. Clusters with a long bounding box diagonal and a short bounding box x- or y-size should give you the narrow, long ones.
Neighbouring clusters could probably be found by searching for closest point clouds.
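To sketch that bounding-box idea outside of Rhino: a plain-Python stand-in could flag the narrow, long clusters by comparing the bounding box diagonal to its shortest side. The `is_stroke` helper and the ratio of 4 are my own assumptions, not part of anyone's script here.

```python
import math

def bbox_elongation(points):
    """Return (diagonal, shortest side) of the 2D bounding box of points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    dx = max(xs) - min(xs)
    dy = max(ys) - min(ys)
    return math.hypot(dx, dy), min(dx, dy)

def is_stroke(points, ratio=4.0):
    """A cluster counts as a 'stroke' when its bounding box diagonal
    is much longer than its shortest side (ratio is arbitrary here)."""
    diag, short = bbox_elongation(points)
    return short == 0 or diag / short > ratio

compact = [(0, 0), (1, 0), (0, 1), (1, 1)]     # roughly square cluster
stroke = [(0, 0), (1, 0.1), (2, 0), (3, 0.1)]  # long and narrow cluster
print(is_stroke(compact), is_stroke(stroke))   # → False True
```

In Rhino you would get the same numbers from the cluster's `BoundingBox`, as described above.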

Or maybe refine the clustering in the first place.

So where do the equally spaced points come from? A mesh?

If I understand you correctly, you don’t want too narrow, long point clusters with small-ish areas?
More rectangular.

Or maybe refine the clustering in the first place.
Yes but I do not know how.

So where do the equally spaced points come from? A mesh?
Generated with python.
It all starts by taking a point on the edge of the field. Then it sorts the fieldPnts by distance to the startPn; after that, it selects the fieldPnts based on the amount needed for the function's space; and it repeats itself like this.
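If I read that procedure right, it could be sketched in plain Python roughly like this. The names `grow_clusters`, `field_pts`, and the per-function `sizes` are my stand-ins for the names in the post, and `math.dist` needs Python 3.8+, so this is a sketch of the idea, not the actual GhPython script:

```python
import math

def grow_clusters(field_pts, start_pt, sizes):
    """Repeatedly sort the remaining points by distance to the current
    start point and peel off the required amount per function's space."""
    remaining = list(field_pts)
    clusters = []
    current = start_pt
    for size in sizes:
        remaining.sort(key=lambda p: math.dist(p, current))
        cluster, remaining = remaining[:size], remaining[size:]
        clusters.append(cluster)
        if remaining:
            current = remaining[0]  # restart from the nearest leftover point
    return clusters

grid = [(x, y) for x in range(4) for y in range(4)]  # equally spaced field
clusters = grow_clusters(grid, (0, 0), [6, 6, 4])
print([len(c) for c in clusters])  # → [6, 6, 4]
```

Notably, this greedy distance-sorted selection is exactly the kind of scheme that tends to produce the stroke-shaped clusters being discussed, since nothing constrains the growth direction.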

But how to direct the form stays difficult for me.

I do not know about the right strategy to solve it.

@Joseph_Oster @p1r4t3b0y @Dancergraham
Space Syntax does not work here, I think.
So, to be clear, I want the function forms more concentrated around a center of the function itself.

However, as you can see I am not able to direct the functions to a more ‘circular form’ or ‘more rectangular form.’
What could be a strategy for that?

1. It takes a point on the edge, the startPn.
2. It takes the points closest to the startPn.
3. It searches for a nextPn.
4. It repeats itself.

Problem: it creates point cluster structures that are formed like a stroke and are therefore not usable as a space.

I am trying to find a way wherein the spaces are more ‘circular’ and less like a ‘stroke.’ The problem is, when doing so, blind spots might appear.

Sounds like a good job for a trained neural network: the human brain is very good at this type of task. Could you script it such that you can click on any area you want to ‘explode’, and it will go back and redo the loop from that point? So if you click on the 8th colour in your loop, it will explode from 8 onwards, try the 9th at that point instead, and continue looping to the end, putting colour 8 back in at the end of the list…?

Also, if the aligned bounding box of your points is very sparsely populated or very elongated, then you could explode it automatically…?

I’ve experimented a little and managed to come up with a similar result to your sketch above.

Each cluster begins at a start point (the closest point to each black point) and has a certain desired size. At each iteration, the search radius is expanded by a step size, and thus new closest points are found that are then added to the clusters. If a cluster has reached its desired size, or doesn’t find any new closest points within a threshold, it stops expanding.

This has the unfortunate byproduct that some points (grey) don’t get allotted to a cluster, because all neighbouring clusters have reached their maximum size.
However, the script allows you to add these points to their closest, neighbouring cluster, if desired! This obviously changes the cluster size, surpassing the desired maximum size from before for the concerned clusters, so it’s optional.

import Rhino.Geometry as rg
from ghpythonlib import treehelpers

# Get the diagonal length of the points' bounding box
bbox = rg.BoundingBox(Points)
bbox_dlen = bbox.Diagonal.Length

# Get the start points from the points
start_pts = [Points[i] for i in StartIndices]

# Temporary point cloud to keep track of unchecked points
temp_ptcloud = rg.PointCloud(Points)
# Temporary indices to keep track of the unchecked points' initial indices
temp_indices = [i for i in range(len(Points))]

curr_step = StepSize  # current step size
count = 0  # iteration counter

# Initialise an empty cluster list for each start point
clusters = [[] for _ in start_pts]
active = [True for _ in start_pts]

while temp_ptcloud.Count > 0 and curr_step < bbox_dlen:
    for i in xrange(len(start_pts)):
        # Skip inactive clusters
        if not active[i]:
            continue
        # Get the current closest point indices
        rc = rg.RTree.PointCloudClosestPoints(temp_ptcloud,
                                              [start_pts[i]],
                                              curr_step)
        closest = list(list(rc)[0])
        if len(closest) < 1:  # no current closest points
            continue
        # Get the current cluster points (if the cluster isn't empty)
        curr_cl_pts = None
        if len(clusters[i]) > 0:
            curr_cl_pts = [Points[k] for k in clusters[i]]
        misses = 0  # counts closest points beyond the threshold
        for j in xrange(len(closest)):
            # Skip index out of range errors
            if closest[j] >= len(temp_indices):
                continue
            # Skip clusters that have reached the desired size
            if len(clusters[i]) == ClusterSizes[i]:
                active[i] = False
                continue
            # Get the current closest point
            closest_pt = Points[temp_indices[closest[j]]]
            # Check whether the closest point is near the cluster points
            if curr_cl_pts != None:
                dists = [pt.DistanceTo(closest_pt) for pt in curr_cl_pts]
                if min(dists) > Threshold + Threshold * 0.05:  # 5% leeway
                    misses += 1
                    continue
            # Save the closest index to the current cluster
            clusters[i].append(temp_indices[closest[j]])
            # Delete the visited point and index
            temp_ptcloud.RemoveAt(closest[j])
            temp_indices.pop(closest[j])
        # Deactivate the current cluster if no valid closest points were found
        if misses == len(closest):
            active[i] = False
    # Grow the search radius and iteration count
    curr_step += StepSize
    count += 1

# Check whether all points are included in one of the clusters
if AllotRest and sum([len(c) for c in clusters]) < len(Points):
    # Loop the remaining point indices in reverse
    for i in xrange(len(temp_indices)-1, -1, -1):
        rest_pt = Points[temp_indices[i]]
        # Identify the closest cluster
        min_dist = float("inf")
        closest = None
        for j in xrange(len(clusters)):
            cl_pts = [Points[k] for k in clusters[j]]
            dist = min([pt.DistanceTo(rest_pt) for pt in cl_pts])
            if dist < min_dist:
                min_dist = dist
                closest = j
        if closest != None:
            # Add the remaining point index to the closest cluster
            clusters[closest].append(temp_indices[i])
            # And remove it from the remaining indices
            del temp_indices[i]

# Outputs
Clustered = treehelpers.list_to_tree(clusters)
Unallotted = temp_indices

You’re welcome. It is an interesting topic!
Try moving the black points around. The script should be quick enough to see live updating of the clusters.

Just to exhaust the simple options: Voronoi (and its dual graph) is quite good at space division (of course you cannot dial it in to a point count like @p1r4t3b0y). I use mesh rays for the point inclusion test, as it is much faster than the Point In Curves component.
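The mesh-ray inclusion test itself is Rhino-specific, but as a rough 2D analogue of what such an inclusion test does, here is the classic ray-casting point-in-polygon check in plain Python. This is my own sketch of the general idea, not the mesh-ray code mentioned above:

```python
def point_in_polygon(pt, poly):
    """Count how often a ray cast from pt crosses the polygon's edges;
    an odd number of crossings means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the horizontal line through pt?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square), point_in_polygon((5, 2), square))
# → True False
```

The mesh-ray version trades this per-edge arithmetic for a single ray/mesh intersection query, which is why it is so much faster on large point sets.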

@ForestOwl, what I forgot to mention yesterday is that the StepSize input should be set to a value smaller than the minimum distance between closest points in Points. The StepSize is the increment of the search radius, within which to iteratively search for close points to clusters.

Also, the Threshold input (which defines the maximum distance between close points and the cluster they are evaluated to join) should be something like the average distance between closest points in Points, or even a little bigger.

The quality of the clustering really depends on these two values being dialed in to the model scale. Otherwise, the clusters may have less clear boundaries and intermix more.

By the way, the script also works if you define some interior points as start points. Start points don’t have to be on the periphery.