It was a joke, just a joke. Looking forward to a way more powerful GH2!

Don’t underestimate your work. I am sure GH2 will bring efficient algorithms, but sometimes there are gaps that get filled by plugins (offset …). So if you have made good improvements to your script, publish it on this forum and on food4rhino.

I also think that populating 3D geometry (you have done 2D populate) could use the Heat Method from Keenan Crane …

What algorithm(s) did you use to get such results?

I already attached **human-readable** code and the strategy. But thanks for your interest.

I would remind you again that the goal is to achieve O(N), not merely to make the code faster.

The key point of the algorithm is that if you divide the region into cells, there are only one, two, or maybe three points per cell. The number of points in a cell stays bounded even if you increase the total number of points.

So if you compute forces only between points in the same or adjacent cells, the total amount of computation is nearly proportional to N.
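A minimal Python sketch of that cell idea (the cell size, sample points, and the `neighbor_pairs` helper are all made up for illustration; this is not the attached script):

```python
from collections import defaultdict

def neighbor_pairs(points, cell_size):
    """Bucket points into square cells, then pair each point only with
    points in its own cell and the 8 adjacent cells."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell_size), int(y // cell_size))].append(i)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in grid.get((cx + dx, cy + dy), ()):
                        if i < j:
                            pairs.add((i, j))
    return pairs

points = [(0.1, 0.1), (0.4, 0.2), (5.0, 5.0)]
close = neighbor_pairs(points, cell_size=1.0)
# only the two nearby points get paired; the far point pairs with nobody
```

Since each cell holds a bounded number of points, each point is compared against a bounded number of neighbours, which is where the near-O(N) total comes from.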

.

Sorry, I must have skipped that somehow.

Sounds similar to the Poisson disc sampling algorithm.

You could try switching from Dictionary to HashSet. Lookup goes from O(log n) to O(1) in that case. And I think in the newer HashSets it’s now possible to retrieve the entire value stored in the set if you find a colliding key. This wasn’t possible before, which made HashSet a lot less useful.

On the other hand, it seems you’re creating points *everywhere* within some boundary, so why not just store your points directly in an array? Or am I mistaken in believing this array would not be particularly sparse?

Hi David! Thanks for popping in!

I simply didn’t know HashSet. I’ll check it out. Thank you.

I assume your “array” is an array of the cells. The reason I’m using a dictionary instead of a static array is simply that I was not able to come up with a good strategy to cover an arbitrary region with cells.

Theoretically, the initial points may cover the entire region almost perfectly, so the cells computed from those points may cover the region perfectly. But there is a chance some cells are missing, and when points move into those missing cells, we need to add a cell. If you want to avoid this, you need to compute the cells without the help of the distributed points, only by looking at the boundary curve. I simply didn’t come up with any idea…
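For what it’s worth, this lazy-cell behaviour is easy to express with a dictionary keyed by integer cell coordinates: a cell comes into existence the first time a point lands in it. A minimal Python sketch (the cell size and point values are made up):

```python
cells = {}

def cell_key(p, cell_size=1.0):
    # integer grid coordinates of the cell containing point p
    return (int(p[0] // cell_size), int(p[1] // cell_size))

def insert(p):
    # lazily create the cell the first time a point lands in it
    cells.setdefault(cell_key(p), []).append(p)

insert((0.2, 0.3))
insert((0.9, 0.1))
insert((3.5, 2.2))
# cells now holds exactly the two grid cells that received points
```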

.

Are you sure about this? I know, for instance, that in Python dictionary lookups are also O(1), though sets are still considered a little faster for other reasons, maybe because there is no referenced value data?
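A quick way to convince yourself in CPython (a toy check, not a benchmark): both containers answer membership queries with a single average-O(1) hash probe, the set just carries no associated values:

```python
# both dict and set membership are average O(1) hash lookups in CPython
n = 100_000
d = {i: i * i for i in range(n)}
s = set(range(n))
assert (n - 1) in d and (n - 1) in s   # hits: found without scanning
assert n not in d and n not in s       # misses: also a single hash probe
```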

Absolutely. I, for instance, stored the points in a one-dimensional array, which is faster and cheaper than a multi-dimensional one, when I implemented Poisson disc sampling for a project of mine. You can still easily loop through it with two-dimensional row-column logic, like so:

```
#include <utility>
#include <vector>

int main() {
    int columns = 10;
    int rows = 10;
    // Flat vector of pairs representing two-dimensional points
    std::vector<std::pair<double, double>> points(columns * rows);
    for (int x = 0; x < columns; x++) {
        for (int y = 0; y < rows; y++) {
            points[x + y * columns] = std::make_pair(x, y);
        }
    }
}
```

I guess you don’t even need “real” cells, when each position in a one-dimensional array is simply considered to stand for a cell. If you for instance only allow one point per cell, a vacant cell could be empty - its position in the array would be `NULL`, `None`, `-1`, or whatever -, whereas occupied cells would simply be defined by a point at that index in the array.
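A minimal Python sketch of that one-point-per-cell idea (the grid dimensions and the sample point are made up; `None` marks a vacant cell):

```python
columns, rows = 4, 3
cells = [None] * (columns * rows)  # every cell starts out vacant

def occupy(x, y, point):
    # flat index arithmetic: column x, row y
    cells[x + y * columns] = point

occupy(1, 2, (1.4, 2.6))
vacant = sum(c is None for c in cells)
# 11 of the 12 slots stay None; exactly one holds a point
```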

If you have a non-rectangular region, you could for instance test not only whether the new point is in a free cell and at a tolerated distance from neighbouring points, but also whether it is contained within the region in question. The grid could simply be a rectangular grid filling the bounding rectangle of the region, if that makes sense.
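A small Python sketch of that idea, assuming a hypothetical `in_region` test (here a unit disc) and a coarse grid over its bounding rectangle:

```python
import math

def in_region(p):
    # hypothetical region test: a unit disc centred at the origin
    return math.hypot(p[0], p[1]) <= 1.0

# rectangular grid filling the bounding rectangle [-1, 1] x [-1, 1]
coords = [-1.0 + 0.5 * i for i in range(5)]
# keep only grid positions actually contained in the region
accepted = [(x, y) for x in coords for y in coords if in_region((x, y))]
```

Any containment test would slot in the same way, e.g. a point-in-curve check against the boundary in the real script.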

Here’s a real-time demo of my Poisson implementation creating 7,300 points in a 400 by 400 unit region:

It’s somewhat slower than yours, but I haven’t optimised it.

Though I agree dynamic generation of cells is not the best idea, this animation explains the algorithm well.

I’m by no means an expert on all of this, but what is always a “problem” with dynamic object creation is the reallocation (copying) of memory whenever your dynamic data structure changes in size, since this is probably the slowest memory operation there is.

Potentially, using a quadtree spatial division could be even faster, especially since points are only inserted and do not move iteratively (which would mean the quadtree would need to be rebuilt at each iteration).
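For illustration, a minimal insert-only point quadtree in Python (the capacity, bounds and test points are all made up; a real implementation would also need a neighbourhood query for the distance checks):

```python
class Quadtree:
    """Minimal point quadtree: a node holds up to CAP points, then splits."""
    CAP = 4

    def __init__(self, x, y, half):
        self.x, self.y, self.half = x, y, half  # centre and half-width
        self.points = []
        self.children = None

    def insert(self, px, py):
        if abs(px - self.x) > self.half or abs(py - self.y) > self.half:
            return False                        # point lies outside this node
        if self.children is None:
            if len(self.points) < self.CAP:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        # replace this leaf with four half-size children and push points down
        h = self.half / 2
        self.children = [Quadtree(self.x + dx * h, self.y + dy * h, h)
                         for dx in (-1, 1) for dy in (-1, 1)]
        for (px, py) in self.points:
            any(c.insert(px, py) for c in self.children)
        self.points = []

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

qt = Quadtree(0.0, 0.0, 1.0)                    # covers [-1, 1] x [-1, 1]
for p in [(i / 10, j / 10) for i in range(-5, 6) for j in range(-5, 6)]:
    qt.insert(*p)
```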

Other than that your stuff looks amazing and your times are mind-blowing!!

Maybe I’ve missed this, but you use multi-threading, right?

ummm, maybe I’m thinking too much in the Python way, where you have lists and dictionaries. It’s a world where people casually create and dispose of objects dynamically.

The only big bottleneck I see in this algorithm is that when points move out of one cell and into another, they are removed from the original list and added to a new one (let’s stop talking about dictionary lookup cost! For me it’s fast enough). This actually happens everywhere. By, for instance, reserving a point array of 20 slots for every cell, you can avoid dynamic allocation of memory. (Or maybe this is what David initially suggested…)
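A tiny Python sketch of that fixed-capacity idea (the cap of 20 and the point values are assumptions): each cell preallocates its slots once, so adding a point is just a write plus a counter increment, with no allocation per move:

```python
CAP = 20  # assumed safe upper bound on points per cell

class Cell:
    """Preallocated slot array plus an occupancy count: adding a point
    writes into the next free slot, so nothing is reallocated."""
    def __init__(self):
        self.slots = [None] * CAP
        self.count = 0

    def add(self, point):
        self.slots[self.count] = point
        self.count += 1

cell = Cell()
cell.add((0.2, 0.8))
cell.add((0.6, 0.1))
# two slots occupied, the remaining 18 stay None
```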

Yes, it’s using parallel foreach.

No, forget about this post. I don’t know how to remove a point from an array.
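For the record, one common trick for removing from a fixed array is to swap the last live entry into the freed slot and shrink the count - O(1), no reallocation, and the order of points within a cell doesn’t matter anyway. A minimal Python sketch (the array contents are made up):

```python
# a fixed array with `count` live entries at the front
points = [(0, 0), (1, 1), (2, 2), (3, 3), None, None]
count = 4

def remove_at(i):
    """Swap-remove: overwrite slot i with the last live entry."""
    global count
    count -= 1
    points[i] = points[count]
    points[count] = None

remove_at(1)  # (1, 1) disappears; (3, 3) takes its slot
```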

I wrote a wrapper class for a C++ library written by David Coeurjolly which implements a low-discrepancy blue noise point sampling algorithm. The algorithm is based on this paper. The gist is that the blue noise component I wrote can create an even distribution of 5,000,000 points in 2.1 seconds. The source code for this can be found here.

For those that want to skip the source code, you can test this component out by copying these two files (see zip file) into your components library.

Blue Noise Point Generator.zip (131.8 KB)

Seems like I happened to open a Pandora’s box…

Some questions generate more answers than others. Populating is a useful tool for many things.

I played with some of the generators that are on this page.

Here are some tests, populating a square with 1000 points.

Using Proximity 2D with Groups 1, 2, 3 and 4

difference between sampling.gh (11.6 KB)

Interesting paper, I never saw that before - the spectrum is distorted, but it’s indeed super fast.

All this talk about algorithms is super inspiring for a Python noob, thanks.

Regards.

Yeah, in the first place, I didn’t even know this is a common topic in graphics. Thank you!

My final code (25% faster than the last one) is attached, if anybody is interested. Copyright waived.

sampling_C#.gh (9.2 KB)

Yup, adaptive uniform sampling is one of the base ingredients for Monte Carlo approaches used widely by rendering engines.

Whatever may be said about performance, I must admit that your approach is one of the most uniform I have seen so far.