Diffusion Limited Aggregation help

Hi, I’m fairly new to using Grasshopper! I’m trying to replicate 3D diffusion-limited aggregation (DLA) in Grasshopper/Rhino. I’ve found a number of definitions that replicate the process with points, but I’m aiming to do it with meshes … I’m hoping to have a mesh ‘walk’ around and then join onto the central mesh when it comes into contact.
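For reference, the point-based definitions mentioned boil down to a very small loop. Here’s a minimal on-lattice sketch in plain Python (illustrative only, all names made up for this post — a mesh version would swap the occupied-neighbour test for a mesh collision test):

```python
import random

def dla_3d(n_particles=40, radius=8, seed=1):
    """Grow an on-lattice 3D DLA cluster around a seed cell at the origin.

    Each walker spawns on the bounding shell and takes random axis steps
    until it lands next to an occupied cell, then freezes there."""
    random.seed(seed)
    occupied = {(0, 0, 0)}                       # the seed "mesh"
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while len(occupied) < n_particles:
        # launch a walker somewhere on the shell of the spawn box
        p = [random.randint(-radius, radius) for _ in range(3)]
        p[random.randrange(3)] = random.choice((-radius, radius))
        p = tuple(p)
        while True:
            # freeze on face-to-face contact with the cluster
            if any((p[0]+dx, p[1]+dy, p[2]+dz) in occupied
                   for dx, dy, dz in steps):
                occupied.add(p)
                break
            dx, dy, dz = random.choice(steps)    # random walk step
            p = (p[0]+dx, p[1]+dy, p[2]+dz)
            if max(abs(c) for c in p) > 2 * radius:
                break                            # wandered off: relaunch
    return occupied

cluster = dla_3d()
```

The output is just a set of occupied cells; in Grasshopper you would feed those as box/mesh positions.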

Here’s the definition I had found:

A video of what i’m trying to achieve:

Just wondering if anyone had achieved something similar or could suggest relevant plugins!

Thanks,
James

I was not aware of Daniel’s stuff on that matter. Other than that, doing what the animated sequence does is rather easy using recursion (and obviously code). However don’t expect blitzkrieg results if the objects are many (not to mention the “packing” rules if these differ from simple spheres, cubes and the like: for instance, given irregular blob-type meshes … what could the rule be to pack them? [maybe some face-to-face contact IF no clash issues occur???]).

Out of curiosity can you post some representative meshes that you have in mind? (plus the pack rule).

I did this some years ago with Anemone and Horster. I used Anemone to create a loop that tests mesh collisions: if a mesh hits another mesh, freeze it and start a new particle. Horster was used to shoot particles from the camera position.
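The loop described above can be sketched like this (plain Python, not the actual Anemone/Horster definition — spheres stand in for the meshes and a random spawn direction stands in for the camera position; every name here is a stand-in):

```python
import math, random

def sphere_dla(n=12, r=1.0, spawn_dist=6.0, step=0.5, seed=3):
    """Anemone-style loop with spheres standing in for the meshes:
    a particle random-walks until it collides with a frozen sphere
    (centre distance <= 2r), freezes there, and a new one is shot in."""
    random.seed(seed)
    frozen = [(0.0, 0.0, 0.0)]                       # the seed
    collide = lambda p: any(math.dist(p, q) <= 2 * r for q in frozen)
    while len(frozen) < n:
        # shoot a new particle from a random direction
        v = [random.gauss(0, 1) for _ in range(3)]
        m = math.hypot(*v)
        p = tuple(spawn_dist * c / m for c in v)
        while True:
            if collide(p):                           # mesh-hits-mesh test
                frozen.append(p)
                break
            d = [random.gauss(0, 1) for _ in range(3)]
            md = math.hypot(*d)
            p = tuple(pc + step * dc / md for pc, dc in zip(p, d))
            if math.dist(p, frozen[0]) > 2 * spawn_dist:
                break                                # lost it: shoot a new one
    return frozen

frozen = sphere_dla()
```

Swapping the sphere-distance test for a real mesh collision test is exactly where plugins (or code) come in.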


Hi, thanks for the reply,

Here are the three types of objects I’m looking to use. The rule to pack would likely be face-to-face and avoiding clashes ideally.

Here’s the kind of form being generated simply using grasshopper and Wasp:

Hi, thanks for showing that!

Would this work with any other form of mesh?

If you want to connect some other type of shapes, especially face to face, things get much more complex; that’s why you have plugins like Wasp, and you will need to define specific rules.

I’ve been using Wasp up until now, but I’m looking to add this diffusion growth to it. Face-to-face isn’t so much a priority, but is there a way to apply the process you did with my meshes?

If the code for the main goal would require x lines (more or less easy, as I said), the code for packing could require 10x lines, since the objects are random (or should be) and you are in fact after some kind of sparse 3D Tetris … meaning that face-to-face isn’t a sufficient condition/rule on its own.

Plus you’ll need an object generator (but that’s not nuclear science) - for instance a very fast C# that does classic voxels [as boxes] out of meshes or breps (I have a similar def, mind). Voxels (as Lists) are far better than a voxelized (so to speak) polysurface because you can move some in order to achieve better aesthetics or a more dense or sparse result (and obviously nobody could tell the difference).
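That “classic voxels out of meshes” generator is just a point-inside test sampled on a grid. A toy sketch in plain Python (a sphere’s inclusion test stands in for a real mesh inclusion test — everything here is a made-up stand-in, not the actual def):

```python
def voxelize(inside, bbox_min, bbox_max, size):
    """Return voxel centres (as a List of tuples) whose centre passes
    the point-inside test -- boxes as a List, not a united polysurface."""
    (x0, y0, z0), (x1, y1, z1) = bbox_min, bbox_max
    def axis(a0, a1):
        n = max(1, round((a1 - a0) / size))      # whole cells along one axis
        return [a0 + (i + 0.5) * size for i in range(n)]
    return [(x, y, z)
            for x in axis(x0, x1)
            for y in axis(y0, y1)
            for z in axis(z0, z1)
            if inside((x, y, z))]

# sphere of radius 2 as a stand-in for a mesh inclusion test
in_sphere = lambda p: p[0]**2 + p[1]**2 + p[2]**2 <= 4.0
voxels = voxelize(in_sphere, (-2, -2, -2), (2, 2, 2), 1.0)
```

Keeping the result as a List is what lets you move, drop or duplicate individual voxels afterwards.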

On the other hand, in real life some interactive (on-the-fly) option should be provided: suspend the computer from doing things and modify interactively the current generation/state of the whole packing in order to achieve a better-looking result (in plain English: “swap” volatile with persistent data per generation [impossible to do that without code]).

Believe it or not this could require a full day of coding (or maybe more if Karma is in short supply).

On the other hand … well … science is maybe(?) a good thing … but … getting results ASAP is a far better thing. Here’s what I would do if I were after a similar goal:

  1. Create voxels (proportionally) out of your objects. This means that the “modules” should be the “same” (more or less). Here’s what I mean by proportional (or not) voxels:

Plan B (maybe better): for a given steady box, create random voxel collections like this:

  2. Find the max box (or use the box from Plan B).
  3. Create a particle system using that max box and spread boxes in 3d space. Say like this (a capture from some other def doing other things):

  4. Do the recursion and pack the boxes (easy) using Ray3d (or classic particle stuff).
  5. Replace the boxes with the voxels. DO NOT attempt to unite the voxels (takes BIG time).
  6. Interactively vary the end result for some better-looking combo.
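Plan B — a random voxel collection grown inside a steady box — can be faked in very few lines. A plain-Python sketch (boxes reduced to integer grid cells, all names hypothetical):

```python
import random

def random_voxel_blob(count=40, box=(8, 8, 8), seed=7):
    """Grow a connected random voxel collection inside a steady box:
    glue each new voxel face-to-face onto a randomly chosen existing
    one, rejecting anything outside the box or already occupied."""
    random.seed(seed)
    blob = {tuple(b // 2 for b in box)}          # start mid-box
    faces = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while len(blob) < count:
        x, y, z = random.choice(tuple(blob))     # random existing voxel
        dx, dy, dz = random.choice(faces)        # random face of it
        c = (x + dx, y + dy, z + dz)
        if all(0 <= v < b for v, b in zip(c, box)) and c not in blob:
            blob.add(c)                          # clash-free, inside the box
    return blob

blob = random_voxel_blob()
```

Higher resolution and a lower fill ratio give you the sparse freaky-mess look.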

BTW: if you do all this … who can tell if the final collection is due to a List of voxelized objects or just due to a zillion voxels? Who could spot the initial objects among that freaky mess? Which means: why waste time on particle systems, packing and all that stuff? You tell me. Which also means: just do a random thingy like the black one in Plan B (in much higher resolution and far more sparse) and if someone asks you how it’s made … well … white lies are a must in our trade.

For instance … the sequence attached is made (in C#) via classic recursion (on tetrahedron meshes). No particles, no voxels, nothing except a very simple rule for “packing” (i.e. grow the previous mesh by adding a tetrahedron on some random face). If you replace tetrahedra with boxes [for 3-4K items you’ll need about 700 milliseconds on some i9 or a Ryzen] … who could tell whether this is made with an ultra-advanced particle system, collisions, explosions, cats, dogs and/or voxels … or just 40 lines of code?
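That “add a tetrahedron on some random face” rule really is tiny. A plain-Python sketch of it (a triangle soup instead of a Rhino Mesh; names are mine, and — as in the toy above — clash checks are deliberately skipped):

```python
import math, random

def grow_tetra(n_steps=10, seed=11):
    """Grow a triangle soup by repeatedly picking a random boundary face
    and erecting a tetrahedron on it (apex pushed out along the face
    normal). Each step removes the base face and adds three new ones."""
    random.seed(seed)
    # start from one tetrahedron (4 triangular faces)
    a, b, c, d = (0.0, 0, 0), (1.0, 0, 0), (0.5, 0.866, 0), (0.5, 0.289, 0.816)
    faces = [(a, b, c), (a, b, d), (a, c, d), (b, c, d)]
    sub = lambda p, q: tuple(pi - qi for pi, qi in zip(p, q))
    cross = lambda u, v: (u[1]*v[2] - u[2]*v[1],
                          u[2]*v[0] - u[0]*v[2],
                          u[0]*v[1] - u[1]*v[0])
    for _ in range(n_steps):
        p, q, r = faces.pop(random.randrange(len(faces)))  # base becomes interior
        n = cross(sub(q, p), sub(r, p))                    # face normal (either way)
        k = 0.8 / math.sqrt(sum(x * x for x in n))
        apex = tuple((pi + qi + ri) / 3 + k * ni
                     for pi, qi, ri, ni in zip(p, q, r, n))
        faces += [(p, q, apex), (q, r, apex), (r, p, apex)]  # three new faces
    return faces

faces = grow_tetra(10)   # 4 faces + 2 per step
```

Each step nets two extra faces, which is why face counts (and run times) stay predictable even for thousands of items.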