Naiad - Superfast recalc of massive data structures - GH & Meshes?

Is Naiad something that could speed up updates and recalculations of Grasshopper definitions and meshes by orders of magnitude?

If you have not evaluated this, then perhaps you should? (@DavidRutten and @DanielPiker). Perhaps it is even something for display pipelines, where not all of the scene needs an update?

A quote:

… what happens if the input changes? Perhaps a single edge is removed, which can result in the separation of two previously connected components. … not easy to determine how to unwind their propagation to return the computation to a state from which new correct labels can be determined. The data-processing systems … are forced to discard the results of their previous computation and start over from scratch.

Naiad, by comparison, represents a dataset in a compact form indicating where and when records have changed. The specific representation enables efficient combination of incremental and iterative computation, and allows us to update computations … in a fraction of a second. Naiad is currently capable of maintaining the strongly connected component structure (a doubly-nested loop) of a graph defined by a sliding window over an edge stream with rates exceeding Twitter’s full tweet volume, all with sub-second latency.
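As a toy illustration of that compact change representation (this is not Naiad’s actual API, just a Python sketch of the idea): the dataset is kept as a log of (record, time, delta) diffs, and any snapshot can be recovered by accumulating the deltas, so a single edge removal is just one more diff rather than a reason to start over.

```python
from collections import Counter

# Toy illustration (not Naiad's actual API): a dataset is stored as a
# log of (record, time, delta) changes rather than as full snapshots.
# delta is +1 for an insertion and -1 for a deletion.
changes = [
    ("edge(a,b)", 0, +1),
    ("edge(b,c)", 0, +1),
    ("edge(a,b)", 5, -1),   # at time 5 a single edge is removed
]

def contents_at(changes, time):
    """Accumulate deltas up to `time` to recover the dataset snapshot."""
    counts = Counter()
    for record, t, delta in changes:
        if t <= time:
            counts[record] += delta
    return {record for record, count in counts.items() if count > 0}

print(contents_at(changes, 0))  # {'edge(a,b)', 'edge(b,c)'}
print(contents_at(changes, 5))  # {'edge(b,c)'}
```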

Think of massive DataTrees, or Meshes, where only a single node, or only a few, have changed. Or iterative changes, like physics simulations. And so on.
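A minimal sketch of that propagation idea (the graph and names below are made up, nothing Grasshopper- or Naiad-specific): when one node changes, only its downstream dependents need re-evaluation, and everything else keeps its cached result.

```python
# Minimal sketch of change propagation in a dependency graph (made-up
# names): when one upstream value changes, only its downstream
# dependents are marked dirty and re-evaluated.
from collections import deque

dependents = {           # node -> nodes that consume its output
    "mesh": ["area", "normals"],
    "area": ["report"],
    "normals": [],
    "report": [],
}

def dirty_set(changed):
    """Collect `changed` plus everything downstream of it."""
    dirty, queue = set(changed), deque(changed)
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in dirty:
                dirty.add(dep)
                queue.append(dep)
    return dirty

# Editing only "area" leaves "mesh" and "normals" untouched:
print(dirty_set(["area"]))  # {'area', 'report'}
```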

https://www.microsoft.com/en-us/research/project/naiad/

Interesting examples towards the end of the video.

// Rolf


It seems to be targeted at highly distributed computing on a massive scale, e.g. using clusters. So I guess that if you can throw an HPC cluster at your Grasshopper problem, it will indeed go faster. For most users, though, it does not seem to be the tool for the job.

FTA:

Naiad’s most notable performance property, when compared with other data-parallel dataflow systems, is its ability to quickly coordinate among the workers and establish that stages have completed, typically in less than a millisecond for our 64 machine cluster.

Yes, it works on clusters, which are typically haunted by old and slow map-reduce solutions. But the main point seems to be that it acts like a giant “state machine” keeping track of local changes in massive data structures. That makes it just as useful on multi-core local machines, reducing the need to re-evaluate big data structures from scratch.
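As a rough sketch of what such an operator-level “state machine” might look like (invented names, not Naiad’s API): it holds per-key state and applies small diffs instead of rescanning its whole input on every change.

```python
# Hypothetical sketch of the "state machine" idea: an operator keeps
# per-key state and applies (key, value, delta) diffs instead of
# re-reading the whole input. Names are invented for illustration.
class RunningSum:
    def __init__(self):
        self.state = {}          # key -> current sum

    def apply(self, key, value, delta):
        """delta is +1 for an added record, -1 for a removed one."""
        self.state[key] = self.state.get(key, 0) + delta * value

sums = RunningSum()
sums.apply("branch_0", 10.0, +1)
sums.apply("branch_0", 4.0, +1)
sums.apply("branch_0", 10.0, -1)   # retract one record, no full rescan
print(sums.state["branch_0"])      # 4.0
```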

// Rolf