What I, and probably everyone else playing around with this, notice though is that the meshing step is ultra slow compared to evaluating the purely mathematical fields.

Would there be a way to build a raymarcher for Rhino to visualize fields? We have done this in our realtime software to render SDFs. Of course exporting as a mesh is still super slow, but viewing works at 60 fps.

I am curious if there would be a way to view the fields, or rather the surface at a certain value of the field, that way.

Would it even be possible to integrate a raymarcher into GH/Rhino?

As I touched on in this post above - yes, direct rendering with a shader could indeed be very nice and fast, but this is only possible for certain types of fields (i.e. SDFs), defined in a certain way (i.e. in shader compatible language, which means a limited set of functions).
It’s sadly impossible to have the degree of customisability of fields (where you can define them using whatever functions you like from RhinoCommon or other libraries, and they can be non-distance-like) at the same time as having them directly raymarch-able on the GPU.

Still, an SDF raymarcher for Rhino would be possible, and I think could be very nice even though it would only allow certain field types and require a different way of defining them.
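
For anyone curious what such a raymarcher would be doing under the hood, here is a minimal CPU sketch of the sphere-tracing loop that a GPU shader version would run per pixel. This is an illustrative toy (the function names are mine, not from any Rhino plugin), and it relies on the field being a true SDF:

```python
import math

def sphere_sdf(x, y, z, r=1.0):
    # Signed distance to a sphere of radius r centred at the origin.
    return math.sqrt(x*x + y*y + z*z) - r

def raymarch(ox, oy, oz, dx, dy, dz, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value at the
    current point. This step size is safe because a true SDF is
    1-Lipschitz - the surface can never be closer than the distance value."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(ox + t*dx, oy + t*dy, oz + t*dz)
        if d < eps:
            return t          # hit: distance along the ray to the surface
        t += d
        if t > max_dist:
            break
    return None               # miss

# Ray from (0, 0, -3) straight toward the origin hits the unit sphere at t = 2.
t = raymarch(0.0, 0.0, -3.0, 0.0, 0.0, 1.0, sphere_sdf)
```

This is also why non-distance-like fields break the approach: if the field value overestimates the distance to the surface, the march can step straight through it.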

Also, I think there’s often room for optimising how certain fields are defined even within the current approach that could give big speed-ups.
For example, with Laurent’s nice shadow cube above, I suspect the use of the FieldFromMeshColour field is a bottleneck, as it does a 3d mesh closest point call, when maybe for this application it could be reduced to a faster 2d call.

Many thanks for your feedback; we will review it and try to get it deployed. Scripted objects are fine to run on ShapeDiver; scripts pass through a review process right after model upload.

Hi Daniel. This is great! I need some time to digest all the tools here and the workflows, do you have plans to have some sort of documentation?

Regarding the helicoids, I tried to make a simple case with an array of vectors in the X direction. I expected the resulting isosurface to be periodic (in X), but it is not. Can you help me understand why this is the case? Is there a way for this approach to result in a periodic surface?

Looking at the complex plot, we can see that if you keep adding helicoids together along a line, it trends towards the periodic solution in the middle as the number increases.
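
This trend can be checked numerically. As a toy model (my simplification, not the actual field definition in the plugin), take each helicoid's contribution in a horizontal plane to be the winding angle `atan2` around its axis; the z-dependent term is the same for every copy, so it drops out of the comparison. If the summed field were periodic in X up to an additive constant, shifting x by one spacing would change the field by the same amount everywhere. The deviation from that shrinks as the number of helicoids grows:

```python
import math

def summed_helicoid_field(x, y, N):
    # Sum of angular fields of helicoid axes placed at x = -N, ..., N.
    # (Toy model: each helicoid contributes the winding angle atan2 only.)
    return sum(math.atan2(y, x - n) for n in range(-N, N + 1))

def shift_residual(x, y, N):
    # For a field periodic in x up to a constant, f(x+1) - f(x) is the same
    # everywhere; here that constant tends to -pi, so measure the deviation.
    return abs(summed_helicoid_field(x + 1, y, N)
               - summed_helicoid_field(x, y, N) + math.pi)

# The residual shrinks as more helicoids are added to the array:
small_N = shift_residual(0.3, 0.5, 5)
large_N = shift_residual(0.3, 0.5, 200)
```

So a finite array is never exactly periodic - only the limit of infinitely many copies is, which is why the middle of a long array looks close to the periodic solution.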

Anticipating the next question -
There is also a way to generate infinite doubly periodic grids of helicoids without brute-force summing, by using Jacobi elliptic functions:

I used these to find a new triply periodic surface family where one parameter can be varied causing the surface to turn about the 3 sets of orthogonal axes:

I’ll write more about this and Weierstrass elliptic functions with some examples soon.

What about using OpenVDB as a bridge to render the SDFs?

The basic workflow would be to create SDFs, transfer them into OpenVDB format based on sparse volumetric grids, and then use OpenVDB’s GPU-based rendering techniques, or GhGL, to render the OpenVDB field or to generate the final output mesh.

Geometry could then be created through different means (e.g. point clouds, SDFs, meshes, breps) and combined using OpenVDB. The OpenVDB file can then be used to create one tight output mesh, perform simulations, or create slicing data for printing.

Have a look at this thread and comments 69 - 73:

As far as I know, there is a renderer in Rhino/GH for GhGL, but I have not seen a render engine in Rhino for OpenVDB files, which seems to be the missing piece.

Yes, there is Dendro for creating and editing OpenVDBs - but the problem is that for every change in the field (or any Dendro component in the Grasshopper workflow), Dendro needs to generate a mesh as an output to render the field inside Rhino. However, if there were a direct renderer for OpenVDB, this would open up the possibility to create super fast and robust workflows.

I am not an expert in computer graphics. It is just how I look at it as a user after building many design workflows with Grasshopper.

I was just about to ask if there was a way to generate infinite doubly periodic grids of helicoids without brute-force summing, by using Jacobi elliptic functions!

For anyone not familiar with it - OpenVDB is primarily a library for manipulating volume information in voxel form.

You can think of voxel representations similarly to raster images like bitmaps - a set of values associated with points on a regular grid.

Functional representations of fields on the other hand are code or equations which can be evaluated at any point to return a value. These functions can be written in different languages, such as C#, or GLSL. As I went into a bit in this post above, the range of methods you can use in these field definitions varies significantly depending on the language.
(Also remember - not all implicits are SDFs.)

Voxels can be a sampling of a function/field - a set of values corresponding to a 3d grid of positions, but the voxel representation does not contain the original functional definition. To get the value of a voxel representation anywhere other than at the sample points you can only interpolate from the values you have, which will differ from the true value. Manipulating voxel representations is lossy for this reason, whereas you can chain together operations on functional representations and still evaluate at any point with full accuracy.
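
A tiny 1d example of this lossiness (illustrative, with a made-up field): sample a function onto a regular grid, then query between the samples. The functional form is exact anywhere; the sampled form can only interpolate:

```python
import math

# Sample a 1d "field" f(x) = sin(4x) onto a regular grid over [0, 2],
# then reconstruct values between samples by linear interpolation,
# which is all a voxel grid can do once the original function is gone.
f = lambda x: math.sin(4.0 * x)
step = 0.25
samples = [f(i * step) for i in range(9)]

def interp(x):
    # Linear interpolation between the two nearest grid samples.
    i = min(int(x / step), len(samples) - 2)
    t = x / step - i
    return (1 - t) * samples[i] + t * samples[i + 1]

x = 0.6                      # an off-grid query point
exact = f(x)                 # functional representation: exact everywhere
approx = interp(x)           # sampled representation: interpolation error
error = abs(exact - approx)
```

Chain a few operations on the sampled version and these errors accumulate, whereas chained functional operations stay exact.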

OpenVDB is fast for things like shrinkwrap primarily because of its use of efficient hierarchical voxel data structures.
A functional representation can always be converted to a voxel one by sampling at every point on a 3d grid, but this loses the speed advantage. When the function is simply distance from a set of input geometry, the hierarchical structure comes into play because you can skip large regions away from the surface, but this shortcut does not apply to arbitrary functional fields.
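
Here is a toy sketch of that skipping idea (not OpenVDB's actual tree structure - just the principle behind narrow-band sampling). Because a true SDF is 1-Lipschitz, one sample at a block's centre bounds the field over the whole block, so blocks far from the surface can be rejected without visiting their voxels:

```python
import math, itertools

def sdf(p):
    # Analytic SDF of a unit sphere; 1-Lipschitz, so one sample bounds a region.
    return math.sqrt(sum(c * c for c in p)) - 1.0

n, block = 64, 8                    # 64^3 grid split into 8^3 voxel blocks
h = 4.0 / n                         # voxel size over the domain [-2, 2]^3
band = 2.0 * h                      # keep voxels within this distance of the surface

def coord(i):                       # voxel index -> world coordinate
    return -2.0 + (i + 0.5) * h

active, evaluated, skipped_blocks = 0, 0, 0
half_diag = (block * h) * math.sqrt(3) / 2.0

for bi, bj, bk in itertools.product(range(n // block), repeat=3):
    centre = tuple(coord(b * block + block / 2.0 - 0.5) for b in (bi, bj, bk))
    evaluated += 1
    if abs(sdf(centre)) > half_diag + band:
        skipped_blocks += 1         # Lipschitz bound: no narrow-band voxel here
        continue
    for i, j, k in itertools.product(range(block), repeat=3):
        p = (coord(bi * block + i), coord(bj * block + j), coord(bk * block + k))
        evaluated += 1
        if abs(sdf(p)) <= band:
            active += 1

dense = n ** 3                      # evaluations brute-force sampling would need
```

The rejection test only works because the distance property lets a single sample rule out a whole region - with an arbitrary (non-distance) field, one sample tells you nothing about its neighbourhood, and you are back to evaluating every voxel.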

For many field types the process of generating a voxel sampling of sufficient density from the functional representation would likely take longer than meshing it directly.

When people talk about rendering SDFs with shaders, as in the post you link, usually they are referring to the functional form, where you can raymarch without having to first compute the values at all grid points.

I do think using voxel operations for speeding up some parts of the process when working with combinations of functional representations and input Rhino geometry could be useful, but voxels/OpenVDB are only one part of the puzzle, not some magic that makes anything and everything faster.

Well done, you have developed a cool tool. Maybe someday you will be able to implement the adaptivity feature that exists in OpenVDB; it speeds up the creation of hexahedral meshes incredibly. Or maybe you don’t need to?