Isopod - implicit surface tools

There are procedural textures for wood…

There are, but how many volumetric wood materials do you know? I know two: one from Rhino that looks very dated, and a quite realistic one that someone made for Substance Painter.
My post was more about volumetric texture creation in general. Maybe Adobe will add such a possibility to Substance Designer; for some time now they have had their own SDF modeling software.

1 Like

Hi @moby-dk
Sorry for the delayed reply, I was away for a few days.
Yes, I think volumetric texture generation can be an interesting application of fields, here’s a quick go at this:


woodgrain.gh (19.0 KB)
It does a smooth union of a bunch of capsules, then maps this 1D texture (you'll need to download it and set the file path for the image) to the scalar value:
woodline
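
Roughly, the idea in plain Python terms (just an illustrative sketch, not the actual contents of woodgrain.gh; the function names are made up):

```python
import math

def capsule_distance(p, a, b, radius):
    """Distance from point p to a capsule with axis a-b and the given radius."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    t = sum(ap[i] * ab[i] for i in range(3)) / sum(ab[i] * ab[i] for i in range(3))
    t = max(0.0, min(1.0, t))                              # clamp to the segment
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.sqrt(sum((p[i] - closest[i]) ** 2 for i in range(3))) - radius

def smooth_min(d1, d2, k=1.0):
    """Polynomial smooth minimum: blends two fields instead of taking a hard min."""
    h = max(0.0, min(1.0, 0.5 + 0.5 * (d2 - d1) / k))
    return d2 * (1.0 - h) + d1 * h - k * h * (1.0 - h)

def wood_field(p, capsules, k=1.0):
    """Smooth union of all capsules; the scalar result is what gets used
    (suitably scaled/wrapped) as the u coordinate into the 1px grain image."""
    d = capsule_distance(p, *capsules[0])
    for c in capsules[1:]:
        d = smooth_min(d, capsule_distance(p, *c), k)
    return d

# Two overlapping "branches" as an example
capsules = [((0, 0, 0), (10, 0, 0), 1.0), ((5, 0, 0), (8, 6, 0), 0.8)]
print(wood_field((4.0, 1.0, 0.0), capsules))
```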

23 Likes

Crazy!

link to 1D wood grain image

4 Likes

I just realised that the example above will show correctly in preview, but when baking, the texture coordinates are not carried across properly.
Here’s a version that fixes this:
woodgrain2.gh (16.1 KB)
After baking you can assign this material:
wood.rmtl (22.5 KB)
Then in Rhino you can play with the texture, e.g. to add clearcoat etc.


polished_wood.rmtl (34.9 KB)

26 Likes

I tried for 27 minutes straight and burned through 3 different mice right-clicking on that perfect one-pixel-wide image :heart:

THANKS :slight_smile:

4 Likes

Daniel, I've been keeping up with this thread since you posted, trying the examples, having fun, etc… but now I'm confused.
The wood grain material: I get that inside Grasshopper it works by plugin magic, that's fine, but when I bake the mesh and apply the material, it still works?
But how is the mapping happening in the PBR material?
If I rotate or scale the texture, the "veins" are still there and can be modified, and in the UV editor all I can see is the stretched 1px texture; there is no mapping widget.
What properties of this mapping or texture distortion can I edit in Rhino?

This is using the mesh texture coordinates.
For each vertex of a mesh there’s a 2d coordinate stored, which determines which part of the texture image gets displayed there (and these are interpolated across the faces).
I guess we don’t normally interact with these directly except through code - we normally just set a texture mapping type which creates these coordinate values.
In this example I’m only actually using one dimension of the texture coordinate so only a 1px wide image is needed.
When texture coordinates are set in this way it does mean you can’t really edit the mapping in the same way once it is baked (except changing the v-offset).
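
For example, in a GhPython-style script, writing a field value into the texture coordinates could look roughly like this (a simplified sketch; `my_field` is just a placeholder for whatever field evaluation you use, not part of Isopod):

```python
import Rhino.Geometry as rg

def bake_field_to_uvs(mesh, my_field):
    """Store the field value at each vertex as the u texture coordinate."""
    mesh.TextureCoordinates.Clear()
    for v in mesh.Vertices:                  # v is a Point3f
        u = my_field(rg.Point3d(v))          # scalar field value at this vertex
        mesh.TextureCoordinates.Add(u, 0.5)  # only the u dimension carries information
    return mesh
```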

2 Likes

Thanks, I was sure it was something different from the usual stuff we can do in Rhino. Thanks for the answer, and I will not be editing textures through code, thank you very much :sweat_smile:

Wow! That’s amazing!

Beginner Question Alert
Why can I see the texture when I render, but not when I set the display mode to Rendered?

EDIT: Fixed it.

Can you elaborate on this? I’ve been using something very similar due to the lack of n - dimensional points in Rhino. Assuming a mesh of a physical object with real 3D - coordinates, I duplicate the mesh (which face consists of which vertex-ids) and abuse the vertices as data storage for my n - dimensional problem. For example, I store stresses of a FE simulation in those. If the stress is needed on a certain location, I do MeshClosestPoint() on the physical mesh to get the index of the face and the barycentric coordinates (Rhino unfortunately does not handle quad, although the math is still easy) and evaluate my “ghost mesh”.
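
For reference, a rough sketch of the lookup I described (simplified, names are illustrative; here the ghost mesh stores up to three data channels in its vertex coordinates):

```python
import Rhino.Geometry as rg

def evaluate_ghost(physical_mesh, ghost_mesh, pt):
    """Closest point on the physical mesh -> same face index and barycentric
    coordinates evaluated on the ghost mesh, whose vertex x/y/z hold the data."""
    mp = physical_mesh.ClosestMeshPoint(pt, 0.0)   # 0.0 = unlimited search distance
    t = mp.T                                       # four barycentric weights
    return ghost_mesh.PointAt(mp.FaceIndex, t[0], t[1], t[2], t[3])
```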

Is there a better approach I do not know/think of?

Wow, what a tool. Looks great!!

@DanielPiker Am I right in assuming that this is the same/similar approach Spherene is taking? Especially looking at the very first image you posted?

Can’t wait to play with this soon.

1 Like

You can store positions, normals, colours and texture coordinates per vertex for a mesh, and as you say, you can use these as a way to store other information.
You might be better off though making a custom class which stores an array of stress tensors (or whatever other info you have), one per vertex, and has a method which returns the interpolated value for any input MeshPoint.
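
Something along these lines (just a sketch of one possible shape for such a class, with made-up names and scalar values for simplicity):

```python
class VertexValues(object):
    """Per-vertex values (scalars here, for simplicity) with barycentric
    interpolation for any MeshPoint on the same mesh."""
    def __init__(self, mesh, values):
        assert len(values) == mesh.Vertices.Count
        self.mesh = mesh
        self.values = values

    def value_at(self, mesh_point):
        face = self.mesh.Faces[mesh_point.FaceIndex]
        corners = [face.A, face.B, face.C, face.D]   # for triangles D == C
        t = mesh_point.T                             # weights; t[3] is 0 on triangles
        return sum(t[i] * self.values[corners[i]] for i in range(4))

# usage: data = VertexValues(mesh, stresses)
#        v = data.value_at(mesh.ClosestMeshPoint(pt, 0.0))
```

The four-weight sum works for both triangle and quad faces, since the fourth weight is zero on triangles.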

Thanks. My impression of Spherene is that it’s actually doing something much more like this:

So explicit rather than implicit, and specific to this Voronoi topology.

2 Likes

Just a small OT.

SDFs definitely seem like something that could exploit faster parallel computation…
Fields using Rhino geometries probably rely on RhinoCommon and such,
but fields generated by simple lines/points/formulas might just be simpler math…?

Then the final marching cubes step should also be something unrelated to RhinoCommon, and maybe something that could be fed to OpenCL…

I’m not asking to rewrite Isopod.

1 Like

Indeed, there are some ways in which SDFs and GPUs can fit very nicely together. A popular one is direct raymarching, to render the surfaces without meshing them first. You still need meshing (or similar, like slicing) to actually do anything with them other than render them, but it can be a nice way of having fast previews while working.

However, for raymarching you generally need SDFs, or at least fields with reasonable Lipschitz bounds, which are only a subset of all the possible implicits, and many interesting implicits are not very SDF like at all.
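
To illustrate why that matters, here's a minimal generic sphere-tracing loop (nothing to do with Isopod specifically): each step advances the ray by the field value, which is only safe when that value is a conservative bound on the distance to the surface:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    return math.sqrt(sum((p[i] - center[i]) ** 2 for i in range(3))) - radius

def raymarch(origin, direction, sdf, max_steps=128, hit_eps=1e-4, max_dist=100.0):
    """Returns the distance along the ray to the surface, or None if missed."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)                  # for a true SDF this is a safe step size
        if d < hit_eps:
            return t                # close enough to the surface: hit
        t += d
        if t > max_dist:
            break
    return None                     # no hit within range

print(raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf))  # ~4.0
```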

If you look at examples of the output of existing SDF based modelling tools, you’ll see a lot of it is either cute cartoony creatures or pseudo-mechanical looking robots and guns etc, and this aesthetic is driven by the limitations of composing forms by blending SDF primitives.
I was hoping to use implicits that also allow a much wider range of expressive and sculptural forms, and fit more into the Rhino world of NURBS.

There has been some recent work extending the types of implicits that can be directly rendered, but it’s still only a subset.

Aside from the SDF issue, making GPU-friendly implicits still means they have to be written using only the set of mathematical functions available on the GPU, which is a lot more constrained than a C# or Python script, where you can use any libraries, including RhinoCommon.
This applies even if it’s just about using the GPU for meshing, rather than raymarching, because a large part of the meshing computation time is many evaluations of the field.

Still, even after saying all this, I do think the potential use of GPU processing of implicits is exciting, and it’s something I’m continuing to look at. Perhaps for some things the speed benefits outweigh the limitations, and perhaps there can be ways to combine CPU/GPU for different parts, to make it faster for the things where it’s possible while still keeping the flexibility, but it’s tricky.

Btw, I think the title of that particular thread linked rather oversells it and builds unrealistic expectations, even if it adds some caveats later.
Better ways of linking GPU libraries to Rhino are of course very welcome, and the benefits for specific and limited types of operations can sometimes be large, but this is not, and will never be, as simple as “running your GH definition on the GPU”.

8 Likes

Hi Daniel,

Super excited to see yet more implicit libraries making their way into Rhino / Grasshopper! Really keen to get stuck in and see what this can do. Are you looking for contributors to aid in the development of this library at all?

I am getting the following error when attempting to extract a mesh: 1. Invalid cast: ObjectWrapper » IField

Looks like there’s something strange happening.

Rhino 8 SR10 2024-8-15 (Rhino 8, 8.10.24228.13001, Git hash:master @ f4c93f2b85de4dc69b50ed22f1b0d91445724d03)

Thanks!

Hi @TobyW
Did you see this part of the discussion above?

I did not! But that’s done it, thanks!

1 Like