A GLSL weekend

I wanted to share an incredible discovery with you: Rhino + GLSL in Grasshopper + _ViewCaptureToFile

6 seconds for this raw rendering of 10000x5000.


Original image: 18 MB

The important thing to see in this image is the grass. There are exactly 6 million rectangles with a grass texture, and not a single rectangle is modeled. Everything is created by a GLSL shader with the Grasshopper component “GL Shader”.

I still can’t believe that a simple “_ViewCaptureToFile” could produce this, and I don’t think I’ll get a better rendering with Blender’s EEVEE.
And at 6 seconds per frame, we can imagine a full video of this countryside scene. No need to spend four days importing your model into Unreal. Even with Twinmotion, it wasn’t this good.

I didn’t know anything about GLSL, and I think making grass is the “hello world” of shader programming. My graphics card is a standard one for 3D work (a GTX 1080).
In short, I’m amazed!

Of course, not everything is perfect; there are some things that are impossible or annoying with this method in Rhino. But voilà! 6 seconds and ready to work in Photoshop …

Raw image, 10000x5000, 4 seconds.


Nice job, you now have graphics developer superpowers :slight_smile: Did you use a screen-oriented texture, or are you somehow drawing individual blades of grass as geometry?

My code is absolutely not optimized (that’s what’s crazy), and no grass rectangles are oriented to the camera.
I am using the Tarsier plugin to generate points sent to the geometry shader.

The GS creates a cross of three rectangles, randomly oriented, at each point. I managed to create random variation for the size and random colors, but I haven’t really played with the transformation matrix.
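
To give an idea, the structure looks roughly like this (a simplified sketch, not my exact code; I assume the vertex shader passes world-space points through unchanged, and the _worldToClip uniform name is an assumption, so adjust it to your setup):

#version 330
layout(points) in;
layout(triangle_strip, max_vertices = 12) out;   // 3 rectangles x 4 vertices

uniform mat4 _worldToClip;      // assumed world-to-clip matrix uniform
out vec2 uv;
out float colorJitter;          // per-blade random value for color variation

float rand(vec2 co) {           // cheap hash, seeded by the point position
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

void emit(vec4 pos, vec2 t, float r) {
    gl_Position = _worldToClip * pos;
    uv = t;
    colorJitter = r;
    EmitVertex();
}

void main() {
    vec4 base = gl_in[0].gl_Position;   // input point in world space, Z up
    float r = rand(base.xy);
    float h = mix(0.5, 1.2, r);         // random height
    float w = 0.3 * h;                  // half-width follows the height
    vec4 up = vec4(0.0, 0.0, h, 0.0);
    for (int i = 0; i < 3; ++i) {
        float a = r * 3.14159 + float(i) * 1.0472;   // 60 degrees apart
        vec4 d = vec4(cos(a), sin(a), 0.0, 0.0) * w;
        emit(base - d,      vec2(0.0, 0.0), r);
        emit(base + d,      vec2(1.0, 0.0), r);
        emit(base - d + up, vec2(0.0, 1.0), r);
        emit(base + d + up, vec2(1.0, 1.0), r);
        EndPrimitive();
    }
}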

I have two copies of this Grasshopper code, one for each of the two different textures.

The names chosen are not the same as in the tutorials; I get lost between “Screen”, “Clip”, and “World”. It would also be nice if I could have a LOD system. If I look at the Unreal Engine Vegetation Pack, there are high-definition models in the foreground and a simple rectangle oriented to the camera in the background.
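
For that background LOD, I imagine the geometry shader could build a single camera-facing rectangle instead of the cross. A rough, untested sketch, reusing the emit helper from above and assuming a _worldToCamera view-matrix uniform is available:

// far LOD: one camera-facing rectangle instead of the three-quad cross
uniform mat4 _worldToCamera;    // assumed view-matrix uniform

void emitBillboard(vec4 base, float w, float h) {
    // the camera's right and up axes, expressed in world space,
    // are the first two rows of the world-to-camera rotation
    vec4 right = vec4(_worldToCamera[0][0], _worldToCamera[1][0], _worldToCamera[2][0], 0.0) * w;
    vec4 up    = vec4(_worldToCamera[0][1], _worldToCamera[1][1], _worldToCamera[2][1], 0.0) * h;
    emit(base - right,      vec2(0.0, 0.0), 0.0);
    emit(base + right,      vec2(1.0, 0.0), 0.0);
    emit(base - right + up, vec2(0.0, 1.0), 0.0);
    emit(base + right + up, vec2(1.0, 1.0), 0.0);
    EndPrimitive();
}

// in main(), pick the LOD by distance to the camera:
// float distToCam = length((_worldToCamera * base).xyz);
// if (distToCam > 50.0) emitBillboard(base, w, h); else { /* the cross as above */ }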

Your component is really great. Shaders have always been a strange thing to me because I don’t know of any way to debug them or see what’s going on inside the graphics card. Being able to see the changes live with the “GL Shader” component is truly amazing.

With my (average) English I’m still afraid my words will be misinterpreted, but if it helps, may I give you some feedback on these experiments?


Stunning scattering result, @kitjmv! I’d like to see real-time viewport navigation.
Any chance you could upload a basic example of your alpha channel use, please?

Sorry, I don’t understand.
Are you asking how to avoid drawing the transparent pixels of the texture on the screen?

Yes. And how about people in crowded places, a kind of RPC? https://archvision.com/rpc/

That’s the easiest part! :slight_smile:
In the fragment shader, if the pixel is transparent, you use the “discard” keyword:

vec4 pixel = texture(image, uv);
if (pixel.a < 0.5) discard;

That’s all!
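
Put together, the minimal fragment shader looks something like this (a sketch; the image and uv names are just the ones I use, yours may differ):

#version 330
uniform sampler2D image;    // the grass texture
in vec2 uv;                 // interpolated from the geometry shader
out vec4 fragColor;

void main() {
    vec4 pixel = texture(image, uv);
    if (pixel.a < 0.5) discard;    // throw away the transparent texels
    fragColor = pixel;
}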

In truth, it is sometimes more complicated.
If your image has translucent parts, you need to use blending (the OpenGL blend functions). I didn’t need them, so I don’t know the details.

Image quality is important to get a clean cutout: 1K and 2K textures were not good enough in my tests, so I’m using a 4K image, which is the minimum. At 8K, it’s much more precise and pretty.

In images with a lot of outlines and fine detail, you also need to vary the acceptable tolerance of if (pixelColor.a < 0.5) discard; otherwise you get a big colored dot when the image moves away from the camera (mipmapping averages the alpha values at a distance).

So, something like this:

in float dist;                   // computed in the vertex shader or geometry shader (value from 0 to 1)
float averageTolerance = 0.5;
if (pixelColor.a < averageTolerance * dist) discard;
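
And dist can come from the geometry shader, roughly like this (maxDist is a hypothetical far limit to tune for your scene, and the value must be written before each EmitVertex, for example inside the emit helper above):

// in the geometry shader, alongside uv
out float dist;
uniform mat4 _worldToCamera;    // assumed view-matrix uniform

// in main(), once per point:
float maxDist = 100.0;          // hypothetical, scene-dependent
float d = clamp(length((_worldToCamera * base).xyz) / maxDist, 0.0, 1.0);
// then set "dist = d;" next to "uv = t;" in the emit helper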