Is there any method available in the SDK or RDK that will allow me to do just this:
run very simple tasks using some or all of the GPU threads, and then pass the results back to my C# code?
Could you please give me some advice on where to look?
Thank you so much in advance
No, this is not part of Rhino’s SDK. There are C# libraries available for working with calculations on GPUs. I would google for C# GPU programming to see what is out there.
Thank you Steve,
Your swift response is much appreciated.
Are there any plans at McNeel to add GPU computing or compute shader capabilities (similar to Unity) any time soon?
We have no plans for this feature. What are you trying to achieve?
I just got a chill thinking that perhaps someday there will be cryptominer plugins.
Much faster computation times)
I just want to warn that AleaGPU doesn’t work with RTX30xx cards
Alea GPU hasn’t been updated since 2017, and at the moment I was unable to reach the project website.
A newer, more modern alternative might be https://www.ilgpu.net
Its packages are still available for download, and it works fine on RTX 20xx and GTX cards.
I’m working on an erosion and real-time wind simulation (CFD) plugin for GH, and since it has massive 3D arrays to scan, I would love to have the GPU helping me out. Both the Parallel and RTree approaches work just fine, but they are super slow at high voxel densities.
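For anyone landing on this thread later, a minimal ILGPU sketch of the workflow discussed above — run a simple task across GPU threads, then read the results back into C# — might look like the following. The doubling kernel, buffer contents, and class name are illustrative assumptions, not code from the thread or from any Rhino/GH plugin:

```csharp
using System;
using ILGPU;
using ILGPU.Runtime;

class GpuSketch
{
    // Kernel: each GPU thread doubles one element of the array.
    // (The operation is a placeholder for whatever per-voxel work you need.)
    static void DoubleKernel(Index1D i, ArrayView<float> data) =>
        data[i] = data[i] * 2f;

    static void Main()
    {
        // Create an ILGPU context and pick the best available accelerator
        // (a CUDA/OpenCL GPU if present, otherwise the CPU fallback).
        using var context = Context.CreateDefault();
        using var accelerator = context
            .GetPreferredDevice(preferCPU: false)
            .CreateAccelerator(context);

        // Compile the kernel for this accelerator.
        var kernel = accelerator
            .LoadAutoGroupedStreamKernel<Index1D, ArrayView<float>>(DoubleKernel);

        // Upload data, launch one thread per element, and read back.
        var input = new float[] { 1f, 2f, 3f, 4f };
        using var buffer = accelerator.Allocate1D(input);
        kernel(input.Length, buffer.View);
        accelerator.Synchronize();

        float[] result = buffer.GetAsArray1D();
        Console.WriteLine(string.Join(", ", result)); // expected: 2, 4, 6, 8
    }
}
```

For a dense voxel grid you would typically flatten the 3D array into a 1D buffer and index it as `x + y * width + z * width * height` inside the kernel, which keeps memory access patterns simple for the GPU.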
I’ll give it a shot, thank you, Nathan.
Wow. An absolutely awesome library, exactly what I need. It would be great to have something like this introduced into Rhino’s functionality.
Thank you for highlighting this library!
I was also wondering if it was possible to do something similar to compute shaders.
A hacky solution is to create a shader that draws to a portion of the screen, then read it back as part of the current FrameBuffer. In this example I created a 2D noise texture, then sampled it to create a mesh.
Depending on your data, you can probably structure it to write to / read from a texture and follow this approach.
computeShaderExample.gh (21.8 KB)
Here’s another example that mixes the FrameBuffer inside the shader to get a diffusion effect.
I wanted to avoid the timer triggering the Python component by using
`uniform sampler2D _colorBuffer`, but that was providing the buffer before any ghgl effects had been applied. If there were a way to avoid the timer-driven FrameBuffer capture, it would be far more responsive.
ghglDiffuseDraw.gh (17.1 KB)
I modified ghgl to store the previous frame on redraws, so now there’s no need for a timer + Python component.
That looks great! Can you include this sample in the src/tests directory in the PR?
Just added it now.