I’ve just updated the drivers, clean installation. No luck.
I’ve tried the same on RH7 on a full-Intel laptop:
With antialiasing on, PointSize from 1 to 16 is displayed correctly; values larger than 16 are shown as 16.
With antialiasing off, the point is displayed as a perfect square (like you just said); PointSize values from 1 to 255 are correct, and anything greater is shown as a fixed 255.
I’ll search for other implementations then…
Drivers get to specify what their upper limit is for point size. See the definition of
GL_POINT_SIZE_RANGE on https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGet.xhtml
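The behavior observed above is consistent with the driver clamping the requested size to its advertised range (antialiased points use GL_POINT_SIZE_RANGE, aliased points use GL_ALIASED_POINT_SIZE_RANGE). A toy sketch of that clamping, not actual driver code, using the limits reported in this thread:

```python
def effective_point_size(requested, size_range):
    """Model of a driver clamping a requested point size to its
    advertised (min, max) range. Illustrative only."""
    lo, hi = size_range
    return max(lo, min(requested, hi))

# Ranges observed above: antialiased (1, 16), aliased (1, 255)
print(effective_point_size(32, (1, 16)))    # clamped to 16
print(effective_point_size(300, (1, 255)))  # clamped to 255
print(effective_point_size(8, (1, 16)))     # within range: 8
```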
Once I’ve recorded my billboard session on GhGL later this week I can show how this is used to draw points of any size.
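The usual trick for points of any size is to expand each point into a camera-facing quad (a billboard) and texture or shade that. A rough sketch of the corner math in normalized device coordinates (the function name and layout are hypothetical, not GhGL code):

```python
def billboard_corners(center_ndc, size_px, viewport_w, viewport_h):
    """Expand a point (in NDC, which spans 2 units across the viewport)
    into four quad corners covering size_px pixels on screen.
    Hypothetical helper for illustration only."""
    half_w = size_px / viewport_w  # size_px pixels = 2*size_px/viewport_w NDC units wide
    half_h = size_px / viewport_h  # so the half-extent is size_px/viewport_w
    x, y = center_ndc
    return [(x - half_w, y - half_h), (x + half_w, y - half_h),
            (x + half_w, y + half_h), (x - half_w, y + half_h)]
```

In a real shader this expansion happens per-vertex on the GPU, so the quad size is not subject to the point-size clamp.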
A better implementation: simpler, with fewer errors…
I will be giving another presentation in two hours on creating billboard effects for GhGL (10 am Pacific time). Please send me a PM if you want an invite. I will post a recording of this later today.
A post was split to a new topic: Toon Display
I’m not gonna lie - I am clueless about the shaders but it looks impressive.
So… could this potentially be used to make shaders as seen on this site?:
or something completely different like this?:
Yes, we can do quite a bit more than what can be done with ShaderToy since we aren’t limited by WebGL and can work with any sort of geometry generated from Grasshopper. I’m trying to get everything set up so you can install some GhGL based logic as a “preview component” and just use it directly without having to know how to program it.
Great! I really hope that in the future it can be fully utilized both by shader maniacs and by less experienced “regular” GH users. I already tested some files posted here and there.
You’ve really drawn my attention,
I wish you the best implementing it.
Is there any way to output the Light Direction and/or RGBA values back into grasshopper? I’m interested in using the values (in camera & world coordinate space) to inform geometry downstream that can then be used to output 3D Geo or 2D vectors.
That is what I’m hoping too. My plan is to build a library of pre-canned preview components that you can just use.
I could output a bitmap representing the color buffer and the depth buffer for downstream processing in GH. Would that work?
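If the color buffer does come through as a bitmap, sampling it downstream is just indexing into the pixel data. A toy sketch, assuming a flat row-major RGBA byte buffer (not actual GhGL output):

```python
def sample_rgba(buffer, width, x, y):
    """Read the RGBA value of pixel (x, y) from a flat bytes buffer,
    row-major, 4 bytes per pixel. Toy example only."""
    i = (y * width + x) * 4
    return tuple(buffer[i:i + 4])

# 2x2 test image: red, green / blue, white
img = bytes([255, 0, 0, 255,   0, 255, 0, 255,
             0, 0, 255, 255,   255, 255, 255, 255])
print(sample_rgba(img, 2, 1, 0))  # (0, 255, 0, 255)
```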
After seeing some videos about shader creation I must admit that it looks fantastic.
Thanks @jho for awesome videos on youtube.
I have a (maybe silly) question. Is it possible that custom shader creation could be done solely from nodes? I am talking about some sort of GH plugin (with, let’s say, 20 components). Or is it impossible to translate that coding language into nodes?
Oh, @stevebaer, you just answered this question a second ago, I guess.
Yes that would work because then we could feed the bitmap through the image sampler to attain numerical values. However, that leads me to the question: would the Bitmap be ‘live’ and/or parametric?
Honestly, I am lacking complete comprehension of GLSL / OpenGL to understand if a live output of screen color/shade values is easily attainable.
I’m not sure how this would work either. It seems like we would need to specify a single viewport for the image to be generated from as currently GhGL previews in all viewports.
Hi @stevebaer, is this available in WIP 7?
You need to run PackageManager in WIP 7 and install the component from there.
Cool, thank you. I’ve seen it in the GitHub starting video.
I haven’t installed WIP 7 yet. Will do it now.
So is it possible for GhGL to have the same effect as OpenCSG?