Get render color at point on surface


#1

Hi,

I’d like to know if there is a way to evaluate a point on a surface for the color that the Rhino viewport is displaying. Basically, I’d like to find all points on a surface that are being rendered in the viewport within a specific color range. Is this possible? I have looked through RhinoCommon for methods that would give me basic information about the render material, type, etc., and I can use the surface methods to evaluate the surface, but I’m having a hard time finding something that tells me the color at a specific point on the surface.

Thanks
devin


#2

Hi @devin_jernigan,

it is possible but depends on a few things:

Do you first have the pixel in the rendered viewport and then want to find the surface point, or do you have the 3d surface point first and want to find the corresponding pixel?

If you have, e.g., the image of the rendered viewport (by getting a view capture), each pixel has an xy coordinate in relation to the width and height of the image. Using these (screen) coordinates you could shoot a ray onto your surfaces, from the 3d camera location through the 3d point obtained by transforming the 2d screen point into world space. The location where the ray hits is the point on the surface.
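A minimal, Rhino-free sketch of that idea: a pinhole camera at the origin stands in for the viewport camera, and a flat plane stands in for the surface. In Rhino itself the ray could come from RhinoViewport.GetFrustumLine and the hit from Intersection.RayShoot; all names below are illustrative, not part of any Rhino API:

```python
import math

def pixel_to_ray(px, py, width, height, fov_deg):
    """Map a pixel (px, py) to a unit ray direction for a pinhole
    camera sitting at the origin and looking down -Z."""
    aspect = width / float(height)
    half = math.tan(math.radians(fov_deg) / 2.0)
    # normalized device coordinates in [-1, 1]; row 0 is the top of the image
    ndc_x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
    ndc_y = (1.0 - 2.0 * (py + 0.5) / height) * half
    d = (ndc_x, ndc_y, -1.0)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def hit_plane_z(origin, direction, z_plane):
    """Intersect the ray with the plane z = z_plane (the stand-in surface)."""
    if abs(direction[2]) < 1e-12:
        return None  # ray runs parallel to the plane
    t = (z_plane - origin[2]) / direction[2]
    if t <= 0:
        return None  # the plane is behind the camera
    return tuple(origin[i] + t * direction[i] for i in range(3))

# the center pixel of a 640x480 view hits the plane straight ahead
ray = pixel_to_ray(320, 240, 640, 480, 60.0)
print(hit_plane_z((0.0, 0.0, 0.0), ray, -5.0))  # roughly (0.006, -0.006, -5.0)
```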

I guess it will also work the other way around, querying a 3d point on a surface first to obtain a pixel in the captured viewport. But you will have to test first whether the 3d point on the surface is in front of the camera and not occluded by other geometry.
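The reverse mapping can be sketched the same way. Assuming the same pinhole camera at the origin, this hypothetical helper projects a world point to a pixel and returns None for points behind the camera or outside the frustum (in Rhino, rs.XformWorldToScreen performs the actual projection):

```python
import math

def world_to_pixel(pt, width, height, fov_deg):
    """Project a world point through a pinhole camera at the origin
    (looking down -Z). Returns (px, py), or None when the point is
    behind the camera or outside the frustum."""
    x, y, z = pt
    if z >= 0:
        return None  # behind (or at) the camera plane
    aspect = width / float(height)
    half = math.tan(math.radians(fov_deg) / 2.0)
    ndc_x = (x / -z) / (half * aspect)  # perspective divide
    ndc_y = (y / -z) / half
    if abs(ndc_x) > 1 or abs(ndc_y) > 1:
        return None  # off screen
    px = (ndc_x + 1.0) / 2.0 * width
    py = (1.0 - ndc_y) / 2.0 * height  # row 0 is the top of the image
    return (px, py)

# a point straight ahead lands on the image center
print(world_to_pixel((0.0, 0.0, -5.0), 640, 480, 60.0))  # → (320.0, 240.0)
```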

Are you interested in the rendered / shaded pixel color, or the raw color information, e.g. from a material texture, regardless of any lighting?
_
c.


#3

Hi clement

I want to be able to sample a bunch of points on a surface and based on the lighting, detect the highlights and shadows. I want to light the scene myself and then just generically sample all over every surface in the scene and detect how much light each point is receiving. I thought it would be easier to do by somehow using the display information but the more I think about it, the less likely it seems.


#4

@clement

Here’s an example. In the scene I have a distorted surface that is covered in randomly distributed points, and a spotlight. The randomly distributed points were created using the Populate Geometry component in Grasshopper. I am in the rendered viewport with a spotlight so that I can directly control the highlights and shadows that are shown on the geometry, no matter how I rotate the view in perspective. It seems to me that there should be a way to get access to the color data that the rendered viewport display is showing for the object (or objects).

How would I go about doing this?

If you have, e.g., the image of the rendered viewport (by getting a view capture), each pixel has an xy coordinate in relation to the width and height of the image. Using these (screen) coordinates you could shoot a ray onto your surfaces, from the 3d camera location through the 3d point obtained by transforming the 2d screen point into world space. The location where the ray hits is the point on the surface.

I was thinking that it would be easier to just get access to the data that’s being calculated in the render viewport display, so that I don’t have to take an image and reconfigure every time I want to change the view.

Thanks


#5

@devin_jernigan,

below is something to test. Please note that this is by no means a complete script, it just shows a quick concept of how to get the captured color onto the point color. It does not check occlusion of the points, nor does it check whether a point is within the screen area. The script has only been tested with the example scene.

To test, just open the scene in Rhino 5, make the perspective viewport active in rendered display mode, then run the Python script.

PointScreenColorCapture.3dm (280.2 KB)
PointScreenColorCapture.py (1.4 KB)
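Stripped of the Rhino calls, the lookup the script performs could look roughly like this: project each sample point into the captured image and read the pixel color there. A pinhole camera at the origin stands in for the viewport, and a nested list of RGB tuples stands in for the capture (which in Rhino would come from a view capture to bitmap). All names are illustrative:

```python
import math

def sample_point_colors(points, image, width, height, fov_deg):
    """Project each 3d point through a pinhole camera at the origin
    (looking down -Z) and return the captured color at its pixel,
    or None when the point is behind the camera or off screen.
    `image` is a row-major list of rows of (r, g, b) tuples."""
    aspect = width / float(height)
    half = math.tan(math.radians(fov_deg) / 2.0)
    colors = []
    for (x, y, z) in points:
        if z >= 0:
            colors.append(None)  # behind the camera
            continue
        ndc_x = (x / -z) / (half * aspect)  # perspective divide
        ndc_y = (y / -z) / half
        if abs(ndc_x) > 1 or abs(ndc_y) > 1:
            colors.append(None)  # outside the view frustum
            continue
        px = min(int((ndc_x + 1.0) / 2.0 * width), width - 1)
        py = min(int((1.0 - ndc_y) / 2.0 * height), height - 1)  # row 0 = top
        colors.append(image[py][px])
    return colors

# a tiny 2x2 "capture": a point straight ahead reads the pixel at (1, 1)
capture = [[(255, 0, 0), (0, 255, 0)],
           [(0, 0, 255), (255, 255, 255)]]
print(sample_point_colors([(0.0, 0.0, -5.0)], capture, 2, 2, 60.0))
# → [(255, 255, 255)]
```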

Btw, since Rhino draws the points in the capture, they have to be hidden before the capture. The script does this and also captures no grid or cplane axes. However, you need to hide everything else, e.g. the light. To see the resulting point colors better, I suggest hiding the surface edges and changing the point display for the rendered display mode as shown below:

The scene, with all points in white color looks like this:

After the script was running, it looks like this (point display is set to solid square):

_
c.


#6

@clement

Thanks

Yes, I can see what it’s doing. It’s looking at the texture that is applied to the surface. But for this to work for what I’m trying to do, I would have to texture the model and apply that as a texture to the object, which seems to defeat the purpose, because I’m pretty sure I can grab the texture color another (easier) way using Grasshopper’s built-in components.

devin


#7

@devin_jernigan,

no, it is not looking at any texture. It gets the color from the captured image’s xy coordinates, which are built from the 3d locations of the points. That’s why I created a light source in the rendered viewport, so you can see it’s actually a shaded color, not the constant texture or material color.

I’ve assigned the checker pattern just to make the results more visible. If you remove it you’ll capture just the shaded color, which includes highlights and shadows. Note that this method only works if nothing is between the point and the camera…
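That occlusion test can be sketched as a ray-cast from the camera toward the point, rejecting a sample when the nearest hit lies in between. Spheres stand in for arbitrary blocking geometry here (in Rhino you could shoot the ray with Intersection.RayShoot instead); this is an illustrative sketch, not the posted script:

```python
import math

def occluded(point, spheres, eps=1e-6):
    """True when one of `spheres` ((center, radius) pairs) blocks the
    straight line from a camera at the origin to `point`."""
    dist = math.sqrt(sum(c * c for c in point))
    d = tuple(c / dist for c in point)  # unit direction toward the point
    for (cx, cy, cz), r in spheres:
        oc = (-cx, -cy, -cz)            # origin minus sphere center
        b = sum(oc[i] * d[i] for i in range(3))
        disc = b * b - (sum(c * c for c in oc) - r * r)
        if disc < 0:
            continue                    # the ray misses this sphere
        t = -b - math.sqrt(disc)        # nearest intersection distance
        if eps < t < dist - eps:
            return True                 # hit lies between camera and point
    return False

# a sphere halfway along the line of sight blocks the point...
print(occluded((0.0, 0.0, -10.0), [((0.0, 0.0, -5.0), 1.0)]))   # → True
# ...but a sphere behind the point does not
print(occluded((0.0, 0.0, -10.0), [((0.0, 0.0, -15.0), 1.0)]))  # → False
```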
_
c.


#9

@clement

Can this be used in a Python component in Grasshopper?


#10

I guess so, if you’re able to port it into a Python component and handle the GH display, occlusion of the points, visibility in the view, and the view capturing.

_
c.