I have a mesh and I’m trying to collect an image dataset for training a machine learning model. I’m able to change the camera position to get different views of the mesh, and from those I can get RGB image data. Is it possible to obtain the depth data as an image as well? I have attached sample images of what a depth image can look like.
Wow, that was simple. Thanks!!
Maybe I’m misunderstanding the process, but how can you extract depth data from a 2D image, even if you change the camera angle? Don’t you need something like parallax?
I’m not extracting the depth image from the 2D image. To be specific, I’m training a CNN on RGB and depth data obtained from camera and depth sensors. I wanted to pre-train it on simulation data first to see how the network performs.
If you are asking about the simulation itself: the showzbuffer documentation says it maps the max-to-min distance range onto 1-to-0 as a grayscale image.
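For anyone who wants to reproduce that mapping by hand (e.g. to normalize a raw depth buffer before feeding it to a network), here is a minimal sketch with NumPy. The function name and the input array are my own illustration, not part of showzbuffer; it just applies the same max→1, min→0 linear rescaling the docs describe:

```python
import numpy as np

def zbuffer_to_grayscale(depth):
    """Rescale a raw per-pixel distance map so the maximum distance
    maps to 1.0 and the minimum maps to 0.0, matching the grayscale
    convention described in the showzbuffer docs.
    `depth` is a hypothetical 2D array of distances from the camera."""
    d = np.asarray(depth, dtype=np.float64)
    dmin, dmax = d.min(), d.max()
    if dmax == dmin:
        # Flat depth map: avoid division by zero, return all zeros
        return np.zeros_like(d)
    return (d - dmin) / (dmax - dmin)

# Tiny synthetic example
raw = np.array([[2.0, 4.0],
                [6.0, 8.0]])
gray = zbuffer_to_grayscale(raw)  # values lie in [0, 1]
```

Multiplying `gray` by 255 and casting to `uint8` gives an image you can save with any standard image library.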
Well, that clears that up!