I’m working on an interactive map project in Rhino. Rhino’s material system is an area I don’t fully master, which I think has led me to make some suboptimal choices.
Currently, all rendering is done with OpenGL (VAO, Texture, Shader, etc.).
It’s an image that is updated on the fly and applied to a simple rectangle.
This implementation has the advantage of being very fast; however, there’s no integration with Rhino materials. In other words, I can’t use it as a Rhino material.
I came across an example code that you wrote.
I was wondering about the TextureEvaluator.GetColor function and particularly when this function is called. Is it called on each render frame, and could it therefore, on the CPU side, replace the Shader texture + UV calculation, or is it used only once to load the image?
If I could create a material with a dynamic texture, that would be fantastic, but testing this requires a lot of changes in the code, so I’d prefer to ask you beforehand.
For proper integration with Rhino RenderMaterial you should write out image data to disk. This will ensure that materials also work with render engines like RhinoCycles.
The memory bitmap texture isn’t ideal: for instance, Raytraced/Rhino Render won’t work with it properly, and changing mapping settings in Rendered mode only partially works.
But writing the data out to disk and then updating the same image will work.
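A minimal sketch of that approach, assuming RhinoCommon’s Material.SetBitmapTexture (which takes a file name); the path, class, and method names are my own placeholders, not anything from the thread:

```csharp
using System.Drawing;   // Bitmap; on Mac you'd use a cross-platform image API
using Rhino;
using Rhino.DocObjects;

// Hypothetical sketch: write the composited map image to one fixed path,
// point a document material at that file, then overwrite the same file
// whenever the map content changes.
static class MapMaterialHelper
{
    const string TexturePath = @"C:\temp\map_tiles.png"; // assumed location

    public static int CreateMapMaterial(RhinoDoc doc, Bitmap image)
    {
        image.Save(TexturePath);                    // write once to disk
        var material = new Material { Name = "MapTiles" };
        material.SetBitmapTexture(TexturePath);     // file-based diffuse texture
        return doc.Materials.Add(material);         // index to assign to objects
    }

    public static void UpdateMapImage(RhinoDoc doc, Bitmap image)
    {
        image.Save(TexturePath);  // overwrite the same file in place...
        doc.Views.Redraw();       // ...so render engines referencing the file
                                  // pick up the updated content
    }
}
```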
Thank you for your response! You just saved me a lot of time!
If a rendered frame consists of 100 downloaded images, that would mean 100 disk writes and 100 material reloads per frame. I don’t think that’s a good solution, so I’ll stick with the OpenGL implementation.
One probably absurd question: are Rhino’s PBR shaders accessible? Can I retrieve their ID and assign the necessary attributes and uniforms so that the map texture responds to light, shadows, etc.?
Set Category="procedural-2d" and ImageBased=false in the CustomRenderContent attribute, and it will work in Raytraced/Rhino Render as well, even in Physically Based materials. Here the RenderTexture uses mapping channel 1 for the mapping.
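A skeleton of what such a texture class could look like, based on the attribute settings suggested here. The class name, GUID-free attribute usage, and the SampleTileBuffer helper are my own placeholders, and constructor/override signatures may differ slightly between RhinoCommon versions, so treat this as a sketch rather than a working implementation:

```csharp
using Rhino.Display;
using Rhino.Geometry;
using Rhino.Render;

// Hypothetical procedural-2d texture: not image-based, so Raytraced and
// Rhino Render evaluate/bake it rather than treating it as a file bitmap.
[CustomRenderContent(Category = "procedural-2d", ImageBased = false)]
public class MapTileTexture : RenderTexture
{
    public override string TypeName => "Map Tile Texture";
    public override string TypeDescription => "Streams downloaded map tiles";

    public override TextureEvaluator CreateEvaluator(TextureEvaluatorFlags flags)
    {
        return new MapTileEvaluator(flags);
    }

    private class MapTileEvaluator : TextureEvaluator
    {
        public MapTileEvaluator(TextureEvaluatorFlags flags) : base(flags) { }

        // Called by Rhino when it needs a sample of the texture,
        // for example while baking it for display or rendering.
        public override Color4f GetColor(Point3d uvw, Vector3d duvwdx, Vector3d duvwdy)
        {
            // Placeholder for a lookup into your in-memory tile cache,
            // using uvw.X / uvw.Y in [0,1].
            return SampleTileBuffer(uvw.X, uvw.Y);
        }

        private static Color4f SampleTileBuffer(double u, double v)
        {
            return new Color4f((float)u, (float)v, 0.0f, 1.0f); // stub
        }
    }
}
```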
I don’t see why you would do 100 times writing of images per frame. You would write the images only when actually updating.
Do you mean the OpenGL shaders? I would stay away from that. Doing direct OpenGL work will prevent the code from working on macOS. Besides, this is internal code that isn’t accessible through an API, so it can change without notice, probably breaking anything you’ve built up to that point.
You could probably just try the change to the gist with the suggestions from my previous post. Make sure Category is procedural-2d and ImageBased is set to false, and it should work.
I’m a bit confused about your example code on GitHub, nowhere do I see that you save the image to disk, so why would I need to do that?
I do indeed have a caching system in my code, which is actually a dictionary in memory. However, the images still need to be downloaded for the first time, and for the download phase, I’ve noticed several things:
First, you need to work in a small area for a long time, going through all the sectors and zoom levels, before all the images have been downloaded.
Sending all 100 image download tasks simultaneously, whether through asynchronous functions or in a separate thread, produced poor results. I didn’t fully understand why, but the simple and effective solution was to send the 100 downloads one after the other with a short delay between each request. This way I receive a continuous flow of images spaced a few milliseconds apart, rather than a batch of 100 images all at once.
I’ve also noticed that even on Google’s very responsive server, certain images occasionally hang.
For these reasons, I update the texture composed of 100 small images with glTexSubImage2D and call RhinoView.Redraw as soon as a tile is received. Otherwise there’s a real sense of waiting in front of a blank screen whenever the view moves by even one pixel.
And I’m still wondering: with the attributes [CustomRenderContent(Category="procedural-2d", ImageBased=false, ...)], will the TextureEvaluator.GetColor function be called on every render frame?
(Sorry, Rhino materials and I aren’t quite acquainted yet…)
I assume you are talking about Rendered mode. I am not 100% sure, but I believe Rhino will bake this automatically; this at least happens for Raytraced. That means the texture gets evaluated and baked once a change has been signalled, and the baked result is used after that, bypassing the memory bitmap’s texture evaluator. But again, don’t worry about when GetColor is used. Rhino should work just fine with a couple hundred image textures, especially when they are small.
You’ll have to make sure you override the RenderContent.CalculateRenderHash2 method to give a new hash value that is different from the hashes previously used by the texture instance. This ensures Rhino automatically re-bakes when content and settings change. Untested, but I’m pretty sure you’ll also have to bracket changes to your texture in BeginChange()/EndChange() so that the document (and thus Rhino) is properly notified of changes to your material, allowing Rendered mode and render engines that do realtime viewport updates to refresh.
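An untested sketch of those two pieces inside a hypothetical RenderTexture subclass; the class name and version counter are made up, and the exact CalculateRenderHash2 signature should be checked against the current RhinoCommon API:

```csharp
using Rhino.Render;

// Hypothetical: bump a version counter whenever the tile image changes,
// fold it into the render hash so Rhino re-bakes, and bracket the change
// in BeginChange/EndChange so the document is notified.
public class TileMapTexture : RenderTexture
{
    public override string TypeName => "Tile Map Texture";
    public override string TypeDescription => "Texture backed by map tiles";

    private uint _version;

    // Return a hash that changes whenever the texture content changes;
    // signature may differ between RhinoCommon versions.
    public override uint CalculateRenderHash2(CrcRenderHashFlags flags)
    {
        return base.CalculateRenderHash2(flags) ^ _version;
    }

    public void NotifyTilesChanged()
    {
        BeginChange(RenderContent.ChangeContexts.Program);
        _version++;   // any actual content edits would go here
        EndChange();
    }
}
```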
Instead of downloading 100 images at the same time, do a smaller number at a time, 3 to 5 or so. But indeed, doing it in one loop will be much easier (and less error-prone) to manage.
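One way to sketch the “3 to 5 at a time” suggestion in plain .NET: a SemaphoreSlim caps how many tile downloads run concurrently, and each tile is handed to a callback as soon as it arrives (with maxConcurrent = 1 this degenerates to the strictly sequential flow described earlier). The class, URLs, and callback are placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static class TileDownloader
{
    static readonly HttpClient Client = new HttpClient();

    public static async Task DownloadAsync(
        IEnumerable<string> tileUrls,
        Action<string, byte[]> onTileReady,   // e.g. update texture + redraw
        int maxConcurrent = 4)
    {
        using var gate = new SemaphoreSlim(maxConcurrent);
        var tasks = tileUrls.Select(async url =>
        {
            await gate.WaitAsync();           // at most maxConcurrent in flight
            try
            {
                var bytes = await Client.GetByteArrayAsync(url);
                onTileReady(url, bytes);      // fires as each tile arrives
            }
            finally
            {
                gate.Release();
            }
        });
        await Task.WhenAll(tasks);
    }
}
```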
OK, so if I understand correctly, the texture display will not appear in shaded view, but only in rendered mode?
Is there a way to work around this?
Yes, the fewer downloads occurring simultaneously, the better the performance. I tested it with 3 or 4 downloads at a time, which is good, but a single download at a time is even better (in terms of performance).
There are certainly multiple parts of the code to improve, like this one and the transformation matrix calculations from screen coordinates to GPS coordinates and from GPS coordinates to UV coordinates.
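For the GPS-to-UV part, assuming the tiles use the standard Web Mercator (“slippy map”) scheme that Google and OpenStreetMap use, the conversion is well known; the class and method names below are my own:

```csharp
using System;

// Standard Web Mercator tile math: longitude/latitude to fractional tile
// coordinates at a zoom level; the fractional part gives the UV inside
// the containing tile.
static class WebMercator
{
    public static (double X, double Y) LonLatToTile(double lonDeg, double latDeg, int zoom)
    {
        double n = Math.Pow(2.0, zoom);
        double x = (lonDeg + 180.0) / 360.0 * n;
        double latRad = latDeg * Math.PI / 180.0;
        double y = (1.0 - Math.Log(Math.Tan(latRad) + 1.0 / Math.Cos(latRad)) / Math.PI)
                   / 2.0 * n;
        return (x, y);
    }

    // UV inside the containing tile; V is flipped so v = 0 is the tile's
    // bottom edge, matching typical texture conventions.
    public static (double U, double V) TileUv(double x, double y)
    {
        double u = x - Math.Floor(x);
        double v = 1.0 - (y - Math.Floor(y));
        return (u, v);
    }
}
```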
But for now, the main thing is that the plugin is useful. My issue is that I don’t yet handle terrain with elevation (only flat surfaces), which is why I was asking all these questions.
Thank you again for your responses. I’ll test this and see how it goes…
Set the object display mode to Rendered. That is, for instance, how the _Picture command works: the material and plane get created, and the plane’s object display mode is set to Rendered.
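A sketch of that per-object override in RhinoCommon, assuming ObjectAttributes.SetDisplayModeOverride and DisplayModeDescription.FindByName behave as I remember; the helper class is hypothetical:

```csharp
using System;
using Rhino;
using Rhino.Display;

// Hypothetical helper: force one object (e.g. the map plane) to draw in
// Rendered mode while the rest of the viewport stays in Shaded mode.
static class DisplayModeHelper
{
    public static bool SetRenderedOverride(RhinoDoc doc, Guid objectId)
    {
        var rendered = DisplayModeDescription.FindByName("Rendered");
        if (rendered == null)
            return false;

        var obj = doc.Objects.FindId(objectId);
        if (obj == null)
            return false;

        // Applies the override in all viewports; an overload taking a
        // viewport id restricts it to a single view.
        obj.Attributes.SetDisplayModeOverride(rendered);
        obj.CommitChanges();
        doc.Views.Redraw();
        return true;
    }
}
```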
Also, if you stick to using RenderTexture/RenderMaterial, this should work on the Mac as well (assuming you’re not using any other Windows-specific APIs or doing OpenGL drawing).