The goal is to integrate a dynamic map, like OpenStreetMap or Google Maps, into Rhino.
I had several requirements: it needed to work all day without bothering the user, and it had to be usable in the perspective view, which meant fast display and smooth performance.
All images are downloaded directly from the servers, just like in a web browser, and then displayed on the screen.
Of course, it’s not a 3D Google Maps map, but such a map in Rhino would have many advantages during project modeling!
Most of us who do architectural projects would welcome such a tool. Many times I have to go to Google Maps, do screen grabs, edit them further in Photoshop, and then try to determine the scale; for me this would be a big time saver.
Are you going to release this tool for Rhino users? If so, hopefully a V7 version, or does it only work in V8?
Unfortunately, no, it’s currently an internal tool, and at this point, I’m not sure what its future will be.
What is certain is that it requires far too much effort to be a free plugin, but I’d be happy to explain how I did it to anyone interested.
Thank you for your comment! And yesterday, seeing how well it worked, I had a very similar thought: “How did I ever manage without it!?”
For my part, I got tired of having to minimize the Rhino window to view Google Maps in the browser, then maximize the Rhino window to continue modeling, so I invested in a second screen, Google on one side, Rhino on the other. I can’t wait to try the plugin and see the productivity boost it brings!
I’d be very much interested in knowing more about the logic behind this one. I’ve been following your development journey in other threads but a high level overview of the necessary steps would be very welcome.
Hello @mrhe!
I figured this would pique your curiosity! How’s the 3D sculpting going? From what I saw, things were progressing nicely and it looked very interactive!
Give me a bit of time to put together a proper response (this evening it’s getting late…).
GPS: EPSG:4326 with longitudes ranging from -180° to +180° (X-axis) and latitudes from -90° to +90° (Y-axis).
Mercator: EPSG:3857, the flat “Web Mercator” projection of those GPS coordinates, limited to latitudes between -85.06° and +85.06°. Used by Google Maps, OpenStreetMap, Bing, etc.
2D and 3D spaces are more familiar to us; 2D coordinates refer to screen space, and 3D coordinates refer to Rhino’s space.
Yeah, but what about tiles in all this?
Let’s simplify even more: OpenStreetMap provides ready-made conversion code (coordinates to tile numbers) for several languages; a small sketch of it follows below.
Mercator coordinates (-180°, +85.06°) correspond to tile (0, 0), regardless of the zoom level.
That makes 4 coordinate systems for the same point!!
Yes.
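To make the tile indexing concrete, here is a minimal C# sketch of that standard OSM “slippy map” conversion (the class and method names are mine, not the plugin’s):

```csharp
using System;

static class TileMath
{
    // Standard OSM conversion: GPS (EPSG:4326) -> tile indices.
    // At zoom z the world is covered by 2^z x 2^z tiles, and tile (0, 0) sits
    // at longitude -180°, latitude +85.06° (the Mercator cut-off).
    public static (int X, int Y) LonLatToTile(double lonDeg, double latDeg, int zoom)
    {
        double latRad = latDeg * Math.PI / 180.0;
        double n = Math.Pow(2.0, zoom);

        int x = (int)Math.Floor((lonDeg + 180.0) / 360.0 * n);
        int y = (int)Math.Floor(
            (1.0 - Math.Log(Math.Tan(latRad) + 1.0 / Math.Cos(latRad)) / Math.PI) / 2.0 * n);

        return (x, y);
    }
}

// e.g. TileMath.LonLatToTile(2.3522, 48.8566, 12) -> roughly tile (2074, 1409), central Paris
```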
To visualize the pixel and Mercator coordinates, and to make some operations easier later, I placed everything in 3D space using Rhino geometry objects (point, rectangle, polyline, …).
The project outline in red after applying the World-To-Screen transformation
And the Mercator outline in purple, very small near the origin and close to the Y-axis.
The dimensions and positions don’t correspond well to each other. The project appears at the bottom of the screen, while in the visualization it appears at the top; the Mercator coordinates represent several kilometers, while my project is only a few centimeters.
But none of this matters, because now that we have 3D outlines, we can create NURBS surfaces and use the Surface.ClosestPoint and Surface.PointAt functions to move from one coordinate system to another.
For example, if I have a Mercator coordinate and want to know the corresponding screen coordinate, I can simply call mercatorPlane.ClosestPoint(pointM, out var u, out var v) and then viewportPlane.PointAt(u, v).
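As a sketch of that trick (with hypothetical names, and with the surface domains normalized so the parameters transfer cleanly even if the two surfaces are parameterized differently):

```csharp
using Rhino.Geometry;

static class CoordinateBridge
{
    // Transfer a point from one reference surface to another by reusing its
    // normalized UV parameters; the two surfaces represent the same rectangular
    // extent, each in its own coordinate space.
    public static Point3d MapBetweenSurfaces(Surface from, Surface to, Point3d point)
    {
        if (!from.ClosestPoint(point, out double u, out double v))
            return Point3d.Unset;

        double nu = from.Domain(0).NormalizedParameterAt(u);
        double nv = from.Domain(1).NormalizedParameterAt(v);

        return to.PointAt(to.Domain(0).ParameterAt(nu),
                          to.Domain(1).ParameterAt(nv));
    }
}

// e.g. MapBetweenSurfaces(mercatorSurface, viewportSurface, mercatorPoint)
// to go from Mercator space to screen space.
```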
So this is one of the trickier parts: the goal is to determine how many 256-pixel tiles to display on the screen based on the camera’s position and rotation.
But what’s great about our 3D objects, even if they don’t seem entirely related to each other, is that we can leverage Rhino’s API to perform calculations without getting bogged down in too much math.
For instance, we can easily perform a Boolean operation to find the visible area on the screen in 2D coordinates, then transfer those coordinates into 3D space. Or we can determine the 2D bounding box of the project (in pink).
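A rough illustration of those two operations with RhinoCommon, assuming the outlines are closed planar curves (the variable names and tolerance are placeholders, not the plugin’s actual code):

```csharp
using Rhino.Geometry;

static class ScreenSpaceHelpers
{
    // screenRect     : closed planar curve of the viewport rectangle (screen space laid out in 3D)
    // projectOutline : closed planar curve of the project outline after World-To-Screen
    public static (Curve[] VisibleRegion, BoundingBox ScreenBounds) VisibleProjectArea(
        Curve screenRect, Curve projectOutline, double tolerance)
    {
        // Boolean intersection of the two closed regions = the part of the project visible on screen.
        Curve[] visible = Curve.CreateBooleanIntersection(screenRect, projectOutline, tolerance);

        // Axis-aligned bounding box of the outline in screen space (the pink rectangle).
        BoundingBox bounds = projectOutline.GetBoundingBox(true);

        return (visible, bounds);
    }
}
```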
For reference, each tile is subdivided into 4 at the next zoom level, so the grid goes 2×2, 4×4, 8×8, 16×16, 32×32, …, 2^z × 2^z tiles at zoom level z.
So for a dimension of 7680 pixels, we need 30 tiles of 256 pixels (7680 px / 256 px), which rounds up to a zoom level of 5 (log2(30) ≈ 4.9).
(log2(x) is the inverse of pow(2, x).)
However, be mindful of perspective and camera tilt: always check the number of tiles calculated with OpenStreetMap’s functions against the viewport size. If you zoom in very close to a corner of your map and tilt the camera until it is almost horizontal, the entire diagonal of the map becomes visible on screen. Without this check, the tool would try to download the entire map at that zoom level, resulting in thousands of images.
In this case, I simply increase the zoom level again:
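For illustration, here is one possible way to implement that safeguard. It is my own simplification, with made-up names and thresholds, and it steps toward a coarser OSM level when the tile count explodes (the plugin may adjust the level differently):

```csharp
using System;

static class ZoomPicker
{
    const int TileSize = 256;

    // Pick an OSM zoom level from the on-screen width of the full Mercator map
    // (e.g. 7680 px -> ceil(log2(30)) = 5), then coarsen it while the visible
    // area would still need an unreasonable number of tiles (tilted-camera case).
    public static int PickZoom(double worldWidthOnScreenPx,
                               double visibleWorldFraction, // 0..1 share of the map in view
                               int maxTiles = 64, int maxZoom = 19)
    {
        int zoom = (int)Math.Ceiling(Math.Log(worldWidthOnScreenPx / TileSize, 2.0));
        zoom = Math.Max(0, Math.Min(zoom, maxZoom));

        // 4^z tiles cover the whole map; keep stepping to a coarser level
        // until the visible share fits under the cap.
        while (zoom > 0 && Math.Pow(4.0, zoom) * visibleWorldFraction > maxTiles)
            zoom--;

        return zoom;
    }
}
```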
Alright, so we have coordinates, outlines, a zoom level, and images now?
Well, we need to download them on the fly for the first time, and this is the second most demanding part.
I’ve already provided several hints in this post:
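Beyond those hints, here is a rough sketch of on-the-fly downloading, using the standard OSM tile URL with a simple disk cache (the class name, User-Agent, and cache layout are made up, not the plugin’s):

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class TileDownloader
{
    static readonly HttpClient Client = CreateClient();

    static HttpClient CreateClient()
    {
        var client = new HttpClient();
        // OSM's tile usage policy asks for an identifying User-Agent.
        client.DefaultRequestHeaders.UserAgent.ParseAdd("MyRhinoMapTool/0.1");
        return client;
    }

    // Fetch one tile, keeping a copy on disk so it is only downloaded once.
    public async Task<byte[]> GetTileAsync(int zoom, int x, int y, string cacheDir)
    {
        string path = Path.Combine(cacheDir, $"{zoom}_{x}_{y}.png");
        if (File.Exists(path))
            return File.ReadAllBytes(path);

        // Standard OSM tile URL scheme: /{z}/{x}/{y}.png
        string url = $"https://tile.openstreetmap.org/{zoom}/{x}/{y}.png";
        byte[] data = await Client.GetByteArrayAsync(url);

        Directory.CreateDirectory(cacheDir);
        File.WriteAllBytes(path, data);
        return data;
    }
}
```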
Coordinates, outlines, a zoom level, images… and perhaps a display?
I create an image at the full resolution of my screen, which represents the maximum size my screen can display, and I draw the downloaded tiles onto this image.
Then, I apply this image to the rectangular surface of the project.
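As an illustration of that compositing step, done here with plain System.Drawing (the actual tool reportedly goes through lower-level display APIs, so treat this as a conceptual sketch only):

```csharp
using System.Collections.Generic;
using System.Drawing;

static class TileCompositor
{
    const int TileSize = 256;

    // Draw the downloaded 256 px tiles into one screen-sized bitmap.
    // Keys are (column, row) indices relative to the top-left visible tile.
    public static Bitmap Compose(int screenWidth, int screenHeight,
                                 IReadOnlyDictionary<(int Col, int Row), Image> tiles)
    {
        var canvas = new Bitmap(screenWidth, screenHeight);
        using (var g = Graphics.FromImage(canvas))
        {
            g.Clear(Color.Gray); // placeholder for tiles that haven't arrived yet

            foreach (var kv in tiles)
                g.DrawImage(kv.Value,
                            kv.Key.Col * TileSize, kv.Key.Row * TileSize,
                            TileSize, TileSize);
        }
        return canvas;
    }
}
```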
Thanks to OpenStreetMap’s functions and our awesome 3D objects mentioned above, we can easily identify the tiles that encompass the visible area on the screen. Then, the texture’s UVs need to be realigned to fit the viewport.
In the video below, you can see the green outline of the tiles encompassing the visible part of the screen. I then demonstrate how the UVs are transformed to match the camera’s view.
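For the UV realignment, the idea is to express the visible rectangle as a fraction of the extent covered by the composited tile image. A hedged sketch of that normalization (the names are mine, and V is flipped because tile row 0 is at the top):

```csharp
static class UvMapper
{
    // Express a rectangle (the visible area, in Mercator or pixel coordinates)
    // as 0..1 texture coordinates of the tile mosaic that contains it.
    // The mosaic* values describe the extent covered by the composited tile image.
    public static (double U0, double V0, double U1, double V1) ToMosaicUV(
        double minX, double minY, double maxX, double maxY,
        double mosaicMinX, double mosaicMinY, double mosaicMaxX, double mosaicMaxY)
    {
        double w = mosaicMaxX - mosaicMinX;
        double h = mosaicMaxY - mosaicMinY;

        double u0 = (minX - mosaicMinX) / w;
        double u1 = (maxX - mosaicMinX) / w;

        // Flip V: tile row 0 is at the top of the image (max Y in Mercator).
        double v0 = (mosaicMaxY - maxY) / h;
        double v1 = (mosaicMaxY - minY) / h;

        return (u0, v0, u1, v1);
    }
}
```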
Alright, I think I’ve laid out the implementation details quite thoroughly, so I’ll leave it here.
Not everything is perfect; for instance, I’d like to be able to display smaller tiles when they’re close to the camera and larger tiles for those further away. If anyone has any ideas, any suggestions would be appreciated!
Got it. I’m working on a different project where the in-memory bitmap implementation could come in handy. I’ll share my progress on the forum when I find the time to wrap my head around it.
A while ago I developed a stand-alone Python program to grab satellite imagery and save it as an image, which I would then import into Rhino. It’s easy to import because the software tells you the “real-life” size of the satellite composite (in km), so you just have to scale it in Rhino. It saves you the hassle you’re describing. I still use it regularly.
I never had the time to turn it into a proper Rhino plugin, but it might still be useful for you.
It’s in French but very straightforward: CarreleurSat.
The software uses very similar techniques to those described by @kitjmv, just directly in Python. The source (warning: ugly code ahead) is in the link for those interested.
This is great work. There’s definitely scope for a very useful plugin here. Being able to just zoom and pan and then create geometry on top of the image, or a mesh from something like Gismo, would be brilliant!
If you could just add some AI to turn buildings, roads and other features into 3d geometry then we’re all done here!
It’s really fun to use in Rhino, and I’ve had so many ideas… AI! I thought of that too; it would be amazing! I was also looking into how to import vector tiles, and I was reading Google’s documentation on its services for downloading 3D tiles.
The project got off to a rough start when I realized that Rhino couldn’t display textures generated in real-time. This led me to use low-level APIs to keep the tool responsive, which made the project significantly more complex. I originally thought it would take me a week; it’s now approaching three weeks…
For my work, an interactive map like this will be a great help, but it will stay a visual aid, and for all these reasons, I realized the tool needs to remain simple. There’s no use trying to make Rhino into something it’s not.
(@3dsynergy) If you haven’t already, I strongly recommend learning to use QGIS; it’s amazing software. You can do countless things: connect to 2D, vector, and 3D tile services, filter, clean up, and then export to Rhino to start your project.
Still, the next step is to make it compatible with existing Grasshopper tools like Heron or Gismo…
Hi @crtn-hrd
Very cool, thanks for providing this Python software. I’m currently working in the field, but I’ll check it out when I’m done with my current work. Sorry for the tardy reply. Can I run it in the Rhino 7 Python editor?
RM
No, no, it’s a standalone Python program for Windows. It has its own interface.
The Python source is there and could be adapted for use directly in Rhino but that would be a lot of work.