Pointclouds with 30 billion points can be imported and viewed in Rhino, but not everything works as I would like

@dale, @stevebaer,

I imported 33 pointclouds created by Lidar scans of a street and combined them into one cloud with 465.8M points. If I start rotating the view slowly, the response gradually picks up speed while still showing the whole cloud, but after enough rotation the view shifts to showing only the bounding box, which allows more rapid movement. Once Rhino has shifted to the bounding-box view, it immediately jumps back to it on every subsequent rotation, no matter how slowly I start.

Is there a way to turn the shift to bounding-box view on or off? Rhino already works reasonably well showing the whole cloud, and it would sometimes be faster to just stay with the full view rather than shifting back and forth.

Also, is there a known limit to the size of a pointcloud in Rhino? This nearly half-billion-point cloud consumes 31 GB of memory, while my machine has 128 GB available. During import, peak memory consumption reaches about 42 GB while Rhino copies the cloud geometry into the Rhino document. At those rates, my machine would be memory-limited to roughly a 1.5-billion-point cloud if only DRAM is used to maintain good performance. Or do the ON_SimpleArrays used to store the cloud data impose a smaller limit?
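For the record, here is the arithmetic behind that estimate (a quick sketch using only the numbers above; the per-point costs are inferred from those memory readings, not from Rhino's internals):

```python
# Back-of-envelope capacity estimate from the figures above.
pts      = 465.8e6         # points in the combined cloud
resident = 31 * 2**30      # bytes once the cloud is in the document
peak     = 42 * 2**30      # bytes at the import peak
ram      = 128 * 2**30     # installed DRAM

print(resident / pts)            # ~71 bytes per resident point
print(peak / pts)                # ~97 bytes per point at the import peak
print(ram / (peak / pts) / 1e9)  # ~1.4 billion points before the peak exceeds DRAM
```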

Here is a colored view of the nearly half-billion-point cloud:

It is a Lidar scan of a street (look closely and you can see some cars in yellow) with mm accuracy.

I am impressed that Rhino would let me import such a large cloud. I used my own script, posted in

to do the import in less than a minute from files in Points format. Caution: importing this cloud with Rhino's Import tool takes over an hour. However, Rhino imports e57-format files much faster and would need only a few minutes for this cloud. Something to keep in mind if you import large clouds and do not use my script: use e57 format instead of Points (.pts, .xyz, etc.).
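For anyone curious, the core of the fast path is nothing exotic. Here is a minimal sketch of the idea (not my actual script, which reads the file in large chunks for speed; a plain-text `x y z r g b` line layout is assumed):

```python
import System.Drawing as sd
import Rhino
import scriptcontext as sc

def import_pts(path):
    # Build the cloud outside the document, then add it once at the end.
    # Assumes each line is "x y z r g b"; adjust the parsing to your files.
    cloud = Rhino.Geometry.PointCloud()
    with open(path) as f:
        for line in f:
            v = line.split()
            if len(v) < 6:
                continue
            pt = Rhino.Geometry.Point3d(float(v[0]), float(v[1]), float(v[2]))
            cloud.Add(pt, sd.Color.FromArgb(int(v[3]), int(v[4]), int(v[5])))
    cloud_id = sc.doc.Objects.AddPointCloud(cloud)  # one document object, one redraw
    sc.doc.Views.Redraw()
    return cloud_id
```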

My script also allows the cloud to be decimated, down to almost nothing, during import so that peak memory use is reduced. This lets you import and view pointclouds limited in size only by your disk space. For example, a 1 TB file in Points format holding only point coordinates can contain a 30-billion-point cloud. Decimating by 300 during import (a simple stride, sketched below) yields a 100-million-point view of that 30-billion-point cloud on a computer with only 16 GB of memory, in about 7 minutes from an M.2 PCIe 3.0 x4 drive. To show what this looks like, the 466M-point cloud above was decimated by 5 to create this cloud with 93M points:


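The decimation itself is just a stride over the input lines; grafted onto the import sketch above, it looks like this (`skip` stands in for the script's decimation factor):

```python
def import_pts_decimated(path, skip=5):
    # Keep every skip-th point, so peak memory scales roughly as 1/skip;
    # skip=300 turns a 30-billion-point file into a ~100M-point view.
    cloud = Rhino.Geometry.PointCloud()
    with open(path) as f:
        for i, line in enumerate(f):
            if i % skip:
                continue
            v = line.split()
            if len(v) < 6:
                continue
            pt = Rhino.Geometry.Point3d(float(v[0]), float(v[1]), float(v[2]))
            cloud.Add(pt, sd.Color.FromArgb(int(v[3]), int(v[4]), int(v[5])))
    return sc.doc.Objects.AddPointCloud(cloud)
```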
So it seems that reviewing 30-billion-point clouds with a 100-million-point view could be useful. If only the rotation of large clouds had an option not to switch to the bBox view, I would be happier.

Regards,
Terry.


You’re probably better off keeping 33 separate point clouds and grouping them together. That one mega point cloud requires a single linear array of data of that massive size to be available in memory, both on your CPU and on your GPU. Chunks of memory that big are hard to come by.
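Something along these lines, for example (a rough sketch with rhinoscriptsyntax; the folder path is hypothetical and `import_pts` is the per-file import sketched earlier in the thread):

```python
import glob
import rhinoscriptsyntax as rs

# One point cloud object per scan file, then group them so they still
# select and transform as a unit -- without one giant linear buffer.
ids = [import_pts(path) for path in glob.glob(r"C:\scans\*.pts")]
group = rs.AddGroup("street_scans")
rs.AddObjectsToGroup(ids, group)
```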

Steve,

Good point. I like to join them so they work easily with my other tools for making elevation maps with contours and slope maps that show both slope and slope direction. These use smoothing to create better views, and this is more complicated and slower with separate clouds.

In terms of memory storage, I would think that the cloud data is kept in ON_SimpleArrays even after being copied to the Rhino document. If so, then it is broken up into as many as four separate arrays for points, colors, intensity and normals, so the points and normals arrays would be the hardest to “outfit” with linear chunks of memory. So far I have not seen many problems with my “little” half-billion-point clouds; it is just that the view insists on popping to the bBox-only view at the drop of a hat while Rhino is still performing quite decently on my machine when displaying the entire cloud (actually some of the points are dropped when rotating, but not enough to matter).
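The separate channels are at least visible from RhinoCommon, which is consistent with that guess (a quick check, not proof of the internal layout; `cloud_id` is assumed to be the cloud object's id):

```python
import rhinoscriptsyntax as rs

cloud = rs.coercegeometry(cloud_id)   # Rhino.Geometry.PointCloud
print(cloud.Count)                    # number of points
pts  = cloud.GetPoints()              # Point3d[]
cols = cloud.GetColors() if cloud.ContainsColors else None    # Color[]
nrms = cloud.GetNormals() if cloud.ContainsNormals else None  # Vector3d[]
```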

But you are right, it does take a lot of memory to enable billion-point clouds. This is why I included the decimation option when importing: it lets you get a big cloud into Rhino for basic inspection and measurements no matter the size of your machine. The Lidar-scanning guys really need help in this regard because they generate so much data; a Leica RTC360 captures 2M points/sec, or 0.12 billion colored points per minute.
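Put against the memory figures above, that scan rate is sobering (rough math; the ~71 bytes/point is the resident cost inferred earlier, not a specification):

```python
rate      = 2e6 * 60   # RTC360: ~120M points per minute of scanning
per_point = 71         # approximate resident bytes per point in Rhino (from above)
print(rate * per_point / 2**30)   # ~7.9 GB of Rhino memory per minute scanned
```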

Regards,
Terry.