Can point clouds be merged? Or anything else to make them lighter?

Hello Team,
I am dealing with a heavy set of separate scanned point clouds.
Is there a way to merge them into one lighter point cloud?
Or is there a display mode that makes this a bit lighter?
It seems to crash my computer even when moving it closer to 0,0,0.
Interestingly, it works better when keeping the model in meters; perhaps there is less memory allocation with meters?
Many thanks,

Without the file it is hard to tell, i.e. the number of clouds and the number of points.

Maybe you can preprocess the point clouds in CloudCompare before importing into Rhino? CC can import up to 10 billion points; Rhino is probably not as optimized for point cloud display. Alternatively, you could load the point cloud from the file into the display yourself via a C# or C++ Rhino plugin with a file reader, e.g. taking every n-th point.


Thanks a lot Petras, what is CC? Cloud Compare?
Here is the mad point cloud, and an edited-down version with fewer points in R7:

Do you have the 5551-House.e57 file in .xyz format? My point cloud script cannot read your e57 file. If I can get my script to read your file, then it would be easy to modify it to drop points and make a lighter point cloud.

I have installed CloudCompare and will try to convert it from e57 to xyz and if successful, load it into Rhino with my script.



Worked for me like this:

  1. Open Rhino 7 or 6 and drag-and-drop the .e57 file.
  2. Do not breathe, do not move the mouse or change the viewport; with a lot of patience, wait for every command to execute, even selection.
  3. Merge the point clouds; I used the PointCloud command (a standard Rhino command).
  4. Downsample the point cloud; I used Cockroach, which we just published recently. Here is your point cloud at 1,000,000 pts:
  5. Delete the big cloud and save the file. (Since Rhino does not actually delete geometry, I suggest closing Rhino and opening the downsampled file to work with it.)
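The merge-then-downsample workflow above can be sketched in plain Python. The voxel-grid thinning (keeping one surviving point per grid cell) is the same idea Cockroach's downsampling uses; the 0.05 cell size is just an illustrative value:

```python
def merge_clouds(clouds):
    """Merge several point lists into one cloud."""
    merged = []
    for cloud in clouds:
        merged.extend(cloud)
    return merged

def voxel_downsample(points, cell=0.05):
    """Keep the first point found in each cell of a regular 3D grid."""
    seen = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in seen:
            seen[key] = (x, y, z)
    return list(seen.values())
```

Unlike taking every n-th point, a voxel grid guarantees a roughly uniform spatial density regardless of how the scans overlap.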

Even CloudCompare (CC) does not want to rotate or pan with this massive cloud :wink:
Just downsample. Yes, it is a massive pointcloud, and your graphics card will quickly crash Rhino because Rhino is not optimized for such huge files (5 GB), let alone post-processing them.



I converted your e57 file to an xyz file using cloud compare and imported it into Rhino using my Python/C++ script to get this 9.7M pointcloud:

This cloud is somewhat far from the origin, at 534,014 in X and 186,127 in Y, so this may impact the resolution. It looks different than in CloudCompare; I think this is because only 1 scan was exported to the xyz file. I will try getting CloudCompare to export all the scans.

If you could directly generate the ASCII .xyz file, I could try importing that with my script and see if it looks any better. If I can get a good result, then I will post my script and provide a link to the DLL it uses so you can do this yourself.

After learning how to better drive CloudCompare (this is my first time using it), I was able to export all 32 scans of the point cloud to 32 .xyz files and load them one at a time into Rhino without crashing. With all 32 displayed, I can rotate the view quickly with no lag. Below are some closeups in the perspective view with some of the scans selected so it is not so black.

It takes just 0.1 sec to add each of the .xyz files from the 32 scans to the Rhino document. Rhino is using 6.2 GB of memory after loading all the files. Would you like me to clean up the script and post it for your use?

I should mention that my system is not ordinary: 128 GB of 3200 MHz DDR4, an Nvidia GTX 1080 Ti video card and an 18-core Intel i9 9980XE CPU. So I am not sure how well this will work on your system, but on mine this seems to be a piece of cake for Rhino. Perhaps using 32 separate point clouds contributes to the good performance. To check, I used Rhino’s PointCloud command to merge all the clouds together. Now there is a tiny bit of lag as I rotate the view, but no crashes. Thus for lower-performance computers, downsampling the pointcloud could make sense. I can change my script to do this. I think I will do it mostly on the larger files, as the small ones are already sparse. The whole pointcloud has 238M points, and first I will try a 3X reduction to 80M points.



WOW, that’s amazing!!! Thank you so much Petras, I really appreciate it and shall not breathe while reproducing this. Did you use the Point Cloud tool’s “Add” option? I can’t seem to merge the points like that, as it adds one point cloud to another instead of merging all of them together.

WOW, thanks a lot @Terry_Chappell! Would you be able to share the output of your script? Did you also use Add with the PointCloud tool of R7? This didn’t seem to work on my side, as it individually selects point clouds.

My system is a bit more ordinary:

I have finished the first pass of the Python/C++ script. In one step it reads the xyz files from all 32 scans, creates 32 pointclouds and then merges them into one. It can skip points while reading the clouds in order to reduce the size of the final point cloud. It takes less than a minute to create the final 238,527,757-point pointcloud. I tried using Rhino Import to read just 1 of the 32 files; this took 75 sec, so about 40 minutes to read all 32. That is for xyz-format files. Perhaps it would be faster with e57 format, but probably not as fast as 1 minute for all files.

Note that my script only runs at this speed when used with the C++ DLL, created in Visual Studio 2017 for use on a Windows machine, which your listing shows you have. Thus once I finish the cleanup, I will post the Python script, the C++ code and a link to the DLL file so you can download it (the Rhino Forum does not allow posting of DLL files, so I have to use a link for it). The C++ is included only so you can see what happens on that side of the script; you do not need to do anything with the C++ code, as it is already compiled into the DLL. Once the DLL is downloaded, put it somewhere on your machine and then put the full path to that location at the top of the Python script. On my machine the top of the Python script looks like this:

# If you want to use the DLL for 5X faster cloud import, set its path here:
dll_name = r'C:\Users\Terry\source\repos\ReadObjMakeMesh\x64\Release\ReadObjMakeMesh.dll'

You need to change this to your location. Note that the path starts with = r'C:\Users, where the r in Python means a raw string, which lets you type the path without doubled backslashes \\ for the separator.
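For illustration, here is the raw string compared to its escaped equivalent, and roughly how a script would load such a DLL with ctypes (the load itself is commented out, since the file only exists on a Windows machine with the DLL in place):

```python
import ctypes  # needed only when actually loading the DLL

# Raw string: backslashes are literal, so no doubling is needed.
dll_name = r'C:\Users\Terry\source\repos\ReadObjMakeMesh\x64\Release\ReadObjMakeMesh.dll'
assert dll_name == ('C:\\Users\\Terry\\source\\repos\\ReadObjMakeMesh'
                    '\\x64\\Release\\ReadObjMakeMesh.dll')

# Loading would then look like this:
# cloud_dll = ctypes.CDLL(dll_name)
```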

The Python script also contains a cloud-import procedure written in Python only. Normally this can be used when you do not have the DLL, but since I spent all my time fixing up the Python/C++ version, it is not up to date and will not work for your case. Eventually I may bring it up to parity, but to get the code out to you faster, I will do that later.

Soon I should have something ready for you to test. I do not understand why you want one giant cloud in Rhino, as I can hardly make sense of all its black dots. Currently I delete the 32 individual point clouds after the giant cloud is made. Would you like me to keep them instead and put them on a different layer?



32 individual point clouds will have a better chance of working in Rhino than one mega-merged point cloud. Maybe grouping these will be enough.


You can select multiple clouds when you click add.



The first pass of the script is done and posted below. When using it to make 32 point clouds and then merge them into one, I noticed the peak memory usage can be over 20 GB when the full 238M-point pointcloud is created. This comes down to 12 GB or so when a Skip factor of 2 is used on the larger files (>200 MB); this makes a 122M-point pointcloud. If you run the script more than once, the amount of memory Rhino uses increases. So if you want to make the full 238M-point pointcloud, you will need to start a fresh Rhino session each time. With Skip = 2, you can probably run it several times in the same session. The Skip factor is near the top of the Python script; increase it from 1 to make a lower-count pointcloud. The Skip factor is the number of lines read from the .xyz file for each point added to the point cloud.

Currently the lines are skipped in a linear fashion, and this may not be best if the data has some repeating structure. With a higher Skip factor of 3, it seems like a lot of the scanned data drops out in the same area. I might try randomly skipping lines to see if this provides a more uniform depopulation of the scan data.
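The two decimation strategies can be sketched side by side; the skip rate and seed are illustrative. Keeping each point with probability 1/skip avoids aliasing against any repeating structure in the scan order:

```python
import random

def decimate_linear(points, skip):
    """Keep every skip-th point (can alias with structured scan data)."""
    return points[::skip]

def decimate_random(points, skip, seed=42):
    """Keep each point with probability 1/skip for a uniform thinning."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() < 1.0 / skip]
```

Both reduce the count by roughly the same factor; only the spatial pattern of the surviving points differs.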

The way I use the script is:

  1. Install CloudCompare, open your e57 file and patiently wait for it to read in. Then click the top line in the DB Tree so it is highlighted. Now click the save icon (it looks like a diskette, 2nd from left in the row of icons); for the save location choose a new directory, and for the file type choose the option that includes xyz.
  2. After installing the XYZRGB_Import_Fastest Python script in a directory like: C:\Users\Terry\AppData\Roaming\McNeel\Rhinoceros\7.0\scripts\Import_Point_CloudDLL
    and putting the DLL in a known location, type in the Rhino Command: window:
    This will bring up a file-selection window where you should select the XYZRGB_Import_Fastest Python script. If this file does not show right away, navigate to where you put it. Change the 2nd line of the script to the path where you put the DLL and save the file.
  3. In the Python Editor, push the F5 key to run the script, or on the Rhino Command: line enter:
    and find the XYZ_Import_Fastest script. Then double-click on it to run it. It will bring up a file-select window where you need to navigate to the new directory with the xyz files. Go inside the directory and double-click on the first file. The script will automatically load all the other files and create a single point cloud after you wait for a minute or so (15 sec on my machine). Do not push any keys while waiting.
  4. Look in the Rhino Command: window for information about the progress of the script and to see when it is done. The last line it prints with the Skip factor (on 4th line of Python script) set to 6 is:
    Combined 12.642 GB of pointclouds into one colored point cloud with 39,754,839 points in 1.3663 sec at 9.252 GB/sec
  5. Your pointcloud should appear in the Perspective view, centered and filling the view.
  6. If you have problems, let me know. If you see error messages from the script, send me a copy.

We are still interested in non-secret details of how you want to use this point cloud in Rhino.

The Python script: (32.8 KB)

The C++ DLL used by the Python script: (NOTE: This DLL also supports pointclouds)
ReadObjMakeMesh.cpp (448.3 KB)

The listing of the C++ procedures compiled into the C++ DLL (for reference only):
ReadXYZRGB_Make_Cloud.cpp (20.8 KB)
I downloaded the first 2 files and tested that they work on my machine so I hope they will work on yours.

I added a green-gradient coloring to the points of the pointcloud to make it easier to see:

and here is an elevation map using a rainbow gradient:

compared to the all black view:

Is this useful to you? I added the coloring capability to the Python script and C++ DLL above. The new options for coloring are at the top of the Python script. Let me know if it works for you.



This is absolutely amazing. Thank you SO MUCH Terry.


I added the coloring capability to the Python script and C++ DLL posted above. Please give it a try.

Also I added the capability for importing only 1 of the pointclouds at a time. When similar files are detected in a directory, a popup will ask if you want to combine all the files. If you choose no, then only the file you clicked on will be imported and colored (if add_colors = True at the top of the Python script and there are no colors in the .xyz file).

On my machine, importing the 12.62 GB of ASCII data in your 32 scan files with .xyz format and making one 238M-point colored pointcloud with 6.7 GB of data takes under 15 sec. Using Rhino Import on these .xyz files would take 42 minutes, and there would be no density reduction or colors added. So the script provides about a 170X speedup when working with .xyz (or .txt) files. It should be noted that Rhino Import works much faster with e57-format files, which is the original format of your scan data; Rhino imports the entire 32-scan, 3.7 GB file in 70 sec, which is less than 5X slower than my script with .xyz files, but again there is no option for combining clouds, reducing density while importing, or adding colors.

Below I present you with yet another option: Elevation contours with 1 meter spacing aligned with the Z of your scans:

Is this something that would help you? I have updated the Python script with this option. To use it, set show_contours = True at the top of the Python script.
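One way such contour slices could be extracted from a cloud, sketched in plain Python: keep the points whose Z lies within a small tolerance of a multiple of the contour spacing (the 1 m spacing matches the image above; the tolerance value is an assumption):

```python
def contour_points(points, spacing=1.0, tol=0.02):
    """Return points lying within tol of a multiple of spacing in Z."""
    out = []
    for x, y, z in points:
        # Distance from z to the nearest contour level.
        d = abs(z - round(z / spacing) * spacing)
        if d <= tol:
            out.append((x, y, z))
    return out
```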

Just so you know, I created all of the images for the merged pointcloud with a Skip value of 2, which results in a 122M-point pointcloud. The full 238M-point colored pointcloud is harder for Rhino to display, so even on my machine I prefer to work with this lighter pointcloud. I also tried Skip = 3, which gives 83M points, and the results look similar:

Even with Skip = 6, which gives 44.5M points, the result is not awful:

So I am hopeful you can find a setting that works well on your machine. It should be noted that on my machine, with the 6.7 GB of data for the full 238M-point colored pointcloud loaded, I can still reasonably rotate the view and zoom in/out without Rhino crashing even once over the past 2 days. But it is more fun to work with the lighter 44.5M-pt and 83M-pt clouds.

@Arthur_Mamou_Ma, I have further minimized the peak memory usage to around 10 GB with Skip = 2 by immediately deleting each of the 32 pointclouds as they are merged into one. The updated DLL is in my post above. Hopefully this will work even better on your machine.




I thought you might find this section of the colored pointcloud interesting. It shows the road running over the bridge, with the colors adjusted to highlight small elevation differences. On the left you can see the center of the road is crowned to encourage water runoff to the gutters on the sides of the road (the gutters are next to the cars parked on either side). On the right you can see that the bridge over the stream has a 2D crown centered at about the middle of the bridge, so the water runs down to the gutters on its sides but also down to the crowned roadway and then to its gutters. Also, the pedestrian walkway at the bottom has a noticeable crown away from the drop-off into the stream and towards the gutters.

After seeing this detail, I am thinking about making a separate Eto-form-driven script that lets you easily tweak the colors like this, with a legend showing the elevation for each color. Combined with measurements in the Top view, you could then evaluate the slope in the different areas to see if they meet specs. A better way to check the slopes is with a slope map, but that is not something I have sitting on the shelf for a pointcloud. If the pointcloud were converted to a mesh, I already have tools that do both of these things (elevation map and slope map). So maybe before I re-develop these tools for a pointcloud, I will look at converting the pointcloud to a mesh. I already tried this on a tiny 1M-pt part of your cloud with Rhino’s points-to-cloud command, but after an hour it had not finished, so I killed it.

I also tried Metashape, which took 40 min to read in the 238M-pt pointcloud but will not generate a mesh because there are too many open regions. I still want the capability of easily tweaking the pointcloud colors, so I will look at modifying my mesh script to work on a pointcloud. This will take some time, but if I am successful I will post the new script here.

Is this free elevation data (once you own Rhino) useful for your work? I would think you already have tools for processing LiDAR scan data that do this type of thing, yes?

I have made progress on my color_cloud script. I came up with this view of the bridge region above with a wider dynamic range of colors to highlight the grade in the road and sidewalks:

The color steps are 4 cm, and I will refine this further so that sub-cm resolution is possible.

It has an Eto Forms GUI that lets you easily control the min/max Z of the cloud being colored and the colors used across this range. From the default values on the form, you can see that your cloud spans a 17 m to 46 m range:

Where the cloud extends outside the Z-range being colored, the Underflow and Overflow colors set the colors in those regions. This keeps those regions subdued compared to the area of interest. In the map above, grey was used, as shown in the form settings for that map below:

Currently it is fairly slow to color the elevation map, taking 31 sec or so. But I expect to speed this up by 10X with further work.
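The min/max-Z coloring with Underflow/Overflow colors described above can be sketched like this; the grey fallback colors and the blue-to-red gradient are illustrative choices, not the script's actual palette:

```python
def elevation_color(z, zmin=17.0, zmax=46.0,
                    underflow=(128, 128, 128), overflow=(128, 128, 128)):
    """Map an elevation to an RGB triple; clamp outside [zmin, zmax]."""
    if z < zmin:
        return underflow
    if z > zmax:
        return overflow
    t = (z - zmin) / (zmax - zmin)      # 0.0 at zmin, 1.0 at zmax
    # Simple blue-to-red gradient across the range of interest.
    return (int(255 * t), 0, int(255 * (1.0 - t)))
```

Narrowing [zmin, zmax] around an area of interest stretches the full gradient across a smaller elevation band, which is what makes small grade changes visible.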



Hi @Terry_Chappell. Thanks for posting this solution. I’m having a similar problem trying to decrease the size of my point cloud scan. I’m trying your solution but keep getting stuck at point 3. After I run the script and try to select the xyz file (which is a .txt file after using CloudCompare; the file-type option with .xyz and .txt is the same), Rhino comes up with ‘ERROR: Could not detect if separator is space, comma, semicolon or tab so pointcloud cannot be parsed.’ I’m not sure what the problem is, or if it’s that txt files won’t work. Do you think you could help?
Much appreciated, Yvonne

I think I can probably fix this if you send me a link to the .txt file. Or you could bring up the .txt file in an editor and send me a copy of the first 10 lines or so.
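For reference, a separator check like the one producing that error can be sketched as follows (this is a guess at the logic, not the script's actual code): try each candidate delimiter on the first data line and accept the first one that yields at least three numeric columns.

```python
def detect_separator(line):
    """Guess the column separator of an ASCII point-cloud line."""
    for sep in (',', ';', '\t', ' '):
        parts = [p for p in line.strip().split(sep) if p]
        if len(parts) >= 3:
            try:
                [float(v) for v in parts[:3]]
                return sep
            except ValueError:
                continue
    raise ValueError("Could not detect if separator is space, comma, "
                     "semicolon or tab so pointcloud cannot be parsed.")
```

A check like this fails when the file starts with a header or comment line rather than numbers, which is one plausible cause of the error above.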


Hi Terry, this would be great. Here’s a link to the drop box file.

Thank you for your help,


The .txt file is 19.2 GB! Truly gigantic. I will try my best to fix my pointcloud import to read this file.

It is downloading now, with 30 min left as the estimate. I see it downloading 0.1 GB every 7 sec, so it is going to take a while.

How did you generate this large pointcloud? Is it from a LiDAR scan?

The download is done.

Looking at the file in Guccio Hex Editor, I can see that the format of your data is:
X Y Z R G B I Nx Ny Nz
where X, Y, Z are double-precision floating point, R, G, B are 0-255 integers, I is a 0-1 intensity in single-precision floating point, and Nx, Ny, Nz are the point normals in single-precision floating point.

I am not positive about what the I is. It may be something other than intensity.
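Assuming those fields form a packed little-endian binary record (3 doubles, 3 bytes, 4 floats, so 43 bytes per point; the layout is my reading of the hex-dump description, not confirmed), each point could be decoded like this:

```python
import struct

# X Y Z (doubles), R G B (bytes), I Nx Ny Nz (floats): packed, little-endian.
POINT_FMT = '<3d3B4f'
POINT_SIZE = struct.calcsize(POINT_FMT)   # 43 bytes

def read_points(data):
    """Decode consecutive packed point records from a bytes buffer."""
    for off in range(0, len(data) - POINT_SIZE + 1, POINT_SIZE):
        x, y, z, r, g, b, i, nx, ny, nz = struct.unpack_from(POINT_FMT, data, off)
        yield (x, y, z), (r, g, b), i, (nx, ny, nz)
```

Reading fixed-size records this way also makes density reduction trivial: seek forward by a multiple of POINT_SIZE to skip points.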

In any case, I am confident I can modify my script to rapidly read in this point cloud and reduce its density tomorrow.


I surmise that I is like an alpha channel, 0 being fully transparent and 1 being fully opaque.