Best way to reduce a massive point cloud?

I am working with a 1.5 GB point cloud (.pts) and need to reduce it. What is the best approach?
Thanks!

Hey Jørgen,

You might have a look at CloudCompare, it’s free and open source…

–Mitch


Thanks for the CloudCompare link. I’ve been looking for something low-cost (or free) to do deviation analysis with. I’ll definitely give it a try.

Thanks for the tip. CloudCompare did the trick!

Do you have a good workflow to export colored point clouds to Rhino?

Thank you, Helvetosaur. It’s a good find.

From CloudCompare? I have to admit I haven’t tried it yet; a colleague used it for a project. I’ll check with him, but it should be possible. –Mitch

Yeah, the point cloud I have has a single floating-point number for the colors.
I found your script for importing XYZRGB clouds, but no way to convert that number.

The file reads back into CC just fine, though.

OK, I’m pretty sure there is a way to extract the RGB from the color number; I can modify the script for you if that will help…
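
If it turns out to be a 24-bit packed RGB integer (just a guess at this point; .pts files sometimes store an intensity value instead), the channels can be pulled out with bit shifts:

def unpack_rgb(packed):
    # assumes packed = (r << 16) | (g << 8) | b
    r = (packed >> 16) & 255
    g = (packed >> 8) & 255
    b = packed & 255
    return r, g, b

print(unpack_rgb(65535))  # (0, 255, 255)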

@Holo - is the format just x | y | z | int, where | is the separator? Is the separator a space, a comma, or something else? –Mitch

RhinoTerrain can also decimate point clouds!

Hi, you pushed me in the right direction. The problem was that int(d[3]) didn’t work (int() didn’t seem to like the strings), so I had to use int(float(d[3])) instead.

I made some more adjustments as well, like adding a % counter.

I’ll share when it is done.

Hmm, should work… Don’t know why it doesn’t on your end.

my_string="65535"
my_integer=int(my_string)
print my_integer, type(my_integer)
>>>  65535 <type 'int'>

Below is my basic script, but I don’t know the file format and don’t have a file to test…

–Mitch

Edited - small change
XYZColorIntegerImport.py (1.2 KB)
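
Roughly, the approach is to read each line, split it, convert the values, and add everything as one colored cloud. A sketch of that shape (not the attachment verbatim, and assuming space-separated x y z color lines with a 24-bit packed color):

import rhinoscriptsyntax as rs

def import_xyz_color(path):
    points, colors = [], []
    with open(path) as f:
        for line in f:
            d = line.split()
            if len(d) < 4:
                continue  # skip headers and short lines
            points.append((float(d[0]), float(d[1]), float(d[2])))
            c = int(d[3])  # the packed color number
            colors.append(((c >> 16) & 255, (c >> 8) & 255, c & 255))
    rs.AddPointCloud(points, colors)  # colors as (r, g, b) tuples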

It could be that the string is a float-formatted number like 23.000000.
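
That would also explain why the plain conversion failed; int() cannot parse a float-formatted string directly, so it has to go through float() first:

print(int("65535"))             # works: 65535
# int("23.000000")              # raises ValueError
print(int(float("23.000000")))  # works: 23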

Anyway, here is the modified script.
In addition to the % counter, I added an estimated time to completion and a final report like this:

“Importing file took 59.8 sec. Result is a 4200300 point cloud”
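
(For the curious: the % counter and ETA don’t need a first pass to count lines; comparing bytes read against the file size works in a single pass. A sketch of the idea, not the attached script itself:)

import os, time

def lines_with_progress(path, every=100000):
    # yields lines while printing % done and a rough time-left estimate
    size = float(os.path.getsize(path))
    read = 0
    start = time.time()
    with open(path) as f:
        for i, line in enumerate(f):
            read += len(line)
            if i % every == 0 and read:
                elapsed = time.time() - start
                eta = elapsed * (size - read) / read
                print("%.1f%% done, ~%.0f sec left" % (100.0 * read / size, eta))
            yield line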

And an example of the cloud:
(Edit: for some reason the cut/paste became very dark, so I replaced it.)

PTS_Import.py (2.1 KB)

This works well, even if it is of course a bit slow.
I tested it with the 1.5 GB file that has 34.5 million points, and it took 9 minutes and 12 seconds.

Hi Mitch,
Thank you for the CloudCompare suggestion and link. I have been experimenting with loading digital elevation models as point clouds in Rhino (then building surfaces), and CloudCompare allows quick visualisation, sampling and subsetting before import into Rhino.
Thanks again,
Paul.

Hi everyone,
I know that VCGLib, CCLib, PDAL, CGAL, VTK, PCL, and OpenVDB are suitable libraries for subsampling, but I need a standalone library that takes a point cloud file and creates a sample from it. PCL provides this capability, but it only loads 14 GB, whereas my input file is bigger. CCLib (a component of CloudCompare) also depends on QCC-Lib and Qt >= 4, whereas my license only covers Qt 5.3. In addition, I do not have a CGAL license, and I cannot compile PDAL in Visual Studio. VCGLib (the library behind MeshLab) is not a high-performance library. OpenVDB creates samples using voxels, and voxel-grid quality is lower than the Poisson or space methods.
Apart from these libraries, are there any others that support both big data and the Poisson or space subsampling methods? If not, how can I implement a high-performance algorithm for finding nearest points?
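
If nothing off the shelf fits, the “space” method (enforce a minimum distance between kept points) is simple enough to implement yourself with a uniform grid, and it streams, so memory is bounded by the kept subset rather than the input file. A Python sketch of the idea (the same structure ports directly to C++):

import math

def neighbors(key):
    # the 27 grid cells surrounding (and including) key
    x, y, z = key
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                yield (x + dx, y + dy, z + dz)

def space_subsample(points, min_dist):
    # Keep a point only if no previously kept point lies within min_dist.
    # With cell size = min_dist, any conflicting point must sit in one of
    # the 27 neighboring cells, so each test is O(1) on average.
    grid = {}  # cell key -> kept points in that cell
    kept = []
    d2 = min_dist * min_dist
    for p in points:  # points may be a generator streaming from disk
        key = (int(math.floor(p[0] / min_dist)),
               int(math.floor(p[1] / min_dist)),
               int(math.floor(p[2] / min_dist)))
        if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2 < d2
               for k in neighbors(key) for q in grid.get(k, ())):
            continue
        grid.setdefault(key, []).append(p)
        kept.append(p)
    return kept

The same grid answers “is there a kept point within r of this query?” by checking only those 27 cells; for exact nearest-neighbor searches, a k-d tree (for example nanoflann in C++) is the usual tool.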