Exporting a Pointcloud as a CSV file with colour

I have a pointcloud with colour attributes and I need to export it to a CSV file so I can get the coordinates and the RGB colour of each point.

Could anyone help me please?

Thanks,

Josh


I just checked in with the developers and there isn't anything quite like what you're looking for currently.
If you have a mesh with colors assigned to the mesh vertex points, that will export to the PLY file format with colors.
All the basic tools are there to export points with colors, but there is no top-level command for it that I'm aware of.
My guess is someone with good scripting skills could write a custom tool to do that.
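For anyone who wants to roll their own in the meantime, a minimal sketch of such a tool in Rhino's Python editor could look something like this (the prompts, function name and CSV layout here are just illustrative, not an official command):

import rhinoscriptsyntax as rs

def export_ptcloud_to_csv():
    # pick a point cloud and a destination file
    cloud_id = rs.GetObject("Select a point cloud", rs.filter.pointcloud, preselect=True)
    if not cloud_id: return
    path = rs.SaveFileName("Save CSV", "CSV files (*.csv)|*.csv||")
    if not path: return
    cloud = rs.coercegeometry(cloud_id)    # Rhino.Geometry.PointCloud
    pts = cloud.GetPoints()
    colors = cloud.GetColors() if cloud.ContainsColors else None
    with open(path, "w") as f:
        f.write("x,y,z,R,G,B\n" if colors else "x,y,z\n")
        for i in range(len(pts)):
            if colors:
                f.write("%.3f,%.3f,%.3f,%s,%s,%s\n" % (pts[i].X, pts[i].Y, pts[i].Z, colors[i].R, colors[i].G, colors[i].B))
            else:
                f.write("%.3f,%.3f,%.3f\n" % (pts[i].X, pts[i].Y, pts[i].Z))

export_ptcloud_to_csv()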

Hi Josh,

Download and drop the attached button onto your Rhino workspace.
exportPtCloudToCSV.rui (5.7 KB)


djordje, thank you for this even though I'm not the original recipient :wink:


I'm looking to add the point normal data from the cloud to the export as well. @djordje, I assume it's just a matter of adding a variation on this:

for i in range(len(pts)):
    if ptCloud.ContainsColors:
        line = "%.3f,%.3f,%.3f, , %s,%s,%s\n" % (pts[i][0], pts[i][1], pts[i][2], colors[i].R, colors[i].G, colors[i].B)
    else:
        line = "%.3f,%.3f,%.3f\n" % (pts[i][0], pts[i][1], pts[i][2])

but with the normals worked into the code, right?

Hi Jonathan,

Yes, you are right, it is one part of the editing.
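Roughly, the point-writing loop could change to something like this (a sketch only; it assumes the normals are read with ptCloud.GetNormals() and that f is the open output file used elsewhere in the script):

normals = ptCloud.GetNormals() if ptCloud.ContainsNormals else None

for i in range(len(pts)):
    # coordinates first, then optional normal components, then optional colors
    line = "%.3f,%.3f,%.3f" % (pts[i][0], pts[i][1], pts[i][2])
    if normals:
        line += ",%.3f,%.3f,%.3f" % (normals[i].X, normals[i].Y, normals[i].Z)
    if ptCloud.ContainsColors:
        line += ",%s,%s,%s" % (colors[i].R, colors[i].G, colors[i].B)
    f.write(line + "\n")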
Check the attached script below.
To run the script, in the Rhino application menu, click on: Tools -> PythonScript -> Run, then find the downloaded exportPtCloudDataToCSV2.py file.

exportPtCloudDataToCSV2.py (3.1 KB)


Gotcha, the rest of it being the update to the header lines, unless I'm missing something else?

This is standing in for the ply export for me right now, as for some reason that's broken on my end…

Thanks!

Yes, the headers had to be updated too.
I haven't understood you. What is broken?

Rhino just won't export to ply format, and crashes. The command log just shows:

Command: Export
Error writing file Z:\Dropbox_UnionNine\3.ClientWork\directory\testexport.ply
Error saving file Z:\Dropbox_UnionNine\3.ClientWork\directory\testexport.ply

I've already tried writing elsewhere (straight to the C:\ and Z:\ drive roots, outside of Dropbox) to rule that out. I've also tried just opening and then exporting the point clouds and get the same result. Working on it with Brian @ McNeel in email ping-pong today.

That is a problem with the Rhino PLY exporter, not with the script above, which exports the point coordinates, normals and colors of point clouds to a .csv file.

Correct, which is why I'm using the .csv file export you've put together instead :wink:

As a corollary issue, Clement's script here doesn't keep the colors or normals of the source cloud, so that's the next challenge I'm working on; with that and your CSV export I should be operational :relaxed:

Hi @JKolodner, check the linked thread above for an updated script.

c.

Rhino 5 does not export PointClouds to ply format.

c.

That would explain it… would be awfully handy to have it do that :wink:

If you show @djordje a small example file containing the structure of such a ply file (ASCII), it will be possible for him to change his csv output script to do it. I guess all you need is the line prefix and the header. :wink:

Just curious, into which app would you like to import the ply file with points, colors and normals?

c.

I'm looking to pull things into CloudCompare. You can pull clouds in as CSV, but that requires setting it up each time to parse the file, whereas if it's a format that CloudCompare likes, then it's possible to automate with the command-line API with less fuss.

Then I can have a Python script call CloudCompare with a subprocess call to do work on the clean cloud I'm exporting from Rhino, save that, and return to Rhino to open the resulting cleaned mesh reconstruction so I can get back to work on it.
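The call I have in mind is along these lines (the paths are placeholders, and the exact flags should be double-checked against the CloudCompare command-line wiki; -SILENT, -O and -SAVE_CLOUDS are the ones I believe it supports):

import subprocess

# placeholder paths - adjust to the local CloudCompare install and the exported cloud
cc_exe = r"C:\Program Files\CloudCompare\CloudCompare.exe"
cloud_file = r"C:\temp\cleaned_cloud.ply"

# -SILENT runs without the GUI, -O opens the file, -SAVE_CLOUDS writes the result
subprocess.call([cc_exe, "-SILENT", "-O", cloud_file, "-SAVE_CLOUDS"])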

Yay process automation X-)

Edit: even with your "slow" version, it still takes only ~5 minutes per comparison case to evaluate and return the normals-included version. That's not too shabby, but I can hear in the fan speed when my computer drops from fully threaded back to not :wink:

That's what my comment about Point3D was based on, starting to dig into the docs. Yeah, I saw the Point4d as well, but what we really need is a Point[N]D generalized object to hold additional attributes, or a way to match data between the data structures.

One thought of a "hack" way to do it was that it could be multiple PointClouds, since my end product is getting composed back together outside of Rhino anyway. We could sample the cloud for actual locations to get cloud 1, then query the associated normals (encoding that resulting list of "points" as nX, nY, nZ data), and then do the same with color.

Then we'd read the PointCloudLocation[x], PointCloudNormal[x] and PointCloudColor[x] values and put them into the csv or ply format output file.

CloudCompare's ply header is:

ply
format ascii 1.0
comment Author: CloudCompare (TELECOM PARISTECH/EDF R&D)
obj_info Generated by CloudCompare!
element vertex 676336
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
property float nx
property float ny
property float nz
property float scalar_curvature
end_header

And the subsequent lines are space-delimited, so it'd be super easy to compose that format…

… which you could then have Rhino "import" as a new object, I suppose X-)
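Something like this sketch is what I mean; it assumes a RhinoCommon PointCloud whose GetPoints(), GetNormals() and GetColors() lists stay index-aligned, and it leaves scalar_curvature out of the header entirely:

import rhinoscriptsyntax as rs

cloud_id = rs.GetObject("Select a point cloud", rs.filter.pointcloud, preselect=True)
path = rs.SaveFileName("Save PLY", "PLY files (*.ply)|*.ply||")
if cloud_id and path:
    cloud = rs.coercegeometry(cloud_id)    # Rhino.Geometry.PointCloud
    pts = cloud.GetPoints()
    colors = cloud.GetColors()             # assumes the cloud actually has colors...
    normals = cloud.GetNormals()           # ...and normals; otherwise drop those properties
    with open(path, "w") as f:
        # header: vertex count plus only the properties we are writing
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(pts))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("property float nx\nproperty float ny\nproperty float nz\n")
        f.write("end_header\n")
        # data: one space-delimited line per point, in the same order as the header
        for i in range(len(pts)):
            f.write("%f %f %f %d %d %d %f %f %f\n" % (
                pts[i].X, pts[i].Y, pts[i].Z,
                int(colors[i].R), int(colors[i].G), int(colors[i].B),
                normals[i].X, normals[i].Y, normals[i].Z))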

As Clement mentioned, the cloud point data can be exported in the way the ply file requires.
But I do not know how to calculate the scalar_curvature variable your ply file requires. At least Rhino does not contain a native method to calculate it.
I added a "defaultScalarCurvature" variable on line 35 and assigned it a value of 0.
I am not sure whether or not this will work.
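This is not the attached script itself, just the shape of the change: add a "property float scalar_curvature" line to the header and append the constant as the last field of every vertex line, something like this hypothetical helper:

# placeholder value, since Rhino has no native per-point curvature for point clouds
defaultScalarCurvature = 0.0

def ply_vertex_line(pt, color, normal):
    # x y z red green blue nx ny nz scalar_curvature, space-delimited
    return "%f %f %f %d %d %d %f %f %f %f\n" % (
        pt.X, pt.Y, pt.Z,
        int(color.R), int(color.G), int(color.B),
        normal.X, normal.Y, normal.Z,
        defaultScalarCurvature)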

Maybe you should investigate whether this scalar curvature variable can have some default value or not, and if so, which one.

exportPtCloudDataToPLY.py (1.8 KB)

Scalars are optional extra data in CloudCompare; it looks like the core description for a ply would just be the [x,y,z] point positions and the point count. Normals and color are optional too, so those are values you list if your point cloud has them, and otherwise you can leave them out. Here's a Rhino .ply header from when I did successfully get an export out:

ply
format ascii 1.0
comment File exported by Rhinoceros Version 5.0
element vertex 2698
property float x
property float y
property float z
property float nx
property float ny
property float nz
property uchar red
property uchar green
property uchar blue
element face 2562
property list uchar uint vertex_index
element material 1
property uchar ambient_red
property uchar ambient_green
property uchar ambient_blue
property uchar ambient_alpha
property uchar diffuse_red
property uchar diffuse_green
property uchar diffuse_blue
property uchar diffuse_alpha
property uchar emissive_red
property uchar emissive_green
property uchar emissive_blue
property uchar emissive_alpha
property uchar specular_red
property uchar specular_green
property uchar specular_blue
property uchar specular_alpha
property double shine
property double transparency
end_header

As you can see, there's a lot of additional data there, and two whole additional element types indicated in addition to the points: the faces of the geometry and a material. The subsequent lines of data look like:

40.000000 119.339996 0.000000 1.000000 0.000000 0.000000 0 0 0

That's the format for the set of points, which in the file occupy lines 36 to 2733, i.e. 2698 elements. Then the lines for the faces look like this:

...
4 114 115 113 112
4 98 99 97 96
3 218 217 191
...jumping to last line, which is the material definition element...
0 0 0 0 0 0 0 0 0 0 0 0 255 255 255 0 0.000000 0.000000

Which to me indicates that a) they don't necessarily have to use all the values, b) the delimiter between entries is the newline ('\n') and the properties are just space-delimited, and c) setting "0" for a value you don't otherwise have populated (like ambient/diffuse/emissive/shine/transparency on the "material") can be fine, as long as it's not going to cause problems in your downstream use of the data. In other words, setting a value for color or normal, or even defining a material, isn't important and may be counterproductive if all you want is point locations; it just bloats the file size.

So if you have fewer properties, the subsequent lines of data are just shorter; a header with only [x,y,z] and [nX,nY,nZ] defined would be:

ply
format ascii 1.0
comment Author: Jonathans Revision of Djordjes script
obj_info Generated by Jonathans Revision of Djordjes script
element vertex [quantity of points]
property float x
property float y
property float z
property float nx
property float ny
property float nz
end_header

Then your lines would look like:

x1 y1 z1 nX1 nY1 nZ1
x2 y2 z2 nX2 nY2 nZ2

…and so on for all the actual data.

I'm no Python syntax/parsing wizard, so it may take a bit, but I'll try to revise your script to include the right behavior, and then we have a new utility :blush:
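As a starting point, the shape I have in mind is to build the header from whatever the cloud actually has, so the data lines stay in step with it (just a sketch, not tested against CloudCompare yet):

import rhinoscriptsyntax as rs

cloud_id = rs.GetObject("Select a point cloud", rs.filter.pointcloud, preselect=True)
path = rs.SaveFileName("Save PLY", "PLY files (*.ply)|*.ply||")
if cloud_id and path:
    cloud = rs.coercegeometry(cloud_id)    # Rhino.Geometry.PointCloud
    pts = cloud.GetPoints()
    normals = cloud.GetNormals() if cloud.ContainsNormals else None
    colors = cloud.GetColors() if cloud.ContainsColors else None

    # only list the properties we are actually going to write
    header = ["ply", "format ascii 1.0",
              "element vertex %d" % len(pts),
              "property float x", "property float y", "property float z"]
    if normals:
        header += ["property float nx", "property float ny", "property float nz"]
    if colors:
        header += ["property uchar red", "property uchar green", "property uchar blue"]
    header.append("end_header")

    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for i in range(len(pts)):
            fields = ["%f %f %f" % (pts[i].X, pts[i].Y, pts[i].Z)]
            if normals:
                fields.append("%f %f %f" % (normals[i].X, normals[i].Y, normals[i].Z))
            if colors:
                fields.append("%d %d %d" % (int(colors[i].R), int(colors[i].G), int(colors[i].B)))
            f.write(" ".join(fields) + "\n")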