Exporting a Pointcloud as a CSV file with colour

Hi Josh,

Download and drop the attached button onto your Rhino workspace.
exportPtCloudToCSV.rui (5.7 KB)


djordje, thank you for this even though I’m not the original recipient :wink:


I’m looking to add the point normal data from the cloud to the export as well. @djordje, I assume it’s just adding a variation on this:

for i in range(len(pts)):
    if ptCloud.ContainsColors:
        line = "%.3f,%.3f,%.3f, , %s,%s,%s\n" %(pts[i][0], pts[i][1], pts[i][2], colors[i].R, colors[i].G, colors[i].B)
    else:
        line = "%.3f,%.3f,%.3f\n" %(pts[i][0], pts[i][1], pts[i][2])

but with the normals added into the code, right?
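
Something along these lines is what I’m picturing (an untested sketch on my end; I’m assuming the cloud exposes ContainsNormals/GetNormals() alongside the colors, and that the normals come back in the same order as the points):

import rhinoscriptsyntax as rs

# untested sketch: grab a cloud, then build one csv line per point,
# appending normals and colors only when the cloud actually has them
cloudId = rs.GetObject("Pick a point cloud", rs.filter.pointcloud)
ptCloud = rs.coercegeometry(cloudId)

pts = ptCloud.GetPoints()
normals = ptCloud.GetNormals() if ptCloud.ContainsNormals else None
colors = ptCloud.GetColors() if ptCloud.ContainsColors else None

lines = []
for i in range(len(pts)):
    line = "%.3f,%.3f,%.3f" % (pts[i].X, pts[i].Y, pts[i].Z)
    if normals is not None:
        line += ",%.3f,%.3f,%.3f" % (normals[i].X, normals[i].Y, normals[i].Z)
    if colors is not None:
        line += ",%s,%s,%s" % (colors[i].R, colors[i].G, colors[i].B)
    lines.append(line + "\n")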

Hi Jonathan,

Yes, you are right, that is one part of the edit.
Check the attached script below.
To run the script, in the Rhino application menu, click on: Tools -> PythonScript -> Run, then find the downloaded exportPtCloudDataToCSV2.py file.

exportPtCloudDataToCSV2.py (3.1 KB)


Gotcha, so the rest of it is updating the header lines, unless I’m missing something else?

This is standing in for the ply export for me right now, since for some reason that’s broken…

Thanks!

Yes, the headers had to be updated too.
I don’t understand, though. What is broken?

Rhino just won’t export to ply format, and crashes. The command log just shows:

Command: Export
Error writing file Z:\Dropbox_UnionNine\3.ClientWork\directory\testexport.ply
Error saving file Z:\Dropbox_UnionNine\3.ClientWork\directory\testexport.ply

I’ve already tried writing elsewhere (straight to the C:\ and Z:\ drive roots, outside of Dropbox) to rule that out. I’ve also tried just opening and then exporting the point clouds, with the same result. Working on it with Brian @ McNeel in email ping-pong today.

That is a problem with the Rhino PLY exporter, not with the script above, which exports point coordinates, normals, and colors of point clouds to a .csv file.

Correct, which is why I’m using the .csv file export you’ve put together instead :wink:

As a related issue, Clement’s script here doesn’t keep the colors or normals of the source cloud, so that’s the next challenge I’m working on. With that and your CSV export I should be operational :relaxed:

Hi @JKolodner, check the linked thread above for an updated script.

c.

Rhino 5 does not export PointClouds to ply format.

c.

That would explain it… would be awfully handy to have it do that :wink:

If you show @djordje a small example file containing the structure of such a ply file (ASCII), it will be possible for him to change his csv output script to do it. I guess all you need is the line prefix and the header. :wink:

Just curious, into which app would you like to import the ply file with points, colors and normals?

c.

I’m looking to pull things into CloudCompare. You can pull clouds in as CSV, but that requires setting up the parsing each time you import a file, whereas if it’s a format CloudCompare likes natively, it’s possible to automate with the command-line API with less fuss.

Then I can have a Python script call CloudCompare via a subprocess call to do work on the clean cloud I’m exporting from Rhino, save that, and come back to Rhino to open the resulting cleaned mesh reconstruction and continue working on it.
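
Roughly what I have in mind for that step (just a sketch; the install path is made up and the command-line switches are from memory, so they’d need checking against the CloudCompare command-line documentation):

import subprocess

# sketch: hand the exported cloud to CloudCompare's command line, let it run
# headless, then pick the result back up in Rhino afterwards.
# the flags below are placeholders -- verify them against your CloudCompare version.
cc_exe = r"C:\Program Files\CloudCompare\CloudCompare.exe"  # hypothetical install path
cloud_file = r"C:\temp\cleanCloud.ply"                      # the file exported from Rhino

code = subprocess.call([cc_exe, "-SILENT", "-O", cloud_file, "-C_EXPORT_FMT", "PLY"])
print("CloudCompare exit code: %s" % code)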

Yay process automation X-)

Edit: even with your “slow” version, it still takes only ~5 minutes per comparison case to evaluate and return the normals-included version. That’s not too shabby, but I can hear when my computer changes from full-threaded to non-full threaded in the fan speed :wink:

That’s what my comment about Point3D was based on, starting to dig into the docs. Yeah, I saw the Point4d as well, but what we really need is a generalized Point[N]D object to hold additional attributes, or a way to match data between the data structures.

One “hack” way to do it that occurred to me is to use multiple PointClouds, since my end product is getting composed back together outside of Rhino anyway. We could sample the cloud for the actual locations to get cloud 1, then query the associated normals and encode that resulting list of “points” of nX, nY, nZ data as cloud 2, and then do the same with color.

Then we’d read the PointCloudLocation[x], PointCloudNormal[x] and PointCloudColor[x] values and put them into the csv or Ply format output file:

CloudCompare’s ply header is:

ply
format ascii 1.0
comment Author: CloudCompare (TELECOM PARISTECH/EDF R&D)
obj_info Generated by CloudCompare!
element vertex 676336
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
property float nx
property float ny
property float nz
property float scalar_curvature
end_header

And the subsequent lines are space-delimited, so it’d be super easy to compose that format…

… which you could then have Rhino “import” as a new object I suppose X-)

As Clement mentioned, the cloud point data can be exported in the way a ply file requires.
But I do not know how to calculate the scalar_curvature variable your ply file requires. At least, Rhino does not contain a native method to calculate it.
I added a “defaultScalarCurvature” variable on line 35 and assigned it a value of “0”.
I am not sure whether or not this will work.

Maybe you should investigate whether this scalar curvature variable can have some default value, and if so, which one.

exportPtCloudDataToPLY.py (1.8 KB)
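
Roughly, the writing part of the script goes like this (a simplified sketch, not an exact copy of the attached file; “ptCloud” and “filePath” are assumed to be defined earlier, the same way as in the csv script):

defaultScalarCurvature = 0.0  # placeholder value, since Rhino cannot calculate it

pts = ptCloud.GetPoints()
normals = ptCloud.GetNormals()
colors = ptCloud.GetColors()

with open(filePath, "w") as f:
    # header in the structure you posted above
    f.write("ply\nformat ascii 1.0\n")
    f.write("element vertex %d\n" % len(pts))
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
    f.write("property float nx\nproperty float ny\nproperty float nz\n")
    f.write("property float scalar_curvature\n")
    f.write("end_header\n")
    # one space delimited line per point
    for i in range(len(pts)):
        f.write("%.6f %.6f %.6f %s %s %s %.6f %.6f %.6f %.6f\n" % (
            pts[i].X, pts[i].Y, pts[i].Z,
            colors[i].R, colors[i].G, colors[i].B,
            normals[i].X, normals[i].Y, normals[i].Z,
            defaultScalarCurvature))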

Scalars are optional, added data in CloudCompare. It looks like the core description for a Ply would just be the [x,y,z] point positions and the point count. Normals and color are optional too, so those are values you list if your point cloud has them; otherwise you can leave them out. Here’s a Rhino .Ply header from when I did successfully get an export out:

ply
format ascii 1.0
comment File exported by Rhinoceros Version 5.0
element vertex 2698
property float x
property float y
property float z
property float nx
property float ny
property float nz
property uchar red
property uchar green
property uchar blue
element face 2562
property list uchar uint vertex_index
element material 1
property uchar ambient_red
property uchar ambient_green
property uchar ambient_blue
property uchar ambient_alpha
property uchar diffuse_red
property uchar diffuse_green
property uchar diffuse_blue
property uchar diffuse_alpha
property uchar emissive_red
property uchar emissive_green
property uchar emissive_blue
property uchar emissive_alpha
property uchar specular_red
property uchar specular_green
property uchar specular_blue
property uchar specular_alpha
property double shine
property double transparency
end_header

As you can see, there’s a lot of additional data there: two whole additional element types, describing the faces of the geometry and a material, in addition to the points. The subsequent lines of data look like:

40.000000 119.339996 0.000000 1.000000 0.000000 0.000000 0 0 0

That’s for the set of points, which in the file are lines 36 to 2733, i.e. 2698 elements. Then the lines for the faces look like this:

...
4 114 115 113 112
4 98 99 97 96
3 218 217 191
...jumping to last line, which is the material definition element...
0 0 0 0 0 0 0 0 0 0 0 0 255 255 255 0 0.000000 0.000000

Which to me indicates that a) you don’t necessarily have to use all the values, and b) the delimiter between elements is the ‘\n’ and the properties are just space-delimited. Setting “0” for a value you haven’t otherwise populated (like ambient/diffuse/emissive/shine/transparency on the “material”) can be fine, as long as it’s not going to cause problems in your downstream use of the data. In other words, setting a value for color or normals, or even defining a material, isn’t important and may be counterproductive if all you want is point locations; it just bloats the file size.

So if you have fewer properties, the subsequent lines of data are just shorter. A header with only [x,y,z] and [nX,nY,nZ] defined would be:

ply
format ascii 1.0
comment Author: Jonathans Revision of Djordjes script
obj_info Generated by Jonathans Revision of Djordjes script
element vertex [quantity of points]
property float x
property float y
property float z
property float nx
property float ny
property float nz
end_header

Then your lines would look like:

x1 y1 z1 nX1 nY1 nZ1
x2 y2 z2 nX2 nY2 nZ2

…and so on for all the actual data.
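
In script terms, I’d guess the header part ends up something like this (my rough sketch only, with a made-up buildPlyHeader helper name; the idea is to list only the properties the cloud actually has, so the data lines stay short):

# rough sketch: build the header from only the properties the cloud actually has
def buildPlyHeader(pointCount, hasNormals, hasColors):
    lines = ["ply",
             "format ascii 1.0",
             "comment Author: Jonathans Revision of Djordjes script",
             "element vertex %d" % pointCount,
             "property float x",
             "property float y",
             "property float z"]
    if hasNormals:
        lines += ["property float nx", "property float ny", "property float nz"]
    if hasColors:
        lines += ["property uchar red", "property uchar green", "property uchar blue"]
    lines.append("end_header")
    return "\n".join(lines) + "\n"

# e.g. buildPlyHeader(len(pts), ptCloud.ContainsNormals, ptCloud.ContainsColors)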

I’m no Python syntax/parsing wizard, so it may take a bit, but I’ll try to revise your script to include the right behavior, and then we’ll have a new utility :blush:

Check the revised version below.

exportPtCloudDataToPLY2.py (1.7 KB)


It works! :smile:


I revised it only slightly, to give credit where due in the resulting .ply files, since your tool is writing the output.
