Not at the moment. But I logged the feature request below
RV-1262 object scanning: pointCloud output
Thanks. I think this will be useful for tweaking some shrink-wrap settings in Rhino that are otherwise automated from inside the iRhino app, from what I can tell — helpful for the various geometry we are scanning. For reference, we are scanning vehicle-size clay models: exteriors, interior bucks, and door panels as well. Results are good so far, but we would like to tighten things up around smaller features if possible.
I started working on the pointcloud output, but unfortunately the pointcloud the Apple API gives us is not very dense, and that's not something we can control. So I'm hesitant to even add it to the release version.
See for yourself: the .3dm I posted here contains a very sparse pointcloud of the same object.
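For anyone who wants to put a number on "sparse", here is a rough sketch (plain Python, with a hypothetical toy point list standing in for the scanned cloud) that estimates the average nearest-neighbor spacing of a point cloud — the larger that number, the sparser the cloud:

```python
import math

def avg_nn_spacing(points):
    """Average distance from each point to its nearest neighbor.
    Brute force O(n^2) search; fine for a sparse cloud."""
    total = 0.0
    for i, p in enumerate(points):
        nearest = min(
            math.dist(p, q)
            for j, q in enumerate(points) if j != i
        )
        total += nearest
    return total / len(points)

# Toy cloud: a 3x3 grid at unit spacing, so the metric comes out to 1.0
cloud = [(x, y, 0.0) for x in range(3) for y in range(3)]
print(avg_nn_spacing(cloud))  # 1.0
```

Run against the points exported from the .3dm, this gives a quick way to compare density across scans or API versions.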
Hello. Thanks for the update.
I agree, the point cloud count is much less than what I would have expected.
Do you think this is a byproduct of what the Apple software is returning via the API, or could it be a result of hardware limitations of the iPad's LiDAR unit? (We are using a newer iPad Pro with the latest OS.)
We have handheld scanning units we could try here, but I could see there being issues bypassing the built-in hardware for the input.
Our plan is to use this for quick comparisons of existing math data to clay, and it has been working well so far, so I will continue testing and providing feedback. Higher resolution is always welcome, so if it's not a hardware limitation, hopefully Apple will improve what gets returned from the API.
Thanks again!
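The "comparisons of existing math data to clay" workflow described above can be sketched in plain Python. This is a minimal illustration, not iRhino's or Rhino's actual code: the reference patch and scanned points are hypothetical, and a real check would measure point-to-surface distance rather than nearest reference point:

```python
import math

def max_deviation(scan_pts, ref_pts):
    """For each scanned point, find the distance to the closest
    reference point, and return the worst-case deviation.
    Brute force, for illustration only."""
    return max(
        min(math.dist(s, r) for r in ref_pts)
        for s in scan_pts
    )

# Hypothetical data: a flat reference patch and two scanned points
# that sit 0.2 and 0.05 units above it.
ref = [(x, y, 0.0) for x in range(5) for y in range(5)]
scan = [(1.0, 1.0, 0.2), (2.0, 3.0, 0.05)]
print(round(max_deviation(scan, ref), 6))
```

Even with a sparse cloud, a worst-case number like this can flag where the clay has drifted from the math data.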
It’s Apple; I have the latest and greatest iPad Pro M4.
Even the documentation says “sparse” pointcloud:
“A sparse point cloud data structure output as the payload of a .pointCloud request”
@mkarimi The scanning via LiDAR works pretty well on the iPhone! Thanks for the feature. Could you kindly implement a turntable feature, so the person does not have to move around the object during scanning? Instead, the object could be placed on a physical turntable while the user just holds the scanning device at a steady point. Thanks for your opinion on that!
It won’t be possible because the scanning API uses the device orientation to correlate photos with positions.
Thanks for the answers. That’s somewhat unfortunate, as it would come in very handy for many applications. Maybe at some point the scanning API gets updated, or exchanged for one that orients more on the scanned content? Or a second scanning API implemented? Thanks anyway!
We’re using Apple’s Object Capture API. They may change that in future releases.
Hello Again Morteza.
I reviewed the point cloud data with some of my clay teams, and they think that even this sparse point cloud data will be of use to us in the context of the checks we are running. Do you think extracting the point cloud could be made available in a future WIP build, or is there a way I can achieve this myself with the current WIP for testing purposes?
Thank you for your time.
I can try to add it to a hidden layer. It still needs some work, so don’t expect it in the next release.
Understood. Thank you for considering the request.