I’m wondering what kind of support McNeel/Rhinoceros will provide for those of us using the iPhone 12 LiDAR data. Currently I’m on Android and waiting to switch to the iPhone 13 once the LiDAR hardware/software on the iPhone matures.
I’m aware of point clouds, but I’m interested to know if McNeel has any plans to expand on LiDAR functionality.
I don’t think McNeel will or even should go into that space. They’ve got more than enough on their plate with Windows and Mac versions of Rhino; they don’t need to go into a completely new space. There are plenty of companies out there building apps around LiDAR.
Why do you think that McNeel should go into that space? What advantage would that have?
Correct, but this is specific to Rhinoceros/Windows, not the iPhone as stated. I’m curious whether McNeel has plans to expand LiDAR support, not ‘iPhone/Apple’ support. McNeel already does offer apps for Apple products, but that’s aside from the subject.
What do you expect that LiDAR support to look like?
The output data of LiDAR scanner software (meshes/point clouds) can be used by Rhino.
Not saying there is no need, I might just not see it.
So take this as an honest question please.
If you ask me, it’s not just about importing a mesh, but it’s also not the domain of Rhino.
A useful application would combine multiple LiDAR scans into one 3D model, applying color and other metadata to it. It would further have to reduce noise, reconstruct edges, and remesh everything for you. Seamless integration with Rhino might also be useful if you want live data or an improved workflow (for whatever reason).
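As a toy illustration of the “combine multiple scans into one model” step (a pure-NumPy sketch; all names and numbers here are hypothetical, and a real app would estimate the alignment with a registration algorithm such as ICP rather than being handed it):

```python
import numpy as np

def merge_scans(cloud_a, cloud_b, R, t, voxel=0.05):
    """Align cloud_b to cloud_a with a known rigid transform (R, t), then
    voxel-downsample the union so overlapping points collapse into one."""
    aligned_b = cloud_b @ R.T + t                    # rigid transform: x' = R x + t
    merged = np.vstack([cloud_a, aligned_b])
    # Voxel-grid downsampling: average all points that fall into the same voxel.
    keys = np.floor(merged / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    centroids = np.zeros((counts.size, 3))
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=merged[:, dim]) / counts
    return centroids

rng = np.random.default_rng(0)
scan1 = rng.uniform(0.0, 1.0, (500, 3))              # first pass over a surface
scan2 = scan1 + rng.normal(0.0, 0.002, scan1.shape)  # second, slightly noisy pass
R, t = np.eye(3), np.zeros(3)                        # assume scans already aligned
merged = merge_scans(scan1, scan2, R, t)
print(merged.shape[0], "points left after merging 1000")
```

The point of the voxel step is that two passes over the same surface don’t just double the point count; redundant samples in the same cell get averaged, which is one cheap way to knock down noise.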
Still, the LiDAR sensor built into current Apple products is not a very good way of scanning objects, nor does it replace professional tools. It does it fast and well enough to support AR applications and to improve measuring, facial recognition and similar things which benefit from having a rough idea of the 3D context. But it’s questionable whether the outcome is good enough. A lot of its usefulness will depend on the quality of the software interpreting the scanned data. So there is potential to it…
I have seen a video of a prototype app somewhere that was refining the relatively coarse scan results of the iPad’s LiDAR sensor by repeatedly scanning over the same surface.
So while I wouldn’t expect professional scan results from the iPad’s/iPhone’s hardware, there might still be some room for improvement resolution-wise…
Definitely! And even at the default resolution it may already be sufficient to get an impression of the overall shape. Still, compared to scanned data I used to work with, the quality is not even close. That is especially due to the bad edges. But of course algorithms can help predict them better. That’s why I said it really is up to the software. I believe what is there probably doesn’t reflect the full potential yet.
In that case, of course. I was thinking more about the general reverse-engineering use case, e.g. something is broken and you need to recreate parts of it. The most important things are edges in that case. Yet, what you get for that money is really good. Professional scanners are extremely expensive.
Does anyone seriously think this is worth the time of any developer? Let alone the McNeel ones, who have a huge backlog of things to fix, improve, and create that could really help our workflows.
What could be done with this thing? I love 3D scanners, and I think this is not one. This is basically a low-res depth sensor for a camera.
Yes, I think so. In theory you can drastically improve details by overlaying multiple scans, similar to the process of 3D scanning from photos (photogrammetry), which works surprisingly well. Still, edges will be the issue, and of course the app development in general. But what is a LiDAR sensor? It’s a grid of lasers! The more points you create, the higher the detail. It’s rather a matter of combination and edge prediction.
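The noise part of that claim is easy to demonstrate numerically. A toy sketch (not an actual LiDAR pipeline; surface, noise level, and pass count are all made up): if each pass measures the same surface with independent sensor noise, averaging N aligned passes shrinks the error by roughly a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(42)
true_depth = np.full(10_000, 2.0)   # ground truth: flat surface 2 m away
sigma = 0.01                        # assumed 1 cm per-pass sensor noise

def scan_pass():
    """One simulated depth pass: ground truth plus independent Gaussian noise."""
    return true_depth + rng.normal(0.0, sigma, true_depth.shape)

single = scan_pass()
averaged = np.mean([scan_pass() for _ in range(16)], axis=0)  # 16 aligned passes

rms = lambda x: float(np.sqrt(np.mean((x - true_depth) ** 2)))
print(f"single-pass RMS error: {rms(single):.4f}")
print(f"16-pass RMS error:     {rms(averaged):.4f}")  # roughly sigma / sqrt(16)
```

Of course this only attacks random noise on surfaces the sensor already resolves; it does nothing for the edge-prediction problem, which is where the hard software work would be.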
Making those inputs useful is not a side job, if doable at all.
Have you tried it, Tom? I have tried both the iPad Pro and the iPhone 12 Pro, with several apps; they are equally limited and of questionable practical use IMO. But I also use real scanners, so maybe my expectations are skewed by having professional standards.
I think you are right that this, combined with photogrammetry, could be potentially useful. However, it seems there’s no money to be made in that space, so no one in their right mind would put a team of expert scientists/developers on such an alchemist project. For what? To sell a $9.99 app? This is maybe why the entire iPad ‘Pro apps’ ecosystem pretty much died, or is not financially viable. It’s a really tricky industry.
I’ve seen some neat results for interior room scanning where they take the input and turn it into a cleaned-up, straight-walls room, which is much better than all the real estate ‘3D tours’ that show you how the house you are considering buying would look after a fire. But let’s just say I’d only trust that ‘cleaned up’ scan as much as I trust a drunk guy and a stoned one being sent to a job site to take measurements: I’d use that input to know if a couch would fit, not to build custom cabinetry without doing proper measurements myself.
Well, let’s put it like this. I was really curious once they announced that. The problem is indeed that any useful 3D scanner is out of the hobby price range (below $2,000). So I was really excited. Then I saw the first presentations and knew this was not useful at all as it is.
In parallel I bought some Intel RealSense cameras. That was disappointing as well, but one thing it made clear to me is that it’s not a hardware limitation in the first place; it’s rather a software limitation. Some research applications were already able to do some of the things I mentioned quite well. And yes, you can already use that for creating a very rough shape reference.
Of course you have to measure things manually and refit the scan to them. But it’s already a huge gain for something which has to fit something more complex, more “free form”.
I mean, it’s not that a LiDAR system is inaccurate, it’s just the lack of density.
Furthermore, it’s not about getting a perfect scan, it’s about making it good enough for hobby usage. I’m sure there is potential to it. I mean, I would pay $200 for such software. It doesn’t have to be an app, and it’s definitely not McNeel’s domain. I’m just saying there is potential to it. And thinking of Flappy Bird or other nonsense stuff, the success of an app is not predictable at all, and it’s definitely not connected to its usefulness.
What about the stuff from Occipital? I remember they launched their original sensor for the iPad before the iPad had one built in. They have a Mark II now for around 500 dollars: https://structure.io/structure-sensor
Plus they also make an app for the iPhone 12, but yeah, I think the quality will be far from what you might want. It won’t come near any professional 3D scanner.
I think the threshold for usable LiDAR 3D scanning that goes beyond the already useful AR applications would be scan quality close to that of the existing phone photogrammetry apps, but with the ease of use and convenience of realtime scanning.
It looks like they are using an Intel RealSense D415 clone or similar. But you can buy those for €100 or less. That’s essentially what I did. But as I said, the outcome is not good at all.