Scanning objects with iRhino3D WIP

iRhino3D WIP (8.1.23259 (11.2.1), September 18), available via TestFlight, uses iOS 17’s Object Capture API to create fully textured 3D models of objects using photogrammetry and LiDAR input.

This is an unfinished feature with some known issues and many unknown ones. For serious work, use the App Store version of iRhino3D.

Requirements:

  • iPhone or iPad with a LiDAR Scanner and an A14 Bionic chip or later
  • iOS or iPadOS 17 or later
9 Likes

This is awesome. :sunglasses: Downloading ASAP!

I’m still not seeing this option:

:face_with_monocle: :thinking: :thought_balloon:

Maybe I downloaded an old one from Dec ’22. brb. Welp, same version:

I’d like to be able to scan small objects quickly and get a length and width dimension.

No option for ‘objects’:


:neutral_face:

Maybe I need iOS 17?

iOS 17 is required for the new object scanning feature

1 Like

Downloading ASAP! :open_mouth: Wow, iOS 17 is taking forever to download :rofl: :sweat_smile: and I’m on fiber optic too.

Yeah, that was my experience too. Apple is kinda slow-rolling the release of iOS 17.

1 Like

Can’t wait to try this! Need to scan the interior of my truck to build a custom drawer unit in the back (in Rhino of course).

2 Likes

I’d be curious to see if it works for an interior space. Please report back!

3 Likes

I have some ideas about this type of technology, refined over very extensive experience.

I’ll try to boil it down very simply here:

1.) The best 3D capture is always one taken from a very stable position, regardless of how motion-friendly the marketing implies it is (i.e. “hand-held scanners”). The common ‘dream’ of a scanner that is constantly moving around, capturing ‘everything’ accurately and patching it all together flawlessly, is a false hope. Stability is key, multiple scans are key, and aligning them together is key.

2.) The best 3D meshes of captured objects are ones that can be aligned and merged together, yes, from multiple scans, regardless of popular belief about the whole “handheld scanner thing”. I’ve used the best of the best and have decades of experience, with terabytes of data.

3.) The ‘capture focal point’ is the area of utmost concern. Every capture of data should be focused on that focal point, and every scan should maximize the use of it. Any erroneous data lying outside of it should be eliminated before the alignment and merging stages; this is the ‘trimming stage’. Yes, there is an ideal workflow, one I’ve refined extensively through my experience with stationary scanners, handheld scanners, and their software.

4.) Scan alignments are the key to the best 3D capture workflow. If you can figure out how to align scans easily, quickly, and accurately, you have the best scenario for any 3D capture workflow.

5.) If you can figure out how to align ‘meshes’ or ‘point clouds’, period, then you can solve the problem of aligning data from any scanner, given the appropriate file format(s) of course.

The best and most accurate scenario I ever worked with used NextEngine hardware with the RapidWorks workflow: I would capture hundreds of scans of a particular object, then trim, align, and merge them very meticulously over several weeks. Long story short, I would obtain beautifully accurate meshes to within ±0.0050" in Z and ±0.0030" in XY.

As for ‘handheld laser scanners’, I’ve worked a fair amount with Creaform and VXelements, and found them deficient in tools for aligning multiple scans. Yes, they have some semblance of alignment tools, but because of their heavy focus on the claim that their technology handles the alignment, trimming, and merging automatically, they fall short. Stability is key, and expecting an object to be scanned 100% in all directions in one pass is a fallacy, to say the least.

Trimming, aligning, and then merging multiple scans is key.
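
To make that workflow concrete, here’s a minimal sketch of the trim → align → merge loop using the open-source Open3D Python library (not one of the tools I used back then; the file names, outlier settings, and distance threshold are placeholders you’d tune for your scanner):

```python
import numpy as np
import open3d as o3d

# Load two partial scans (placeholder file names).
source = o3d.io.read_point_cloud("scan_a.ply")
target = o3d.io.read_point_cloud("scan_b.ply")

# "Trimming": drop statistical outliers before any alignment.
source, _ = source.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
target, _ = target.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Aligning: point-to-plane ICP refinement from a rough initial guess.
source.estimate_normals()
target.estimate_normals()
threshold = 0.005        # max correspondence distance, in model units
init = np.eye(4)         # identity; replace with a coarse manual alignment
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

# Merging: apply the recovered transform and combine the clouds.
source.transform(result.transformation)
o3d.io.write_point_cloud("merged.ply", source + target)
```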

1 Like

Does the LiDAR sensor of the new iPhone 15 get any updates that make it worth upgrading from a 12 Pro?

These are great points. We are simply using Apple’s Object Capture API and have little control over that part of the experience. What we did was make it possible to bring the result into iRhino, where you can save it as 3DM and make further adjustments in Rhino.

> Trimming, aligning, and then merging multiple scans is key.

I actually tried this the other day. Rhino 8 has great features like ShrinkWrap that can really improve this workflow: manually align the partial scans in Rhino, then run ShrinkWrap to combine them into a single mesh.
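
If anyone wants to script that last step, here’s a rough sketch for Rhino 8’s Python editor (the selection filter and command call are standard rhinoscriptsyntax; ShrinkWrap’s own dialog still controls the offset and detail settings):

```python
# Run inside Rhino 8 (ScriptEditor / RhinoPython).
import rhinoscriptsyntax as rs

# Pick the partial scans after roughly aligning them by hand
# (gumball, Orient3Pt, etc.).
mesh_ids = rs.GetObjects("Select aligned partial scans", rs.filter.mesh)
if mesh_ids:
    rs.SelectObjects(mesh_ids)
    # Launch ShrinkWrap on the selection; its dialog settings control
    # how aggressively the scans fuse into a single watertight mesh.
    rs.Command("_ShrinkWrap")
```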

1 Like

There are rumors, but I haven’t verified whether it improves scan quality in this case.

Yes, but I still dream about Rhino having ‘automatic’ alignment features.

What exactly do you mean by automatic alignment? Are you thinking of scanning part of the floor plan, doing the rest later, and then stitching them together? What’s stopping you from doing it in one session?

Or do you mean scanning multiple floors and stacking one on top of the other?

Well, it mostly has to do with everything I witnessed and experienced in my nearly two decades as an active user of reverse engineering tools and workflows.

Basically it boils down to the fact that scanning technology doesn’t do a very good job of seamlessly stitching together 4-dimensional space-time.

Therefore, 3D scans are of higher quality and more accurate when they’re thought of as smaller constituents of the overall picture.

Meaning, rather than trying to capture an object via one single scan, it’s actually better to capture it in multiple pieces, and then align them later during post-processing.

So the ‘alignment’, per se, or rather ‘automatic alignment’, should be done later in the workflow, during post-processing.

The idea that ‘alignment’ / ‘automatic alignment’ should occur during the scan process is actually what causes a whole assortment of compounding errors over time.

It’s my strong belief that iRhino would be better off if it had the ability to incorporate ‘multiple scans’ that can be aligned later during post-processing.

This would enable more accurate and higher-quality outcomes.

During post-processing, of course, scans need to be cleaned up and noise reduced prior to aligning and merging the multiple scans.

1 Like

Have you worked with an Artec Leo and Artec Studio?

I don’t have experience with other scan post-processing software, but Artec Studio is great for aligning multiple scans.

FWIW I think handheld wireless 3D scanning is a great and especially mobile solution for capturing objects quickly, albeit without great resolution.

Of course it’s not as accurate as scans done with the most advanced stationary devices, but hey, isn’t everything a trade-off in some way?

1 Like

Sorry, in my previous reply I thought you were talking about the “Scan a room” feature; I’m tracking now.

For “Object Scanning” we’re using Apple’s Object Capture API. This API is based on photogrammetry and produces post-processed meshes. I don’t believe we get access to the point cloud data, but we do get the source photographs. So hypothetically you could take photographs from multiple sessions and run them through a photogrammetry program to reconstruct a complete scan.
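
Purely hypothetical, but pooling the photos from several sessions into one folder and handing them to an offline photogrammetry tool could look something like this (sketched with the open-source COLMAP CLI; the folder names are placeholders):

```python
# Hypothetical offline reconstruction from photos captured across
# multiple iRhino sessions; assumes the COLMAP CLI is on your PATH.
import subprocess

subprocess.run([
    "colmap", "automatic_reconstructor",
    "--workspace_path", "reconstruction",    # output folder (placeholder)
    "--image_path", "photos_all_sessions",   # photos pooled from every session
], check=True)
```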

Having said that, what’s stopping you from doing multiple partial scans with the existing feature, saving them as 3DM, and post-processing them in Rhino later?

1 Like

I’m not familiar with those. I’ll try to look into them; MeshLab is next on my list of things to explore at the moment, but I’ll consider the ones you mentioned as well.

My objective here is to get others to realize how much more accurate it can all be, given the understanding I’ve gained over my years of experience in the field.

I believe ‘poor resolution’ is a needless casualty of an improper understanding of workflow.

That’s why I’ve broken it down above and listed the workflow as I did.

I believe something like iRhino, or any LiDAR-style application, has the capability to be very accurate if it’s implemented accordingly.

Obviously, yes, there are more accurate technologies than LiDAR, but no matter the type, if the workflow is wrong, the resolution will be exceedingly bad.

Maybe someday the algorithms will become better and actually do a good job, but the biggest problem I see is the fallacy that these algorithms can automatically align the point clouds in realtime, during the scan, while people move the scanner around all over the place and expect the software to know what to do. The more movement there is, the bigger the exponential mess of data.

They believe that you should only have to capture one scan of a 3D object, all in one shot. This belief is a fallacy, in my strong opinion.

I believe these algorithms are very, very under-developed, still in the stone ages. It will probably be 25 years until they’re actually capable of doing this accurately, at which point we’ll have reached the singularity and 3D scanners will probably be obsolete or something.

1 Like

That’s probably a viable option for sure, but I need Rhino to have ‘auto alignment’ algorithms for alignment between scans :slightly_smiling_face:

I do in fact have terabytes of data that I’d really like to use in Rhino, but I need some algos to align the data :sweat_smile:

I could do it manually, but that’s not as fun haha. So I’ll be looking into MeshLab and the others mentioned when I can.

But iRhino could be super powerful too, if it had some of the workflow I described. Users just need some alignment algos, is all, no big deal :upside_down_face:
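
To spell out what I mean by ‘automatic’: a feature-based coarse registration that needs no manual initial guess, which you then refine with ICP. A rough sketch with the open-source Open3D library (purely illustrative; the file names and voxel size are placeholders):

```python
import open3d as o3d

def preprocess(pcd, voxel):
    # Downsample, estimate normals, and compute FPFH shape descriptors.
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

voxel = 0.005                                   # tune to scan resolution
src, src_f = preprocess(o3d.io.read_point_cloud("scan_a.ply"), voxel)
tgt, tgt_f = preprocess(o3d.io.read_point_cloud("scan_b.ply"), voxel)

# RANSAC over mutual FPFH feature matches: fully automatic coarse alignment.
result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_f, tgt_f, True, voxel * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
print(result.transformation)                    # feed this to ICP as the init
```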

Yes that would be nice!

1 Like

I’ve been testing at work and it’s really looking good so far. Is there a way to output just the point cloud data, without the meshing or shrinkwrapping?

1 Like