Scanning objects with iRhino3D WIP

iRhino3D WIP (8.1.23259 (11.2.1) - September 18), available via TestFlight, uses iOS 17’s Object Capture API to create fully textured 3D models of objects using photogrammetry and LiDAR input.

This is an unfinished feature with some known issues and many unknown ones. For serious work, use the App Store version of iRhino3D.


Requirements:

  • iPhone or iPad with a LiDAR Scanner and an A14 Bionic chip or later
  • iOS or iPadOS 17 or later

This is awesome. :sunglasses: downloading asap!

I’m still not seeing this option:

:face_with_monocle: :thinking: :thought_balloon:

Maybe I downloaded an old one from Dec '22. brb. Welp, same version:

I’d like to be able to scan small objects quickly and get a length and width dimension.
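For the quick length/width idea: once a scan is exported as points or a mesh, the dimensions can be read off an axis-aligned bounding box. A minimal sketch in plain NumPy (not iRhino code; the sample points are made up, and it assumes the object sits roughly axis-aligned):

```python
import numpy as np

# Hypothetical sample of scanned surface points (units arbitrary)
points = np.array([
    [0.0, 0.0, 0.0],
    [4.0, 0.0, 0.5],
    [4.0, 2.5, 1.0],
    [0.0, 2.5, 0.2],
])

# Axis-aligned bounding box: extents along X, Y, Z
mins = points.min(axis=0)
maxs = points.max(axis=0)
length, width, height = maxs - mins
print(length, width, height)  # 4.0 2.5 1.0
```

If the object isn’t axis-aligned, you’d want a minimal (oriented) bounding box instead, which Rhino itself can compute with the BoundingBox command.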

No option for ‘objects’:


maybe need iOS 17

iOS 17 is required for the new object scanning feature


downloading asap! :open_mouth: wow taking forever to download iOS 17 :rofl: :sweat_smile: using fiber optic too.

Yeah, that was my experience too. Apple is kind of slow-rolling the release of iOS 17.


Can’t wait to try this! Need to scan the interior of my truck to build a custom drawer unit in the back (in Rhino of course).


I’d be curious to see if it works for an interior space. Please report back.


I have some ideas about this type of technology, refined through very extensive experience.

I’ll try to boil it down very simply here:

1.) The best 3D capture is always one taken from a very stable position, regardless of how motion-friendly the hardware is made to seem (i.e. “hand-held scanners”). The common ‘dream’ of a scanner constantly moving around an object, capturing ‘everything’ accurately and patching it all together flawlessly, is a false hope. Stability is key, multiple scans are key, and aligning them together is key.

2.) The best 3D meshes of captured objects are ones that can be aligned and merged together – yes, from multiple scans, regardless of popular belief about the whole “handheld scanner thing”. I’ve used the best of the best and have decades of experience, with terabytes of data.

3.) The ‘capture-focal-point’ is the area of utmost concern. Every capture of data should be focused on the focal point, and every scan should maximize the use of that point. Any erroneous data lying outside it should be eliminated before the alignment and merging stages – this is the ‘trimming stage’. Yes, there is a perfect workflow, one I’ve refined extensively through my experience with stationary scanners, handheld scanners, and software.

4.) Scan alignments are the key to the best 3D capture workflow. If you can figure out how to align scans easily, quickly, and accurately, you have the best scenario for any 3D capture workflow.

5.) If you can figure out how to align ‘meshes’ or ‘point clouds’, period, then you can solve the problem of aligning data from any scanner – given the appropriate file format(s), of course.

The best and most accurate scenario I ever worked with combined NextEngine technology with the RapidWorks workflow: I would capture hundreds of scans of a particular object, then trim, align, and merge them very meticulously over several weeks. Long story short, I would obtain beautifully accurate meshes to within +/-0.0050" in Z and +/-0.0030" in XY.

In the other scenario, handheld laser scanners, I’ve worked a fair amount with Creaform and VXelements, and found they lacked good tools for aligning multiple scans. Yes, they have some semblance of alignment tools, but because they lean so heavily on the claim that their technology performs the ‘alignment’, ‘trimming’, and ‘merging’ automatically, they fall short. The simple fact is that stability is key, and expecting an object to be scanned 100% in all directions in one pass is a fallacy – to say the least.

Trimming, Aligning, and then Merging, of multiple scans – is key.

Does the LiDAR sensor of the new iPhone 15 get any updates that make it worth upgrading from the 12 Pro?

These are great points. We are simply using Apple’s Object Capture API and have little control over that part of the experience. What we did was make it possible to bring the result into iRhino, where you can save it as a 3DM and make further adjustments in Rhino.

Trimming, Aligning, and then Merging, of multiple scans – is key.

I actually tried this the other day. Rhino 8 has great features like ShrinkWrap that can really improve this workflow: manually align the partial scans in Rhino, then run ShrinkWrap to combine them into a single mesh.

There are rumors, but I haven’t verified whether it improves the quality of scans in this case.