I tried scanning my kitchen, and the app is pretty freakin’ cool!
It mostly understands what it’s seeing; I say that because it seems to construct a “box” into which, I think, everything is supposed to fit and “make sense.”
However, our kitchen has some unusual geometry - an extended soffit, for example - and iRhino isn’t making sense of it.
My strong preference would be for the app to be content-agnostic, so that I could scan a “room” with the same logic as an object: just “reporting on what it’s seeing” rather than constructing a room that everything else is supposed to fit into.
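
For what it’s worth, here’s my guess at what’s going on under the hood - purely an assumption on my part, I have no idea what iRhino actually uses. The room-fitting behavior looks a lot like Apple’s RoomPlan framework, which returns a parametric room of walls and surfaces, whereas the “just report what you see” behavior I’m after is closer to ARKit’s raw scene-reconstruction mesh. A minimal sketch of the latter, assuming a standard ARKit session on a LiDAR device:

```swift
import ARKit

// Sketch (my assumption, not iRhino's code): capture the raw LiDAR
// scene-reconstruction mesh instead of a fitted, parametric room model.
final class RawMeshScanner: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Ask ARKit for the raw reconstructed mesh (LiDAR devices only).
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        session.delegate = self
        session.run(config)
    }

    // Each ARMeshAnchor is just triangles in world space - no assumption
    // that they belong to a wall, a floor, or a "room box", so an extended
    // soffit comes through as-is.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let meshAnchor as ARMeshAnchor in anchors {
            let geometry = meshAnchor.geometry
            print("mesh chunk: \(geometry.vertices.count) vertices, \(geometry.faces.count) faces")
        }
    }
}
```

That raw mesh is basically what I mean by “reporting on what it’s seeing” - whatever the actual implementation is, an option like that would cover our case.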