Hi @mkarimi
Thank you for the options to scan rooms and objects directly in iRhino! These are really great tools which connect Rhino to our environments. The face scan is amazing too.
Now I wish to have 3D Pose Detection in iRhino. This would be really cool for ergonomic design tasks and many other use cases.
It would be great to have the results (named joints and points) on a single layer, grouped accordingly. The camera as a named view and the captured picture would also be nice to have.
I’m working on a Hops component to generate solid humanoids in “real time”. It would help if you could model the pose estimation including the transformations (for example, as joints & bones in blocks) - then we should get the rotations…
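To make the idea concrete, here is a rough sketch of what I mean, assuming ARKit's body tracking (not anything iRhino exposes today): an ARBodyAnchor already delivers a named skeleton whose joint transforms carry both the rotation and the position that a joints-and-bones block structure would need.

```swift
import ARKit

// Hedged sketch: read joint names and transforms from ARKit body tracking.
// This is an ARSessionDelegate callback and assumes the session runs an
// ARBodyTrackingConfiguration.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let bodyAnchor as ARBodyAnchor in anchors {
        let skeleton = bodyAnchor.skeleton
        for (index, name) in skeleton.definition.jointNames.enumerated() {
            // 4x4 transform of the joint relative to the skeleton root:
            // the rotation sits in the upper 3x3, the position in the
            // last column.
            let transform = skeleton.jointModelTransforms[index]
            print(name, transform.columns.3)
        }
    }
}
```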
In the past, people have rigged up the Kinect controller and Leap Motion controllers to Rhino and Grasshopper to work as design controllers. This was done mostly through Firefly.
Thanks for the links. I know these other libraries. The point is to use the LiDAR sensor of the iPad, which should give even better 3D results. That, in combination with iRhino, has the potential to define a standard for solid humanoids in USD format.
Yes, it is basic, but that’s fine for now, and I can adopt Apple’s computer vision points in my template. Hand gestures are not detected, but the points help me to get the wrist rotations.
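For the wrist part, this is roughly how I get a rotation out of the raw points (my own approach, assuming the 2D hand points from Vision's VNDetectHumanHandPoseRequest): the wrist-to-middle-finger vector gives an in-plane angle even without any gesture classification.

```swift
import Foundation
import Vision

// Rough sketch: estimate an in-plane wrist rotation from two hand points
// of a VNHumanHandPoseObservation (normalized image coordinates).
func wristAngle(from observation: VNHumanHandPoseObservation) throws -> Double {
    let wrist = try observation.recognizedPoint(.wrist)
    let middle = try observation.recognizedPoint(.middleMCP)
    let dx = Double(middle.location.x - wrist.location.x)
    let dy = Double(middle.location.y - wrist.location.y)
    return atan2(dy, dx) // radians, measured from the image x-axis
}
```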
Might implementing planeDetection and orientation be the next step for iRhino’s AR mode?
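On the ARKit side this would only be a couple of lines; a minimal sketch of the configuration I mean (not iRhino code):

```swift
import ARKit

// Minimal sketch: enable plane detection and gravity/compass alignment.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
configuration.worldAlignment = .gravityAndHeading // orient to gravity and true north

let session = ARSession()
session.run(configuration)
// Detected planes then arrive as ARPlaneAnchor instances via
// session(_:didAdd:) on the ARSessionDelegate.
```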
I can see a mask for frameSemantics, but I don’t think that’s what I was seeing in my tests? I’ll have to experiment with that a bit more when I have some time…
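For reference, the mask I mean is the one frameSemantics switches on; a sketch of how I would enable it for a test, with the device-capability checks Apple recommends:

```swift
import ARKit

// Sketch: opt into the person-segmentation mask and 2D body detection,
// guarded by capability checks since support varies by device.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
if ARWorldTrackingConfiguration.supportsFrameSemantics(.bodyDetection) {
    configuration.frameSemantics.insert(.bodyDetection)
}
// With person segmentation active, each ARFrame exposes
// frame.segmentationBuffer (the mask) and frame.estimatedDepthData.
```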
I did some more testing and found that the AR pose detection is not precise enough for ergonomic studies. It may work for People Occlusion in very simple AR scenarios though.
What I need is a separate input, like the Room or Object scan, that uses the camera and LiDAR and is based on DetectHumanBodyPose3DRequest.
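For completeness, this is how that request looks when run on a single captured image (a sketch using the VN-prefixed class from Vision on iOS 17+; the image source here is a placeholder):

```swift
import CoreGraphics
import Vision

// Sketch: run Vision's 3D body-pose request on a still image and read
// back the per-joint transforms.
func detectBodyPose3D(in image: CGImage) throws {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(cgImage: image)
    try handler.perform([request])
    guard let observation = request.results?.first else { return }
    // Each joint carries a 4x4 model-space transform (in meters, relative
    // to the body root), which maps naturally to joints-and-bones blocks.
    for (name, point) in try observation.recognizedPoints(.all) {
        print(name, point.position)
    }
}
```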
I don’t think there’s fundamentally any difference in the quality of output you’d get using this API. I’m pretty sure they use the same underlying technology.