Scan using camera and LiDAR - Human Body Pose Detection in 3D

Hi @mkarimi
Thank you for the options to scan rooms and objects directly in iRhino! These are really great tools that connect Rhino to our environments. The face scan is amazing, too.

Now I wish to have 3D Pose Detection in iRhino. This would be really cool for ergonomic design tasks and many other use cases :slightly_smiling_face:

It would be great to have the results (named joints and points) on a single layer, grouped accordingly. The camera as a named view, along with the picture, would also be nice to have.

Hope it is not too much work.
Thanks, Jess

I had looked into it before but didn’t prioritize it since no one had asked about it. I logged a feature request and hope to get to it at some point.

RV-1403 Pose detection in iRhino

I’m working on a Hops component to generate solid humanoids in “real time”. It would help if you could model the pose estimation including the transformations (for example, as joints & bones in blocks); then we should get the rotations…
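To illustrate what "getting the rotations" from joints & bones could look like, here is a minimal sketch in plain Swift (no external dependencies; the joint positions and rest axis are made-up values, and real joint identifiers would depend on whatever detection API supplies them). It derives an axis-angle rotation that takes a bone's rest-pose axis onto the detected parent→child direction:

```swift
import Foundation

// Minimal 3D vector helpers.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
    func cross(_ o: Vec3) -> Vec3 {
        Vec3(x: y * o.z - z * o.y, y: z * o.x - x * o.z, z: x * o.y - y * o.x)
    }
    var length: Double { self.dot(self).squareRoot() }
    var normalized: Vec3 { let l = length; return Vec3(x: x / l, y: y / l, z: z / l) }
}

// Axis-angle rotation mapping the rest-pose bone axis onto the detected bone
// direction (child joint minus parent joint).
func boneRotation(restAxis: Vec3, parent: Vec3, child: Vec3) -> (axis: Vec3, angle: Double) {
    let dir = (child - parent).normalized
    let rest = restAxis.normalized
    let axis = rest.cross(dir)
    let angle = acos(max(-1, min(1, rest.dot(dir))))
    // Fall back to an arbitrary axis when the vectors are (anti)parallel.
    return (axis: axis.length > 1e-9 ? axis.normalized : Vec3(x: 1, y: 0, z: 0), angle: angle)
}

// Example: a forearm that points along +Y at rest, detected along +X.
let rot = boneRotation(restAxis: Vec3(x: 0, y: 1, z: 0),
                       parent: Vec3(x: 0, y: 0, z: 0),
                       child: Vec3(x: 0.25, y: 0, z: 0))
print(rot.angle) // ≈ π/2
```

With bones stored this way (parent joint, child joint, rest axis), each block's transform could carry the rotation directly, which is what a rigging workflow downstream would need.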

Cheers, Jess

In the past, people have rigged up the Kinect and Leap Motion controllers to Rhino and Grasshopper to work as design controllers. This was done mostly through Firefly.

Of course there is always Gravity Sketch also: https://gravitysketch.com/

@Mahdiyar showed his Euglena plugin a few months ago. It might be worth trying…

@Jess I won’t be able to implement this anytime soon, so it’s probably worth taking a look at the alternatives Scott and Martin posted.

I used Firefly with a Kinect many years ago but don’t have the device anymore.

How would Gravity Sketch be useful to track / detect a body pose?

Thanks for the links. I know these other libraries. The point is to use the LiDAR sensor on the iPad, which should give even better 3D results. That, in combination with iRhino, has the potential to define a standard for solid humanoids in USD format.

Hi @mkarimi

Thank you for the fast implementation of the 3D Pose Detection! It works pretty well, and I can already work with it :slightly_smiling_face:

Thanks, Jess

You beat me to it; I haven’t even announced it yet.

Glad it works for you, it’s pretty basic now but we can improve it.

@mkarimi

Yes, it is basic, but that’s fine for now, and I can adopt Apple’s computer-vision points in my template. Hand gestures are not detected, but the points help me get the wrist rotations.
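Deriving a wrist rotation from points alone could be sketched like this in plain Swift (the elbow/wrist/hand coordinates below are invented for illustration; a real workflow would read them from the detected joints). It computes the bend angle at the wrist from three points:

```swift
import Foundation

// A 3D point as a labeled tuple; values here are made up for illustration.
typealias P3 = (x: Double, y: Double, z: Double)

// Angle at the wrist, i.e. the angle between the wrist→elbow and wrist→hand
// vectors, in radians.
func angleAtWrist(elbow: P3, wrist: P3, hand: P3) -> Double {
    let u = (x: elbow.x - wrist.x, y: elbow.y - wrist.y, z: elbow.z - wrist.z)
    let v = (x: hand.x - wrist.x, y: hand.y - wrist.y, z: hand.z - wrist.z)
    let dot = u.x * v.x + u.y * v.y + u.z * v.z
    let lu = (u.x * u.x + u.y * u.y + u.z * u.z).squareRoot()
    let lv = (v.x * v.x + v.y * v.y + v.z * v.z).squareRoot()
    return acos(max(-1, min(1, dot / (lu * lv))))
}

// A forearm along -X with the hand bent 90° upward relative to it:
let a = angleAtWrist(elbow: (x: -0.3, y: 0, z: 0),
                     wrist: (x: 0, y: 0, z: 0),
                     hand:  (x: 0, y: 0.1, z: 0))
print(a * 180 / .pi) // ≈ 90 degrees
```

A single angle loses the twist component, so for full wrist orientation a second reference point on the hand would be needed, but as a first pass this is enough for ergonomic reach studies.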

Could implementing planeDetection and orientation be the next step for iRhino’s AR mode?

I can see a mask for frameSemantics, but I don’t think that’s from my tests. I’ll have to experiment with that a bit more when I have some time…

Good job!

We already have plane detection; use the “Surface” AR mode.

Yes, and I meant using it in combination with body detection to orient the character on that surface in AR mode.
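The geometric part of that combination is small: given a detected plane normal, build an orthonormal frame so the character's up-axis aligns with it. A minimal sketch in plain Swift (the normal value is made up; in AR it would come from plane detection):

```swift
import Foundation

// A bare 3D vector type.
struct V { var x, y, z: Double }

// Build a right-handed orthonormal frame whose z-axis is the plane normal.
func frameOnPlane(normal n0: V) -> (xAxis: V, yAxis: V, zAxis: V) {
    // Normalize the normal to get the up-axis (z).
    let len = (n0.x * n0.x + n0.y * n0.y + n0.z * n0.z).squareRoot()
    let z = V(x: n0.x / len, y: n0.y / len, z: n0.z / len)
    // Pick a reference direction not parallel to z.
    let ref = abs(z.z) < 0.9 ? V(x: 0, y: 0, z: 1) : V(x: 1, y: 0, z: 0)
    // x = ref × z, normalized, lies in the plane.
    var x = V(x: ref.y * z.z - ref.z * z.y,
              y: ref.z * z.x - ref.x * z.z,
              z: ref.x * z.y - ref.y * z.x)
    let xl = (x.x * x.x + x.y * x.y + x.z * x.z).squareRoot()
    x = V(x: x.x / xl, y: x.y / xl, z: x.z / xl)
    // y = z × x completes the frame.
    let y = V(x: z.y * x.z - z.z * x.y,
              y: z.z * x.x - z.x * x.z,
              z: z.x * x.y - z.y * x.x)
    return (x, y, z)
}

// A slightly tilted floor:
let f = frameOnPlane(normal: V(x: 0, y: 0.1, z: 1))
```

Placing the character would then just mean mapping its local frame onto this one at the detected hit point, with the detected body pose applied on top.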

Hi @mkarimi

I did some more testing and found that the AR pose detection is not precise enough for ergonomic studies. It may work for People Occlusion in very simple AR scenarios though.

What I need is a separate input, like the Room or Object scan using camera and LiDAR, based on DetectHumanBodyPose3DRequest.

Thanks,
Jess

Hi Jess,

Where can I find the 3D Pose Detection? I would like to see if I can use it to track some foot movements. Any advice would be appreciated!

Thank you!

I don’t think there’s any fundamental difference in the quality of output you’d get using this API. I’m pretty sure they use the same underlying technology.

I don’t have a Mac at the moment to test this, but I think these are different APIs. The resulting point clouds are also very different.

Here you’ll find an Apple sample project with the source code to download: Detecting human body poses in 3D with Vision | Apple Developer Documentation