Motion sculpture from the human body

Hello,
Does anyone know how to create a surface from the motion of the human body? Surfing the web, I couldn’t really find anything that would help me, besides a mention that it might be doable with the help of Blender motion tools, but without any specific examples or anything more precise than “doable with Blender”. Plus, I can’t really use Blender at the moment. Is there any way to capture a motion without using something like a Microsoft Kinect?

Here are some examples of what I’m looking to achieve


http://www.studentshow.com/gallery/12818499/Syntax-Error
And I know there is a similar post (Motion capture), but it didn’t actually help me get on with the definition.


Hello,
I find this interesting and want to apply the principle to an ergonomic body in 3D:

Once you have some poses inside of Rhino, it should not be too hard to make sculptures from the bones, mesh outlines or silhouettes:


Jess


It does look like an interesting method indeed. I’m trying to wrap my head around how to make the Rhino connection for now; well, that is the very reason for the post. I am looking to use the ergonomics of particular people, as per the idea I had.

Maybe a service like https://www.doob.eu/ will help?
Once you have the geometry, it should be relatively easy to make a sculpture from it with Rhino.


The thing is, I’m not looking to make a sculpture. I’m looking to have a space sculpted by the human body, i.e. simulate the movement of a person through the apartment and have the resulting surface = a living unit. The main reason I thought about motion recording was that it was the first approach I figured could result in what I want to make.

I haven’t used JSON with Rhino before. (My knowledge of Python in Rhino is a work in progress, but I don’t really have a choice.) Is there a page you could recommend on how to use this library? And on the process of creating the silhouette? Is there any way to add an external object to the body, like a cane for example?

Well, this library is designed for 2D applications in the first place. I have a calibrated setup with two cameras and will try to calculate the 3D positions of the 16 key points by photogrammetry. Since my model has inverse kinematics and well-defined constraints, I expect realistic-looking animations. The accuracy of the motion capture will be low, but for my needs this does not matter. It would be cool to create the pre-trained model automatically based on my 3D model, but the required sources are not (yet) available, I think… so that’s something for the future :wink:
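For illustration, a minimal Python sketch of that triangulation step, assuming the two 3x4 projection matrices from the camera calibration are known and the 16 key points arrive as matching pixel coordinates per camera; OpenCV’s `cv2.triangulatePoints` does the heavy lifting, and the function and variable names here are just placeholders:

```python
# Sketch: recover 3D key points from two calibrated camera views.
# P1, P2: 3x4 projection matrices from calibration (assumed known).
# pts_cam1, pts_cam2: (16, 2) arrays of matching pixel coordinates.
import numpy as np
import cv2

def triangulate_keypoints(P1, P2, pts_cam1, pts_cam2):
    pts1 = np.asarray(pts_cam1, dtype=float).T   # shape (2, 16)
    pts2 = np.asarray(pts_cam2, dtype=float).T
    hom = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous, (4, 16)
    return (hom[:3] / hom[3]).T                      # (16, 3) world coordinates
```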

As said, the silhouette can easily be created from breps and meshes with the Silhouette command or the silhouette methods in RhinoCommon, but it is of course dependent on the camera position.
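For reference, a minimal RhinoCommon sketch of that step in Python, assuming a parallel view direction (a `Vector3d`) and a Brep or Mesh as input; the function name is just a placeholder:

```python
# Sketch: compute the outline silhouette of a brep or mesh for a given
# parallel view direction and add the resulting curves to the document.
import Rhino
import scriptcontext as sc

def add_silhouette(geometry, view_direction):
    tol = sc.doc.ModelAbsoluteTolerance
    ang = sc.doc.ModelAngleToleranceRadians
    sils = Rhino.Geometry.Silhouette.Compute(
        geometry,                                # Brep or Mesh
        Rhino.Geometry.SilhouetteType.Boundary,  # outer outline only
        view_direction,                          # parallel camera direction
        tol, ang)
    ids = []
    for s in sils:
        if s.Curve:
            ids.append(sc.doc.Objects.AddCurve(s.Curve))
    sc.doc.Views.Redraw()
    return ids
```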

If you are interested in the space that moving and morphing objects take up, then I would use a different method. What exactly are you trying to do?

Edit:

[quote=“gpetrica001, post:5, topic:95595”]
simulate the movement of a person through the apartment and have the resulting surface = a living unit.[/quote]
Would a point cloud be sufficient?


In my approach, I’m ultimately trying to define a process that would “conceive” a living unit (more like an apartment), but not in a traditional way, as at this point I’m not taking into account traditional architectural elements like windows and doors. This unit or space would be created by the movements of a person and the needs that person has from a space. So in a way it’s an ergonomically tailored “living” space. I’m not specifically focused on camera-recorded motion, though; as I mentioned, I’m equally satisfied with a computer-based simulation of the movements.
I also tried this one, but it wasn’t as workable for me, and I’m not certain about the extent of the modifications I can make to this human body model: https://www.grasshopper3d.com/group/kangaroo/forum/topics/keyframe-animation-tests-and-poseable-figure
Well, a point cloud might work as well.
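As a rough sketch of that idea, assuming there is already one body mesh per keyframe in the document, the vertices of all frames could be collected into a single point cloud that traces the occupied space with rhinoscriptsyntax; `frame_meshes` is a hypothetical list of mesh ids:

```python
# Sketch: merge the vertices of a moving body mesh over several keyframes
# into one point cloud that marks the space swept by the motion.
import rhinoscriptsyntax as rs

def motion_point_cloud(frame_meshes):
    points = []
    for mesh_id in frame_meshes:
        points.extend(rs.MeshVertices(mesh_id))
    return rs.AddPointCloud(points)
```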

To be honest, at this point I’m unaware of how the JSON works together with Rhino, as in what the definition of your setup looks like, for example.


https://mathrioshka.ru/tools#/ghjson/
Are you using this one?

I’ve used my model, for example, to design a sailing yacht. Maybe this comes close to what you are trying to do? It worked very well for most parts, but it was hard to evaluate parametrically whether a person can get into the bathroom, turn around, close the door and pull down their pants. However, this can also be very difficult in reality on a 28 ft boat :wink:

I’m also still looking for other ways to animate my model. The bathroom problem might be solvable with Grasshopper/Kangaroo. The sample from @DanielPiker that you linked does not work with my current installation of V6/V7. Is there an updated version available somewhere?

Which file are you referring to? I’m on Rhino 6, and for me it more or less doesn’t seem to work. Well, it opens, but the poseable human isn’t sufficiently responsive.

Do I have to install any additional components for this to work with Rhino/Grasshopper, for instance something to retrieve data from the camera?
Is there a chance you could share an instance of the definition, to see what it should look like? I can’t seem to find a pertinent example.

I meant Daniel’s keyframe animations. Edit: I had problems with the Grab component, but maybe the behavior is intended. I’ll have to play with this a bit more…

So far I have just played with tf/posenet in JavaScript, and I think I will leave it there. Also, please note that it is only 2D. I’m not so used to Grasshopper, so I cannot really help you with that right now.


But how did you get to Rhino from tf/posenet? Did you use photos for the positions?

My plan is to capture synchronized positions and post-process the photogrammetry part and the animation of the model.
Right now I can animate this model only parametrically, which is a bit cumbersome. Interactive grabbing would be a nice improvement…

Any chance you could still share or explain how you did it? At this point I don’t understand how you work with TensorFlow. I mean, I do understand the overall functionality and viewed the demo, but beyond the available demo I couldn’t really understand the general use.

If you mean the video in my first reply, this was made with a simple pose which I adjusted manually. Then I mirrored and moved it, created the silhouettes and made a normal Loft with the rebuild option. Then I scaled the inner two rows of control points to add a bit more dynamism to the straight loft.
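As a rough Python sketch of those steps, assuming the silhouette curves are already arranged in the document (`curve_ids` is an assumed input), the loft with the rebuild option and the control-point adjustment could look something like this; in practice you would only move the inner rows of grips rather than all of them:

```python
# Sketch: loft the silhouette curves with the rebuild option, then scale
# the surface control points about the centroid to loosen the straight loft.
import rhinoscriptsyntax as rs

def loft_and_shape(curve_ids, rebuild_count=30, scale=1.2):
    # loft_type=0 (Normal), simplify_method=1 (Rebuild) with rebuild_count points
    srf = rs.AddLoftSrf(curve_ids, loft_type=0,
                        simplify_method=1, value=rebuild_count)[0]
    rs.EnableObjectGrips(srf, True)
    grips = rs.ObjectGripLocations(srf)
    center = rs.SurfaceAreaCentroid(srf)[0]
    # scale every grip about the centroid; to match the described workflow,
    # filter this list down to the inner rows only (e.g. by row index)
    moved = [center + (pt - center) * scale for pt in grips]
    rs.ObjectGripLocations(srf, moved)
    rs.EnableObjectGrips(srf, False)
    return srf
```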

This was pretty fast and simple, without TensorFlow - sorry if you expected more magic :wink:


I mean, I did assume there was a lot of tinkering behind it, but I reckoned there’d be more magic involved. But were you able to retrieve a 3D position with the two cameras?

Everything I wrote about tf/posenet was written in the future tense. I have not really started with this development yet, but I have done photogrammetry projects and I already have a decent setup with laser projectors, beamers, a laser scanner and cameras. So this will work…

The machine learning part is new for me, and that’s why I find it very interesting. Also, I have a big library of scanned humans and a SubD topology which I can automatically apply to them. I think this project has quite some potential, but I’m not sure whether Rhino is the right platform.

Edit: I mean Rhino is definitely the tool to develop everything needed for the training, but the use case may be outside of Rhino…


I understand. I still find the possibility of using posenet quite interesting. I found this work quite interesting in that respect: http://www.cs.cmu.edu/~hanbyulj/totalcapture/, even though I’m not sure about the degree of “do-ability” at this point. To be fair, I am divided on whether Rhino is the appropriate platform for it. I did think about interoperability with Processing, or again maybe Blender, since it is inherently made for animations and thus motion by extension…

To be honest, I’m still curious how you recovered the pose from TensorFlow and imported it into Rhino, though.