Rhino 8 (Mac) Feature: ObjectCaptureFromPhotos

I believe this is fixed. @Ukktor, this should address your issue:

but I’d definitely appreciate your help testing.

@lordarchiebald I also fixed this rather embarrassing bug:

RH-66403 ObjectCaptureFromPhotos: Accept paths with spaces

I’d love to see some of the models that get generated.

Thanks again for testing.

@dan I just saw the change in the new build that prevents clicking other windows from cancelling the OCFP process. The new build works fine for me. I actually hit this bug once on the older build when I happened to have multiple windows open, but I never went back to try replicating it, as I was focused on testing the other features.

Also, nice to see the OCFP command in the right click Command history list now.

The Detail Level/Full now works as expected too. Gonna restate my previous test results with that setting added now. I did retest all of these, so I know you didn’t break anything when you fixed it!

Preview - 90 sec - 25k vertices - not the best texture mapping; a “soft”, low-detail mesh, but usable for preview
Reduced - 4 min - 24,999 vertices - better texture mapping, and mesh detail is improved
Medium - 4 min - 50k vertices - texture mapping appears similar to the Reduced setting; mesh detail is a step up again, with corners and edges looking crisper
Full - 4.75 min - 100k vertices - texture mapping improves again in the corners/edges and appears more “in focus”; the mesh is looking very decent in the nooks and crannies
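For anyone curious how these levels map to the underlying API: my assumption is that the Mac command wraps Apple’s RealityKit Object Capture framework, where Preview/Reduced/Medium/Full correspond to `PhotogrammetrySession.Request.Detail` cases. A minimal sketch (macOS 12+; the paths are made up for illustration, and I haven’t checked how Rhino actually calls this):

```swift
import Foundation
import RealityKit

// Hypothetical paths, for illustration only.
let photos = URL(fileURLWithPath: "/tmp/shoe-photos", isDirectory: true)
let model  = URL(fileURLWithPath: "/tmp/shoe.usdz")

let session = try PhotogrammetrySession(input: photos)

// The four levels tested above: .preview, .reduced, .medium, .full
try session.process(requests: [.modelFile(url: model, detail: .full)])

// Progress and completion arrive on an async output stream.
for try await message in session.outputs {
    switch message {
    case .requestProgress(_, fractionComplete: let fraction):
        print("Progress: \(Int(fraction * 100))%")
    case .requestComplete(_, let result):
        print("Finished: \(result)")
    case .requestError(_, let error):
        print("Failed: \(error)")
    default:
        break
    }
}
```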

In the Rhino shoe scan, the resulting mesh detail is obvious to see, but after a point, the improvements in texture detail show up best in things like the bow in the laces and the Vibram logo on the sole. You have to really look close.

Tested Sample Ordering setting of Unordered vs. Sequential. This made no difference in time for this set of photos. I checked your images in the folder and they are photographed and named in a fairly sequential order already. This setting may only make a difference if you have a wildly random set of photos.


As far as Feature Sensitivity: Normal vs. High

High takes longer, as seen in my testing below. Time is money, and maybe you want an idea of how long Rhino will be tied up while it processes the images.

Preview: 1.5 vs. 4 mins
Reduced: 4 vs. 6 mins
Medium: 4 vs. 6 mins
Full: 4.75 vs. 7 mins

The Normal meshes looked a teensy bit better to me in this case, but from what I’ve read, High is intended for objects with less detail.

I did not feel the result was worth the extra processing time in this case, but it may be different for other image sets.

I would upload the completed file with all 8 shoe meshes in it, but it is about 230 megs!

Also thanks again Dan for the Repeat functionality, I have used it A LOT today!


Again, thank you for testing this and reporting back. I’d like to get a better handle on how to get performance improvements using sequential shots. I don’t think I fully understand what the APIs are looking for when they do the optimization. For example, if you shoot on a turn-table and go counter-clockwise, what should you do to maintain a sequential set of shots to get the bottom of the object? I don’t know yet. I’ll try to pin this down.

So, perhaps some sort of estimation would be helpful?

Would you also like to see time-elapsed?

The general progress bar as shown is fine; I just want to know my machine hasn’t crashed. Anyone with computer experience knows never to fully trust a displayed “percent or time remaining/completed” anyway.

As for a general “time you are about to spend waiting for image processing to complete this task” prediction, that would be nice, but probably not easy to implement without actually DOING the work first. This may just have to be user-learned from tests on individual machine setups.

After all, that is what the “I just screwed up” button, AKA the “Cancel” button, is for!


Makes sense to me. I doubt I could get a prediction even tolerably correct…but perhaps I could submit a feature request to the Apple RealityKit developers.