Hi Yes just watching now thx
So, looking at the developer downloads, I have less than no idea what I am supposed to be doing to use Object Capture on my phone. Is it an app hidden away in the files somewhere?
No, just taking photos should work. I will take a look at the images you uploaded today to attempt to determine why it is not working. It is highly likely a bug on my part.
Haha, thx, but as usual I expect it's operator error at my end!
Also, FYI, I updated to the latest iOS last night and tried another set of photos this morning, but same deal.
@lordarchiebald As I said, this is really early work.
As I expected, this is me being stupid. Logged:
RH-66403 ObjectCaptureFromPhotos: Accept paths with spaces
Please rename the folder that contains the images to not contain spaces.
I’ll get this fixed.
PS: I’m also having trouble with the chair example you sent…spaces or not. Looking into that too.
I’m just gonna jump in and say I love this idea and want to encourage McNeel to pursue more out-of-the-box ideas like this!!
Just tried it and it works.
I work in the film industry and we are always incorporating prop purchases into our designs, and this is epic. Going to save so much time surveying and building assets.
I’m just off to set up a photogrammetry area to play with this a bit more.
Also, I just did the chair as well, and I think the reason it’s not working for you is that I changed the extension names (jpg, heic, png, etc.) when I was trying to get it to work earlier in the week. I think maybe what I sent you was one of those sets?
Just did it with the original files and it works!
Thanks so much for your time, and I’ll keep trying to break it and will report in if I have any more issues.
Wonderful! I fixed that bug with spaces in folder paths so that, after next week’s RhinoWIP, it shouldn’t be a problem.
There was also a problem with processing larger files that I worked on yesterday:
RH-66252 ObjectCaptureFromPhotos: progress should include file import step
that @Ukktor noticed. Hopefully, that will be fixed too after next Tuesday.
Dan, is there a specific order or orientation that is used for the Sample Ordering? Or does the software just look at the next file in the sequence as a likely candidate? I did not know if this was something passed in from the Apple side or was a Rhino feature.
Just thinking about doing a sample scan and was wondering if there was a “Best Practice” on the Sample Ordering.
It is handled on the macOS side; it is not a Rhino feature. I believe the photogrammetry API just looks at the next file in the sequence as the likely candidate, but I will have to verify this.
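For reference, here is roughly what that knob looks like on the Apple side. This is a sketch, assuming the command wraps Apple’s `PhotogrammetrySession` API (which this thread suggests); the folder path is a placeholder:

```swift
import Foundation
import RealityKit

// Sketch: configuring Apple's photogrammetry API (macOS 12+).
// .sequential tells the API that neighboring files were shot from
// neighboring viewpoints, which can let it skip broader image matching;
// .unordered (the default) makes no such assumption.
var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOrdering = .sequential

// The input is just a folder of images -- hence "just taking photos
// should work" above. The path here is a placeholder.
let imagesFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let session = try PhotogrammetrySession(input: imagesFolder,
                                        configuration: configuration)
```

So the ordering hint is passed straight through to RealityKit; nothing on the Rhino side reorders the files.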
I believe this is fixed. @Ukktor, this should address the issue:
but I’d definitely appreciate your help testing.
@lordarchiebald I also fixed this rather embarrassing bug:
RH-66403 ObjectCaptureFromPhotos: Accept paths with spaces
I’d love to see some of the models that get generated.
Thanks again for testing.
@dan I just saw the change in the new build to prevent clicking other windows from cancelling the OCFP process. The new build works fine for me. I actually had this bug occur once on the older build when I happened to have multiple windows open, and I never went back to try replicating it, as I was focused on testing the other features.
Also, nice to see the OCFP command in the right-click command history list now.
The Detail Level/Full now works as expected too. Gonna restate my previous test with that setting added now. I did retest all of these, so I know you didn’t break anything when you fixed it!
Preview - 90 secs - 25k vertices - not the best texture mapping; a “soft,” low-detail mesh, but usable for preview
Reduced - 4 mins - 24,999 vertices - better texture mapping, and meshing detail is improved
Medium - 4 mins - 50k vertices - texture mapping appears similar to the Reduced setting; meshing detail is a step up again, with corners and edges looking crisper
Full - 4.75 mins - 100k vertices - texture mapping is improved in the corners/edges again and appears more “in focus”; the mesh is looking very decent in the nooks and crannies
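For context, those four settings line up with the detail levels you pass when requesting a model from Apple’s photogrammetry API. A sketch, assuming the command wraps `PhotogrammetrySession`; both paths are placeholders:

```swift
import Foundation
import RealityKit

// Sketch: the Detail Level setting maps to a detail value on a
// model-file request. Roughly: .preview < .reduced < .medium < .full
// (with .raw above that), matching the vertex counts reported above.
let imagesFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let session = try PhotogrammetrySession(input: imagesFolder)

// Ask for a single model file at the chosen detail level.
let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")
try session.process(requests: [.modelFile(url: outputURL, detail: .full)])
```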
In the Rhino shoe scan, the resulting mesh detail is obvious to see, but after a point, the improvements in texture detail show up best in things like the bow in the laces and the Vibram logo on the sole. You have to look really close.
Tested Sample Ordering setting of Unordered vs. Sequential. This made no difference in time for this set of photos. I checked your images in the folder and they are photographed and named in a fairly sequential order already. This setting may only make a difference if you have a wildly random set of photos.
As for Feature Sensitivity: Normal vs. High:
High takes longer, as seen in my testing below. Time is money, and maybe you want an idea of how long you are about to be unable to use Rhino while it processes the images.
Preview: 1.5 vs. 4 mins
Reduced: 4 vs. 6 mins
Medium: 4 vs. 6 mins
Full: 4.75 vs. 7 mins
The Normal meshes looked a teensy bit better to me in this case, but as I read, High is for objects with less detail.
I did not feel the result was worth the extra processing time in this case, but it may be different for other image sets.
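If it helps interpret those timings: Normal vs. High appears to map to a single configuration flag on the Apple side. A sketch, assuming the `PhotogrammetrySession` API; as Apple documents it, `.high` is meant for objects whose surfaces lack distinct features or texture, which fits the observation that it mainly helps low-detail objects:

```swift
import RealityKit

// Sketch: Feature Sensitivity on the RealityKit side.
// .normal is the default; .high requests slower, more careful feature
// matching for low-texture subjects -- consistent with the longer
// processing times measured above.
var configuration = PhotogrammetrySession.Configuration()
configuration.featureSensitivity = .high
```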
I would upload the completed file with all 8 shoe meshes in it, but it is about 230 megs!
Also thanks again Dan for the Repeat functionality, I have used it A LOT today!
Again, thank you for testing this and reporting back. I’d like to get a better handle on how to get performance improvements using sequential shots. I don’t think I fully understand what the APIs are looking for when they do the optimization. For example, if you shoot on a turntable and go counter-clockwise, what should you do to maintain a sequential set of shots to get the bottom of the object? I don’t know yet. I’ll try to pin this down.
So, perhaps some sort of estimation would be helpful?
Would you also like to see time-elapsed?
The general progress bar as shown is fine. I just want to know my machine has not crashed. Anyone with computer experience knows never to quite trust the “displayed percent or time remaining/completed” shown anyway.
As for a “time you are about to spend waiting for image processing to complete this task” general prediction, that would be nice, but probably not easy to implement without DOING the function. This may just have to be user-learned from tests and individual machine setups.
After all, that is what the “I just screwed up” button, AKA the “Cancel” button, is for!
Makes sense to me. I doubt I could get a prediction even tolerably correct…but perhaps I could submit a feature request to the Apple RealityKit developers.
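For what it’s worth, here is roughly what progress reporting looks like on the API side. A sketch, assuming the `PhotogrammetrySession` API; the session publishes a fraction complete per request but no time estimate, so any remaining-time display would have to be extrapolated by the caller. The folder path is a placeholder:

```swift
import Foundation
import RealityKit

// Sketch: consuming the session's output stream. Each request reports
// a fractionComplete in 0.0...1.0 -- enough to show "not crashed,"
// but no built-in time estimate.
let imagesFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let session = try PhotogrammetrySession(input: imagesFolder)

Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fractionComplete):
            print("Progress: \(Int(fractionComplete * 100))%")
        case .requestComplete(_, _):
            print("Done.")
        case .requestError(_, let error):
            print("Failed: \(error)")
        default:
            break
        }
    }
}
```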
Hope this will come to Windows, at least with Rhino 9 ^^
It is a macOS feature that has been utilised for Rhino for Mac. Unless Microsoft implements something in this direction, I am afraid that will not happen.