What is the quick way to delete a control point in iRhino without using a keyboard?
I have a pencil…
Hi Martin -
When I select a curve, the control points are turned on. When I then rectangle-select around a control point with the pencil, that point gets selected. I’ve put the Delete
command on the toolbar, so it’s then simply a matter of hitting that icon with the pencil to delete the control point.
-wim
Interesting, I was able to pin the Delete command to the toolbar. The button does not have a symbol; I see a question mark ‘?’ instead.
Same problem here…
Some commands don’t have a corresponding icon. It’s logged here:
RV-1219 Missing toolbar icons
Wait, how do you use a pencil in iRhino, though? Maybe my iPhone is outdated… I want this iPhone pencil.
I’m probably slacking though, I need to spend some time with that app to check it out more.
I have a new project actually that might be good to see how iRhino does. I have to scan a fan blade for repair purposes. So I’ll give it a try, maybe today.
You cannot use an Apple Pencil with an iPhone; you need an iPad for that.
I have an iPad Air 5th gen. and an iPhone 15 Pro with LiDAR. Scanning is possible with my iPhone, but I find the object scanning rather slow. Compared to my Artec Leo, the scanning procedure is a bit of a pain, and at least now, in the evening, the app constantly complains that it needs more light. Then it tells me to move slower. And then my quick test crashed during processing, and everything seems to be gone.
The room scan feature is pretty cool to build a quick scene…
One thing that may not be obvious in object scanning in iRhino is that you can take manual shots using the button on the bottom right of the screen, and you don’t have to finish all recommended phases of the scan if you prefer speed over accuracy.
My general preference in scanning is that the least movement possible is best, and I believe iRhino (like any reverse-engineering post-processing software) should be, or become, capable of aligning and merging separate scans.
My point is that I understand the need for slow, if not zero, movement.
Your setup sounds cool though, I’m glad you’re pushing the limits of the technology.
My interpretation is that this sounds more accurate rather than super fast, which is what I’d prefer.
I just need to be able to align separate scans, or rather trim the bad data first, then align, merge, etc.
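For what it’s worth, here is a minimal sketch of what that trim → align → merge step could look like, assuming the relative transform between scans is already known (for example because the object orientation was preserved between captures). This is not iRhino or Rhino code; the `Scan` type, the box limits, and the transform are made up for illustration, and only Apple’s simd types are used.

```swift
import simd

// Hypothetical minimal point-cloud post-processing: trim, align, merge.
// Not iRhino code; it only illustrates the trim -> align -> merge idea
// under the assumption that the transform between scans is known.

struct Scan {
    var points: [SIMD3<Float>]
}

// Trim: drop points outside an axis-aligned box around the scanned part,
// which removes most of the bad data (table, floor, background).
func trimmed(_ scan: Scan, min: SIMD3<Float>, max: SIMD3<Float>) -> Scan {
    let kept = scan.points.filter { p in
        p.x >= min.x && p.x <= max.x &&
        p.y >= min.y && p.y <= max.y &&
        p.z >= min.z && p.z <= max.z
    }
    return Scan(points: kept)
}

// Align: apply a rigid transform that maps one scan into the other's frame.
// Here the transform is assumed to be known; a real pipeline would estimate
// it, e.g. with an ICP-style registration step.
func aligned(_ scan: Scan, by transform: simd_float4x4) -> Scan {
    let moved = scan.points.map { p -> SIMD3<Float> in
        let h = transform * SIMD4<Float>(p.x, p.y, p.z, 1)  // homogeneous coords
        return SIMD3<Float>(h.x, h.y, h.z)
    }
    return Scan(points: moved)
}

// Merge: simply concatenate the clouds; meshing/cleanup would happen later
// (e.g. with Shrinkwrap in Rhino, as discussed below).
func merged(_ a: Scan, _ b: Scan) -> Scan {
    Scan(points: a.points + b.points)
}

// Example usage (scanA, scanB, lo, hi and bToA are placeholders):
// let clean = merged(trimmed(scanA, min: lo, max: hi),
//                    aligned(trimmed(scanB, min: lo, max: hi), by: bToA))
```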
The Apple photogrammetry API we’re using does a lot of post-processing, and it’s hard to imagine getting useful partial scans out of it. I think the best we can do is maintain the scanned object orientation and do the aligning/merging in Rhino with Shrinkwrap.
I recently added a pointcloud output when scanning an object. It’s pretty sparse but I’m curious if that’s of any use to you.
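The photogrammetry API referred to above is presumably RealityKit’s `PhotogrammetrySession` (Object Capture). As a rough, hedged sketch of how a meshed model plus a point cloud might be requested from a folder of captured shots (the paths and detail level below are placeholders, not iRhino’s actual settings, and the point-cloud request needs a recent OS version):

```swift
import Foundation
import RealityKit

// Sketch of Apple's photogrammetry API (RealityKit's PhotogrammetrySession).
// This is NOT iRhino's code; paths and the detail level are placeholders.

let imagesFolder = URL(fileURLWithPath: "/tmp/fan-blade-shots")  // placeholder
let outputModel  = URL(fileURLWithPath: "/tmp/fan-blade.usdz")   // placeholder

var configuration = PhotogrammetrySession.Configuration()
configuration.isObjectMaskingEnabled = true  // let the API try to mask out the background

let session = try PhotogrammetrySession(input: imagesFolder,
                                         configuration: configuration)

// Listen for results. The API runs its own heavy post-processing inside this
// session, which is why partial/incremental scans are hard to get out of it.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("progress: \(fraction)")
        case .requestComplete(let request, _):
            print("finished: \(request)")
        case .requestError(let request, let error):
            print("failed: \(request) \(error)")
        case .processingComplete:
            print("all requests done")
        default:
            break
        }
    }
}

// Ask for a meshed model and, separately, a (fairly sparse) point cloud.
try session.process(requests: [
    .modelFile(url: outputModel, detail: .medium),
    .pointCloud
])
```

Because the whole reconstruction runs inside that single process call, intermediate or partial results are indeed hard to extract, which matches the comment above.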