I’ve been inspired by the hint-based selection system in qutebrowser. It puts a two-character cue on each selectable element, so you just type that two-character tag to make the selection.
This got me thinking about a selection strategy that could cut down a good share of the mouse traffic in Rhino.
What if you could run a command/alias like kblnee (keyboard selection mode, make a line, use end-point snaps for both ends of the line)?
The user would then be presented with selection cues in the viewports representing all the possible end points to snap to. You just type jd or whatever, move on to the next keyboard snap, and the cues go away.
Ideally the snap point coordinates would feed into some kind of 3-dimensional checksum-style algorithm to assign the alphabetical codes, so they’re repeatable based on location: a snap point at the world origin always works out to aa, or whatever, with substitutions applied afterward to resolve conflicts if needed. That way, as you work with a model over a long period, routine selections can become automatic.
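For illustration, here's a minimal pure-Python sketch of how location-based code assignment might work. The rounding precision, the hash choice, and the "walk to the next free code" collision strategy are all assumptions on my part, not a worked-out design:

```python
import hashlib
import string

def snap_code(point, precision=3):
    """Deterministically map a 3D snap point to a two-letter cue.

    Coordinates are rounded so tiny float noise doesn't change the
    code; the same model location always yields the same cue.
    """
    # `+ 0.0` normalizes -0.0 to 0.0 so both sides of zero agree
    key = ",".join(f"{round(c, precision) + 0.0:.{precision}f}" for c in point)
    digest = hashlib.md5(key.encode()).digest()
    letters = string.ascii_lowercase
    return letters[digest[0] % 26] + letters[digest[1] % 26]

def assign_codes(points, precision=3):
    """Assign cues to all points, resolving hash collisions by
    walking forward through the 26*26 code space to the next
    unused cue."""
    taken = {}
    for pt in points:
        code = snap_code(pt, precision)
        n = (ord(code[0]) - 97) * 26 + (ord(code[1]) - 97)
        while code in taken:
            n = (n + 1) % (26 * 26)
            code = string.ascii_lowercase[n // 26] + string.ascii_lowercase[n % 26]
        taken[code] = pt
    return taken
```

Since the fallback codes depend on insertion order, a real version would probably want a stable ordering of the points (e.g. sorted by coordinates) so conflict resolution is repeatable too.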
Perhaps the cues fade or cull based on distance from the camera in perspective views, just to control clutter. The assumption is that the user has the point they want to snap to reasonably conveniently framed in at least one viewport.
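As a rough cut at the clutter control, culling could just keep the N cues nearest the camera; points beyond the cutoff get no tag, and the user orbits or zooms to reach them. The max_cues cap here is an arbitrary knob, not a researched number:

```python
import math

def cull_cues(points, camera, max_cues=50):
    """Keep only the snap points nearest the camera position so the
    viewport isn't flooded with two-letter cues."""
    return sorted(points, key=lambda p: math.dist(p, camera))[:max_cues]
```

A fade version would instead map each point's distance to an alpha value, but the nearest-N cut is simpler to prototype.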
Any thoughts? Could this be crudely prototyped in Python, or perhaps a combination of GH & Python? Just to get something that sort of works at a demonstration level and see whether it’s practical?