Making clickable icons through the display pipeline

I wish to be able to show icons in my Rhino viewport, and when I click on an icon a text box (or similar) appears. I want to use this to show warnings in a model, and only when the user clicks the warning sign will the description of the error be shown.

I found this amazing thread GUI - Rhino - Radial Menu - Cross Platform - #99 by michaelvollrath which helps me display the icon I want to show.

But I fear I have to go a different route to make the icon clickable.
Any suggestions are welcome!
WarningSign.gh (93.1 KB)


You could implement a MouseCallback to intercept mouse movement and clicks. I posted a GHPython example over here that demonstrates how to make a 2D screen space element with a hover-over state, but it should be fairly straightforward to extend that to cover clickable states as well.
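For instance, a minimal sketch of the clickable part (my own illustration, not the linked example; icon_rects is a hypothetical list of screen-space System.Drawing.Rectangle values, one per icon):

import Rhino
import System

## Subclass MouseCallback and hit-test clicks against each icon's screen rectangle
class IconClickCallback(Rhino.UI.MouseCallback):
    def __init__(self, icon_rects):
        self.icon_rects = icon_rects
        self.clicked_index = -1
    def OnMouseDown(self, e):
        pt = e.ViewportPoint  # cursor position in viewport (client) coordinates
        for i, rect in enumerate(self.icon_rects):
            if rect.Contains(pt):
                self.clicked_index = i
                e.Cancel = True  # swallow the click so Rhino doesn't also pick objects
                break

# enable once, disable when done:
# callback = IconClickCallback(icon_rects)
# callback.Enabled = True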

Just had a look at the file and it looks quite similar to code I’ve previously posted. Anywho, you could also drop the Human dependency by deserialising the image within the script as well. See this example:
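Something along these lines, for instance (a rough sketch, assuming the icon is embedded in the script as a base64 string):

import clr
clr.AddReference("System.Drawing")
import System
from System.IO import MemoryStream

## Decode an embedded base64 PNG string into a System.Drawing.Bitmap,
## removing the need for an external component to supply the image
def bitmap_from_base64(icon_base64):
    raw = System.Convert.FromBase64String(icon_base64)
    stream = MemoryStream(raw)
    return System.Drawing.Bitmap(stream)  # keep the stream alive while the bitmap is in use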

Implementing clicking is really getting into some juicy programming and I find it immensely satisfying.

This is a really nice simple way to implement clicking with a mouse callback, and if this works for you, use it, mark it as the solution, and ignore my ramblings below.

But … What if multiple icons overlap? Or you need to deal with complex perspectives? What if you need the clicks to be very precise, and transparent areas of the icons not clickable? Then you’ll have to render your viewport as a bitmap and index the colours of each item.

This code will make all of the non-transparent pixels of each bitmap a flat colour, unique from every other bitmap.
One issue with the below is that it's VERY slow in Python. Normally I lock the bits, but that is causing me issues currently.

## Colorise the bitmap for each alert icon
## Encode each bitmap's list index into the R and G channels (B = 255 marks "icon")
import System.Drawing

i = 0
for bitmap in bitmaps:
    col = System.Drawing.Color.FromArgb(i // 255, i % 255, 255)
    for x in range(bitmap.Width):
        for y in range(bitmap.Height):
            if bitmap.GetPixel(x, y).A > 0:
                bitmap.SetPixel(x, y, col)
    i += 1

This code captures the viewport, with all the uniquely coloured bitmaps drawn, to a single flat bitmap (called inside of DrawForeground).

### Get a bitmap of the viewport we clicked in
def DrawForeground(self, sender, arg):
    ...
    # capture the current view (no grid, no world axes, no cplane axes)
    selection_bitmap = arg.Viewport.ParentView.CaptureToBitmap(False, False, False)

This code gets the index of the icon that was clicked, using the MouseCallback.

## Retrieve the selected index (B == 255 marks a pixel belonging to an icon)
px_colour = selection_bitmap.GetPixel(click.X, click.Y)
if px_colour.B == 255:
    index = px_colour.R * 255 + px_colour.G
    clicked = bitmaps[index]

– cs


Indeed, the method I proposed would be limited to rectangular/circular shapes if the hover-over detection is to be 100% accurate. The bitmap approach is very interesting; I'll definitely have a closer look. What does the i++ line do? Pretty sure one can't increment in Python using that syntax.

Apologies, I've updated the i++ to i += 1. I'm so used to C# now that my Python is a bit rusty. :sweat_smile:
The bitmap approach done right is pretty much the gold standard: it allows for clicking and drag selecting, and always works the way the user expects it to. In C or C# or C++ (maybe even in Python? I'm unsure) you can do all of this manipulation via pointers, which makes it a very quick operation. Locking the bits speeds it up too.
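For reference, a rough Python sketch of the LockBits route (my assumption of the wiring, using Marshal.Copy rather than raw pointers, and assuming 32-bit ARGB bitmaps):

import clr
clr.AddReference("System.Drawing")
import System
from System.Drawing import Rectangle
from System.Drawing.Imaging import ImageLockMode, PixelFormat
from System.Runtime.InteropServices import Marshal

## Colourise all non-transparent pixels of a bitmap via LockBits instead of
## GetPixel/SetPixel, avoiding the per-pixel call overhead
def colourise(bitmap, col):
    rect = Rectangle(0, 0, bitmap.Width, bitmap.Height)
    data = bitmap.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb)
    length = data.Stride * bitmap.Height  # assumes a top-down bitmap (positive stride)
    buffer = System.Array.CreateInstance(System.Byte, length)
    Marshal.Copy(data.Scan0, buffer, 0, length)  # pixel bytes are ordered B, G, R, A
    for p in range(0, length, 4):
        if buffer[p + 3] > 0:  # alpha > 0, i.e. not transparent
            buffer[p] = col.B
            buffer[p + 1] = col.G
            buffer[p + 2] = col.R
    Marshal.Copy(buffer, 0, data.Scan0, length)
    bitmap.UnlockBits(data)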

I think your method is likely best as it's easy to implement and debug, but adding in my 2 pence for completeness.


Haha, I didn’t want to presume. And you never know, since they added static typing to Python :face_with_spiral_eyes:


This is definitely piquing my interest and now has me thinking about overriding object colors, or perhaps a custom display mode that would render every 3D object as a flat color, similar to what you see in a render pass like “Object Id” (I think it's called). Or directly accessing the viewport “render passes” via the API, if that's possible in Rhino; I'm guessing it is.

The new hover silhouette highlighting in R8 is performant and accurate, so I'm assuming it's using some kind of 2D screen hit on an object mask or something, because trying to apply bbox hit logic on 1000+ objects gets slow without further acceleration.

Sorry, obviously I'm speaking about something I know nothing about, but I would very much be interested in learning more about a 2D “indexed color” hit detection method… I'll do more reading and digesting of what you all shared, thanks! :pray:

EDIT: just reread your code above and you can basically ignore this response; I see you're already doing that in the code with the unique colors. Not sure what I was reading the first time :thinking:


I can post a sample as a C# project if that helps? An example plugin that lets you create and select phantom objects?


Yes please! The more I get into it the more I feel the need for “phantom” selection as you put it.

The Grab component from Kangaroo is a fun example of an adjacent use case: being able to grab reference geometry from Grasshopper within the Rhino modeling environment is awesome, opens up a lot of workflows, and feels very easy to engage with as a user. I'd love to extend similar interactions to visual display elements where appropriate.


I started playing around with this idea a bit and made some progress on a 3D object version.

  1. Use a custom display mode (flat shading, black background) and GH Custom Preview Component to assign a unique color to each object in the model.
  2. Use a mouse callback to get the screen position of the mouse cursor.
  3. Use a custom implementation of the Color Picker - Eye Dropper to get the “pixel” color value at the mouse cursor location.
  4. Return the “obj” index matching the color value.
  5. Do something with the object.

Viewport:

I'm stuck on step 3; how do I specifically call the Eye Dropper with the cursor position from code (or get the pixel color value in a cross-platform way)?
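One possibility, untested (and not the Eye Dropper itself): capture the view to a bitmap with RhinoCommon and sample the pixel at the cursor's client coordinates from inside the mouse callback:

import Rhino

## Rough sketch: "e" is the MouseCallbackEventArgs received in OnMouseMove/OnMouseDown
def colour_under_cursor(e):
    bitmap = e.View.CaptureToBitmap()  # captured at the viewport's pixel size (DPI scaling may need handling)
    pt = e.ViewportPoint               # cursor position in viewport client coordinates
    if 0 <= pt.X < bitmap.Width and 0 <= pt.Y < bitmap.Height:
        return bitmap.GetPixel(pt.X, pt.Y)
    return None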

Made a separate post about that over here:

Separately, does a “view mode” like this with unique object_ids already exist within Rhino.Display or within OpenGL or something? In other words, rather than needing a display mode or viewport bitmap rendered out per frame, can we just “trace” against an “invisible viewport layer of object_id colors” while keeping the actual viewport whatever regular display mode it is set to?

EDIT:

Sorry if this is going off topic from the original post, I figure it’s related but if it needs to be split off that’s totally fine!

EDIT 2:

Got it working for 3D Grasshopper Preview Objects thanks to @farouk.serragedine helping with the pixel color at mouse location over here.

Currently the test environment uses a wireframe display mode to ensure “flat” colors in the viewport. There needs to be a 1:1 correlation between the assigned/unique color_id values and the sampled viewport pixel value; the match has to be exact for it to work.

Therefore it doesn’t work in other viewport display modes currently.
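For what it's worth, the exact-match lookup itself can stay simple; a minimal sketch, assuming colour_ids is the list of assigned System.Drawing.Color values in the same order as the preview objects:

## Build the lookup once, then resolve the pixel colour sampled at the cursor.
## Masking off alpha is an assumption; any shading or anti-aliasing will break
## exact matches, which is why a flat display mode is needed.
colour_to_index = {}
for i, col in enumerate(colour_ids):
    colour_to_index[col.ToArgb() & 0x00FFFFFF] = i  # drop alpha, keep RGB

def index_at_pixel(px_colour):
    return colour_to_index.get(px_colour.ToArgb() & 0x00FFFFFF, -1)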

My thoughts on next steps are that we need to be running this “color_id” viewport headless or in the background to get the object ID WITHOUT needing to actually set the primary viewport display mode.

In other words, you should be able to see your viewport as “rendered” or “raytraced” or whatever… but the selection logic is running in the background on a copied viewport that has the color_ids being evaluated at the mouse hit location… not sure how to achieve that?
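One avenue that might be worth testing, though I haven't verified it and the exact overload is an assumption on my part: RhinoView.CaptureToBitmap can apparently take a DisplayModeDescription, which would render the capture with a dedicated color_id display mode while the on-screen viewport keeps its normal mode:

import Rhino

## Hypothetical sketch: "ColorID" is an assumed custom display mode set up in
## the document; the viewport on screen is never switched to it
view = Rhino.RhinoDoc.ActiveDoc.Views.ActiveView
mode = Rhino.Display.DisplayModeDescription.FindByName("ColorID")
if mode:
    size = view.ClientRectangle.Size
    id_bitmap = view.CaptureToBitmap(size, mode)  # assumed overload: (Size, DisplayModeDescription)
    # sample id_bitmap at the mouse position as before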

I would love to hear your ideas!

So far, it seems quite performant on my test scene. I’ll need to test with a larger/more densely populated object file.

Video of logic in action:

@CallumSykes any ideas on how to get the pixel color logic from a “hidden” viewport or render_channel?


Very cool work! Nothing can stop you once you get a hold of a handy tool eh? :wink:

Still working on my little example project for ya, I’ll do things a smidge differently though :slight_smile:


Different is great! I look forward to seeing it!

Here’s what I’ve come up with so far, just sharing in case it’s interesting to anyone else.

I would love for the “color_id” selection logic to happen in the “backend” because obviously no one wants a rainbow viewport just to select stuff…

- I added the ability for click selection and silhouette highlighting.
- The hovered brep wireframes get temporarily cached, so you can actually make use of the “osnaps” and draw Rhino geometry from the GH preview geometry.

- The snapping is a little goofy sometimes, as an object loses its hover selection if you aren't “on” the object with the mouse.

- I'm unsure how to implement a “fuzzier” selection so that you can hover/select an object when you are close to it instead of DIRECTLY on it. Potentially expanding the pixel selection logic to 4x4 instead of 1x1, then sampling the set of pixel colors for the “most” at that location and returning that result (see the sketch after this list). For instance, if you are near a purple object and in a 4x4 grid let's say 6/16 pixels are purple, 4/16 are some other color(s), and 6/16 are the background mask color (black or whatever non-color), then, ignoring the background, we could infer that purple is the most likely nearest color_id to the cursor and return a valid selection result even if the mouse is just “near it”.

- Definitely open to other ideas, and I still want to figure out how to put this in the backend as a render pass or “duplicated” viewport that doesn't affect the visuals of the main viewport.
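Here's a rough sketch of that fuzzy sampling idea (assuming bitmap is the color_id capture and background is the mask colour):

from collections import Counter

## Sample an n x n block of pixels around the cursor, ignore the background
## colour, and return the most frequent remaining colour (as an ARGB int)
def fuzzy_pick(bitmap, x, y, background, n=4):
    counts = Counter()
    for dx in range(n):
        for dy in range(n):
            px, py = x - n // 2 + dx, y - n // 2 + dy
            if 0 <= px < bitmap.Width and 0 <= py < bitmap.Height:
                col = bitmap.GetPixel(px, py).ToArgb()
                if col != background.ToArgb():
                    counts[col] += 1
    if not counts:
        return None  # nothing but background near the cursor
    return counts.most_common(1)[0][0]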

Here’s a video of that:

Here's the proof of concept GH script with the Mouse Callback color_id logic and the GH components that, for now, handle the color_id matching and display colors. You can see that certain colors return multiple selections, which was not intended initially, but I left it as is because I think it has validity for “color_by_group” based selection as well (logic-wise).

FUNCTION_Select_GH_Preview_Geometry_By_Color_ID_Channel_01a.gh (968.4 KB)


Thanks for sharing @michaelvollrath!

What problem are you trying to solve with this approach? Is it just about selecting objects on mouse move? Wouldn’t simple ray-casting work here? OnMouseMove I’d cast a ray starting from the mouse position and following the camera’s frustum line.

If Rhino's raycasting is too slow (which I doubt it is), you could always use Embree for the job.


Hi @mrhe,

The use case is specifically for allowing mouse interaction (hover, select, “get pseudo object id”) on Grasshopper preview geometry that isn't baked, cached, or referenced from the Rhino model.

EDIT:

Maybe this can be solved with a custom GetPoint class that “checks” against GH preview geometry instead of Rhino objects? Hmm… I'm not sure, but I'm open to ideas.

You can always raycast against objects which only live in memory and don’t have a RhinoObject counterpart.


Oh awesome, that makes sense. What object type contains the GH preview geometry “in memory” or is that just the preview geometry itself?

I guess the steps are “cast ray on mouse callback, if gh_preview_obj hit, return hit result” more or less?

I would appreciate any leads to that direction, thanks!

GH components display previews of objects which are:

  1. passed to one of the output nodes
  2. added to the DrawViewportMeshes override

Internally, Rhino converts all geometry (other than points or lines) to meshes, which are referred to as a RenderMesh: Rhino - Extract Render Mesh

This is perfect for ray casting - simply use a MeshRay intersection: https://developer.rhino3d.com/api/rhinocommon/rhino.geometry.intersect.intersection/meshray

I’m not sure what your exact use case is, but I’d simply make a list of all RenderMeshes in my drawing and raycast against them.
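To make that concrete, here's a minimal GHPython-style sketch of the wiring (my own assumption of how the pieces fit; meshes is the list of render meshes, and view/pt would come from the mouse callback's e.View and e.ViewportPoint):

import Rhino.Geometry as rg
from Rhino.Geometry.Intersect import Intersection

## Build a world-space ray under the cursor and return the index of the
## closest mesh it hits, or -1 if nothing is hit
def pick_mesh(view, pt, meshes):
    vp = view.ActiveViewport
    rc, frustum_line = vp.GetFrustumLine(pt.X, pt.Y)  # line through the frustum at that pixel
    if not rc:
        return -1
    # orient the ray so it travels away from the camera
    a, b = frustum_line.From, frustum_line.To
    if vp.CameraLocation.DistanceTo(a) > vp.CameraLocation.DistanceTo(b):
        a, b = b, a
    ray = rg.Ray3d(a, b - a)
    best_t, best_index = None, -1
    for i, mesh in enumerate(meshes):
        t = Intersection.MeshRay(mesh, ray)
        if t >= 0.0 and (best_t is None or t < best_t):
            best_t, best_index = t, i
    return best_index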


Okay, here is my attempt to that effect… It works, except that the .ClientToWorld method appears to only work properly in fullscreen mode (on Windows); if not in fullscreen, the location is offset. I've noticed this happens with Eto.Forms as well in screen space calculations of the active viewport.

@dale or @curtisw do you know why fullscreen affects screen space coordinates? Is that a bug/known limitation, or a function of the methods being used and user error on my part?

Here's my first attempt at the MeshRay intersection method @mrhe, thanks for the tip on going that route! It does seem this will get bottlenecked in files with high object counts because of the for loop checking for a hit against the full list? Any ideas there? I think you mentioned Embree, but it seems the issue isn't the ray method but more of an issue with the list iteration?
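(One idea I might try, untested: cull cheaply against each mesh's bounding box before running the full MeshRay, something like the sketch below; meshes and ray are assumed to come from the script above.)

import Rhino.Geometry as rg
from Rhino.Geometry.Intersect import Intersection

## Only run the (comparatively expensive) MeshRay test on meshes whose
## bounding box is actually crossed by the pick ray
def closest_hit(meshes, ray):
    # finite stand-in for the ray; assumes ray.Direction spans the scene
    # (true if the ray was built from the viewport frustum line)
    probe = rg.Line(ray.Position, ray.Position + ray.Direction)
    best_t, best_index = None, -1
    for i, mesh in enumerate(meshes):
        box = mesh.GetBoundingBox(False)  # loose box is fine for culling
        rc, interval = Intersection.LineBox(probe, box, 0.0)
        if not rc:
            continue
        t = Intersection.MeshRay(mesh, ray)
        if t >= 0.0 and (best_t is None or t < best_t):
            best_t, best_index = t, i
    return best_index, best_t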

Graph Space:

Video showing “offset” issue of fullscreen vs non-fullscreen:

Thanks for your help!

FUNCTION_GH_Preview_Geometry_Mesh_Ray_Intersection_01a.gh (25.9 KB)