"Face me" objects for Rhino?


I'm wondering when Rhino will get "face me" objects like the ones known from SketchUp. These camera-aligned 2D billboards are easy to use, lightweight, and nice looking. Could this please be implemented in Rhino too?



Rhino Dots and some text objects are always parallel to the view. Annotation Style determines text object properties: http://docs.mcneel.com.s3.amazonaws.com/rhino/6/help/en-us/documentproperties/annotation.htm

Rhino is compatible with Rich Photo-realistic Content (RPC). Most of these objects are 2D people who are always parallel to the view. You have to pay for these RPC objects.

I hope it can be a basic function without extra payment, like in SU.

Didn’t you have a script that does this? Or do you want the surfaces to automatically keep facing toward the camera while you modify the camera?

I have seen "face me" used in SU again and wondered why Rhino doesn't provide this little feature. Right, automatic alignment would be great. My hope is that it could be used for animation and, for example, with Enscape. (Enscape for SU supports the SU face me feature; I hope it would work for Enscape for Rhino too.)

But do you still use the script or are you only asking for it to become a native Rhino object?

As for Enscape: if Enscape already supports it, it shouldn't be that hard to implement for Rhino too, I think.

Pascal posted a script a long time ago when I asked this question. Maybe it still works in R6?

@Micha put its content to a button - faceme.txt (1.2 KB)

Unfortunately, I don’t remember the source, so I’m unable to credit the author here :frowning:

By the way, RN has such an option, with a whole ecosystem stuck to the desired camera. However, we’re still in closed beta.

Alternatively: use the attached toolbar, which has one button with two functions:
-name objects ‘billboard’
-orient objects named ‘billboard’ to the camera
billboards.rui (6.3 KB)
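For anyone curious what an "orient to camera" step boils down to, here is a minimal, Rhino-free sketch of the underlying math, not the attached toolbar's actual code: given the object and camera locations, find the rotation about the world Z axis that spins an upright billboard to face the camera. All names here are illustrative.

```python
import math

def faceme_yaw(obj_pos, cam_pos):
    """Angle (radians) to rotate an object about the world Z axis so its
    +Y front axis points toward the camera, ignoring height difference
    (SketchUp-style upright "face me" behavior)."""
    dx = cam_pos[0] - obj_pos[0]
    dy = cam_pos[1] - obj_pos[1]
    # atan2 gives the direction to the camera in the XY plane;
    # subtract 90 degrees because the object's front is +Y, not +X
    return math.atan2(dy, dx) - math.pi / 2.0

# Camera due north (+Y) of the object: no rotation needed.
angle = faceme_yaw((0, 0, 0), (0, 10, 1.8))
```

A real script would then apply this angle with a rotation transform around each billboard's insertion point; the "run per view" limitation mentioned in the thread comes from having to recompute it whenever the camera moves.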

Thank you. I know the script solution - my request was really for a standard implementation in Rhino with automatic alignment to the camera, for example working for simple screen presentations. I think it’s such a basic feature that it shouldn’t be missing from the standard Rhino installation.


@Micha it takes a bit more than you think to put it in. First of all, it requires "locking" the facing to a selected camera. In SU there is only one valid camera, since you have one viewport; here you can have many viewports, so which one should it stick to? The active one, another one, or an already named view - since that is what should actually be considered a camera in Rhino; there is no camera "object" anywhere else. So in general it takes another step, with some UI changes and so on, and I believe the boys have more important things to do right now, especially when it’s just a matter of a short script: one click and you have it.

A request which has been discussed for an entire decade or more, and which keeps popping up again and again, sounds important to me.


I’m curious what the McNeel team thinks. Maybe the little reminder is no problem and the feature could be added to the viz tools. For SU it is a very often-used feature, and I suppose Rhino viz users would like it too.

While not exactly the solution you are asking for, you could do this from GH with a custom component. You could use the orient to camera component from Ladybug, or modify the python code for your own use.
for example: (simplified code)

import Rhino as rc

# Camera basis vectors from the active viewport
viewport = rc.RhinoDoc.ActiveDoc.Views.ActiveView.ActiveViewport
cameraX = viewport.CameraX
cameraY = viewport.CameraY
worldZ = rc.Geometry.Vector3d(0.0, 0.0, 1.0)  # world Z, keeps objects upright

def main(pt):
    # _XYorXYZ is a boolean input on the Grasshopper component
    if _XYorXYZ:
        return rc.Geometry.Plane(pt, cameraX, worldZ)  # 2D: spin about world Z only
    else:
        return rc.Geometry.Plane(pt, cameraX, cameraY)  # 3D: fully face the camera

if _pts:
    oriented = main(_pts)

We made a minor modification (to the one we use in our shop), which was adding an option to orient in XY or XYZ. To have it continually update, you can add a timer to the component.

(I do agree that it would be a nice feature to add in, but it’s not too difficult to achieve your desired outcome.)

Hi Micha - a reminder is no problem and I have linked your request to the open YT item: RH-32008


It’s been a long-time wish of mine as well; however, how to implement this in the Rhino environment is a good question, as it may not be very straightforward. Good point about SKP having only one viewport, so there’s no need to decide what happens in inactive views.

I have seen at least several good scripts here that do it “on demand”, orienting planar objects to face the camera (the user has to run it for each view angle; it’s not automated), and I even know of a good script, not published here, that does it in a semi-automated way (objects transform automatically as soon as the view becomes static after a change). However, it would be nice to have it as smooth and fast as SKP has it.

I am wondering if this could be implemented not as real geometry that always faces the camera, but rather as a “sprite”: a 2D image, sort of like TextDots, but capable of remembering real scale depending on camera distance and also sorted by depth. Maybe they could have an option to be bound to a plane (using the World Z axis as “up”) or free, to show as an image from any angle, in top view, etc.
Shadow casting in that case would also need to be considered; I’m not sure what would be possible with that implementation.
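The "remembering real scale" part of such a sprite is straightforward bookkeeping under a simple pinhole-camera model. As a hedged sketch (not any existing Rhino API; all names here are illustrative): given an object's real-world height, its distance from the camera, and the vertical field of view, the pixel height it should draw at follows directly.

```python
import math

def sprite_pixel_height(real_height, distance, fov_v_deg, viewport_px):
    """Pixel height of an object of real_height at `distance` from a
    perspective camera with vertical FOV fov_v_deg, drawn in a viewport
    that is viewport_px pixels tall."""
    # World-space height visible across the whole viewport at that distance
    visible = 2.0 * distance * math.tan(math.radians(fov_v_deg) / 2.0)
    return real_height * viewport_px / visible

# A 1.8 m person, 10 m from the camera, 50-degree vertical FOV, 1080 px viewport
px = sprite_pixel_height(1.8, 10.0, 50.0, 1080)
```

Doubling the distance halves the drawn size, which is exactly the behavior that distinguishes a true-scale sprite from a fixed-size screen overlay like a TextDot.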

Thinking out loud here but definitely would be nice to see it in Rhino at some point.



I do agree; however, as you pointed out, it is somewhat hard to decide which approach is best due to the many “side effects”. I know those, since I wrote such a script, and “live sticking” to, for example, just the active view is a bit… umm, at least weird, so it really needs at least 4-5 available constraints to give rough control over its behavior.

I don’t want to blame anybody, but we are talking about something that was used in the earliest “3D” computer games to project positioned trees and other props toward the camera. Reading the YouTrack discussion from Wim’s tracking number above, it seems to pose some major problems. It just seems a bit odd… Naturally, anything which moves relative to the view has to be updated along with the refresh rate of the screen, graphics card, etc. But if they managed that 30-40 years ago, what’s the problem now, with hardware that can gobble up entire 3D universes in real time?


I think you went too far. It isn’t hard to write at all; besides, as you can see, there are a few ready-to-go scripts, and you should know there are already objects which follow the camera, like TextDots.

The problem arises when you are expecting some kind of behavior and each user can expect something different. If it is so obvious, then tell me which objects it should be applied to, and then we’ll start the “what if” part. For me this is rather a usability/UX issue, and don’t take me wrong, but your words perfectly fit the concept of the Dunning-Kruger effect.

Just to add here: they aren’t thinking about 10 trees with the same texture, but about a whole bunch of objects(!) which are individual pieces held by the document and viewport. It’s not that you want just one randomly placed plane in space to imitate, for example, a tree. It’s more like: assume you want 10k of those as individual objects, so each one can have its own texture or some other property. You can’t assume the user will always want texture X or property Y, and then the problem arises from the object count, so you should head toward something called mesh batching (but wait: should it be a single quad mesh, a 3rd-degree surface, or even a 3D object [so where is its front?] - and besides, polysurface or mesh?). If not batching, then maybe instancing? Well, still no, since each object has its own set of properties, which doesn’t suit the instancing concept.

I have never heard of such a syndrome, but I am pretty sure you can throw it at anybody at the right moment to “win” a debate - which isn’t even happening here, by the way. I think you are reading a little too much into my words. Maybe rethink the content in a different light before you lock somebody into the padded cell.