RhinoVR - a sample plug-in for rendering Rhino viewports in virtual reality

Very tricky indeed. Many thanks for your help!

I had previously tried transforming the xforms by m_cam_to_world_xform. In fact, I’m using m_cam_to_world_xform from the first frame (first_cam_to_world), so the transformations are consistent. Here’s the updated code (replying to your second comment at the end):

if (dpad_up_pressed || dpad_down_pressed)
{
  // store relative transformation between coordinate systems before navigation
  first_cam_to_world = m_cam_to_world_xform;
  // store controller xform before navigation
  first_xform = m_device_data[m_device_index_left_hand].m_xform;
  first_xform = first_cam_to_world * first_xform;
}
else if (dpad_up || dpad_down)
{
  // store current controller Xform
  current_xform = m_device_data[m_device_index_left_hand].m_xform;
  current_xform = first_cam_to_world * current_xform;
  // relative transformation from first_xform to current_xform
  ON_Xform diff_xform = first_xform.Inverse() * current_xform;
  // apply independent transformations to cam_to_world
  //m_cam_to_world_xform = diff_xform * m_cam_to_world_xform; // transforms along local coordinates (rhino referential)
  m_cam_to_world_xform = m_cam_to_world_xform * diff_xform; // transforms along global coordinates ("physical space")   
  //resets first_xform
  first_xform = current_xform; // added
}

Although the orientations get corrected, I’m still getting the same effect, in which translations are dependent on controller orientation. Again, it seems like the controllers’ xforms are local, i.e. relative to their own reference frames.

I’m going to dig further, e.g. where those xforms are captured/calculated (is it in RhinoVR or deeper, in OpenVR?), and where the controller meshes’ absolute positions in VR/world space live (maybe I could use those instead of m_device_data[m_device_index_left_hand].m_xform…). Any suggestions are most welcome :slight_smile:

@castroecosta The way I debugged this kind of stuff when I was writing RhinoVR was to start out by testing very simple things, making sure they behave exactly as they should, and then slowly adding complexity.

The line which gets the transforms for all devices is inside UpdatePosesAndWaitForVSync():

m_compositor->WaitGetPoses(m_device_poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

WaitGetPoses is an OpenVR function. These transforms are later converted into ON_Xforms.

You should find everything you’re looking for in the RhinoVR source code.

Hi @DavidEranen, thanks for your suggestion. Meanwhile I solved it by subtracting just the translation from the first and current xforms. Best, Eduardo
[edited] Also, it is important to use cam_to_world for the change of basis! The actual code:

if (l_grip_pressed)
{
  // store relative transformation between coordinate systems before navigation
  l_first_cam_to_world = m_cam_to_world_xform;
  // store controller xform before navigation
  l_first_xform = m_device_data[m_device_index_left_hand].m_xform;
}
else if (l_grip_down)
{
  // extract translation from first_xform matrix
  ON_Xform l_first_translate = ExtractTranslation(l_first_xform);
  // change basis by conjugating with first_cam_to_world and its inverse
  l_first_translate = l_first_cam_to_world * l_first_translate * l_first_cam_to_world.Inverse();

  // store current controller xform
  ON_Xform l_current_xform = m_device_data[m_device_index_left_hand].m_xform;
  // extract translation from current_xform matrix
  ON_Xform l_current_translate = ExtractTranslation(l_current_xform);
  // change basis by conjugating with first_cam_to_world and its inverse
  l_current_translate = l_first_cam_to_world * l_current_translate * l_first_cam_to_world.Inverse();

  // relative transformation from first_xform to current_xform (full xform, unused below)
  ON_Xform l_diff_xform = l_first_xform.Inverse() * l_current_xform;
  // relative translation only
  ON_Xform l_diff_translate = l_first_translate.Inverse() * l_current_translate;

  // apply the translation-only difference to the viewport
  m_vp_hmd.Transform(l_diff_translate);

  // reset first_xform so the next frame's difference is incremental
  l_first_xform = l_current_xform; // added
}

Hello, does it support Rift S or only Rift?

Thank you.

I personally hope Windows Mixed Reality support will be added in the future, because:

  • Steam is a gaming platform. It’s a bit ridiculous to have this at a workplace.
  • Oculus is Facebook. That’s banned in many workplaces.
  • WMR is built straight into Windows 10. No setup, no accounts necessary.

(HP Reverb Pro is also currently the highest resolution headset you can get that doesn’t have any distortion, doesn’t require any tracking stations and doesn’t cost an arm and a leg. Just two wires to plug into your PC and you’re done.)

All of this would be moot, though, if OpenXR were the solution used instead of OpenVR.

@bryan.pereira,

I don’t have a Rift S, so I don’t know. I’m pretty sure it supports all Rifts that work with the OpenVR library.

We have HP Reverb Pros and they do run using Windows Mixed Reality PLUS Steam VR. AFAIK we cannot run them without Steam VR, but maybe I’m missing something?

G

We’re thinking of buying a VR set for the office, and your plug-in would be one of the tools we would use.

If anyone can confirm that it is compatible with the Rift S, I’d greatly appreciate it, before we go and buy one :slight_smile:

Bryan

If you use Unity or Unreal you can.

We use Unreal with Datasmith to visualize our CAD models, and there you can check the two Microsoft Mixed Reality plugins, and uncheck the Steam/Oculus plugins and you’ll be able to view VR without any Steam at all (the OpenXR plugin is still experimental, I think).

It’s so liberating and more professional to do it this way.

Thanks, I’ll look into it. Having to use Steam is not great; I’d rather avoid it.

Does the RhinoVR plug-in work with the HP Reverb Pro using Steam VR?

Hi David,
Thank you for putting the work into this -
I just tried installing the plug-in in Rhino 7, but it doesn’t seem to want to install?

Hi @ben.storch,

I released a new version just now that should work with the latest Rhino 7 (Service Release 2):

Please download the .zip file, unzip and then drag the .rhp file onto Rhino 7.

-David


Thanks. Due to Rhino’s slow FPS we have not had success in using this on anything but very simple things. Could you consider a “simple” workflow that extracts the render meshes, hides the originals, and draws those meshes through a display pipeline to speed up the redraw/refresh rate?

I understand this would disable the ability to make modifications and create new objects in VR.

Also, does it now support Oculus’ feature to “feed frames to the view-space” (for lack of the right term), or are all frames sent directly to the headset’s screens as-is? (In TwinMotion we can have quite low FPS and still not get seasick because of this; you’ll see the black boundary around the frames if you turn around quickly, though.)

I understand and agree with you, it’s pretty sluggish on real-world models. It’s all about time and prioritization, and currently other things take priority.

I believe Oculus’ spacewarp and timewarp features are enabled by default but they only kick in if the frame rate is high enough, which it might not be most of the time.

Optimizing Rhino’s viewport display in the general case will of course also improve RhinoVR, and that we are working on.

-David


Ok, thanks for the feedback!
It would be great if the plugin could develop into a presentation platform. If I have time I’ll look into comparing the speed of a conduit drawing meshes vs. “real” objects to see how much it affects performance.

This is a 25K polygon model (Low poly), and it’s a single mesh. On this system I get 82FPS when I test -TestMaxSpeed with VerticalSync disabled.

Basically this means that using Rendered mode with VR is a challenge, as 90 FPS is the recommended minimum to avoid nausea. So if anything could be done to trick Rhino into drawing stuff faster, even if that means making everything static, then I think it’s worth exploring.

This system is a dual Xeon X5650 with a 2060 Super. (The X5650’s single-core speed is about 65% of that of a 10900, so not super-hot, but not too limiting either. The 2060 Super is more than capable.)

Edit:
I can get as high as 714 FPS in shaded mode with no mesh wires. So the system feeds the GPU fast enough. I guess the shadows are slowing things down too much. If they could be cached and reused, then maybe that could be a solution?
(If I set shadows to the lowest value, I get 236 FPS.)

Point is: I think VR and a Rendered Mode with shadows goes hand in hand :slight_smile:

Oh, and by the way, I did make a script that hides the mesh, pops it in a conduit and measures the result, but that is slower than using the mesh directly, so that might be a dead end.

Hi David,
Thank you for your quick response! I managed to get it running, so that’s a start!
I am not enough of a programmer to develop this, and maybe should just try Mindesk.
The only way I could develop this is if I could make it a Unity project, but I would probably have to start from scratch, wouldn’t I? I had found some projects a while ago where people were doing some scripting in Unity to link to RhinoInside (I’ll have to find them again), so I should probably start from there… though that’s different from using Unity to build a standalone VR app to run as a Rhino plug-in.

Hello David,
I have been testing out this plug-in and I seem to have tested most of the available functions successfully using a Rift S.

Is there a way of viewing and interacting with a HumanUI window from Grasshopper?