Hi Dale,
Thank you for your reply.
I simply added new viewports to the Rhino document, and their projections are continuously updated from the Oculus headset's orientation and position. However, when I start a normal modeling operation in those viewports, they pause updating while the drawing process runs; once the drawing finishes, the viewports resume updating. What I need is to be able to do dynamic drawing (in the Rhino interface, not in plug-in code) while my viewport projections keep changing.
/////////////////////////////////////////////////////////////////////////
// Here is my sample code showing how I add the viewports and update
// the projection
/////////////////////////////////////////////////////////////////////////
for (int eyeIndex = 0; eyeIndex < 2; eyeIndex++)
{
  Rhino.Display.RhinoView rView = RhinoDocument.Views.Find(viewportTitles[eyeIndex], false);
  if (rView == null)
  {
    // Offset the second (right-eye) viewport by half the Rift's horizontal resolution
    int viewportOffset = (eyeIndex == 1) ? riftHor / 2 : 0;

    // Create the viewport
    rView = RhinoDocument.Views.Add(
      viewportTitles[eyeIndex],
      Rhino.Display.DefinedViewportProjection.Perspective,
      new Rectangle(new System.Drawing.Point(x + viewportOffset, y), new Size(width, height)),
      true);
  }

  if (rView == null)
    continue;

  rView.TitleVisible = false;
  rView.MainViewport.WorldAxesVisible = true;

  // 'using' ensures the ViewportInfo is disposed even when we bail out early
  using (var v_p_i = new Rhino.DocObjects.ViewportInfo(rView.MainViewport))
  {
    v_p_i.Camera35mmLensLength = 50;
    v_p_i.SetCameraLocation(CamLoc[eyeIndex]);
    v_p_i.SetCameraDirection(CamDir[eyeIndex]);
    v_p_i.SetCameraUp(CamUp[eyeIndex]);
    v_p_i.CameraAngle = (FOV_L[eyeIndex] + FOV_R[eyeIndex]) / 2;

    // Get the camera frustum
    double left, right, bottom, top, near, far;
    result = v_p_i.GetFrustum(out left, out right, out bottom, out top, out near, out far);
    if (!result)
      return;

    // The left and right eyes shift the frustum in opposite directions
    int dif1 = (eyeIndex == 0) ? -1 : 1;
    int dif2 = (eyeIndex == 0) ? 1 : -1;

    // Set an asymmetric camera frustum for this eye
    double offset = (((FOV_L[eyeIndex] * left) + (FOV_R[eyeIndex] * right)) / (FOV_L[eyeIndex] + FOV_R[eyeIndex])) * dif1;
    left += offset * dif2;
    right += offset * dif2;
    v_p_i.UnlockFrustumSymmetry();
    v_p_i.SetFrustum(left, right, bottom, top, near, far);

    // Feed the viewport information back to the viewport
    result = rView.MainViewport.SetViewProjection(v_p_i, true);
    if (!result)
      return;

    // Redraw the viewport with the new camera and frustum
    rView.Redraw();
  }
}
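For context, here is roughly how the update loop above is driven. This is just a minimal sketch: I'm showing it with a RhinoApp.Idle handler, and UpdateEyeViewports is a stand-in name for the loop shown above (my real code reads CamLoc/CamDir/CamUp from the Oculus SDK each time before updating).

```csharp
// Hypothetical driver: re-run the projection update whenever Rhino is idle.
// UpdateEyeViewports() wraps the per-eye loop shown above.
Rhino.RhinoApp.Idle += (sender, e) =>
{
  UpdateEyeViewports();
};
```

This works while no command is running, but stalls as soon as an interactive drawing operation starts, which is exactly the problem I described above.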
Another question.
I have a second question, also about viewports. As I said, I simply add two viewports to the Rhino document, so they are not distorted or color-corrected as the Oculus requires. I was therefore trying to grab the color buffer from the pipeline at the PostProcessFrameBuffer stage, feed it to the Oculus SDK, and let the SDK handle the distortion and color correction. I found the RenderPass property on the pipeline class, which might be what I am looking for, but I could not find any documentation about it online. Could you tell me whether RenderPass can do what I want? If so, could you give a brief instruction on how to use it? If not, is there another function or property I can use to capture the color buffer?
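In case it helps clarify what I'm after: the closest thing I have found so far is RhinoView.CaptureToBitmap(), sketched below. It does give me a color image of the viewport, but copying through a GDI bitmap every frame seems too slow for the Rift, which is why I was hoping for direct access to the color buffer. (FeedEyeTexture here is a hypothetical helper of mine, not a Rhino API.)

```csharp
// Per-frame capture via CaptureToBitmap -- works, but likely too slow for VR.
// rView is one of the eye viewports created above.
using (System.Drawing.Bitmap frame = rView.CaptureToBitmap())
{
  // Hand the pixels to the Oculus SDK for distortion and color
  // correction (FeedEyeTexture is my own hypothetical helper).
  FeedEyeTexture(eyeIndex, frame);
}
```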
Forgive me if I am asking too much. I have spent a long time on this but have no clue yet.
Thanks again for your efforts, and I look forward to your response.
Singwyn