Sunday, August 2, 2015

[MMc] Design for an in-HMD GUI

I'm Morgan McGuire, professor at Williams College and researcher at NVIDIA. For the game jam, I'm designing an adaptor class that will allow using the G3D Innovation Engine GUI inside of the HMD view. I'll work on the G3D::VRApp class directly using the Oculus DK2.

Avatar depicts world-space 3D UIs
G3D developers enjoy a great in-engine debugging environment, including:
  • Full windowed GUI
  • Profiler
  • Scene editor
  • Integrated entity manipulation widgets
  • On-screen performance HUD

However, when debugging in an HMD, we're currently reduced to the standards of ten years ago: obscure key presses, minimal on-screen UI, and a separate console for text output (which requires taking off the HMD!). Rather than add more ad-hoc key presses and screen overlays, I propose creating an adaptor that allows the existing Widget, Surface2D, and GEvent classes to work in an HMD.
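The proposed adaptor might take roughly this shape (a minimal skeleton with illustrative names; `GEventStub` and `HMDGuiAdaptor` are hypothetical stand-ins, not real G3D classes): it queues events in OSWindow coordinates exactly as they arrive today, so the existing Widget, Surface2D, and GEvent code needs no changes.

```cpp
#include <vector>

// Hypothetical stand-in for G3D::GEvent's 2D position fields.
struct GEventStub { int x, y; };

// Sketch of the proposed adaptor: the HMD app sees the existing 2D GUI as
// an off-screen render target plus an event queue. Names are illustrative.
class HMDGuiAdaptor {
public:
    // Events arrive in OSWindow coordinates, exactly as they do today.
    void onEvent(const GEventStub& e) { m_pending.push_back(e); }

    // Called once per frame before the 3D pass; the real version would bind
    // a Framebuffer, run onGraphics2D into it, and dispatch these events.
    // Returns the number of events dispatched this frame.
    int flushEvents() {
        const int n = static_cast<int>(m_pending.size());
        m_pending.clear();
        return n;
    }

private:
    std::vector<GEventStub> m_pending;
};
```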

Iron Man depicts body-space 2.5D UIs
There are several possible ways to present the user interface. These raise questions for the experience and the design:

  • 3D, or 2.5D (i.e., with perspective but no depth test so that it isn't lost "behind" near walls)?
  • Attached to the avatar's head, the avatar's body, or fixed in world space?
  • Routed through the existing onGraphics2D method, or a separate rendering pass?
  • Mirrored to the traditional display as well?
  • The OSWindow has different dimensions than the Film. This will throw off event delivery, especially for ThirdPersonManipulator. How can this be handled?
  • If fixed relative to the avatar's head, at what depth?
  • How should the cursor be rendered? 2D in texture space? 3D in body space? 3D in camera space?
  • How should keyboard focus work, so that the GUI doesn't lock out WASD movement, and movement controls don't steal keystrokes from the GUI?
Terminator 2 depicts a head-space 2.5D UI
Existing applications offer little guidance for using a 2D GUI in a 3D world. Oculus's demo uses the center of the head-space screen as the selection point, which is awkward even for hitting a large button. DOOM 3 has in-game computer screen UIs that capture the mouse when it crosses their borders and release it when it moves beyond them; however, there is no standard VR mode for DOOM 3, so we can't see how this interacts with the camera. SightlineVR's menu feels like having a mobile device screen pasted to your face (because it is). There's been a lot of research on VR GUIs, but most of it predates today's consumer HMD form factors and APIs, and it focused on designing the optimal VR UI from scratch, not on mapping an existing 2D UI into an HMD.

Update: this DOOM 3 mod has a pretty good-looking solution. I'm going to reverse-engineer it and see whether it's a good model for us.

I worked through the implications of a few designs for the G3D APIs and assumptions made by their implementations. Based on these considerations, my first implementation attempt will be a very simple head-space 2.5D implementation:

  1. Make VRApp::onGraphics bind a Framebuffer and render onGraphics2D to it before onGraphics3D is invoked. This framebuffer will match the OSWindow's dimensions.
  2. For each eye
    1. Overlay the onGraphics2D image as if it were a rectangle at 1.5m from the camera just before HMD compositing
    2. Render the cursor relative to this image, if it is currently visible on the OSWindow
  3. Optionally mirror the onGraphics2D image to the non-HMD display as well
  4. Leave event delivery unmodified from usual G3D!
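Step 2.1 amounts to placing a textured rectangle on each eye camera's -z axis. A geometry sketch of that placement (the function name, and the choice of making the GUI's angular width a parameter, are my assumptions, not G3D API):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Compute the camera-space corners of a rectangle "distance" meters in
// front of the eye (camera looks down -z), spanning guiFOVRadians
// horizontally, with height derived from the onGraphics2D image's
// aspect ratio (width / height).
void guiQuadCorners(float guiFOVRadians, float imageAspect, float distance,
                    Vec3 corners[4]) {
    const float halfW = distance * std::tan(guiFOVRadians * 0.5f);
    const float halfH = halfW / imageAspect;
    corners[0] = { -halfW,  halfH, -distance };  // upper left
    corners[1] = {  halfW,  halfH, -distance };  // upper right
    corners[2] = {  halfW, -halfH, -distance };  // lower right
    corners[3] = { -halfW, -halfH, -distance };  // lower left
}
```

At 1.5 m with a 90-degree GUI width, the quad is 3 m wide; narrowing the FOV shrinks the quad without changing its distance.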

As described, this will take about an hour to implement in VRApp. I foresee several potential problems with it, which is what I'll spend most of the jam working on:

  • The 2D event positions might end up relative to the upper-left corner of the film instead of centered.
  • The DK2's resolution won't match the on-screen window's, and the compositor applies heavy distortion, which may make text unreadable. We may need some kind of G3D "low-DPI" setting, or to adjust the OSWindow to a more practical size. Oculus now provides a UI layer that I'd like to take advantage of.
  • Timewarp may do horrible things to this, because it will be a transparent, head-space surface. The UI layer may handle this.
  • This design won't allow selecting 3D objects or using ThirdPersonManipulator with them.
  • I think that body space may have some advantages over head space, including the ability to look past the UI by shifting your head when it occludes something. However, body space doesn't work well with nomad VR under G3D's current definition of "body = camera entity". We may need another level of indirection in the G3D VR model, or some way of moving the body based on tracked data.
  • This design isn't nearly as exciting as Iron Man's.

