Avatar depicts world-space 3D UIs
When developing on a desktop, G3D already gives us a full set of runtime debugging tools:
- Full windowed GUI
- Profiler
- Scene editor
- Integrated entity manipulation widgets
- On-screen performance HUD
However, when debugging in an HMD, we're currently reduced to the standards of ten years ago: obscure key presses, minimal on-screen UI, and reading text output in a separate console (which requires taking off the HMD!). Rather than adding ad-hoc key bindings and screen overlays, I propose an adaptor that allows the existing Widget, Surface2D, and GEvent classes to work in an HMD.
Iron Man depicts body-space 2.5D UIs
Some of the design questions:
- 3D, or 2.5D (i.e., with perspective but no depth test, so that it isn't lost "behind" near walls)?
- Attached to the avatar's head, the avatar's body, or fixed in world space?
- Routed through the existing onGraphics2D method, or a separate rendering pass?
- Mirrored to the traditional display as well?
- The OSWindow has different dimensions than the Film. This will throw off event delivery, especially for ThirdPersonManipulator. How can this be handled?
- If fixed to the avatar's head, at what depth should the UI appear?
- How should the cursor be rendered? 2D in texture space? 3D in body space? 3D in camera space?
- How should focus work, so that keyboard WASD movement control is neither locked out nor stolen by the GUI?
Terminator 2 depicts a head-space 2.5D UI
Update: this Doom3 mod has a pretty good-looking solution. I'm going to reverse-engineer it and see whether it is a good model for us.
I worked through the implications of a few designs for the G3D APIs and the assumptions made by their implementations. Based on these considerations, my first implementation attempt will be a very simple head-space 2.5D implementation (sketched in code after this list):
- Make VRApp::onGraphics bind a Framebuffer and render onGraphics2D to it before onGraphics3D is invoked. This framebuffer will match the OSWindow's dimensions.
- For each eye:
  - Overlay the onGraphics2D image as if it were a rectangle 1.5 m from the camera, just before HMD compositing.
  - Render the cursor relative to this image, if it is currently visible on the OSWindow.
- Optionally mirror the onGraphics2D image to the non-HMD display as well.
- Leave event delivery unmodified from usual G3D!
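As a rough illustration of the flow above, here is a minimal sketch of what VRApp::onGraphics might look like. It assumes G3D-style calls that I believe exist (Framebuffer::create, Texture::createEmpty, RenderDevice::push2D/pop2D, Draw::rect2D); the members m_hudFramebuffer and m_mirrorHUD and the helpers drawCursorIfVisible and composeHUDQuad are hypothetical placeholders, not real API.

```cpp
// Sketch only: m_hudFramebuffer, m_mirrorHUD, drawCursorIfVisible, and
// composeHUDQuad are hypothetical names invented for this post.
void VRApp::onGraphics(RenderDevice* rd, Array<shared_ptr<Surface>>& surface,
                       Array<shared_ptr<Surface2D>>& surface2D) {
    // 1. Render the unmodified onGraphics2D output to an off-screen
    //    framebuffer that matches the OSWindow's dimensions.
    if (isNull(m_hudFramebuffer)) {
        m_hudFramebuffer = Framebuffer::create(Texture::createEmpty(
            "VRApp::m_hudFramebuffer", window()->width(), window()->height(),
            ImageFormat::RGBA8()));
    }
    rd->push2D(m_hudFramebuffer); {
        rd->setColorClearValue(Color4::clear());
        rd->clear();
        onGraphics2D(rd, surface2D);
        drawCursorIfVisible(rd); // hypothetical: draw the cursor if the OSWindow shows it
    } rd->pop2D();

    for (int eye = 0; eye < 2; ++eye) {
        // ... existing per-eye framebuffer/camera setup and onGraphics3D(rd, surface) ...

        // 2. Just before HMD compositing, overlay the 2D image as a rectangle
        //    1.5 m in front of the eye, with no depth test (hypothetical helper).
        composeHUDQuad(rd, m_hudFramebuffer->texture(0), 1.5f);
    }

    // 3. Optionally mirror the 2D image to the non-HMD display as well.
    if (m_mirrorHUD) {
        rd->push2D(); {
            Draw::rect2D(rd->viewport(), rd, Color3::white(), m_hudFramebuffer->texture(0));
        } rd->pop2D();
    }
}
```

Because the off-screen framebuffer matches the OSWindow's dimensions, event delivery can stay exactly as it is, per the last bullet above.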
As described, this will take about an hour to implement in VRApp. I foresee several potential problems with it, which is what I'll spend most of the jam working on:
- The 2D event positions might end up relative to the upper-left corner of the Film instead of centered.
- The DK2's resolution won't match the on-screen window's, and all kinds of distortion will be applied. This may make text unreadable. We may need some kind of G3D "low-DPI" setting, or to adjust the OSWindow to a more practical size. Oculus now provides a UI layer that I'd like to take advantage of.
- Timewarp may do horrible things to this, because it will be a transparent, head-space surface. The UI layer may handle this.
- This design won't allow selecting 3D objects or using ThirdPersonManipulator with them.
- I think that body space may have some advantages over head space, including the ability to look past the UI by shifting your head when it occludes something (see the sketch after this list). However, body space doesn't work very well with nomad VR under G3D's current definition of "body = camera entity". We may need another level of indirection in the G3D VR model, or some way of moving the body based on tracked data.
- This design isn't nearly as exciting as Iron Man's.
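For the body-space idea mentioned above, a minimal sketch, assuming CFrame::getXYZYPRDegrees and CFrame::fromXYZYPRDegrees work as in current G3D, is to derive a yaw-only "body" frame from the tracked head frame (the function name is hypothetical):

```cpp
// Sketch: build a yaw-only "body" frame from the tracked head frame.
// Assumes CFrame::getXYZYPRDegrees / fromXYZYPRDegrees from G3D;
// bodyFrameFromHead itself is a hypothetical helper.
static CFrame bodyFrameFromHead(const CFrame& head) {
    float x, y, z, yaw, pitch, roll;
    head.getXYZYPRDegrees(x, y, z, yaw, pitch, roll);
    // Keep position and heading; discard pitch and roll so the UI stays level
    // and can be looked past by turning or tilting the head.
    return CFrame::fromXYZYPRDegrees(x, y, z, yaw, 0.0f, 0.0f);
}
```

This still leaves the nomad-VR question open: the head frame would have to come from tracking data rather than from the camera entity.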