- AR by scanning the room with a Kinect beforehand and importing the model into the VR scene.
- Music Visualization
- Big screen TV in VR
- Full virtual machine desktop in VR
- Godzilla stomping buildings (using a Kinect to track your legs and a VR headset for display)
- Video Conferencing in VR (use a Kinect or stereo camera to capture color+depth, then render a 3D floating head, or a full body where only the head moves)
- VR Remote Control car/drone
- RTS game where you are explicitly a commander using VR to control troops (this plays into the limitations of current VR, but would require many jams' worth of prerequisites: screens in VR, gesture control)
- Soccer Header Game
- Short 3D film in VR movie theatre, with something completely coming out of the screen
- Getting a barbershop haircut
- VR Cards
- VR Ping Pong
- Cast magic spells
- Dodge bullets in the Matrix (Obviously would be better with bodytracking…)
Searching around earlier today for other projects that might spur me toward one of these ideas, I found this video:
I encourage you to watch the video, but the gist is that this person set up three Microsoft Kinects (color+depth cameras) in an equilateral triangle around his body while wearing an Oculus Rift. The depth and color data are then used to reconstruct a low-quality mesh of his body every frame.
What this gives you is a blurry, glitchy mess of an avatar... that is your real body, fully articulated, with very low latency (just the Kinect's latency to deliver a frame, plus a frame to process it client-side).
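At its core, that reconstruction starts from a very standard step: back-projecting each depth pixel through a pinhole camera model to get a camera-space point cloud. Below is a minimal sketch of that step; the focal-length values are rough placeholders rather than calibrated Kinect intrinsics, and nothing here is tied to a particular Kinect SDK.

```cpp
// Sketch: back-project a Kinect depth frame into camera-space 3D points.
// The intrinsics (fx, fy, cx, cy) are illustrative values, not the
// calibrated numbers for any particular Kinect unit.
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

std::vector<Point3> depthToPointCloud(const uint16_t* depthMM, int width, int height) {
    // Assumed pinhole intrinsics for a Kinect depth camera (approximate).
    const float fx = 580.0f, fy = 580.0f;
    const float cx = width * 0.5f, cy = height * 0.5f;

    std::vector<Point3> points;
    points.reserve(width * height);

    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            uint16_t d = depthMM[v * width + u];
            if (d == 0) continue;        // 0 means "no depth reading"
            float z = d * 0.001f;        // millimeters -> meters

            // Standard pinhole back-projection: pixel (u, v) plus depth z
            // gives a 3D point in the depth camera's coordinate frame.
            points.push_back({ (u - cx) * z / fx,
                               (v - cy) * z / fy,
                               z });
        }
    }
    return points;
}
```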
This was amazing to me, and I decided to drop my other ideas until I have implemented a rudimentary version of this. Matt Fisher, a postdoc in my research group at Stanford, will be providing some assistance. I also want to take it a bit further and lean into the "glitchy" feel of the resulting avatar by adding rendering effects that play it up and make the experience feel like one from a movie or game. A couple of examples of "glitchy" non-photorealistic rendering (NPR) are below:
|Glitch effect from the 2012 movie Wreck-It Ralph|
|Sample image from ImageGlitcher|
Goal: Render a properly positioned avatar of myself in real time on a DK2 with low-latency, fairly accurate tracking, and non-photorealistic rendering.
- Up to three Microsoft Kinects (starting with just one)
- A Windows 8 computer with an NVIDIA GeForce 980 and Visual Studio 2015 as my main development computer
- An Oculus DK2
- The G3D Innovation Engine
- Any and all software resources referenced in Oliver Kreylos's blog post about the project I'm replicating
- shadertoy.com, for inspiration for the NPR effect
- Get color and depth data out of a single Kinect within a G3D VRApp.
- Render a point cloud based on the Kinect data
- Remove background points from the point cloud (a crude depth filter is sketched after this list)
- Calibrate the transform between the Kinect's camera space and the Oculus's tracking space (applying such a transform is sketched after this list)
- Design a simple NPR shader to add "glitchiness" to the now-existing avatar (one possible effect is sketched after this list)
- Add a second Kinect
- Add a third Kinect
- Investigate the method used by Oliver Kreylos to calibrate all three Kinects, and implement it
- Investigate the method used by Oliver Kreylos to construct a mesh from the multi-view depth data, and implement it
- Test using screen-space raytracing for rendering the avatar directly
- Use a Kinect v2 or other higher-quality sensor
- Improve the NPR shader
- Apply shadowing to the virtual scene using the rendered avatar
- Apply ambient occlusion (AO) to the virtual scene using the rendered avatar
- Calculate spherical harmonic lighting for the real world, subtract it out in the rendering, and apply virtual lighting instead
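For the background-removal milestone, a crude first pass is to keep only the points inside a box around where I'll be standing. The bounds below are made-up placeholders; differencing against a captured depth frame of the empty room would be the more principled baseline.

```cpp
// Sketch: crude background removal by keeping only points inside a box
// around the user. All bounds are placeholder tuning values.
#include <vector>

struct Point3 { float x, y, z; };  // same simple point type as in the earlier sketch

std::vector<Point3> removeBackground(const std::vector<Point3>& cloud) {
    std::vector<Point3> kept;
    kept.reserve(cloud.size());
    for (const Point3& p : cloud) {
        bool inBox = p.z > 0.5f && p.z < 2.5f &&   // 0.5-2.5 m in front of the sensor
                     p.x > -1.0f && p.x < 1.0f &&  // +/- 1 m sideways
                     p.y > -1.2f && p.y < 1.2f;    // +/- 1.2 m vertically
        if (inBox) kept.push_back(p);
    }
    return kept;
}
```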
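For the calibration milestone, whatever procedure ends up producing the Kinect-to-Rift transform, applying it per point is just a rigid transform. The sketch below assumes the calibration yields a rotation matrix R and a translation t; with two or three Kinects, each sensor gets its own R and t and the transformed clouds are simply concatenated.

```cpp
// Sketch: move a Kinect-camera-space point into the Rift's tracking space
// using a calibrated rigid transform. R and t are placeholders for whatever
// the calibration procedure produces.
struct Point3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

Point3 kinectToRiftSpace(const Point3& p, const Mat3& R, const Point3& t) {
    // p' = R * p + t
    return {
        R.m[0][0] * p.x + R.m[0][1] * p.y + R.m[0][2] * p.z + t.x,
        R.m[1][0] * p.x + R.m[1][1] * p.y + R.m[1][2] * p.z + t.y,
        R.m[2][0] * p.x + R.m[2][1] * p.y + R.m[2][2] * p.z + t.z
    };
}
```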
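And for the NPR milestone, here is one possible "glitch" effect, written CPU-side over the point cloud purely for illustration; a real version would live in a vertex or geometry shader, and all constants here are arbitrary tuning values rather than anything taken from Kreylos's project. The idea is to slice the avatar into horizontal bands and occasionally shift or drop a band, which reads as a scanline glitch.

```cpp
// Sketch: a cheap "scanline glitch" over the point-cloud avatar.
// Horizontal bands of points get shifted sideways or pushed away.
#include <cmath>
#include <vector>

struct Point3 { float x, y, z; };  // same simple point type as in the earlier sketches

void applyGlitch(std::vector<Point3>& cloud, float timeSeconds) {
    for (Point3& p : cloud) {
        // Slice the avatar into horizontal bands ~2 cm tall.
        int band = static_cast<int>(std::floor(p.y / 0.02f));

        // Cheap per-band pseudo-random value that changes a few times per second.
        float h = std::sin(band * 12.9898f + std::floor(timeSeconds * 8.0f) * 78.233f);
        float r = h - std::floor(h);   // in [0, 1)

        if (r > 0.95f) {
            p.x += 0.05f;              // shift a few bands sideways
        } else if (r < 0.02f) {
            p.z += 100.0f;             // push a band far away ("dropout")
        }
    }
}
```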
Several potential problems: