Virtual reality is taking off; indeed, it is on the verge of mass availability. But which of the many competing form factors and underlying principles will prevail?
The late 1960s were a defining moment in the long evolution of virtual reality. At that time, Ivan Sutherland explored his dream of an immersive world. By creating the world's first head-worn display driven by 3D graphics, he opened a window into a computer-generated world. He combined his 'Sword of Damocles' display system with a method for tracking the user's head, allowing users to look around in a simple virtual world, creating a strong sense of immersion.
His idea, it turned out, was two ideas. If the device was operated in a dark room, users would see only the world generated by the computer. In the 1980s, we would get to call this "virtual reality." If the lights in the room were on, however, users would see the virtual world overlaid onto the backdrop of the physical world, an interaction concept today known as "augmented reality."
Over the years, both ideas became highly influential in research. And while the two modes of his prototype looked indistinguishable, they turned out to afford quite different applications. VR systems tended to find application in simulation and, ultimately, games. AR research, in contrast, tended to find use in providing users with timely information about their surroundings.
One of the key limitations of head-worn VR displays was that they no longer allowed users to see their own hands and bodies; and no matter how much of a virtual world people wanted, they generally expected their bodies to be part of that world. On the one hand, this led to the development of systems that tracked the user's hands or even the entire body in order to provide a virtual one. On the other hand, the early 1990s saw the advent of the so-called CAVE system, which projects the virtual world onto the walls of a small room surrounding the user.
A few years later, AR got its own projection-based approaches. Initially, projectors were used to project onto props (Raskar and colleagues), and later also onto walls. While the CAVE was intentionally designed to be devoid of any structure or contents of its own, stronger projectors allowed researchers and artists to overlay their projected contents onto real-world background objects and materials. This approach became known as "Spatial Augmented Reality." As a result, AR applications diversified from information displays to large-scale artistic installations and media facades, blurring the distinction between VR and AR.
Today, almost 50 years after Sutherland's prototype, VR and AR are both becoming reality in the form of consumer devices, such as the VR headsets by Oculus and the Google Glass device. While Glass ships as a mere "heads-up" display, Microsoft's HoloLens runs AR software on a collection of sensors, including a depth camera that allows it to register its contents with the physical world.
Both devices are head-worn, but whether that ultimately is the way to go is subject to debate. The reason is that AR/VR are facing yet another requirement: as the computing world has become "social," VR and AR must follow. Consumers exploring virtual worlds will expect to be able to see their friends there. And while the promotion of head-worn VR displays by a social networking company suggests that "social" today may well mean "social with people located elsewhere," one might argue, or at least hope, that collocated friends and immediate social interaction will play a role in the future. Such collocated social use, however, clashes with head-worn VR displays; even see-through glasses partly impair how people can perceive each other.
But if head-worn gear is not the solution, what will it be?
This is where the authors of IllumiRoom come in. They investigate what AR/VR could look like in a truly social environment: the living room. The project builds on an eight-year series of research projects in which the authors explore the boundaries between the virtual and the physical, and I would encourage readers to read them all.
A key ingredient in the authors' approach to tackling AR/VR at home is the use of depth cameras, a type of device already available in the living room through Microsoft's Xbox gaming console. The camera is a key component because it allows the authors to integrate their system with the living room. Rather than shutting out the room, as head-worn VR displays would, the authors digitize the living room and integrate it into the user's experience. They achieve all this with "living room-compatible" components. And rather than replace what they find in the living room, the authors augment it. They appropriate the TV and use an augmented reality approach to create what might be considered more of a VR experience.
Ultimately, what impresses me about IllumiRoom is that it asks how virtual reality and physical reality should come together. When I was a research scientist at Xerox PARC in the early 2000s, I was in the process of creating very large screens by combining flat-panel displays with projection. I showed my prototype to the late Rich Gold, who liked the idea and agreed with my pitch about the productivity gains such big screens could provide. However, he was critical of my plan to simply maximize screen real estate. Bigger screens, he argued, were a trade-off: each piece allows users to access more virtuality, but at the expense of giving up a piece of the physical reality. And that is what the IllumiRoom authors excel at: creating a virtual reality while being mindful of the physical world.
To view the accompanying paper, visit doi.acm.org/10.1145/2754391
The Digital Library is published by the Association for Computing Machinery. Copyright © 2015 ACM, Inc.