Markerless AR with a Project Tango
I have recently been experimenting with a Project Tango device, primarily as a candidate technology for Augmented Reality and interior wayfinding – a topic that routinely comes up with some of our retail clients. In past AR projects I have used marker images, which naturally contain patterns that a computer vision algorithm can recognize and interpret to determine the orientation of a device in space relative to the target image. For the most part this is quite effective, but there are limitations. Aside from the need for a printed image, target occlusion, tracking speed, and lighting conditions are all potential pitfalls. The Tango, an experimental Android device released by Google, has depth-sensing cameras that help determine device orientation and build a dimensional data map of the physical environment. The device can then orient itself in part by matching the current feed from the depth cameras against this map. This gives us essentially markerless AR with a much-improved ability to track objects to physical geometry and scale.
What’s great is that this spatial data can be persisted in area definition files and potentially shared across devices. This has all sorts of obvious applications for games and other interactions in the physical world. Imagine, say, you run a museum. You can now, at least in theory, create an app that overlays digital information on top of exhibits or even helps visitors quickly locate the gift shop.
But of course, the Tango has its own limitations. Just as with AR marker images, tracking depends on the physical space containing plenty of irregular detail. Also, spaces are not static, so area definitions presumably need to be updated, or multiple files created for different circumstances.
Whenever I have access to a new piece of technology I try to build something for it, however trivial, to begin to learn its limitations and how it works. So here is a video of my first attempt at a Tango app, which allows people to paint an Augmented Reality garden. It also highlights my biggest gripe with the Tango so far: after only a few weeks the color camera has already started to malfunction, causing the video layer to wash out, which destroys any realism of the augmented elements in the scene.
When the user taps the screen, the app casts a ray against Tango’s point cloud and returns a surface normal if there is an intersection with depth map information. It’s set up so that relatively flat surfaces spawn flowers and vertical surfaces spawn vines. AR apps like this are very easy to build using Tango’s APIs.

While AR is certainly important, it is far from the only use case for the technology found in the Tango. Device pose tracking can be mapped as input for movement in virtual space. There are already a handful of demos showing this with first-person games, but I expect this will prove useful for VR as well. Devices like the GearVR generally limit the user to a fixed position with only three degrees of freedom (rotation only) or require a controller. By using the spatial tracking capability of the Tango in conjunction with VR, users would have full freedom of movement without the need for external IR emitters and clumsy cords.
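To make that tap-to-plant logic concrete, here is a standalone Python sketch of the idea, independent of the actual Tango SDK. The `ray_hit` helper, the point format, and the 30° flat-versus-vertical threshold are all my own simplifications, not Tango API calls: find the cloud point nearest the tap ray, then pick flowers or vines based on how the surface normal at that point compares to "up".

```python
import math

def ray_hit(origin, direction, cloud, max_dist=0.05):
    """Find the point-cloud point nearest to a ray (direction must be a
    unit vector). Returns the point, or None if nothing in the cloud is
    within max_dist meters of the ray."""
    best, best_d = None, max_dist
    for p in cloud:
        v = [pi - oi for pi, oi in zip(p, origin)]
        t = sum(vi * di for vi, di in zip(v, direction))
        if t < 0:  # point is behind the ray origin
            continue
        on_ray = [oi + t * di for oi, di in zip(origin, direction)]
        d = math.dist(p, on_ray)
        if d < best_d:
            best, best_d = p, d
    return best

def plant_for(normal, up=(0.0, 0.0, 1.0), flat_deg=30.0):
    """Pick what to spawn from the surface normal at the hit point.
    Normals nearly (anti-)parallel with 'up' mean a roughly flat surface
    (floor or table -> flower); everything else is treated as vertical
    (wall -> vine)."""
    dot = sum(n * u for n, u in zip(normal, up))
    mag = math.sqrt(sum(n * n for n in normal))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
    return "flower" if angle <= flat_deg or angle >= 180.0 - flat_deg else "vine"
```

So a tap whose hit point has normal (0, 0, 1) – a floor – spawns a flower, while a wall normal like (1, 0, 0) spawns a vine. On the device itself the hit test and normal come from Tango’s depth data rather than a brute-force loop like this.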
Presumably, in the coming years, Tango hardware will improve as it moves out of the dev kit phase and onto consumer devices. This is a technology that is absolutely worth keeping an eye on, and one I hope to experiment with more deeply.