Did you grow up with a children’s book called Harold and the Purple Crayon? Harold is four years old and uses a magic purple crayon to add amazing things to his world simply by drawing them. When he goes out at night for a walk, he draws a moon to light his way. When he’s hungry, he draws pies to eat. When he’s tired, he draws his bed and goes to sleep.
For 50 years, Harold has used his purple crayon to navigate and enhance his surroundings to the delight of millions of fans attracted to the idea of superimposing their objects of desire onto the physical world and watching them miraculously spring to life.
Mobile augmented reality (AR) attempts to make a smartphone act much like Harold’s purple crayon. Media-centric smartphones, equipped with advanced cameras plus GPS, compass and tilt sensors, can now overlay digital information onto the physical world. Users of the iPhone 3GS and various Android models can use their camera function to experience a mixed reality, in which real-time media and information merge with physical locations, objects or even people. Mobile AR transforms the see-and-touch physical world into a palette upon which users can draw with virtual information accessed via their smartphones.
However, for all its promise to turn the outside world into the ultimate desktop, mobile AR suffers from significant teething pains. Some of mobile AR’s issues, such as educating users about it, are typical of any new technology. More specific and fundamental challenges stem from mobile AR’s core promise: that it will show the user the surrounding world in a more compelling and useful way.
Mobile AR makes no claim that the information or media it provides is any more accurate than what the user can get from mobile versions of MapQuest or Google Maps. The primary selling point for mobile AR 1.0 is its output. A mobile AR session doesn’t require the user to toggle their attention between the physical world and a display screen. Instead, mobile AR integrates its output directly with the user’s visual perception in real time. The smartphone interface, information and services plus the outside world blend into one user experience. Whether a value proposition based on perception rather than utility will be enough to create a stand-alone market isn’t clear at the end of 2009.
Moreover, there are internal tensions within the traditional AR community over whether current mobile AR qualifies as true augmented reality. Military-grade heads-up displays and industrial AR applications often understand what they are “seeing.” They can analyze a stream of visual data to identify that a chassis on an assembly line is an actual chassis, or that a missile streaking toward a fighter plane is indeed a missile.
Conversely, the current crop of mobile AR browsers for smartphones uses location technologies to display information on a video stream. The smartphone doesn’t “see” what is in front of it; it just knows its geographic position. It uses geolocation data to place digital content, 2-D and 3-D objects, and links to other information and services into the user’s field of vision. When mobile AR works well, the user has the illusion of an optical see-through experience, when in fact they are viewing a video stream that the mobile AR browser has annotated with digital assets. If the mobile AR user covers the camera lens with a finger, the digital information about their immediate physical surroundings stays on screen.
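The geolocation mechanics described above can be sketched in a few lines. This is an illustrative simplification, not the code of any actual AR browser: it assumes the phone reports a latitude/longitude fix plus a compass heading, computes the bearing to a point of interest, and maps that bearing to a horizontal pixel position within the camera’s field of view.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user (lat1, lon1) to a
    point of interest (lat2, lon2), in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, compass_heading, screen_width=480, fov_deg=60):
    """Map a POI's bearing to a horizontal pixel position, given the
    compass heading the camera points at and its horizontal field of
    view. Returns None when the POI lies outside the camera's view."""
    offset = (poi_bearing - compass_heading + 180) % 360 - 180  # -180..180
    if abs(offset) > fov_deg / 2:
        return None  # off-screen, don't draw
    return round(screen_width * (0.5 + offset / fov_deg))
```

A POI bearing 10 degrees to the right of a north-facing camera lands right of center; a POI due east falls outside a 60-degree field of view and is skipped. The screen width and field-of-view values here are placeholders, not any particular handset’s specs.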
It’s not surprising, therefore, that location accuracy is the linchpin — and potentially the Achilles heel — of consumer-focused mobile AR. Without accurate location data, it’s extremely difficult to align digital objects or information with the physical objects and landmarks being captured by the smartphone’s video camera. Objects might be placed in the wrong location, or the mobile AR app may return the wrong information in the first place. Both failures have been observed anecdotally by mobile AR researchers. The best sustained level of location accuracy for standard smartphones in 2009 is about 20 meters.
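A bit of back-of-envelope trigonometry shows why 20 meters matters. Treating the position error as a sideways displacement (a simplifying assumption), the worst-case angular misalignment of an overlay depends on how far away the annotated object is:

```python
import math

def worst_case_offset_deg(gps_error_m, poi_distance_m):
    """Worst-case angular misalignment of an overlay when the phone's
    position estimate is off by gps_error_m (sideways) and the
    annotated object is poi_distance_m away."""
    return math.degrees(math.atan2(gps_error_m, poi_distance_m))

# With ~20 m accuracy, a cafe 50 m away can be drawn about 22 degrees
# off target; at 500 m the same error shrinks to roughly 2.3 degrees.
```

This is why nearby objects are the hardest case for geolocation-only AR: the closer the landmark, the more visibly a fixed position error smears its label across the scene.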
The current state of location accuracy makes for rather primitive mobile AR, but there are use cases — such as outdoor festivals — in which mobile AR has begun to prove itself. One of the case studies in this report examines how mobile AR was set up and used for the Voodoo Experience music festival in New Orleans during Halloween weekend 2009. The organizers and mobile AR application developers had the advantage of knowing when people would be there, where they would congregate and what interested them (e.g., food, music, restrooms, first aid and beer). This narrow focus made the development of a useful and compelling mobile AR user experience far more tractable.
These early efforts are important for establishing a toehold for mobile AR. However, there are two larger location plays for mobile AR. The first involves mining the rich vein of geo-located data available on the web, such as all the location-searchable content on Google or Bing, Flickr images, Twitter messages or Gowalla and Foursquare maps. A famous landmark or hot café that is repeatedly visited and annotated from different angles on different social networks offers an opportunity to harvest and then error-correct those geographic coordinates, more tightly aligning digital content and objects to an attraction or location. A second case study in this report looks at a social mobile AR app called Junaio to drill deeper into how that is done.
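One simple way to error-correct crowd-sourced coordinates, sketched here purely as an illustration (this is not Junaio’s published method), is to collect many geotags of the same landmark, discard outliers far from the median, and average the rest:

```python
import statistics

def refine_coordinates(geotags, max_dev_m=50.0):
    """Estimate a landmark's position from noisy crowd-sourced geotags:
    take the per-axis median, discard tags more than max_dev_m from it,
    then average the survivors. geotags is a list of (lat, lon) pairs.
    A crude flat-earth conversion is fine at landmark scale: one degree
    is on the order of 111,000 m."""
    m_per_deg = 111_000.0
    med_lat = statistics.median(lat for lat, _ in geotags)
    med_lon = statistics.median(lon for _, lon in geotags)
    kept = [(lat, lon) for lat, lon in geotags
            if abs(lat - med_lat) * m_per_deg <= max_dev_m
            and abs(lon - med_lon) * m_per_deg <= max_dev_m]
    return (statistics.fmean(lat for lat, _ in kept),
            statistics.fmean(lon for _, lon in kept))
```

Fed three tags clustered near the Golden Gate Bridge and one wildly mistagged photo, the outlier is rejected and the cluster’s centroid comes back. Production systems would weight sources, handle the longitude scale varying with latitude, and do much more, but the harvest-and-correct idea is the same.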
However, the major area for location innovation is the great indoors. GPS signals do not reliably penetrate buildings, and there is currently no effective mass-market solution for indoor location-based navigation or services. None. Zip. This is a giant and obviously lucrative hole to fill for a host of service models, not just mobile AR.
Given the potential value of lashing location data more tightly to content and services, it’s hardly surprising that the big ecosystem players are already placing bets with mobile AR in mind. For example, Apple filed patents in July 2009 for something called the ID App, an application that helps an iPhone identify objects in a user’s surroundings in order to present additional information about them. According to Apple Insider and U.S. government filings, ID App would automatically determine a user’s current outside environment and allow the user to identify the object by selecting from a list of detection technologies such as an RFID reader, a camera or a GPS/compass reading. Based on the selection, the iPhone would then search a collection of databases to return information about the object. Whether the information returned by ID App will render as a mobile AR type output isn’t clear from the filing. However, every mobile AR developer interviewed by GigaOM Pro for this report mentioned that Apple still keeps the iPhone’s video feed API effectively closed, in marked contrast to every other smartphone OS environment. This suggests that Apple sees image recognition as a major next step for the iPhone platform.
Apple isn’t alone. Google announced the launch of Google Goggles for Android in early December 2009. The system lets the mobile user capture an image with the smartphone camera; Google then decomposes the image into object-based parts, or signatures, and analyzes those parts against its vast image database. Google Goggles also integrates GPS and compass functionality that can help identify a major landmark, such as the Golden Gate Bridge. Nokia launched its similar Point and Find system in March 2009, which captures images using the phone and compares them against an image database to return links and associated information. In both cases, the accuracy of the image results depends heavily on the degree to which the object in question carries image attributes such as standard or 2-D barcodes, UPC codes or other formal means of identification. However, it seems reasonably clear that all these companies are racing to create a general-purpose image recognition–based search engine for smartphones.
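Google and Nokia have not published the internals of their matching pipelines, but the flavor of signature-based image comparison can be conveyed with a toy “average hash”: reduce an image to a bit pattern and compare patterns by how many bits differ. Real systems use far more robust features than this sketch:

```python
def average_hash(pixels):
    """Toy image signature: one bit per pixel, set when the pixel is
    brighter than the image's mean. pixels is a 2-D list of grayscale
    values (a stand-in for a real downscaled camera frame)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits; a small distance suggests the query image
    and a database image show the same object."""
    return sum(x != y for x, y in zip(a, b))
```

A database lookup then reduces to finding the stored signature with the smallest distance to the query’s. The paragraph above also explains why barcodes and UPC codes help so much: they are high-contrast, standardized patterns that yield stable signatures, unlike a café photographed in changing light.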
The hurried pace of announcements related to mobile AR by players large and small is part of a larger trend, namely that the ecosystems surrounding mobile content/services, mobile location and mobile social networking are taking a hard turn toward the real-time mobile web. As geographic data resources expand, the scope for spinning out new businesses and business models based around the smartphone platform continues to grow.
If mobile AR follows a path similar to that of other web businesses, value creation in the first iteration is likely to concentrate around the technology platform that enables the experience. Not surprisingly, the current focus on mobile AR browsers and other associated technologies like image recognition and location is important for stabilizing mobile AR for the mass market. But the end game will likely be a contest to own and manage the location data and metadata used and often generated by mobile customers.