11 Comments

Summary:

Locations identified within the 10 or 20 meters possible by GPS today are far too inaccurate — we need to know where we are right down to the millimeter! One futurist says that with millimeter accuracy enabled by photographs, augmented reality will actually become possible.

Locations identified within the 10 or 20 meters possible by GPS today are far too inaccurate — we need to know where we are right down to the millimeter! That was the gauntlet thrown down by Michael Liebhold, distinguished fellow at the Institute for the Future, speaking at a GigaOM Pro Bunker Session on location at the GigaOM office this week. With millimeter accuracy, augmented reality — digital information overlaid on a real-time view of the world — will actually become possible. “Right now we have all this toy AR,” said Liebhold. “This is useless.”

So how do we get to millimeter accuracy? To find out, we followed up with Liebhold for a video interview. He said the most promising technique is to build a model of the world using photographs, some of them geo-coded automatically and the rest positioned by comparing them against images whose locations are already known. So a photograph of vacationers in front of the Golden Gate Bridge could be pinpointed using the precise angle of the orange arches in the background. Google Goggles is embarking on this very project, building a point cloud reference database using publicly available images like the ones from Flickr, said Liebhold, referencing remarks made by a member of the Goggles team at the recent Where 2.0 conference. (As is Microsoft, with its Photosynth product.)
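To make the idea concrete, here is a minimal sketch of photo-based localization: match the distinctive visual features of a new photo against a geotagged reference image and, if the match is geometrically consistent, treat the reference's location as an estimate for the new photo. This is an illustrative example only, not Google's or Earthmine's actual pipeline; the file names, the hard-coded geotag, and the choice of OpenCV's ORB features are all assumptions, and a production system would instead solve for the camera's full 3-D pose against a point cloud to approach the accuracy Liebhold describes.

```python
# Illustrative sketch: match a query photo against a geotagged reference photo and
# borrow the reference's location if the two convincingly show the same scene.
# Assumes OpenCV is installed and that query.jpg / reference.jpg exist (hypothetical files).

import cv2
import numpy as np

# Hypothetical geotag (latitude, longitude) of the reference photo.
REFERENCE_GEOTAG = (37.8199, -122.4783)  # near the Golden Gate Bridge


def match_against_reference(query_path, reference_path, min_inliers=30):
    """Return the reference geotag if the query photo convincingly matches it, else None."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    if query is None or reference is None:
        return None

    # Detect and describe distinctive local features in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_q, des_q = orb.detectAndCompute(query, None)
    kp_r, des_r = orb.detectAndCompute(reference, None)
    if des_q is None or des_r is None:
        return None

    # Match descriptors and keep only clearly distinctive matches (ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_q, des_r, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_inliers:
        return None

    # Require the matches to be geometrically consistent; random matches won't pass.
    src = np.float32([kp_q[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    homography, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if homography is None or int(mask.sum()) < min_inliers:
        return None

    # The query photo was taken at roughly the same spot as the reference.
    return REFERENCE_GEOTAG


if __name__ == "__main__":
    print("Estimated location:", match_against_reference("query.jpg", "reference.jpg"))
```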

The Google project is scary, said Liebhold. Scary because of the privacy implications, I asked? No, he said, because if Google wants to do this, it will, and it will be hard to compete. Everyone wanting to use the most accurate location data will have to depend on Google.

Liebhold did mention one promising startup effort in the space: Earthmine out of Berkeley, Calif., is building a set of street-view images captured in 3-D with every pixel geo-coded. (See our interview with the company from a couple of years back.)

Intrigued as to how soon millimeter accuracy might happen and what it could enable? Here’s the video:

Related content from GigaOM Pro (sub req’d):

Report: Mobile Augmented Reality Today and Tomorrow

Image courtesy of Flickr user jmlawlor

By Liz Gannes
  1. Mike is spot on re: the impact of image recognition on mobile devices. In a recent GigaOm Pro report on Location-based Innovation (overview at http://bit.ly/9ugm2M), I referred to this as “a sort of ‘reverse lookup’ using images and object recognition” and concluded that (i) “object recognition on mobile devices will be regarded as one of the most significant developments of the decade” and (ii) “3-D geodata will enable new location-based applications in much the same way as early maps opened up new routes and navigation.”

    Image recognition on mobile devices is a fascinating capability and leverages a number of key assets that Google and Microsoft, in particular, are amassing. Through Picasa and Panoramio – used by millions of individuals to store and tag photos – as well as Google Images and Street View, Google is building a vast database of images. With a grand vision to hyperlink “any recognizable object, text string, logo, face, etc. with multimedia information,” Google is racing to geotag these digital assets.

    Microsoft, IBM and Google are refining image recognition capabilities that allow mobile phones to “recognize” their location by matching the “fingerprint” of image(s) viewed through the image sensor (camera phone) against those of images in a geotagged image database. Blaise Agüera y Arcas of Microsoft has given several presentations recently (TED, Where 2.0) demonstrating how images can be “stitched together” using Photosynth – the resulting representations are stunning. In addition, digital image processing solutions for mobile devices from companies such as Realeyes3D and imsense are reducing noise, correcting for bad pixels and improving image quality, which in turn enhances the accuracy of image recognition.

  2. The Galileo navigation system (the alternative to GPS currently being built by the EU) will offer resolution of <10 cm (<4 inches).

  3. [...] What's More Accurate Than GPS? Photographs [...]

  4. This sounds really interesting…I have struggled hard to read GPS route maps and my engineer friends have always put this moniker on me: “only a kid in tech-world”…

    Photographs sound great when it comes to locating places…I am eager to see how this technology evolves.

  5. [...] What's More Accurate Than GPS? Photographs [...]

  6. “Google Goggles is embarking on this very project, building a point cloud reference database using publicly available images like the ones from Flickr, said Liebhold,”

    Google has used lidar for years to ‘georeference’ their Street View product with the background images. They use lidar units from Topcon, so they already have the point cloud part in house. The ability to match user photographs precisely against their Street View imagery points to photo recognition on such a large scale that it makes facial recognition look like baby stuff.

  7. You might also want to check out http://www.lookthatup.com, an iPhone app powered by image recognition technology from LTU Technologies. LTU recently opened up their API to developers so they can build their own Google Goggles-like applications.

    I just recently got an account with them and it seems really easy to integrate.

  8. [...] recently interviewed futurist Michael Liebhold about the implications of Goggles, which he expects will be used to create a map of pictures of the [...]

  9. [...] our recent GigaOM Pro Bunker Series session on location, Michael Liebhold of the Institute for the Future proposed that every place page be written in HTML 5, have an independent URI and freedom of [...]

  10. [...] connection that sends location data to offer information, but one day may rely on a database of imagery and photos sent by the camera to get a more accurate sense of where people are. Sending photos, especially those taken with [...]


Comments have been disabled for this post