What's More Accurate Than GPS? Photographs


Locations identified to within the 10 or 20 meters possible with GPS today are far too inaccurate — we need to know where we are right down to the millimeter! That was the gauntlet thrown down by Michael Liebhold, distinguished fellow at the Institute for the Future, speaking at a GigaOM Pro Bunker Session on location at the GigaOM office this week. With millimeter accuracy, augmented reality — digital information overlaid on a real-time view of the world — will actually become possible. “Right now we have all this toy AR,” said Liebhold. “This is useless.”

So how do we get to millimeter accuracy? To find out, we followed up with Liebhold for a video interview. He said the most promising technique is to build a model of the world from photographs, some of them geo-coded automatically and the rest positioned by comparing them to images whose locations are already known. So a photograph of vacationers in front of the Golden Gate Bridge could be pinpointed using the precise angle of the orange arches in the background. Google Goggles is embarking on this very project, building a point-cloud reference database from publicly available images like the ones on Flickr, said Liebhold, referencing remarks made by a member of the Goggles team at the recent Where 2.0 conference. (As is Microsoft, with its Photosynth product.)
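The matching Liebhold describes can be illustrated with a toy sketch: reduce each photo to a compact feature “fingerprint,” then find the geotagged reference image whose fingerprint is closest to the query’s. Everything here — the four-number fingerprints, the three-entry database, the landmark coordinates — is hypothetical and wildly simplified; production systems like Goggles or Photosynth match millions of local feature descriptors, not one short vector per image.

```python
import math

# Hypothetical geotagged reference database: each entry pairs a toy image
# "fingerprint" (a short feature vector) with the latitude/longitude where
# the photo was taken. Real systems extract thousands of local descriptors
# per image and index billions of them.
REFERENCE_DB = [
    ([0.91, 0.12, 0.33, 0.48], (37.8199, -122.4783)),  # Golden Gate Bridge
    ([0.15, 0.88, 0.41, 0.07], (48.8584, 2.2945)),     # Eiffel Tower
    ([0.52, 0.49, 0.95, 0.26], (40.6892, -74.0445)),   # Statue of Liberty
]

def locate(query_fingerprint):
    """Return the geotag of the reference image whose fingerprint is
    nearest (by Euclidean distance) to the query photo's fingerprint."""
    def distance(entry):
        fingerprint, _geotag = entry
        return math.dist(fingerprint, query_fingerprint)
    _, geotag = min(REFERENCE_DB, key=distance)
    return geotag

# A query photo whose fingerprint nearly matches the Golden Gate entry
# resolves to the bridge's coordinates.
print(locate([0.90, 0.13, 0.35, 0.46]))  # (37.8199, -122.4783)
```

Nearest-neighbor lookup is only the retrieval step; getting from “this is the Golden Gate Bridge” to millimeter-level position would additionally require estimating the camera’s pose from the matched geometry.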

The Google project is scary, said Liebhold. Was it scary because of the privacy implications, I asked? No, he said: because if Google wants to do this, it will, and it will be hard to compete. Everyone who wants the most accurate location data will have to depend on Google.

Liebhold did mention one promising startup effort in the space: Earthmine out of Berkeley, Calif., is building a set of street-view images captured in 3-D with every pixel geo-coded. (See our interview with them from a couple years back.)

Intrigued as to how soon millimeter accuracy might happen and what it could enable? Here’s the video:

[youtube=http://www.youtube.com/watch?v=FZFgg6e7QmI]

Related content from GigaOM Pro (sub req’d):

Report: Mobile Augmented Reality Today and Tomorrow

Image courtesy of Flickr user jmlawlor

11 Comments

Steve

Interesting. However, someone with expert knowledge in the field may want to comment about the statement that GPS accuracy is “10 to 20 meters”. The accuracy has to be quite a bit better than 10 to 20 meters or GPS would be problematic for practical use in vehicle navigation at highway speeds, wouldn’t it? Likewise, having spent a bit of time using Google Earth, serious errors in meshing images together are not at all uncommon. Not a huge issue if you are just visually perusing a geographic area, but if using such data for pinpoint (mm) accuracy in navigation, one might be much further off than 10 to 20 meters for some of the “bugs” in the database that have existed. This does not discount the use of such technology as it evolves. But isn’t GPS also evolving? Someone with intimate knowledge of GPS should weigh in for a definitive comment.

Kyle

You might also want to check out http://www.lookthatup.com — an iPhone app powered by image-recognition technology from LTU Technologies. LTU recently opened up its API so developers can build their own Google Goggles-like applications.

I just recently got an account with them and it seems really easy to integrate.

KurtAZ

“Google Goggles is embarking on this very project, building a point cloud reference database using publicly available images like the ones from Flickr, said Liebhold,”

Google has used lidar for years to ‘georeference’ its Street View product against the background images. They use lidar units from Topcon. So they already have the point-cloud part in house. The ability to match user photographs precisely against their Street View imagery points to photo recognition on such a large scale that it makes facial recognition look like baby stuff.

Vivek Kumar Singhal

This sounds really interesting…I have struggled hard to read GPS route maps and my engineer friends have always put this moniker on me: “only a kid in tech-world”…

Photographs sound great when it comes to locating places…I am eager to see how this technology evolves.

Phil Hendrix, Ph.D.

Mike is spot on re: the impact of image recognition on mobile devices. In a recent GigaOM Pro report on Location-based Innovation (overview at http://bit.ly/9ugm2M), I referred to this as “a sort of ‘reverse lookup’ using images and object recognition” and concluded that (i) “object recognition on mobile devices will be regarded as one of the most significant developments of the decade” and (ii) “3-D geodata will enable new location-based applications in much the same way as early maps opened up new routes and navigation.”

Image recognition on mobile devices is a fascinating capability and leverages a number of key assets that Google and Microsoft, in particular, are amassing. Through Picasa and Panoramio – used by millions of individuals to store and tag photos – as well as Google Images and Street View, Google is building a vast database of images. With a grand vision to hyperlink “any recognizable object, text string, logo, face, etc. with multimedia information,” Google is racing to geotag these digital assets.

Microsoft, IBM and Google are refining image recognition capabilities that allow mobile phones to “recognize” their location by matching the “fingerprint” of image(s) viewed through the image sensor (camera phone) against those of images in a geotagged image database. Blaise Agüera y Arcas of Microsoft has given several presentations recently (TED, Where 2.0) demonstrating how images can be “stitched together” using Photosynth – the resulting representations are stunning. In addition, digital image processing solutions for mobile devices from companies such as Realeyes3D and imsense are reducing noise, correcting for bad pixels and improving image quality, which in turn enhances the accuracy of image recognition.

Comments are closed.