Blog Post

Google Goggles Now on the iPhone

It’s not all war and competition between tech giants Google and Apple. Sometimes the companies come together, and when that happens, the winner is invariably the consumer. Today Google brings Google Goggles to the iPhone. Try saying that five times fast.

Goggles is a Labs product that Google introduced back in December of last year for Android devices. As its name implies, it’s a visual tool: you snap a photo with your device’s camera and use that image to initiate a search. Now you can both talk to and show Google’s iPhone app what you’re looking for.

Just download Google Mobile App from the App Store for free, and tap the camera button to search using Goggles. Goggles will then highlight elements of the image it recognizes, and you can tap on those areas to find out more. Google has a short video explaining the process:

Before you start taking pictures of your friends and your dog, though, be aware that this technology is still relatively new, hence the Labs designation that Google affixes to all its experimental software. It should work great for recognizing things like landmarks, or DVD and video game artwork, though.

If you don’t have it yet, don’t worry, the update’s being pushed out gradually to all the international App Stores. If you do have it, how’s it working for you? Let us know below.


One Response to “Google Goggles Now on the iPhone”

  1. I’ve had it since yesterday evening, and I’ve tried it on a few things so far, with vastly different levels of accuracy.

    I took a picture of the Olympus logo on a box for a digital camera, and the Logitech logo on the grille of my computer speaker, and it recognized both as logos and, more importantly, correctly identified which logos they were.

    I also took a closeup picture of my Doctor Who TARDIS USB hub on my desk, and it not only read the text correctly (“POLICE BOX”) but also correctly identified it in an image search, finding instances of Doctor Who and, notably, that very USB hub via Google Shopping search. I was pretty impressed that it could distinguish between the hub and images of just the object from the show. However, in trying to replicate that search, I couldn’t get an exact match again, instead getting either just the text or “dr who tardis” as a search. These results were still pretty impressive, but didn’t have the wow factor of the first attempt.

    I also tried taking a picture of some Italian text on the back of an Italian graphic novel of mine to see how the translation feature might work, but that was a complete failure. The text came out jumbled, and it barely got any of it right even in Italian, so I wouldn’t depend on it to translate correctly.

    Today I tried taking a picture of my business card, and even after three attempts, I couldn’t get it to read my name correctly. It got the phone number, address, fax, and website, but not my name or my email, even with what looked like a very clear closeup shot of it.

    So my verdict so far: it’s a very cool feature that I’ll make sure to use more often (I’ve even moved the Google app to my dock, whereas before I had it tucked away in a folder and never used it), but it needs work before it can be depended on for some of the use cases Google has suggested.