Google may have made its name on text search in the past, but it is increasingly showing how voice is going to be a key component of its future. The company announced today that it is releasing an alpha version of its Google Translate conversations mode, a promising technology that allows two people to speak in different languages and have their words translated in near real time.
The conversations mode, which the search giant showed off in September, holds the promise of knocking down language barriers at home and abroad, letting people communicate in ways that weren't possible before. The initial version is limited to English and Spanish, but the service should support a wider variety of languages soon. Google warns that the technology is still under development and may have difficulty parsing regional accents, background noise and rapid speech. But when it works, it gives people the closest approximation yet to a Star Trek universal translator. As Google showed in a demo at the IFA show in September, two people can take turns speaking into a phone and have their words quickly translated and read aloud in a robotic voice.
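Conceptually, each turn in the demo chains three steps: speech recognition, machine translation and speech synthesis, with the language direction swapping between speakers. A minimal sketch of that loop, using hypothetical stub functions in place of Google's actual cloud services (none of these names are real APIs):

```python
# Hypothetical sketch of a conversation-mode turn: recognize speech,
# translate it, then speak the result. The recognize/translate/speak
# functions are stand-ins for cloud services, not real Google APIs.

def recognize(audio, lang):
    # Placeholder: a real system would send audio to a cloud
    # speech-recognition service and return the transcript.
    return audio  # pretend the "audio" is already its transcript

def translate(text, source, target):
    # Placeholder: a real system would call a translation service.
    return f"[{source}->{target}] {text}"

def speak(text, lang):
    # Placeholder: a real system would synthesize speech aloud.
    print(f"({lang}) {text}")

def conversation_turn(audio, speaker_lang, listener_lang):
    """One turn: transcribe the speaker, translate, speak to the listener."""
    transcript = recognize(audio, speaker_lang)
    translated = translate(transcript, speaker_lang, listener_lang)
    speak(translated, listener_lang)
    return translated

# Two speakers alternate; the language pair flips each turn.
conversation_turn("Where is the train station?", "en", "es")
conversation_turn("La estación está a dos cuadras.", "es", "en")
```

The sketch also makes clear why accents, noise and fast speech are the weak points Google warns about: an error in the first recognition step propagates through translation and synthesis untouched.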
The conversations mode arrives alongside a new layout for the Google Translate app, which is available on Android 2.1 devices and supports text translation between 53 languages; users can also input text by voice in 15 of them. Translating conversations showcases the power of mobile devices connected to cloud services and highlights the need for fast networks to make those conversations flow. And perhaps more importantly for Google, it highlights the company's work in speech technology, which is increasingly finding its way into Google products.
Last year, Google launched Voice Actions for Android devices, letting people carry out a dozen activities, such as search, text messaging and navigation, by voice. Last month, Google released a Google TV remote app that lets users control their Google TV by voice. The company also recently snapped up speech technology company Phonetic Arts to help make speech output sound less robotic. Also last month, the search giant released a personalized voice recognition option in its Android Voice Search software to improve voice transcriptions.
The recognition option requires users to link their voice input to their Google account, allowing Google to build a speech model tailored to a specific user, much as Nuance's Dragon NaturallySpeaking does for desktop users. But as I wrote, this also lets Google tie a user's utterances from different points into one account and learn that user's preferences. That could ultimately help Google anticipate what a user wants and potentially allow it to better target advertising to that user. For Google, speech input not only shows off its work in cloud computing; it creates new ways to gather information about users that could ultimately help its bottom line. Adding real-time translation is just another way for people to get into the habit of telling Google what they want so Google can deliver it.
Related content from GigaOM Pro (sub req’d):
- How Speech Technologies Will Transform Mobile Use
- Transient Apps: The Consumer Influence on Enterprise Mobility, Part 2
- Rogue Devices: The Consumer Influence on Enterprise Mobility, Part 1