Ford (s f) today is accelerating the intelligence of SYNC, the Microsoft-powered communications platform that allows drivers to control the driving environment by voice. MyFord Touch, the latest generation of SYNC, raises the number of first-level voice commands from 100 to 10,000. This wider range of speech recognition is the result of technology from Nuance (s nuan), a Massachusetts-based company, and provides a more conversational experience for drivers.
That conversational tone is achieved in two ways: an improvement of the menu-based commands in SYNC and the use of aliases, which allow a driver to make the same request using different words. Instead of delving through multiple voice-command levels, for example, the new SYNC is capable of doing more with a single command. Previously a driver would have to say “Phone” to navigate through to the phone commands — now a driver can simply say “Call Liz Gannes,” which saves a step.
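The flattening described above can be sketched in a few lines: rather than stepping through a menu ("Phone", then "Call", then a contact), a single utterance carries both the action and its argument. This is a minimal illustration, not Ford's or Nuance's actual grammar; the action names and the `parse_command` helper are hypothetical.

```python
# Hypothetical sketch of one-shot command parsing. A single
# utterance like "Call Liz Gannes" yields both an action and
# its argument, skipping the menu-navigation step.
def parse_command(utterance):
    """Split a spoken command into a canonical action and its argument."""
    # Illustrative verb-to-action table; not a real SYNC grammar.
    actions = {"call": "phone.dial", "play": "media.play", "tune": "radio.tune"}
    words = utterance.lower().split()
    if not words:
        return None, None
    verb, argument = words[0], " ".join(words[1:])
    if verb in actions:
        return actions[verb], argument
    return None, None

print(parse_command("Call Liz Gannes"))  # ('phone.dial', 'liz gannes')
```

A real system would of course run this over recognized speech and fuzzy-match the argument against the driver's contact list, but the step-saving idea is the same.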
The use of aliases is probably the biggest reason for the hundred-fold increase in understood commands, however. By allowing a variance of voice commands for the same task, drivers don’t have to remember the specific commands to take a certain action. An excellent example of this is shown in the video demonstration by Ford voice recognition engineer Bridgitte Richardson, embedded below. To raise the climate temperature, Richardson uses three distinctly different — but very logical and conversational — commands: “warmer,” “increase temp,” and “temp up.”
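Conceptually, aliasing is just a many-to-one mapping from phrasings onto canonical commands, which is how three different utterances can all raise the temperature. The sketch below is an assumption-laden illustration; the alias table and the `climate.temp_up` action name are invented for the example, not drawn from SYNC itself.

```python
# Hypothetical sketch of command aliasing: several natural
# phrasings resolve to one canonical action, so the driver
# doesn't need to memorize an exact phrase.
ALIASES = {
    "warmer": "climate.temp_up",
    "increase temp": "climate.temp_up",
    "temp up": "climate.temp_up",
    "cooler": "climate.temp_down",
}

def resolve(utterance):
    """Map a spoken phrase to its canonical command, or None if unknown."""
    return ALIASES.get(utterance.lower().strip())

print(resolve("Increase temp"))  # climate.temp_up
print(resolve("warmer"))         # climate.temp_up
```

Because every alias collapses to the same action, the vocabulary the system accepts can grow far faster than the set of things it can actually do — which is how 100 commands becomes 10,000.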
While other companies are working to integrate voice and mobile technologies in vehicles — Research In Motion (s rimm) wants to power cars with BlackBerry devices, for example — Ford is embracing this vision with a passion. Why is that? Human interaction with smart devices has generally been stuck on the keyboard paradigm for years while voice input has lagged due to the processing power required. But as smaller chips gain intelligence, computers in cars can begin to interpret language and offer more hands-free functionality.
Indeed, our own GigaOM PRO analyst, Dr. Phil Hendrix, recently penned a 34-page report outlining how speech technologies will transform mobile use (subscription required). Dr. Hendrix believes that the majority of smartphones will offer a fully voice-activated interface by 2012. If that prediction holds true, look for voice control to hit the gas pedal on cars at an increasing rate. And maybe by then the Ford SYNC program will answer that age-old question of “are we there yet?”