Ford Motor Company is teaming up with the Massachusetts Institute of Technology and Stanford University to research the future brains of its autonomous cars. Research vehicles like Ford's are putting the sensors and computing power into cars that would allow them to read and analyze their surroundings, but these two universities are developing the technology that will allow cars to make driving decisions from that data.
“Our goal is to provide the vehicle with common sense,” Ford Research global manager for driver assistance and active safety Greg Stevens said in a statement. “Drivers are good at using the cues around them to predict what will happen next, and they know that what you can’t see is often as important as what you can see. Our goal in working with MIT and Stanford is to bring a similar type of intuition to the vehicle.”
In December, Ford unveiled its latest research vehicle, a Ford Fusion Hybrid equipped with Lidar (laser-radar) rigs, cameras and other sensor arrays, all intended to generate a real-time representation of the world around the car. Such a car can “see” in all directions, allowing it not only to take in far more stimuli than even the most alert driver, but also to react to that information far more quickly. That’s where Stanford and MIT come in.
MIT is developing algorithms that will allow an autonomous driving system to predict the future locations of cars, pedestrians and other obstacles. It’s not good enough for a car to merely sense the location of nearby vehicles when it switches lanes or swerves to avoid an accident. It has to know where those vehicles will be a split-second later. Otherwise the car will avoid one accident only to cause another.
That means not only measuring other vehicles’ current speed and trajectory but anticipating how their drivers – or their autonomous vehicle systems – will react to the situation. Basically MIT is trying to create a vehicle brain smart enough to assess risks and outcomes and navigate its course accordingly.
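The core of that idea can be sketched very simply. The snippet below is a toy illustration of what predicting "where those vehicles will be a split-second later" looks like; all the function names, coordinates, and thresholds are hypothetical, and MIT's actual algorithms are far more sophisticated than constant-velocity extrapolation.

```python
# Toy sketch: extrapolate a nearby vehicle's position a split-second ahead
# from its measured speed and trajectory, then check for a conflict.
# (Hypothetical names and numbers; not MIT's actual algorithm.)

def predict_position(x, y, vx, vy, dt):
    """Constant-velocity extrapolation: where will the vehicle be in dt seconds?"""
    return (x + vx * dt, y + vy * dt)

def paths_conflict(pos_a, pos_b, safe_distance=3.0):
    """Flag a conflict if the two predicted positions are too close (meters)."""
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    return (dx * dx + dy * dy) ** 0.5 < safe_distance

# Our car plans to merge left; another vehicle is already closing in that lane.
our_future = predict_position(0.0, 0.0, 25.0, 3.5, 0.5)    # merging left at 25 m/s
their_future = predict_position(-4.0, 3.5, 32.0, 0.0, 0.5)  # 4 m behind, in the target lane
print(paths_conflict(our_future, their_future))
```

The point of the example is the one the article makes: sensing where the other car *is* would report no collision, while extrapolating half a second ahead reveals the two paths converging.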
Stanford is doing something a bit different. It's trying to extend the sensory field of the car by helping it see around obstacles, so it can react to dangers the driver can't immediately see. Stanford and Ford didn't offer any specifics on just how they would accomplish that feat, but my bet is it has to do with Ford and the automotive industry's work on inter-vehicle networking.
Future autonomous cars won't just be able to sense their surroundings; they'll be able to communicate with other vehicles using a secure form of Wi-Fi. For instance, Australian startup Cohda Wireless is developing vehicle-to-vehicle networking technology that would allow two cars to let each other know they're approaching one another at a blind intersection.
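As a rough illustration of the blind-intersection scenario, each car could periodically broadcast a small beacon with its distance to the intersection and its speed, and a receiving car could warn when the two arrival times are about to overlap. The message fields and thresholds below are purely hypothetical; Cohda's actual system is built on standardized short-range radio messages, not this format.

```python
# Toy sketch of a vehicle-to-vehicle blind-intersection warning.
# (Hypothetical message format and thresholds; not Cohda's protocol.)
from dataclasses import dataclass

@dataclass
class Beacon:
    """A periodic position broadcast from a nearby car (made-up fields)."""
    car_id: str
    distance_to_intersection: float  # meters
    speed: float                     # m/s

def seconds_to_intersection(b: Beacon) -> float:
    return b.distance_to_intersection / b.speed

def collision_warning(own: Beacon, other: Beacon, window: float = 2.0) -> bool:
    """Warn if both cars will reach the intersection within `window` seconds of each other."""
    return abs(seconds_to_intersection(own) - seconds_to_intersection(other)) < window

me = Beacon("car_a", 40.0, 13.0)    # about 3.1 seconds from the intersection
them = Beacon("car_b", 50.0, 15.0)  # about 3.3 seconds out, hidden behind a building
print(collision_warning(me, them))
```

The radios "see" what the drivers can't: neither car has line of sight, but the exchanged beacons are enough to predict the conflict and slow down.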
Ford and other major automakers are working with the University of Michigan and the National Highway Traffic Safety Administration to build vehicle-to-infrastructure grids that would allow cars to tap into highway sensors, giving them a kind of omniscient view of the overall road. With such technology other cars could reveal their intentions before they even take action, making other connected vehicles much more responsive. They could also share their sensor data, so even if only one of the cars far ahead of you is connected to the vehicle grid, that lone vehicle could still tell you what the other cars around it are doing.
While every major automaker is working on autonomous driving technology, Ford has been particularly aggressive. In a recent interview, executive chairman Bill Ford told me how the automaker is trying to use connected vehicle technology to propel the company into a new golden age of automotive innovation.