
Summary:

A new system developed at MIT, inspired by the way a group of super-fast quadcopter drones gathers information, uses a camera to analyze a robot’s surroundings 1,000 times a second.

USC Viterbi robot open house (photo: Signe Brewster)

It is safe driving practice to keep your eyes on the road constantly. But when a robot relies on a camera to see its surroundings, it might only update its view of the world every 0.2 seconds. That leaves enough time for an unexpected obstacle to appear and bam! — a collision.

A new system out of MIT speeds up camera vision so that robots can refresh their view of the world up to 1,000 times a second. Instead of comparing one image to the next to judge whether an obstacle has entered the frame, the computer looks for changes in the light the camera sees.

“Each pixel acts as an independent sensor,” research scientist Andrea Censi said in a release. “When a change in luminance — in either the plus or minus direction — is larger than a threshold, the pixel says, ‘I see something interesting’ and communicates this information as an event. And then it waits until it sees another change.”
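The behavior Censi describes maps onto a few lines of code. Below is a minimal sketch of one such pixel in Python, assuming a DVS-style sensor that tracks log luminance; the Pixel class and THRESHOLD value are illustrative, not taken from the MIT system.

```python
import math

THRESHOLD = 0.15  # assumed contrast sensitivity, in log-luminance units

class Pixel:
    """One independent sensor: fires an event when luminance shifts enough."""

    def __init__(self, luminance):
        # Luminance is assumed positive; the pixel remembers the level
        # at which it last reported an event.
        self.reference = math.log(luminance)

    def observe(self, luminance, timestamp):
        """Return a (timestamp, polarity) event, or None if nothing changed."""
        delta = math.log(luminance) - self.reference
        if abs(delta) > THRESHOLD:
            self.reference += delta  # reset, then wait for the next change
            return (timestamp, 1 if delta > 0 else -1)
        return None  # below threshold: the pixel stays silent
```

With a reference at 100 units of luminance, a jump to 120 clears the threshold and yields a +1 event, while a drift to 105 yields nothing; that silence on small changes is what keeps the data rate manageable.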

The camera’s pixels report the light they see 1,000,000 times a second. Instead of generating an exact view of the entire picture, the computer picks out several interesting examples of light changes and makes estimates about the robot’s location. Every thousandth of a second, it then selects the most likely estimate and treats it as the robot’s actual location.
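That estimate-and-select loop reads much like a particle filter: candidate locations are scored against the incoming events, the best scorer is reported, and the next round of candidates is drawn around it. Here is a rough one-dimensional sketch in Python; the function names and the measurement model are assumptions for illustration, not the MIT implementation.

```python
import random

def update_estimate(candidates, event_pixels, predict_edge_pixel):
    """One ~1 ms cycle: weight each candidate position by how well it
    predicts where the latest events landed, report the winner, and
    resample candidates around it for the next cycle."""
    weights = []
    for pos in candidates:
        predicted = predict_edge_pixel(pos)  # where this pose expects change
        # Events near the predicted pixel raise the candidate's score
        # (an assumed, roughly Gaussian-shaped measurement model).
        weights.append(sum(1.0 / (1.0 + (px - predicted) ** 2)
                           for px in event_pixels))
    best = candidates[weights.index(max(weights))]
    # Scatter fresh candidates around the winner for the next millisecond.
    candidates[:] = [best + random.gauss(0.0, 0.5) for _ in candidates]
    return best
```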

The MIT team is still tweaking the system to help it understand what it should do when it spots an obstacle. But if it succeeds, it could aid the development of robots that can respond to the world even when moving at high speeds, an important quality if they are expected to work safely with and around humans.