Algorithm takes the ‘average’ of photos, perhaps proving that is how you always look

Computer scientists at the University of California, Berkeley, have created a new computer vision algorithm that can display the “average” of a group of photographs by analyzing the key features in each one. If you’re someone who’s regularly accused of making the wrong face at the wrong time or looking crabby even when you’re not (ahem, Derrick Harris), this could help your case.
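At its simplest, an "average" image is just the pixel-wise mean of a set of aligned photos. The sketch below shows that naive baseline with NumPy; it's an illustration only, since AverageExplorer's contribution is aligning and weighting images by their key features before averaging, which plain pixel averaging doesn't do.

```python
import numpy as np

def average_images(images):
    """Pixel-wise mean of equally sized images.

    A toy stand-in for the feature-aligned averaging that
    AverageExplorer actually performs.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    return stack.mean(axis=0)

# Three tiny 2x2 grayscale "photos" with uniform brightness 0, 50, 100
imgs = [np.full((2, 2), v) for v in (0.0, 50.0, 100.0)]
avg = average_images(imgs)  # every pixel ends up at 50.0
```

Without alignment, averaging real photos tends to produce a blur; the paper's interactive warping is what keeps shared features (eyes, shoe outlines, tie knots) sharp in the result.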

“I wasn’t making a face. That’s how I always look. See, I have an algorithm to prove it!”

Of course, the creators of the tool, called AverageExplorer, have slightly more useful applications in mind. They want to help us make sense of, or maybe even learn something from, the billions of photos currently stored somewhere online. That might mean figuring out what the average internet cat picture looks like, or perhaps improving e-commerce by determining the average men's shoe. And you could conduct genuinely meaningful analysis of facial expressions, like how newscasters look when reporting certain types of stories. (In their research, the AverageExplorer team analyzed Stephen Colbert's tie selection, as well as his body posture, while Barack Obama's image is on the screen behind him.)

Source: UC Berkeley / Jun-Yan Zhu, Yong Jae Lee, Alexei Efros

Or, one of the authors noted, the technique could be used to train object-recognition models faster by letting them train on features in the average image — which has already taken into account many other images of the same thing — rather than on thousands or millions of individual images.

Visually, and somewhat conceptually, AverageExplorer is similar to another project I recently covered — the Learn Everything About Anything, or LEVAN, algorithm that came out of the Allen Institute for Artificial Intelligence. In that case, the system sought out images tagged to match a certain term (jumping horse, for example) and then taught itself what the term looked like by analyzing photos of it. (Allen Institute for AI Executive Director Oren Etzioni has just been added to the agenda for our Sept. 17 artificial intelligence and deep learning meetup.)

As advances in computer vision and machine learning keep coming, we'll likely see new applications for these types of algorithms popping up. Some will be useful (like better image search and tagging), some will be fun (like some of the stuff Google+ already does for photos) and some will be scary (think surveillance), but they're all going to come. There's just too much information hidden in all those digital photos to let it go to waste.

Check out the video below for a demonstration of how AverageExplorer works and its user interface, and read the full paper for a bunch more examples and, of course, the technical explanation of the algorithm.