Summary:

New research highlights a computer vision system that is far better than humans at telling when people are faking expressions of pain. It’s the latest in a series of computer vision advances that foretell a brave new, and possibly creepy, world.

A group of researchers from the University of California, San Diego, and the University of Toronto has built a computer system that’s better than humans at detecting when someone is faking facial expressions of pain. Trained on enough examples of real expressions versus faked ones, the system learns to discern the involuntary facial movements that indicate actual pain from the voluntary movements of a faked expression.

The research is actually not too surprising considering recent advances in deep learning models coming out of places such as Google and Facebook. The more and better training data these computer vision systems have, the better their models become at detecting the many tiny features that characterize any given thing. They’re better than humans at tasks such as spotting fake expressions because they focus on subtleties our brains can overlook.
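To make the idea concrete, here is a rough sketch of how a supervised classifier might be trained to separate genuine from faked expressions. This is not the researchers’ actual pipeline: it assumes facial-movement features (for example, action-unit intensities) have already been extracted from video, and the data below is synthetic placeholder values.

```python
# Minimal sketch: train a binary classifier on facial-movement features
# to separate genuine from faked pain expressions. The features here are
# random placeholders; a real system would first extract measurements
# such as action-unit intensities from video frames.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 clips, 20 facial-movement features per clip.
# Label 1 = genuine pain, 0 = faked pain.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With random labels like these the accuracy hovers near chance; the point is only the shape of the approach. Given real, informative features, such as the timing and dynamics of involuntary movements, a classifier of this kind can pick up on cues that human judges miss.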

According to a press release describing the research (the full paper appears in the March issue of Current Biology), the computer vision system was accurate 85 percent of the time, compared with a best of 55 percent for human judges. It could have applications in a variety of areas, including “homeland security, psychopathology, job screening, medicine, and law.” Presumably, it could also determine once and for all whether professional wrestlers are acting.

The study out of UCSD and Toronto comes around the same time that Facebook published a study demonstrating how its DeepFace facial recognition system is nearly as accurate as humans at determining whether two images show the same person. In production at Facebook scale, it’s probably more effective than human judges because Facebook’s databases can store, access and analyze many more names and faces than any human brain can.

However, as exciting as this type of research is from a computer science perspective, it’s also the kind of thing that can understandably get people’s privacy defenses up. There are some great potential applications for face and object recognition, and for knowing whether someone is faking an emotion or an experience, but it’s not difficult to envision the downsides in a world of ubiquitous video cameras, smartphone photos and NSA spying.

Feature image courtesy of Flickr user Mike Kalasnick.
