Flixor Patents Your Face (Inside of Videos)


Flixor, a four-year-old startup whose technology inserts a 3-D version of a face from a digital photo into a video, has received a patent for that technology.

Hollywood-based Flixor has no customers yet, but CEO Blake Senftner said the company is in advanced negotiations with a television network to offer paid digital merchandise that would let fans insert their faces into clips of their favorite show. The Flixor system is pretty neat because, unlike in ElfYourself-type videos, where inserted faces are static, Flixor effectively wraps a 3-D version of the face around the head it’s replacing. The result is a somewhat eerie, animated lookalike.

Senftner also said he believes companies such as JibJab and Kideo are infringing on the patent, but he doesn’t have the financial resources to enforce it.

Senftner isn’t an intellectual-property fiend; he just wants to use his technology. Flixor has four founders as well as a dozen contractors, and has been self-funded by Senftner, who is running out of savings. “We’re at a point now where the technology is fully developed and ready to go into the marketplace,” he said today. “I think it would be a real shame if what we have never gets out in public, because it’s fun.”

If you want to try out the Flixor technology to insert yourself into a Christina Aguilera video, go to http://rnd.flixor.com/, click on “I Agree,” and enter these credentials: access code: mobilityPR_2; password: LennonLives. This will only work for 10 people.

5 Comments

Technicalfool

That patent was filed in 1997. http://www.kideo.com has “personalised video” archives dating back to at least 1996 (try http://web.archive.org/web/19961221130447/http://www.kideo.com/ to see). I’d call this prior art, but then I don’t suppose that stopped Microsoft from trying to patent Sudo (despite sudo being around since the 1970s).

Don’t get me wrong, it looks funky. But software patents suck, especially when the USPTO seems to not know what it’s doing.

Moose

Blake, with all due respect, I don’t feel threatened by what your company is doing, and you may have misinterpreted my angle (which may be my fault). My concerns stem far more from an aesthetics point of view than from a worry about being outmoded. In fact, I’m not even sure we’re on different sides of the argument. You say your software allows for creative interpretation; does that include the real-time transport of expression data onto a non-photorealistic or stylized mesh? From what I’ve seen in the video provided, it seems your system is more about fusing a photorealistic replication onto existing footage.

The reason photorealistic systems bother me is that, as a viewer, I find it much harder to empathize with a photorealistically rendered animated character, because it causes a certain degree of unease. This is a widely documented effect in computer graphics known as the uncanny valley, where audiences find characters very near to photorealism unsettling because they feel there’s something just “wrong” about the character. This isn’t the fault of the animator or the software; rather, it’s due to the immense number of minute details which humans are used to observing in everyday interactions with other humans, but which only register on a subconscious level. Because of this, these effects are extremely hard to replicate and even harder for viewers to pin down.

So, in short, I apologize if I came across as overly brusque. You and yours have chosen one heck of a fiddle to saw, and more power to you. I just fear you may end up getting painted into a corner by customer expectations after the novelty of the technique wears off. Good luck!

Blake

Moose: your job and career are still intact with our platform; our algorithm does not do any animating. Our platform is composed of a method for the automated creation of consumer digital actors and the high-speed network infrastructure necessary to serve global consumer demand for personalized media. We still use the complete visual effects pipeline to produce the media, and all the digital artists within it, except some of the modelers, because our software creates the faces.
As an animator, you may find our platform quite interesting, because you can create media that anyone can insert themselves into. You’re free to be as creative as you like, within the bounds of taste; we don’t allow adult media on our platform. But beyond that, creating media that includes the viewer is something you may find creatively interesting.

-Blake Senftner, Flixor CEO.

Moose

Frankly, as an animator, this kind of irks me. I’m a huge advocate for camera-enabled expression detection, but my problem with this is that it wallows horribly in the uncanny valley. While the algorithm is obviously doing an excellent job mimicking the expressions, the end result is just downright creepy. Stylized abstraction has always worked better in these situations.
