How AI can help detect & filter offensive images and videos

Advances in artificial intelligence and deep learning have transformed the way computers understand images and videos. Over the last few years, innovative neural network architectures and high-end hardware have helped research teams achieve groundbreaking results in object detection and scene description. Those architectures have in turn been used to build generalist models that aim to recognize any object in any image.

Those breakthroughs are now being applied to specific use cases, one of which is content moderation. Sightengine, an AI company, is making its image and video moderation service available worldwide through a simple API. Built upon specialist neural networks, the API analyzes incoming images or videos and detects whether they contain offensive material such as nudity, adult content or suggestive scenes, just as human moderators would. Unlike the generalist networks that companies like Google, Facebook or Microsoft use for object detection, these neural networks are specialists, designed and trained to excel at one specific task.

Historically, content moderation has mostly been in demand among dating and social networking websites. They relied either on staff who had to review user-submitted content manually, or on their community, who had to flag and report content. But today, content moderation is no longer restricted to niche markets. As camera-equipped smartphones have become ubiquitous, and as the use of social networks and self-expression tools has continued to rise, photo generation and sharing have exploded over the last few years.

It is estimated that more than 3 billion images are shared online every day, along with millions of hours of video streams. That is why more and more app owners, publishers and developers are looking for solutions to make sure their audiences and users are not exposed to unwanted content. This is a moral as well as a legal imperative, and it is key to building a product users trust and like.

Sightengine’s Image Moderation and Nudity Detection Technology is a ready-to-use SaaS offering, accessible via a simple and fast API.
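To make the workflow concrete, here is a minimal sketch in Python of how an app might call such a moderation API and act on the result. The endpoint path, parameter names (`api_user`, `api_secret`, `models`, `url`) and the shape of the JSON response are illustrative assumptions, not the provider's documented contract; consult the official API reference for the actual values. The score threshold is likewise an application-level choice.

```python
import json
import urllib.parse
import urllib.request

# Illustrative endpoint; check the provider's API documentation
# for the real route and parameters.
API_ENDPOINT = "https://api.sightengine.com/1.0/check.json"

def build_moderation_request(image_url, api_user, api_secret):
    """Build the GET request URL for moderating one image by URL."""
    params = urllib.parse.urlencode({
        "models": "nudity",     # which specialist classifier to run (assumed name)
        "url": image_url,       # publicly reachable image URL
        "api_user": api_user,   # credentials (assumed parameter names)
        "api_secret": api_secret,
    })
    return f"{API_ENDPOINT}?{params}"

def is_safe(response_body, threshold=0.5):
    """Interpret a hypothetical JSON response of the form
    {"nudity": {"raw": 0.02, ...}}: the image is considered safe
    when the raw nudity score stays below the chosen threshold."""
    scores = json.loads(response_body).get("nudity", {})
    return scores.get("raw", 1.0) < threshold

# Example: decide on a mocked response, without a network call.
sample = '{"nudity": {"raw": 0.03, "partial": 0.01}}'
print(is_safe(sample))  # prints True: score is below the threshold
```

In production the URL built above would be fetched with `urllib.request.urlopen` (or any HTTP client), and images flagged as unsafe would be held back for human review rather than published automatically.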

8 Responses to “How AI can help detect & filter offensive images and videos”

  1. I think this is a good idea, and I don’t think it impinges on our freedom of speech. There are situations where it can come in very handy. For example, if you run any kind of website where visitors can create an account and upload a profile picture. I’m sure 99% of people would upload a natural photo of themselves, but you’re always going to get that tiny minority that just thinks it’s fun to upload something offensive, and if this software can catch it before it goes live – great!

  2. Gustavo Woltmann

    It’s amazing how technology changes the world. There are, of course, good and bad effects, but that’s how things are supposed to be – there should always be a good and bad side to everything. AI has played an important role in most of our daily lives, since the majority of people are using the internet, computers and other products of technology nowadays.

  3. Farmer Dave

    Awaiting moderation? Are my words offensive? Am I not to use the words human body because they invoke potential naked images in people’s minds? Oh no, I used naked human body all in one, oops, now two sentences. Moderators be aware.

  4. Farmer Dave

    And who declared the human body taboo anyway? Why is the sight of it offensive? Wow, the questions keep coming. I find it offensive that you are telling me that I can’t find beauty in the human form. Why do you allow any photos of nature’s beauty? Or is it just life forms – rocks and water are OK? Who is making these rules?

  5. Farmer Dave

    ?? Freedom of speech? I can walk down the street without a shirt on but can’t post a picture of it? And a computer is going to be morally judgemental? Wow, this is just raising so many questions in my head and I’m not liking it.