Google has announced new violence and profanity filters for YouTube, an opt-in set of features it is calling "Safety Mode," in what appears to be an ongoing attempt to clean up the murkier parts of the video-sharing site and make it a little more appealing to advertisers, and possibly to legislators as well. The site's choice of wording in the announcement on the YouTube blog seems a little odd, however: it says the feature is designed for those who don't want to stumble across (or have family members stumble across) an otherwise newsworthy video that might contain objectionable content, "such as a political protest." The post doesn't give an example, but Google may be thinking of videos like the one showing the graphic death of Neda Agha-Soltan, the Iranian demonstrator who was shot and killed during a protest in Tehran last year.
When you try to view a video like the one of Neda's death, you already run into the YouTube "18 and over" wall, which asks you to log in and verify that you are old enough to see the content. But YouTube probably knows that blocks like these are easy to get around, since the site doesn't verify anyone's age in any real sense. Safety Mode lets users specifically block violent videos and "lock" those settings into a YouTube account. It also includes a number of other features, including one that applies to comments on videos, a part of the site that routinely draws objectionable content (and was even voted "Worst Thing on the Internet"). When enabled, Safety Mode hides all comments by default and replaces any profanity in them with asterisks.
In the video below, a YouTube staffer describes how Safety Mode works, including the fact that if it is turned on (which can be done by clicking a button at the bottom of any YouTube page), certain searches — such as one for the word “naked” — will return zero results.
Post and thumbnail photos courtesy of Flickr user slagheap