Watson the Watchdog: How AI is Changing Media Compliance

In media, context is everything: Just as a hand signal that means “V for victory” in one country may be incredibly offensive in another, what’s culturally acceptable in a television show in the U.S. may be verboten elsewhere around the world.

By flagging questionable content for media compliance, Watson not only saves time and money but also opens up opportunities in new markets for content creators and broadcasters alike. For more information about how AI can reduce the cost of compliance, download IBM Watson Media’s ROI analysis paper: From AI to ROI: When playback means payback.

Current processes and the Watson solution

Normally, when production companies send movies or TV shows for broadcast overseas, someone at the receiving end is responsible for combing through the content and flagging anything that violates local standards. This can take a while, often as much as four times the length of the program itself.

But IBM Watson promises to make media compliance quicker and easier by arming businesses with tools to ensure that content meets overseas standards and cultural norms. Leveraging its leading artificial intelligence capabilities, Watson can flag potentially objectionable content, allowing a human operator to alter it or edit it out entirely.

“If you bought a 1.5 hour movie, there could be four places Watson tells you that you need to watch out for something—and this could be because of nudity, [or] cigarettes,” explains Pete Mastin, who directs Watson Media product marketing and market strategy.


Watson watches TV

To be effective, Watson needed to not only identify things like onscreen kissing, drug use and extreme violence, but also to understand them in context. This requires training Watson on a taxonomy: first, Watson learns to identify objects visually; then, programmers demonstrate use cases, which add context.

For example, imagine a TV show features a knife. It could be labeled as a weapon or a tool or a historical object, depending on its use and depiction in the show. How the knife is classified can alter its meaning—and determine whether it’s a compliance concern. With the proper use cases in the taxonomy, Watson can differentiate whether a knife is cutting, say, a steak or being used as a murder weapon.
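To make the taxonomy idea concrete, here is a minimal sketch of how object labels and use cases might be paired, assuming a simple lookup table. The class names, labels and categories are hypothetical illustrations, not Watson’s actual data model or API.

from dataclasses import dataclass

# Hypothetical taxonomy entry: a detected object plus the use case
# (context) that determines how compliance should treat it.
@dataclass(frozen=True)
class TaxonomyEntry:
    object_label: str         # what the vision model detects, e.g. "knife"
    use_case: str             # the context observed in the scene, e.g. "dining"
    compliance_category: str  # how the combination should be handled

# A toy taxonomy: the same object maps to different compliance
# outcomes depending on its context of use.
TAXONOMY = [
    TaxonomyEntry("knife", "dining", "no_concern"),
    TaxonomyEntry("knife", "violence", "flag_for_review"),
    TaxonomyEntry("knife", "historical_exhibit", "no_concern"),
]

def classify(object_label: str, use_case: str) -> str:
    """Return the compliance category for a detected object in context."""
    for entry in TAXONOMY:
        if entry.object_label == object_label and entry.use_case == use_case:
            return entry.compliance_category
    return "flag_for_review"  # unseen combinations default to human review

# A knife cutting a steak vs. a knife used as a weapon.
print(classify("knife", "dining"))    # -> no_concern
print(classify("knife", "violence"))  # -> flag_for_review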


Bad words, blurred images

Beyond identifying images in context, it’s also key for Watson to catch offensive imagery that isn’t at the focal point of the frame. For instance, an actress might be center screen with inoffensive body language and clothing. However, the shot could take place in a strip club, with nudity in the background, which would create issues in certain overseas markets.

To make out-of-focus background content easier to spot, Watson strips away dramatic lighting and raises the brightness level of the scene to help identify objects and people.
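As a rough illustration of that kind of preprocessing, the sketch below raises a frame’s contrast and brightness before handing it to a detector. It uses OpenCV’s standard scale-and-offset conversion as a simplified stand-in for whatever Watson does internally; the file names and parameter values are placeholders.

import cv2

def brighten_frame(frame, alpha=1.6, beta=40):
    """Raise contrast (alpha) and brightness (beta) so objects and people
    in dimly lit backgrounds are easier for a detector to pick out."""
    # convertScaleAbs computes alpha * pixel + beta and clips to 0-255.
    return cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)

# Example usage: preprocess a sampled video frame before detection.
frame = cv2.imread("scene_frame.jpg")  # placeholder file name
if frame is not None:
    detector_input = brighten_frame(frame)
    cv2.imwrite("scene_frame_brightened.jpg", detector_input)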

Foul language? That’s the easy part for Watson. “That’s just a list of words in various languages,” Mastin says.
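A minimal sketch of that approach, assuming a per-language list of restricted words and a speech-to-text transcript to scan; the word lists and function name below are illustrative placeholders.

import re

# Hypothetical per-language word lists (placeholders, not real lists).
RESTRICTED_WORDS = {
    "en": {"badword1", "badword2"},
    "de": {"schimpfwort1"},
}

def find_restricted_words(transcript: str, language: str):
    """Return restricted words from the given language's list that
    appear in a speech-to-text transcript."""
    words = set(re.findall(r"[\w']+", transcript.lower()))
    return sorted(words & RESTRICTED_WORDS.get(language, set()))

# Example: scan an English transcript segment.
print(find_restricted_words("... badword1 appears here ...", "en"))  # -> ['badword1']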


The high cost of offending viewers

International syndication of TV shows and movies is a multi-billion dollar business for networks, making compliance a critical task.

In the U.S., FCC regulations allow a wide range of risqué content to reach the airwaves, so it can be easy to dismiss the regulations of other countries as overly restrictive. But a slight to Islamic beliefs or the appearance of alcohol or tobacco in the UAE—or a show that uses a denigrating ethnic term in India—is a serious violation in those and other countries. It could cause controversy and lead to the TV series or movie getting yanked from the airwaves.

Watson’s ability to quickly flag objectionable content can prevent those snafus. Moreover, it can open the market for additional shows to find an overseas audience by making it easier for broadcasters in other countries to screen new programming.


Summary

Media companies eager to stream their content internationally face an unavoidable foe: A complex web of regional and national regulators who control what’s acceptable—and what isn’t. What the FCC deems an innocent on-screen kiss might be judged unacceptable by regulators elsewhere around the world. But with the help of artificial intelligence, broadcasters can ensure their video content meets local regulations. AI technology can be trained to understand images in context, listen for forbidden words and even spot something inappropriate lurking in the background of a shot.

For more information on how Watson can be used for video, enriching content and standing out from your peers in the space, be sure to check out our white paper: Outsmart Your Video Competition with Watson.