CSI Awards 2018: Best Use of AI or Machine Learning in Video

Artificial intelligence is transforming industries, increasing efficiency and improving our ability to manage complex issues. That technology also has many benefits within the video space, ranging from language processing to metadata management and enrichment.

As the market and technology continue to evolve, those shaping it are being honored. This weekend, two of Watson Media’s video solutions were highlighted at the CSI Awards 2018. Announced in Amsterdam, IBM Watson Video Enrichment and IBM Watson Captioning were named the winners of the “Best Use of AI or Machine Learning in Video” category. These solutions bring key advantages to market for tackling ongoing issues in the video space: the scalable generation of accurate closed captions and the effective analysis of large libraries of content.

Read on for more details on this technology and the awards themselves. For a deeper look at these concepts, be sure to download our Uncovering Dark Video Data with AI white paper as well.

 

IBM Watson Video Enrichment

Categorizing, tagging and managing vast libraries of content can be challenging, not to mention incredibly time consuming. As services, such as those delivering OTT (over-the-top) experiences, rapidly expand their libraries, proper content categorization becomes ever more important, as audiences expect a certain ease of discovery.

IBM Watson Video Enrichment helps categorize video content by deploying AI capabilities for audio and visual analysis. The solution automates the creation of searchable metadata for video assets, extracting information as the AI analyzes each one.

The produced metadata includes a transcription of spoken words, using speech-to-text processing and audio recognition to separate background sounds from actual speech. The transcript is then used to tag the asset with underlying themes, generated through a combination of transcript assessment and the AI’s object recognition capabilities. For example, if a program about growing apples, oranges and pears never mentions the items by name, the AI can still conclude that the content contains “fruit” and classify it as such.
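
As a rough illustration only, transcript words and visual object labels can be merged into theme tags along these lines. The category map and function names below are hypothetical and are not the Watson Video Enrichment API:

```python
# Illustrative sketch: combine transcript keywords and visual object
# labels into category tags. The mapping here is invented for the example.
CATEGORY_MAP = {
    "fruit": {"apple", "orange", "pear", "banana"},
    "sports": {"ball", "stadium", "referee"},
}

def derive_tags(transcript_words, detected_objects):
    """Tag an asset when any mapped term appears in speech or visuals."""
    observed = {w.lower() for w in transcript_words} | \
               {o.lower() for o in detected_objects}
    return sorted(cat for cat, terms in CATEGORY_MAP.items()
                  if observed & terms)

# Even if "fruit" is never spoken, object recognition can still surface it:
print(derive_tags(["growing", "healthy", "trees"], ["apple", "pear"]))
# ['fruit']
```

The key idea is that audio and visual signals feed the same tag pool, so a theme can be recovered from either channel.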

The technology can also detect sentiment and emotions, allowing it to categorize assets based on whether the material is happy or conveys another emotion to viewers. It can also be set up to detect profanity or other criteria to help assign an age rating. This provides valuable information about how “offensive” material might be perceived, and can help users avoid subject matter they object to, for example when one of language, nudity or violence is seen as more offensive than the others.
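
A minimal sketch of how detected criteria might roll up into a rating, and how a viewer preference could filter on individual flags. The flag names, thresholds and functions are invented for illustration and do not reflect the actual service:

```python
# Hypothetical mapping from detected content flags to a coarse age rating.
def age_rating(flags):
    """flags: set of criteria detected in an asset, e.g. {"profanity"}."""
    if {"nudity", "graphic_violence"} & flags:
        return "18+"
    if {"profanity", "violence"} & flags:
        return "13+"
    return "all"

# A viewer who finds language more objectionable than violence can
# filter on specific flags rather than the overall rating:
def acceptable(flags, avoid):
    return not (flags & avoid)

print(age_rating({"profanity"}))                # 13+
print(acceptable({"violence"}, {"profanity"}))  # True
```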

Once generated, this metadata lets end users quickly find assets that are highly relevant and interesting to them, improving viewer engagement and satisfaction. For OTT subscription services, this fosters happier end users, reducing churn and raising retention. For content monetized through advertising, increased discoverability brings more viewers to assets, in turn allowing content to be more successful and generate more revenue.

 

IBM Watson Captioning

There are many reasons why broadcasters need to feature closed captions with their content. These range from obvious benefits, like increased accessibility for those who are deaf or hard of hearing, to catering to changing viewing habits, where watching content on your phone in a noisy environment favors having captions on. The manual generation of captions, though, is a time consuming process… one that can greatly benefit from the introduction of artificial intelligence.

Launched in February of 2018, IBM Watson Captioning is a standalone offering that automates the process of caption generation. It does this using ASR (Automated Speech Recognition), which encompasses speech recognition, speech-to-text conversion and audio recognition.

The service stands out due to its integration of IBM Watson, taking advantage of artificial intelligence. This includes the AI’s ability to learn, either passively or through manual training. Passive learning lets the AI observe edits made to the captions it generates, so it improves as it captions more assets. It can also be manually trained by teaching it new vocabulary, such as names, unique spellings, industry terms and more.
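
Manual vocabulary training can be pictured as a correction pass over raw ASR output. The sketch below is purely illustrative: the heard-phrase mappings and the function are hypothetical, while the real service builds such corrections from editor feedback and uploaded word lists:

```python
# Hypothetical custom vocabulary: phrases the ASR tends to mishear,
# mapped to the spelling an editor has taught the system.
CUSTOM_VOCAB = {
    "watt son": "Watson",
    "eye be em": "IBM",
}

def apply_vocabulary(raw_caption):
    """Apply learned corrections to a raw ASR transcript line."""
    text = raw_caption
    for heard, correction in CUSTOM_VOCAB.items():
        text = text.replace(heard, correction)
    return text

print(apply_vocabulary("eye be em watt son captioning"))
# IBM Watson captioning
```

In practice such corrections accumulate over time, which is why the service keeps improving as more assets are captioned and edited.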

Although initially launched with support for on-demand, previously recorded content, the service has since expanded to support live content for broadcast TV. Due to the challenges of live content, where captions no longer have the luxury of being edited for accuracy after generation, rigorous training is involved to bring the technology to the accuracy levels needed for live material. To learn more about the process for real-time caption generation, register for our Live Closed Captioning Services for Broadcast TV webinar.

 

Cable and Satellite International: CSI Awards

Short for the Cable and Satellite International Awards, the CSI Awards were first held back in 2003 and aim to honor excellence and achievement in the broadcast, video, OTT and IoT sectors. The awards are decided by a panel of nine judges, who run the gamut from strategists to writers in the space.

The winners were presented their awards at the International Broadcasting Convention (IBC) in Amsterdam, which ran from September 13th through the 17th. The awards themselves, 17 in total this year, were given out on the evening of September 14th, with categories ranging from AI to network delivery technology. A full list of winners from the 2018 ceremony can be seen on the awards winners page.

 

IBM at IBC 2018

While this award celebrates IBM’s achievements in the realm of using AI for video, the IBC show where it was presented was focused on the advancements IBM continues to bring to this space. This included the introduction of two new offerings:

  • IBM Video Highlights
    Enables broadcasters to use cloud-based software that employs Watson APIs to automate and enhance highlight clip creation. This includes the ability to detect exciting moments, like when the winning point is scored in a sports game, or an intense stretch of a match reflected visually and in the audio of the crowd cheering.
  • IBM Video Recommendations
    A cloud-based personalized video programming system that lets organizations tap into cognitive methods to give their viewers well-informed recommendations. It works by transforming video assets into pools of data descriptors and then categorizing content into taxonomies in new ways. These are then used to create customized video streams that keep audiences watching longer, increasing viewer engagement.

A lot of attention was also focused on the live broadcast closed captioning capabilities of IBM Watson Captioning. Recently launched, this enables AI-driven live captions for broadcast TV. The solution includes an on-premises component, and for increased accuracy the AI can be trained in advance, expanding both its vocabulary and relevant, hyper-localized context.

 

Summary

IBM continues to invest in the video space, infusing established practices with AI to broaden how people manage video assets today. The result is more value for viewers, through increased discoverability and accessibility, and added scale and speed for those managing the assets, work these awards have now recognized.

If you want to learn more about the captioning technology, including how it works and evolves with you, register for our Auto Closed Captions and AI Training webinar.