Looking for a way to speed up the generation of accurate captions? Interested in AI vocabulary training?
Earlier, IBM introduced Watson Captioning to generate captions for videos using speech to text. These captions could then be edited for accuracy, or to adhere to personal preferences. Those capabilities are now being expanded: Watson can learn from those edits, or be taught directly. As a result, accurate captions can be generated faster, since corrections no longer have to be repeated.
Note that this feature, allowing Watson to learn from edits or be manually taught, is currently only available in the standalone Watson Captioning solution. It is not yet available for Streaming Manager or Streaming Manager for Enterprise, although it will be coming to those in the future.
Considering AES video encryption for your assets at rest and during delivery? Curious about the merits of AES-256 vs AES-128 for video?
A security audit, a systematic evaluation of the security of an organization’s information system, can measure many things to see how it conforms to established practices and criteria. In relation to video, this can include virtually every state of the content, from data at rest to data in transit. This article covers what video encryption is, explains AES (Advanced Encryption Standard), and discusses which key size is ideal for video.
For more information on this topic, and on the broader concept of video security, also be sure to check out our Enterprise Video Security Components & Services white paper.
With the news moving at lightning speed, consumers are more tuned into current events than ever, while media companies are challenged to keep pace. Broadcast networks are under intense pressure to respond quickly to breaking news, world events, and sporting events in order to satisfy consumer demand for instant, quality digital experiences.
However, delivering accurate captions for live broadcast is both time- and resource-intensive for broadcast networks, given that production teams must manually transcribe live programming in real time – which often leads to delayed or incorrect captions. To solve these challenges, IBM launched Watson Captioning – a flexible, scalable solution that leverages AI to automate the captioning process and uses machine learning to improve accuracy over time. As outlined in the white paper Captioning Goes Cognitive: A New Approach to an Old Challenge, Watson is bringing greater context to video assets while removing some of the challenges associated with closed captioning.
Through its Live Captioning functionality, Watson Captioning automates closed captions for broadcast networks, unlocking value from live video content and optimizing the viewer experience. By accurately captioning live video content, broadcasters can provide premium experiences for local viewers, increase accessibility for the hearing-impaired community, and adhere to compliance standards.
One of our core tenets at IBM is to find and work with great partners who share our vision and values. The ecosystem is important. At Watson Media, we’re focused on bringing a powerful portfolio of AI (artificial intelligence) video solutions to market. To meet the unique needs of our customers, Watson Media works with a number of partners to augment our offerings and build robust solutions that drive efficiencies, deliver elevated video experiences, and delight consumers.