The Salesforce platform gives users a 360-degree view of their customers, helping them connect and nurture relationships in a whole new way. One initiative at the company is Salesforce Live, which produces webcasts, productions, and events, including a live broadcast of Dreamforce.
Dreamforce is an annual, massively attended conference hosted by Salesforce in downtown San Francisco. Bringing together thought leaders and professionals, the event has historically attracted over 170,000 attendees. The show centers on keynote addresses, training sessions, and networking events, among many other activities that take place at the conference.
When it came to expanding the audience beyond the venue, online proved a natural fit. “Streaming is a huge part of Dreamforce, has been for many years,” said Michael Rivo, Business Director of Salesforce Live. “We put a big effort into driving large audiences for our Dreamforce broadcast. For the past several years we’ve had millions of viewers in real time watching the live stream of the broadcast.”
To achieve this goal, the underlying technology had to be accessible and reliable while supporting an overall high-quality production for online viewers. For more information on the infrastructure used to support these large audiences, download the Live Video Delivery System Built for Scalability white paper.
Artificial intelligence is transforming industries, increasing efficiency and improving our ability to manage complex issues. That technology also has many benefits within the video space, ranging from language processing to metadata management and enrichment.
As the market and technology continue to improve, those shaping it are being honored. This weekend, two of Watson Media’s video solutions were highlighted at the CSI Awards 2018. Announced in Amsterdam, IBM Watson Video Enrichment and IBM Watson Captioning were named the winners of the “Best Use of AI or Machine Learning in Video” category. These solutions bring key advantages to market for tackling ongoing issues in the video space: scalable generation of accurate closed captions and effective analysis of large libraries of content.
Read on for more details on this technology and the awards themselves. For a deeper look at these concepts, download our Uncovering Dark Video Data with AI white paper.
Whether you’re a content owner or a service provider, the amount of time it takes to make content available in the required formats has decreased consistently and significantly. Today’s status quo is 24 hours or less for non-live broadcast offerings; however, in many cases customers need it in less than six hours. This means that a finished “program” is transmitted via file-based terrestrial IP from a post house in Los Angeles to New York, where it is then transcoded, packaged, and distributed. It’s not unusual for content to flow through 300 aggregation points (e.g., post houses and facilities from around the world), be packaged into 150 different format permutations, and be distributed to at least 100 worldwide partners.
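To illustrate the scale those numbers imply, here is a minimal back-of-the-envelope sketch. The figures (300 aggregation points, 150 format permutations, 100 partners) come from the text above; the idea of counting one "packaging job" per format and one "delivery" per format-partner pair is a simplifying assumption for illustration, not a description of any specific pipeline.

```python
# Back-of-the-envelope scale of a file-based distribution pipeline,
# using the figures cited above. Treating each (format, partner) pair
# as one delivery is a worst-case simplification.

AGGREGATION_POINTS = 300   # post houses and facilities worldwide
FORMAT_PERMUTATIONS = 150  # distinct package/format variants
PARTNERS = 100             # worldwide distribution partners

def jobs_per_title(formats=FORMAT_PERMUTATIONS, partners=PARTNERS):
    """Return (packaging jobs, worst-case deliveries) for one title."""
    packaging = formats              # one transcode/package per format
    deliveries = formats * partners  # worst-case fan-out of deliveries
    return packaging, deliveries

packaging, deliveries = jobs_per_title()
print(f"Per title: {packaging} packaging jobs, up to {deliveries:,} deliveries")
# Per title: 150 packaging jobs, up to 15,000 deliveries
```

Even under these simple assumptions, a single title can generate thousands of downstream delivery events, which is why the article argues for intelligent, automated management of the workflow.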
This article discusses how to intelligently manage and distribute content to virtually any platform or screen, multiplied by the power of Watson. It includes an example covering the logistics of VOD and OTT distribution and how it can function as part of a larger workflow. For more strategies and information on managing large libraries of content, also download this Video Metadata: Management and Tools white paper.
This has been a big year for IBM Watson Media. Since launching a year ago, we’ve worked with the Grammys, FOX Sports and the World Cup, and The Masters, and introduced a new product: Watson Captioning. A highly trainable, AI-powered offering, Watson Captioning provides broadcasters and publishers alike with a new tool to take closed captions to the next level. Today, we’re excited to kick off a new collaboration with Sinclair Broadcast Group that will roll out Watson Captioning to all of their local stations across the United States, making live programming more accessible to local viewers, including the Deaf community, senior citizens, and anyone experiencing hearing loss.
Television requirements for closed captioning were established in 1996, but more than two decades later, live captioning remains both challenging and labor-intensive for production teams to deliver in real time. As a result, breaking news, weather, and live sports segments often have delayed or incorrect captions, leading to a confusing and occasionally frustrating viewing experience. With our Watson Captioning technology, Sinclair will be able to improve caption accuracy, automate time-intensive manual processes, and reduce production costs, all while providing captions in real time at scale.