The Salesforce platform provides users with a 360-degree view of their customers, helping them connect and nurture relationships in a whole new way. One initiative at the company is Salesforce Live, a program of webcasts, productions, and events, including a live broadcast of Dreamforce.
Hosted by Salesforce in downtown San Francisco, Dreamforce is an annual, massively attended conference. Bringing together thought leaders and professionals, the event has historically drawn over 170,000 attendees. The show often centers on keynote addresses, training sessions, and networking events, among the many other activities that take place at the conference.
When it came to expanding the event’s audience, online proved a natural fit. “Streaming is a huge part of Dreamforce, has been for many years,” said Michael Rivo, Business Director of Salesforce Live. “We put a big effort into driving large audiences for our Dreamforce broadcast. For the past several years we’ve had millions of viewers in real time watching the live stream of the broadcast.”
To help achieve this goal, the underlying technology had to be accessible and reliable while supporting a high quality production for online viewers. For more information on the infrastructure used to support these large audiences, download the Live Video Delivery System Built for Scalability white paper.
Artificial intelligence is transforming industries, increasing efficiency and improving our ability to manage complex issues. That technology also has many benefits within the video space, ranging from language processing to metadata management and enrichment.
As the market and technology continue to improve, those shaping it are being honored. This weekend, two of Watson Media’s video solutions were highlighted at the CSI Awards 2018. Announced in Amsterdam, IBM Watson Video Enrichment and IBM Watson Captioning were named winners of the “Best Use of AI or Machine Learning in Video” category. These solutions bring key advantages to market for tackling ongoing issues in the video space: the scalable generation of accurate closed captions and the effective analysis of large libraries of content.
Read on for more details on this technology and the awards themselves. For a deeper look at these concepts, be sure to download our Uncovering Dark Video Data with AI white paper.
Whether you are a content owner or a service provider, the time it takes to make content available in the required formats has decreased consistently and significantly. Today’s status quo is 24 hours or less for non-live broadcast offerings; in many cases, however, customers need it in less than six hours. This means that a finished “program” is transmitted via file-based terrestrial IP from a post house in Los Angeles to New York, where it is then transcoded, packaged, and distributed. It’s not unusual for content to flow through 300 aggregation points (e.g. post houses and facilities from around the world), be packaged into 150 different format permutations, and be distributed to at least 100 worldwide partners.
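The scale of that fan-out is easy to underestimate. As a rough illustration, here is a minimal sketch of the combinatorics, using the figures above; the names are placeholders, not a real distribution system:

```python
from itertools import islice, product

# Illustrative model of the distribution fan-out described above:
# ~300 aggregation points, ~150 format permutations, ~100 partners.
sources = [f"post-house-{i}" for i in range(300)]
formats = [f"format-{i}" for i in range(150)]
partners = [f"partner-{i}" for i in range(100)]

# Worst case: every source packaged into every format for every partner.
print(len(sources) * len(formats) * len(partners))  # 4500000

# A pipeline only enumerates the combinations it actually ships;
# itertools.product yields them lazily rather than materializing millions.
for source, fmt, partner in islice(product(sources, formats, partners), 2):
    print(f"transcode from {source} -> package as {fmt} -> deliver to {partner}")
```

Even if only a fraction of the 4.5 million combinations is ever shipped, the numbers explain why intelligent automation, rather than manual packaging, is the only workable approach.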
This article talks about how to intelligently manage and distribute content to virtually any platform or screen—multiplied by the power of Watson. It includes an example around the logistics of VOD and OTT distribution and how it can function as part of a larger workflow. For more strategies and information around managing large libraries of content, also download this Video Metadata: Management and Tools white paper.
This has been a big year for IBM Watson Media. Since launching a year ago, we’ve worked with the Grammys, FOX Sports and The World Cup, and The Masters, and introduced a new product: Watson Captioning. A highly trainable, AI-powered offering, Watson Captioning provides broadcasters and publishers alike with a new tool to take closed captions to the next level. Today, we’re excited to kick off a new collaboration with Sinclair Broadcast Group that will roll out Watson Captioning to all of their local stations across the United States, making live programming more accessible to local viewers, including the Deaf community, senior citizens, and anyone experiencing hearing loss.
Television requirements for closed captioning were established in 1996, but more than two decades later, live captioning remains both challenging and labor-intensive for production teams to deliver in real time. As a result, breaking news, weather, and live sports segments often have delayed or incorrect captions, leading to a confusing and occasionally frustrating viewing experience. With our Watson Captioning technology, Sinclair will be able to improve caption accuracy, automate time-intensive manual processes, and reduce production costs, all while providing captions in real time at scale.
Events have exploded beyond the stage with live streaming. From company announcements to press conferences and award ceremonies, most events today have two audiences: the one in the room, and the one behind their screens.
For organizers, the expanded reach is a dream come true, as are the insights from live stream analytics. But live streaming also requires a new attention to detail: even the Super Bowl and Apple keynotes have fallen victim to seemingly minor mistakes, amplified by the real-time nature of streaming.
To make sure live streams go off without a hitch, organizers should follow this high quality live streaming checklist to ensure a secure connection and reliable equipment, and to define a protocol in case something needs troubleshooting. If you are looking for guidance on which gear to get, check out our Video Studio Recommendations white paper.
Looking to monetize your video assets or live streams? Interested in pay-per-view (PPV)? Pay-per-view video and paywall solutions offer content owners a way to create a revenue stream from live broadcasts or on-demand video libraries, letting organizations sell their content to viewers who pay for access.
This article describes the process, covering how to add a paywall to your content along with strategies and use cases for pay-per-view.
Want additional strategies for video monetization, along with methods to reduce churn and increase predictable revenue from your content? Be sure to register for our How to Monetize Videos & Reduce Subscriber Churn webinar.
Curious about the artificial intelligence capabilities of the IBM Watson Media solutions for managing video content, but looking for a way to build them into existing workflows?
APIs are available to integrate IBM Watson Video Enrichment and IBM Watson Captioning into other applications, such as existing dashboards and interfaces. This includes both generating metadata with the artificial intelligence and training the AI to be better attuned to a given use case. In addition, the APIs are launching with new features, some currently unavailable elsewhere.
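To picture what such an integration might look like from a dashboard, here is a minimal, hypothetical sketch of building a REST call. The endpoint, payload fields, and auth scheme below are placeholder assumptions, not the documented IBM Watson API; consult the actual API reference before integrating:

```python
import json
from urllib import request

def build_enrichment_request(api_key: str, video_url: str) -> request.Request:
    """Build a POST request asking a hypothetical enrichment endpoint to
    generate metadata (keywords, captions) for a video. Placeholder API."""
    payload = json.dumps({
        "source": video_url,                    # video to analyze
        "features": ["keywords", "captions"],   # metadata to generate
    })
    return request.Request(
        "https://api.example.com/v1/enrich",    # placeholder endpoint
        data=payload.encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_enrichment_request("YOUR_API_KEY", "https://example.com/talk.mp4")
print(req.method, req.full_url)  # POST https://api.example.com/v1/enrich
```

The same request-building pattern would apply to a training call, with a corpus or vocabulary list in the payload instead of feature flags.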
If you are interested in putting these APIs to use, contact us to learn more.
Need a guide for creating lower thirds for live video?
This article briefly walks through what lower thirds are before discussing what makes a good lower third and common use cases. It finishes with instructions for adding a lower third to a live stream using several popular encoders.
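As one generic illustration outside the encoders the article covers, a lower third can also be burned into video with ffmpeg’s drawtext filter. The sketch below builds such a command in Python; the names, fonts, and coordinates are placeholder choices:

```python
def lower_third_filter(name: str, title: str) -> str:
    """Build an ffmpeg drawtext filter chain: speaker name on top, title
    underneath, both over a semi-transparent box in the lower-left corner."""
    return (
        f"drawtext=text='{name}':fontsize=36:fontcolor=white:"
        f"box=1:boxcolor=black@0.5:boxborderw=10:x=40:y=h-140,"
        f"drawtext=text='{title}':fontsize=24:fontcolor=white:"
        f"box=1:boxcolor=black@0.5:boxborderw=10:x=40:y=h-90"
    )

# Example command; input/output paths and the overlay text are placeholders.
cmd = [
    "ffmpeg", "-i", "input.mp4",
    "-vf", lower_third_filter("Jane Doe", "Streaming Producer"),
    "-c:a", "copy", "lower_third.mp4",
]
print(" ".join(cmd))
```

For live use, the same filter can be applied while re-streaming to an RTMP ingest URL instead of writing a file; dedicated encoders simply expose this overlay as a built-in feature.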
Looking for some additional advice as part of your live streaming strategy? Download these 5 Pro Tips for Live Video Production.
Looking for live broadcast closed captioning solutions?
IBM Watson Captioning offers a service for broadcast television to caption live content. This uses a combination of artificial intelligence in the cloud and hardware on location. For the on-premise component, the Watson Live Captioning RS-160 is hardware created specifically for this use case by The Weather Company to complement the Captioning service. For accuracy, the AI can be trained in advance, expanding both its vocabulary and its relevant, hyper-localized context by providing a corpus.
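To make the corpus idea concrete, here is a hypothetical sketch of assembling hyper-local training text. The file format, term choices, and upload step are assumptions for illustration, not the documented Watson Captioning workflow:

```python
# Placeholder examples of hyper-local vocabulary a station might train on.
local_terms = [
    "Sinclair Broadcast Group",
    "Chesapeake Bay",     # local place name
    "nor'easter",         # regional weather vocabulary
]

# A corpus is typically plain text using the terms in realistic context,
# so the model learns both the vocabulary and how it is actually spoken.
sample_sentences = [
    "A nor'easter is expected to move up the Chesapeake Bay tonight.",
    "Sinclair Broadcast Group stations will carry the storm coverage live.",
]

corpus = "\n".join(sample_sentences)
with open("local_corpus.txt", "w") as f:
    f.write(corpus)
print(f"wrote {len(corpus.splitlines())} sentences covering {len(local_terms)} local terms")
```

The resulting file would then be supplied to the service through whatever training interface it provides, ahead of the broadcast it needs to caption.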
The result is a solution that is not only highly accurate, but also scalable and built for high availability.
To learn more about automatic closed captioning, register for our Auto Closed Captions and AI Training webinar.
The future of human resources, from hiring to training and onboarding, is getting a digital overhaul. The credit goes to streaming video use cases in HR, which improve scale and time efficiency. And for young jobseekers, that’s great news.
More than 50% of employees are applying online using a mobile device, says Andre Lavoie, CEO of ClearCompany, a Boston-based talent management firm. And according to a new survey by HR software firm Yello, 85% of respondents appreciate the use of text messages in the hiring process, and 76% feel positively about video interviews.
“There is no question that this generation’s use of mobile, video and text is pervasive now and will only continue to increase in popularity,” says Dan Bartfield, co-founder and president of Yello.
One trend is clear: The digital tools today’s job seekers are using in their everyday lives are rewriting the rules for HR. In turn, human resources departments are using video to transform their processes. In fact, 79% of organizations plan to use video for HR and corporate communications, equipping themselves to better break down geographic barriers and serve a large, worldwide workforce.