
Opinion: AI in broadcast and media production workflows

Simon Clarke, CTO at Telestream, on the evolution of AI in the media and entertainment landscape

Artificial Intelligence (AI) and Machine Learning (ML) are now critical tools for empowering creators, optimising resource allocation, enhancing distribution, and delivering personalised viewer experiences. A report by Grand View Research projects that the AI and ML market in media will grow at a compound annual growth rate (CAGR) of 38.1% from 2022 to 2030. There is little doubt that AI and ML technologies are transforming broadcast and media production workflows. At present, the use of AI is still largely in the content creation phase, where the technology is used extensively in special effects and to create virtual on-screen talent.

Content creation

However, AI is impacting content across every step of the journey. One of the pioneering examples at the start of the content creation phase is scriptwriting and story development. Warner Bros. has employed AI to analyse scripts and predict their box office success. This AI tool provides data-driven insights that support decision-making processes such as content and talent valuation. By leveraging these insights, Warner Bros. can make more informed decisions on green-lighting projects, potentially reducing the risk of box office failures and optimising resource allocation.
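
To make the idea concrete, the sketch below shows what a simple data-driven script-scoring step could look like: a few coarse features extracted from a script and fed to a regression model trained on past releases. The features, toy data, and model choice are illustrative assumptions, not Warner Bros.' actual system.

```python
# Hypothetical sketch: scoring a script on coarse features with a regression model.
# Feature set, toy data, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import Ridge

def script_features(script_text: str, genre_score: float, cast_score: float) -> list[float]:
    """Reduce a script to a few numeric signals: length, dialogue density, plus context."""
    lines = script_text.splitlines()
    dialogue = sum(1 for ln in lines if ln.strip().startswith('"'))
    return [len(script_text.split()), dialogue / max(len(lines), 1), genre_score, cast_score]

# Toy training set: feature vectors for past projects and their box-office gross ($M).
X = np.array([[90_000, 0.42, 0.8, 0.9],
              [70_000, 0.35, 0.5, 0.4],
              [110_000, 0.50, 0.9, 0.7]])
y = np.array([450.0, 80.0, 610.0])

model = Ridge(alpha=1.0).fit(X, y)

new_script = '"We have to go back."\nEXT. BEACH - DAY\n' * 500
estimate = model.predict([script_features(new_script, genre_score=0.7, cast_score=0.6)])
print(f"Estimated gross: ${estimate[0]:.0f}M")  # a rough, data-driven signal, not a verdict
```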

Production automation

Production is the next stage where AI is being put to work, in broadcast automation and management. For example, the BBC, the UK’s public broadcaster, has integrated AI into its news studio to automate camera operations and lighting. This integration enhances efficiency and consistency in news broadcasting. Automated production control systems can adjust camera angles, focus, and lighting in real time, ensuring high production standards while reducing the need for manual intervention.
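
The sketch below is a minimal, hypothetical illustration of rule-driven production control: a detected studio state is mapped to camera and lighting presets. It is not the BBC’s implementation, simply a picture of the kind of logic such automation layers apply.

```python
# Minimal sketch of a rule-driven production-control loop (hypothetical):
# map the detected studio state to camera and lighting presets.
from dataclasses import dataclass

@dataclass
class StudioState:
    active_speaker: str      # e.g. "presenter", "guest", "wide"
    on_air: bool

CAMERA_PRESETS = {"presenter": ("CAM1", "tight"), "guest": ("CAM2", "medium"), "wide": ("CAM3", "wide")}
LIGHT_PRESETS = {"presenter": 0.9, "guest": 0.8, "wide": 0.7}   # key-light intensity

def apply_presets(state: StudioState) -> dict:
    """Return the control commands an automation layer would send downstream."""
    if not state.on_air:
        return {"camera": None, "shot": None, "key_light": 0.0}
    camera, shot = CAMERA_PRESETS.get(state.active_speaker, CAMERA_PRESETS["wide"])
    return {"camera": camera, "shot": shot, "key_light": LIGHT_PRESETS.get(state.active_speaker, 0.7)}

# In practice the state would come from speaker detection on live audio/video;
# here we simulate a single update.
print(apply_presets(StudioState(active_speaker="guest", on_air=True)))
```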

Content editing

The next stage of the content journey where AI has shown significant promise is video editing. The technology is not new: Fox Sports used IBM Watson to create highlight packages for the 2018 FIFA World Cup. Today, the technology has been supercharged; at the Paris 2024 Olympics it produced automated highlights for 14 sports, generated real-time audience engagement analytics, and delivered fast, relevant data on athlete movements. AI systems also assisted in creating and tagging video summaries, but did not publish them automatically. This technology supported Olympic Broadcasting Services in delivering extensive and personalised coverage across various platforms, enhancing viewer engagement and understanding of the events.
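
The sketch below illustrates the basic shape of automated highlight selection: timestamped match events are scored and the top moments become tagged clips held for review. The event types and weights are invented for illustration and do not represent the Olympic broadcast pipeline.

```python
# Hypothetical sketch of automated highlight selection: score timestamped match
# events and keep the top moments as tagged clips. Weights are illustrative only.
EVENT_WEIGHTS = {"goal": 10, "record": 9, "penalty": 8, "near_miss": 5, "substitution": 1}

def select_highlights(events, top_n=3, clip_padding=8.0):
    """events: list of (timestamp_seconds, event_type, athlete) tuples."""
    scored = sorted(events, key=lambda e: EVENT_WEIGHTS.get(e[1], 0), reverse=True)
    clips = []
    for ts, kind, athlete in scored[:top_n]:
        clips.append({
            "start": max(ts - clip_padding, 0.0),
            "end": ts + clip_padding,
            "tags": [kind, athlete],   # tags feed search and per-viewer personalisation
        })
    return sorted(clips, key=lambda c: c["start"])

events = [(312.4, "near_miss", "athlete_a"), (781.0, "goal", "athlete_b"),
          (1502.6, "record", "athlete_c"), (1890.2, "substitution", "athlete_d")]
for clip in select_highlights(events):
    print(clip)   # clips are queued for editorial review, not auto-published
```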

Accessibility

Before content can be distributed, AI has proven effective in improving content accessibility. Telestream’s Timed Text Speech service is a great real-world example that uses machine learning to create accurate captions and subtitles quickly. Users upload media files, which are transcribed and returned as timed text in formats such as SRT, JSON, CSV, and TXT. It supports multiple languages and custom vocabularies, dramatically speeding up the process while achieving higher accuracy than traditional speech recognition technologies. Ultimately, AI technology is not about replacing people; it’s about finding ways to help people be more productive and freer to do the creative tasks they are best suited for.
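
As a simplified illustration of the timed-text packaging step, the sketch below turns timestamped transcript segments into an SRT file. The segments stand in for what a speech-to-text service would return; this is not Telestream’s actual API, only the formatting stage of such a workflow.

```python
# Minimal sketch: packaging timestamped transcript segments as SRT.
# The segments are placeholders for speech-to-text output.
def to_srt_timestamp(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """segments: list of (start_seconds, end_seconds, text)."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

segments = [(0.0, 2.4, "Good evening and welcome."),
            (2.4, 5.1, "Tonight's headlines are just ahead.")]
print(segments_to_srt(segments))
```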

Content distribution

AI is also impacting content distribution. In the US, Sinclair Broadcast Group has implemented AI-driven automation systems to optimise its playout and scheduling. These systems analyse vast amounts of data to determine the most efficient scheduling and playout strategies, resulting in more streamlined operations and reduced human error. This level of automation improves operational efficiency and ensures that content is delivered to audiences seamlessly.

Content provenance

In parallel to the content-creation-to-distribution workflow, an emerging area is the provenance of content and AI’s role in ensuring its authenticity. There is a growing focus on the provenance chain of content, which tracks every step of the media supply chain lifecycle. This initiative, supported by industry leaders such as Microsoft and Adobe through projects like the Coalition for Content Provenance and Authenticity (C2PA), aims to provide transparency about where and how content was created and processed. While this is not directly about using AI to detect other AI-generated content, it ensures viewers can trust the origin and integrity of what they see on their screens. After all, ensuring customer data remains secure and is not used for unintended purposes is paramount, alongside maintaining alignment with customer expectations and data privacy standards.
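
As a simplified illustration of the chaining idea, the sketch below records a hash for each processing step that also covers the previous entry, so any later tampering breaks the chain. Real C2PA manifests use signed, structured assertions and are considerably richer than this.

```python
# Simplified illustration of a provenance chain: each step's record hashes the
# asset and the previous entry. Not the C2PA format, only the chaining concept.
import hashlib, json

def add_provenance_step(chain, asset_bytes: bytes, action: str, tool: str):
    prev_hash = chain[-1]["entry_hash"] if chain else ""
    entry = {"action": action, "tool": tool,
             "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
             "prev_hash": prev_hash}
    entry["entry_hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify_chain(chain) -> bool:
    prev = ""
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if entry["prev_hash"] != prev or \
           hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

chain = []
add_provenance_step(chain, b"raw camera footage", "capture", "camera_firmware")
add_provenance_step(chain, b"colour-graded master", "edit", "grading_suite")
print(verify_chain(chain))  # True until any entry is altered
```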

Ready but evolving

For all the advances in AI, it’s crucial to acknowledge the areas where this technology is still maturing. AI systems are still largely standalone, and the lack of universal standards can make seamless integration with existing systems problematic. As AI starts to create or edit content autonomously, there is still the fear of “hallucinations”, where systems produce inaccurate or potentially offensive output.

Yet these challenges are being addressed by initiatives like the Open Neural Network Exchange (ONNX), an open-source standard supported by major industry players including Microsoft, Facebook, IBM, and Amazon, and governed by the LF AI & Data Foundation. It aims to promote interoperability between AI frameworks, allowing developers to train models in one framework and deploy them in another without rewriting code. Its main benefits include enhanced flexibility, optimised performance, and simplified model portability and cross-platform deployment, making it a crucial standard in the AI ecosystem.
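
The round trip is straightforward in practice. The sketch below defines a small model in PyTorch, exports it to ONNX, and runs it in ONNX Runtime with no PyTorch dependency at inference time; it assumes torch and onnxruntime are installed and is a minimal example rather than a production pipeline.

```python
# Sketch of the ONNX round-trip: define a model in PyTorch, export it to ONNX,
# then run it in ONNX Runtime without PyTorch at inference time.
import torch
import torch.nn as nn
import numpy as np
import onnxruntime as ort

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "classifier.onnx",
                  input_names=["features"], output_names=["scores"])

# Deployment side: load the exported graph in a different runtime.
session = ort.InferenceSession("classifier.onnx")
outputs = session.run(None, {"features": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0].shape)  # (1, 2) - same model, no PyTorch needed here
```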

It is clear that AI is set to play an increasingly important role in the media and entertainment industry, and now is the perfect time for organisations across the content journey to begin testing the waters of innovation.