Opinion: Action! AI strikes landmark deal

TVBEurope's artificial intelligence columnist Graham Lovelace explains why a landmark studio pact is a big deal for AI

This year’s IBC was abuzz with artificial intelligence as vendors demonstrated time-saving automation and showcased integrations of the technology’s content-creating sibling, generative AI. But as exciting as those innovations are, a deal signed thousands of miles away from Amsterdam is the one that could determine the trajectory of the still-emerging technology’s relationship with broadcast media.

Before I explain why, let’s all get on the same page: generative AI creates words, audio, stills and video in response to text or image prompts. Doing the hard work is a model, a complex set of machine-learning algorithms, which has been trained on vast amounts of data. AI models analyse patterns in that training data, building a set of statistical correlations and using it to create content based on the probability that a particular word, sound, or cluster of pixels will be followed by another. 
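To make that abstract idea concrete, here is a deliberately simplified sketch in Python (not taken from any real system, with words and probabilities invented purely for illustration) of how that next-item prediction works: the "model" is just a table of learned probabilities, and generation is repeated weighted sampling.

    import random

    # Toy "model": for each word, the learned probability of which word follows it.
    # These words and numbers are invented for illustration only.
    next_word_probs = {
        "the": {"camera": 0.5, "studio": 0.3, "scene": 0.2},
        "camera": {"pans": 0.6, "cuts": 0.4},
        "studio": {"lights": 0.7, "floor": 0.3},
        "scene": {"opens": 1.0},
    }

    def generate(start_word, max_words=5):
        # Repeatedly pick the next word according to its probability.
        words = [start_word]
        for _ in range(max_words):
            options = next_word_probs.get(words[-1])
            if not options:
                break  # no statistics for this word, so stop
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the camera pans" (output varies run to run)

Real generative models do the same thing at vastly greater scale, with billions of learned parameters standing in for this little lookup table, but the core move is identical: predict the most probable next word, sound, or cluster of pixels.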

Image and video generators have come a long way in a short time: early outputs plagued by people with six fingers, three legs and blurred hairlines have given way to photo-realistic results. Generative video remains a work in progress but is good enough to persuade Hollywood studios to take a look, while several directors and filmmakers are experimenting with Google’s video generator Veo and OpenAI’s rival product, Sora. In June, Sora was used to create the first generative TV ad for Toys ‘R’ Us. Veo is being rolled out to YouTube creators.

So far, so good. But there’s a problem. Much of the data used to train generative AI models has been scraped from the web without the consent of copyright holders. Several AI developers acknowledge that model training draws on creators’ intellectual property but argue they are covered by exemptions in copyright law. Until the courts decide the outcome of multiple lawsuits filed on both sides of the Atlantic, the commercial use of generative AI comes with a health warning: at least a portion of an image, or frames in a video, might infringe copyright. It’s a reputational and legal risk that no right-minded broadcaster would ever want to take.

This is why the agreement between Lionsgate Studios and Runway AI – the first to be struck between a major studio and an AI company – is such a big deal. The model, which will be used to generate content exclusively for Lionsgate productions, will be trained on the studio’s own archive of more than 20,000 TV and film titles. So long as that model is 100 per cent Lionsgate-trained, there’s no risk that clusters of pixels (if not entire clips) from rival studios’ productions will make their way into Lionsgate’s generative outputs. It also means Lionsgate can claim ownership of the synthetic content, solving another AI legal bugbear – though I suspect copyright will be shared with Runway, since its system is co-creating the material.

Lionsgate – co-producer of Mad Men, Nurse Jackie, and Orange is the New Black, and home to the John Wick, Hunger Games and Twilight franchises – stands to save what its vice chair Michael Burns predicts will be “millions and millions of dollars” as generative AI is used as a tool for “augmenting, enhancing and supplementing our current operations”. 

Runway is working on other licensing arrangements so that individual creators can also build and train their own proprietary models, giving them what co-founder and CEO Cristóbal Valenzuela says will be the “best and most powerful tools”. The Lionsgate deal endorses that move and signals a potential strategic shift away from deployments powered by Runway’s large model, which was allegedly trained on a dataset built by scraping billions of images and artworks without creator consent or compensation. That allegation is at the heart of a class action lawsuit brought by a group of artists against Runway and the image generators Stability AI, Midjourney and DeviantArt. Developing big B2B relationships futureproofs Runway’s business in the event that AI companies are forced to remove data allegedly scraped without consent, or to pay for it – a financial burden that’s unlikely to be viable even for start-ups worth hundreds of millions (if not several billions) and hi-techs with stratospheric market caps.

Generative AI developers have much to gain by pursuing a more ethical path, building trusted relationships with media groups that are symbiotic rather than parasitic, and devising solutions underpinned by tech that’s as transparent as possible rather than being hidden in a black box. If they’re able to do this and demonstrate their products are ethically sound and commercially safe, then the TV and film sector has an opportunity to do something else: welcome a new generation of visual storytellers who are already using generative tools to create new content. They’ll bring new skills, a fresh perspective and youthful creative flair to an industry that knows it must adapt in the era of generative AI.

Graham charts the global impacts of generative AI on human-made media at grahamlovelace.substack.com