Synthetic Hollywood: a future with no actors?

Alli Magidsohn
May 18, 2023
How romantic can studios afford to be about the virtue of analogue artistry?

There's no doubt that AI will transform Hollywood. But how will an industry built on the simulation of life grapple with its responsibility to 'the real' — and will it even bother?

Picture a Hollywood fully powered by AI. No lights, no camera, no action. No filming on location, on back lots, or even in front of green screens. No set builders or lighting technicians. No hair stylists or makeup artists. Craft services, no longer needed. Security, gone. Even actors themselves rendered obsolete, replaced by animated generative images trained on a vast range of human expression. A vision of a distant future? Perhaps. But some commentators are predicting that almost all of the world’s video content will be AI-generated as soon as 2025.

Today, Hollywood’s leading AI companies such as Flawless, Metaphysic, Deep Voodoo, and Vanity AI are already using machine learning to streamline production across a variety of applications. AI now allows film actors to appear as if they’re native speakers of any language, in their very own voice, threatening the global audio dubbing industry and the thousands of voice actors who depend on it. AI can also age or de-age actors far more cost-effectively than CGI, virtual effects, or hours in a prosthetic artist’s chair.

It can also allow for artificial “reshoots”, generating new on-screen dialogue when the script changes without needing to reassemble cast, crew, and infrastructure back on set. For now, these emerging technologies are mainly used to augment human performance, but given the exponential rate of innovation in the space, it’s easy to imagine a future in which they replace human talent altogether.

In a recent CES panel titled ‘AI Goes to Hollywood’, the moderator asked panellists whether any of them had seen a synthetic actor that could truly pass, side by side, for a real actor. Scott Mann, director of the film Heist and CEO of the AI company Flawless, was quick to respond: “I certainly have.”

The benefits of such a performer are clear: AI talent can deliver iterations of a single line of dialogue, expressed across hundreds of emotional valences, instantaneously. It can be seamlessly reskinned if its image doesn’t test well in focus groups. And, best of all, extraordinary efficiency: AI can do 24-hour shifts, back-to-back, without fatigue, complaint, or labour union violation. 

“The people who are paying you for your art would rather not pay you,” warned Keanu Reeves in the February issue of WIRED. “They’re actively seeking a way around you, because artists are tricky. Humans are messy.”

But isn’t that part of our charm?

Not if you ask adidas, whose value took a $4b nosedive when Kanye West’s antisemitic tirade sent his namesake brand, Yeezy, off the rails. Or ask MRC, the production company that estimated $31m in damages when sexual misconduct allegations against Kevin Spacey forced House of Cards to kill off one of Netflix’s most-watched main characters. When you work with humans, you get human traits. Moods, insecurities, urges — and public relations risks. But, apparently, you also get a certain compelling something that’s fully unique to our specific variety of biped.

“As much as you can create something entirely synthetic, I think it’s the humanity piece that is always going to be under there,” said Mann on the panel at CES. “That’s what people connect to.”

Though it’s surprising to hear the CEO of a synthetic media company advocate for a uniquely human je ne sais quoi, it gets to the real question: at some point, can that ‘humanity piece’ be unhinged from the actual actor?

What even is it that moves us in a masterful performance? Is it simply the vibrating aliveness of a human being exploring the fullness of their corporeal instrument? Or can that same subtle choreography of gestures and intonations be analysed into replicable data models? With a culture that demands a constantly-updated pipeline of fresh content, how romantic can studios afford to be about the virtue of analogue artistry? It is, after all, called show business.     

Conscious of these pressures, and long before the current proliferation of deepfakes, Keanu Reeves inserted a clause into his contract prohibiting his performances from being digitally manipulated without his consent.

“I don’t mind if someone takes a blink out during an edit,” he told WIRED. “But… I had a performance changed. They added a tear to my face, and I was just like, ‘Huh?!’ It was like, I don’t even have to be here.”

And if you visit the unauthorised deepfake account that spoofs him on TikTok, you’ll soon realise that he’s right. He doesn’t.

But as much as AI is a threat, it’s also an opportunity. In 2021, Bruce Willis became one of the first actors to license the rights to his likeness, appearing in an ad for the Russian telecoms operator, MegaFon, via the deepfake company, Deepcake. By training a neural network on his countless blockbusters, Deepcake was able to graft a convincingly Bruce Willis-like head onto a local actor’s body, giving Willis an unexpected payday, months after formally retiring from the limelight.

On set in the MegaFon advert: Willis appeared in the ad, but was never on set himself.

Synthetic media can extend an actor’s career well beyond their prime and allow them to book multiple projects simultaneously, but the monetisation opportunities of these digital doubles don’t stop there. They are, quite literally, endless.

‘Digital resurrection’ offers celebrity estates an entirely new revenue stream, allowing them to invoke the likeness of our favourite stars, long after they’ve left this mortal coil. Though it may seem unnatural to conjure an actor’s disembodied essence into a new role, doing so gets to the ouroboros-like heart of the matter: Is there a difference between a living actor performing their craft, and a re-animated AI version of that same actor, trained on their very own behavioural nuances? 

In a recent statement proposing regulation around AI-generated written content, the Writers Guild of America wrote, “It is important to note that AI software does not create anything. It generates a regurgitation of what it’s fed.”

“Plagiarism,” writes the Guild, “is a feature of the AI process.” But is it really so unlike our own creative process? Doesn’t each human artist, no matter their medium, cull inspiration directly from the vast canon that came before them? Isn’t there, actually, nothing new under the sun?

In his newsletter Understanding Understanding, cultural commentator Michael Ventura zooms out to a longer time horizon. He suggests that, ultimately, we may find parity between the way we look at AI and how we view our own programmed intelligence. “After all, none of us was born with the knowledge we consider innate to us today,” he writes. “It was human-influenced and trained into the meaty machine inside our skulls.”

Speaking on the same CES panel, Duncan Crabtree-Ireland, National Executive Director of the actors’ union SAG-AFTRA, disagreed. “I believe there’s a point of human creation where humans can create something that is purely new and unique,” he said. “AI can’t really quite get there. It can take thousands of ideas, bring them together, and then turn them into something that is seemingly new, but is actually an iteration of things that have been its inputs.”

As the technology continues to mature, though, perspectives may shift. The industry players to keep an eye on will be the talent agencies. In late 2022, one of Hollywood’s top power brokers, Creative Artists Agency (CAA), installed Joanna Popper as its first Chief Metaverse Officer. Tasked with finding ways for the company to maximise the opportunities of AI and web3, she has since brokered deals for Anthony Hopkins’ first NFT drop and a fully-branded Roblox world for electronic music duo, The Chainsmokers.

At CES, Popper positioned the agency as the protector — in both the analogue and the digital worlds — of an actor’s name, image, and likeness. She also announced the company’s $20m joint investment in Deep Voodoo, stressing how talent-friendly and ethics-focused the synthetic media platform is.

But one can’t help wondering whether CAA is playing both sides: equipping its clients with cutting-edge opportunities to monetise and scale their fame now, while also investing in the tech that may one day replace them. In what may be an indication of things to come, CAA recently added its first fully synthetic client to the company roster, signing triple-threat digital influencer, singer, and model, Lil Miquela.

So, if a decade from now, AI does evolve from enhancing performances to fully generating them, what will that constellation of fully synthetic stars look like? Will the model hark back to the days of Old Hollywood, with studio-crafted personas engineered for broad appeal but distinct from the various roles they play? Or will this New Hollywood offer up an entirely new paradigm? One where a brand-new synthetic actor is generated for each singular role it plays, fully indistinct from the character it inhabits? Just like us.

Alli is the Principal of Meaning Maker, a brand communications consultancy that works with early-stage start-ups, enterprise tech companies, and creative design agencies. Since serving her first web3 client back in 2016, she has been awed by the power and promise of these emerging technologies to improve life for all.
