by Ashley Bowler

More than a year after SAG-AFTRA members ended their 118-day strike and ratified a historic contract, the deal’s legacy goes beyond higher wages—it reshaped how Hollywood views a performer’s identity in the age of artificial intelligence (AI). The 2023 agreement marked a turning point in recognizing a performer’s voice and likeness as digital assets requiring consent and control. Since then, its implications have extended far beyond the contract itself—shaping industry practices, prompting new legislation, and intensifying legal debates over AI, copyright, and consent.

At the heart of the contract is a major shift: studios can no longer create or reuse a digital replica of a performer—whether that means a face swap, voice clone, or full-body double—without their explicit, informed consent. Performers must be informed and compensated for each specific use of their digital replica. If a replica is created for one project and later reused in a different project, the performer must give new consent and receive additional compensation. 

This shift addresses a major gap in U.S. law. Copyright protects original works of authorship like scripts, films, and music, while trademark protects names, symbols, or designs used in commerce to identify and distinguish a brand’s goods or services. But there is no federal law that gives someone ownership of their face or voice. SAG-AFTRA filled that gap—at least for union members—by writing these protections into a labor contract.

These protections came at a time when AI-generated content was becoming more widespread and controversial. Scarlett Johansson publicly criticized OpenAI in May 2024 after it released a voice assistant that sounded strikingly like her character in the film Her (2013). Johansson had twice declined to license her voice to the company, yet the voice assistant closely mirrored her tone and cadence. Her experience highlights how even well-known actors face challenges when AI-generated likenesses are used without approval.

Other examples underscore the legal gray area. In 2024, the estate of comedian George Carlin sued over an AI-generated performance that mimicked his voice and comedic style, alleging violations of his right of publicity and copyright. While California law offers posthumous publicity protections, there is no federal right of publicity. As the estate’s attorney noted, this leaves a “gaping hole” in the law and no guarantee that platforms like YouTube must comply with takedown requests. 

On the other hand, in the 2021 documentary Roadrunner, director Morgan Neville used AI to recreate Anthony Bourdain’s voice for a few lines of narration. While Bourdain had written the words in personal correspondence, he had never actually spoken them aloud. Neville stated that he used the technology “in a few places where I thought it was important to make Tony’s words come alive,” and did so with the blessing of Bourdain’s estate and literary agent. Still, the use sparked backlash from some viewers and critics, who felt that digitally generating Bourdain’s voice—without public disclosure—crossed an ethical line. The incident highlighted the emotional and moral implications of synthetic performances, even when the content is drawn from an individual’s own words.

Some productions have responded by promoting a “no AI” stance. The 2025 thriller Sinners, starring Michael B. Jordan, used camera tricks—not machine learning—to create his dual-role performance. Likewise, Heretic (2024), a horror film starring Hugh Grant, includes a clear end-credit statement: “No generative AI was used in the making of this film.” Directors Scott Beck and Bryan Woods explained that they intentionally added the disclaimer to spark an urgent conversation about the ethical use of AI in filmmaking.

Meanwhile, some productions are embracing AI, though not without scrutiny. In The Irishman (2019), Industrial Light & Magic (ILM) used AI-powered software to digitally de-age Robert De Niro, Al Pacino, and Joe Pesci so they could portray younger versions of themselves. More recently, The Brutalist—a film acclaimed during awards season—used Respeecher’s AI voice technology to subtly refine Adrien Brody’s and Felicity Jones’s Hungarian pronunciation. The film went on to win a Golden Globe and an Oscar for Brody. Director Brady Corbet and his team emphasized that the actors’ performances remained intact, and that the AI was applied only to polish specific vowel sounds.

As AI adoption grows not just in film production but across the industry, talent agencies are stepping in to shape how these tools affect their clients. In 2023, Creative Artists Agency (CAA) partnered with technology vendor Veritone to launch the CAA Vault. The Vault scans and securely stores digital replicas of a client’s face, body, and voice—allowing talent to maintain ownership, control, and informed consent over how their likeness is used. CAA described the initiative as an “ethics-led and talent-friendly” application of AI, developed to maintain the security of artists’ assets while working to ensure the technology is responsibly integrated into opportunities across the entertainment landscape.

But these private-sector solutions only go so far. Broader legal protections are now starting to take shape. The proposed NO FAKES Act would create a federal right to sue when someone’s digital likeness is used without permission—extending up to 70 years after death. The TAKE IT DOWN Act, signed into law in May 2025, gives people the ability to request the removal of non-consensual synthetic images from online platforms within 48 hours. States are stepping up too. Tennessee passed the Ensuring Likeness Voice and Image Security Act (ELVIS Act) in 2024 to outlaw unauthorized commercial voice clones, and California has introduced a package of bills in 2025 aimed at regulating AI-generated likenesses, protecting against exploitative contracts, and increasing transparency in synthetic media use.

Even as unions, studios, and lawmakers respond to these changes, the legal framework around AI and creative content is still catching up. A key unresolved question is how copyright law applies to AI. The Copyright Act allows for “fair use” of protected works in contexts like criticism, commentary, news reporting, teaching, and research. Tech companies argue that training AI models on existing creative works qualifies as transformative fair use—a doctrine that permits reuse when the new work adds new expression, meaning, or purpose. For example, parodies or reviews are often protected because they alter the original’s intent. Similarly, AI developers argue that their systems analyze patterns to generate original content—not replicate existing works.

Still, actors and copyright holders push back. Performers say their likeness and voice—while not protected by copyright—shouldn’t be used without consent. The SAG-AFTRA contract helps fill this gap by requiring clear consent and compensation for any digital replicas of covered performers. Meanwhile, copyright holders like studios and estates argue that AI companies should not be allowed to train models on past performances or copyrighted material without authorization. And what about background actors, influencers, or athletes who aren’t covered by SAG-AFTRA? Without a union contract, there’s no guaranteed protection for their digital likenesses.

Hollywood is clearly in a period of change, as seen in how different productions are either embracing or rejecting AI technologies. Some films are using digital tools for de-aging and voice modulation, while others are making a point to distance themselves from synthetic media altogether. This evolving landscape signals a broader shift in how the industry views a performer’s image, voice, and likeness—not just as elements of a role, but as assets with long-term value. An actor’s face, voice, and even mannerisms can now be digitally replicated, reused, and monetized—often without a performer’s direct involvement. The SAG-AFTRA deal codified one approach to managing these changes, but the legal, ethical, and creative questions surrounding AI in entertainment remain far from settled.

U.S. Law Group routinely advises clients on issues related to copyright infringement and fair use. Our attorneys are at the forefront of cutting-edge areas of the law as applied to the emergence of artificial intelligence. Please visit our site to learn more about our Intellectual Property practice areas. Content in this article should not be considered legal advice.