Protecting Public Figures and Artists’ Likeness in the Age of AI

Lauren Hendrickson
January 7, 2026

Key Takeaways:

  • AI has made it easy to copy faces, voices, and styles, increasing the risk of impersonation and unauthorized use for public figures and artists. Protecting digital likeness helps maintain control over identity and limits long-term damage to reputation and trust.
  • As AI-generated media spreads, it becomes harder to tell what is real and what is fabricated. This creates new challenges for creative ownership and audience confidence, making safeguards essential to protect the value of personal image and voice.
  • Licensing digital likeness allows creators to control how their identity is used, receive compensation for commercial use, and protect both reputation and revenue.

Artificial intelligence is changing how identities are copied, shared, and monetized. In early 2025 alone, celebrities were targeted 47 times by AI-generated impersonations, an 81 percent increase compared to all of 2024. This rise highlights how unregulated likeness use is affecting entertainment and weakening trust between public figures and their audiences.

These incidents include voice clones used in fake advertisements, unauthorized AI videos, and synthetic appearances that spread across platforms faster than they can be removed. Today, convincing replicas can be produced using only a small set of images and commonly available AI tools.

In a previous article, How AI Is Affecting the Music Industry, we examined how AI is reshaping music creation. Similar patterns are now appearing across other parts of the creative world. From realistic avatars to convincing deepfake videos, AI systems can now replicate recognizable individuals at scale.

Protecting likeness has become essential for public figures and artists. It helps preserve reputation, maintain creative control, and limit the harm caused by unauthorized use. This article examines why proactive protection matters, how misuse affects public figures and artists, and what is needed to safeguard identity and creative work in an AI-driven media environment.

What Are Public Figures’ and Artists’ Likenesses?

In the past, a person’s likeness was commonly associated with their face, name, or signature. Today, it includes a broader set of identifiable traits. Visual appearance, voice, and other recognizable characteristics can be captured, analyzed, and reproduced using AI systems trained on publicly available material.

For public figures and artists, this expanded definition matters because likeness is no longer limited to static images or recordings. A short clip from an interview, combined with a small set of photos, can be enough to generate a video that places a person in a new context. These replicas may appear in advertisements, promotional content, or social media posts that imply participation or endorsement.

In this context, likeness refers to the combination of visual, auditory, and expressive traits that audiences associate with a specific individual. It reflects how a person is recognized and identified across media, regardless of whether that representation is accurate or authorized.

The Cost of AI Misuse for Public Figures and Artists

As AI-generated replicas become easier to produce, misuse has shifted from occasional viral incidents to a recurring issue across media and entertainment. The market for AI-generated visual content continues to grow, with the AI image generator sector projected to reach $1.08 billion by 2030, up from about $349.6 million in 2023, according to Grand View Research. As access expands, so does the likelihood that likeness will be reused without approval.

Below are some of the most common ways public figures’ and artists’ likenesses are misused today:

1. Fake Endorsements and Eroded Credibility

Unauthorized advertisements featuring public figures have become increasingly common. Tom Hanks publicly warned followers after an AI-generated ad promoted a dental plan using a digitally altered version of him, stating that he had no involvement. Scarlett Johansson has faced similar misuse and has spoken publicly about the need for stronger protections against unauthorized AI-generated likeness.

These campaigns mislead audiences and damage personal credibility. Once false endorsements circulate, corrections often fail to reach the same audience that saw the original content, leaving lasting confusion behind.

2. AI-Generated Performances Undermining Artists’ Revenue

Synthetic performances have also created new risks for artists. In 2023, AI-generated tracks imitating Drake and The Weeknd spread rapidly online and gained millions of streams before being taken down. Even short periods of exposure can divert attention and income away from legitimate work, while weakening the distinctiveness of an artist’s sound and public image.

3. Misinformation That Spreads Faster Than Verification

AI-generated videos and audio clips often travel across social platforms faster than official responses can keep up. Once reposted, these materials are difficult to contain. For public figures, this speed can distort public perception and create long-term reputational harm, even after content is proven false.

4. Loss of Creative and Personal Integrity

Beyond revenue and reputation, AI misuse affects how creative work is understood and valued. When a person’s likeness can be reproduced without approval, the boundaries around ownership and representation become harder to enforce. Repeated misuse weakens the connection between creators and their audiences and blurs what content can be trusted as legitimate.

Protecting Public Figures and Artists from AI Exploitation

Preventing AI misuse requires measures that operate before unauthorized content circulates widely. Once a deepfake or voice clone spreads, the damage is often difficult to reverse. Effective protection depends on systems that reduce the risk of unauthorized use and give public figures greater control over how their likeness is handled.

Several approaches play a role in limiting misuse:

1. Preventing Misuse at the Source

At a foundational level, protection starts with mechanisms that establish consent and provide context around how media was created. Verification tools can help confirm the source of content, while provenance and tracking systems can record when and how media was generated. Consent-based frameworks add another layer by ensuring creators approve how their likeness is used before content is shared.

Together, these measures help public figures and artists retain control over their identity and reduce the likelihood of misuse at scale.
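
To make this concrete, the sketch below pairs a media fingerprint with a simple consent entry. It is a minimal illustration under assumptions, not a real standard: the ConsentRecord structure, its fields, and the fingerprint helper are all hypothetical.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical provenance entry tying a media fingerprint to an approval."""
    media_sha256: str            # fingerprint of the exact file that was approved
    subject: str                 # person whose likeness appears in the media
    approved_uses: list[str]     # e.g. ["promotional_video", "social_post"]
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def fingerprint(media_bytes: bytes) -> str:
    """Hash the media so later copies can be matched back to this approval."""
    return hashlib.sha256(media_bytes).hexdigest()

# Example: record that a subject approved one specific rendering.
demo_clip = b"raw bytes of an approved clip"      # stand-in for real media
record = ConsentRecord(
    media_sha256=fingerprint(demo_clip),
    subject="Jane Artist",
    approved_uses=["promotional_video"],
)
print(record.media_sha256[:16], record.approved_uses)
```

A registry of records like this gives downstream systems something concrete to check against: if a file's hash has no matching approval, that absence is itself a useful signal.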

2. Platform Responsibilities in Likeness Protection

Platforms and content hosts are often the first place AI-generated likenesses appear. Requiring proof of consent before AI-generated media is uploaded can help prevent unauthorized content at the source. Clear labeling of synthetic or altered material also gives audiences better context and reduces confusion about what content is legitimate.

Some platforms are beginning to test consent-driven approaches. YouTube’s Dream Track is one example, allowing creators to use participating artists’ voices within defined limits, with approval and compensation built into the process. While initiatives like this show what is possible, consistent enforcement across platforms remains uneven.
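
As a rough sketch of what such an upload gate could look like, the function below accepts AI-generated media only when a consent record exists for its fingerprint, and attaches a synthetic-media label when it does. The registry dict and check_upload function are illustrative stand-ins, not any platform's real moderation pipeline.

```python
def check_upload(media_hash: str, is_ai_generated: bool,
                 consent_registry: dict[str, str]) -> dict:
    """Gate synthetic uploads: require a consent entry, then label the media."""
    if not is_ai_generated:
        return {"accepted": True, "label": None}
    subject = consent_registry.get(media_hash)
    if subject is None:
        # No documented approval: block before the content circulates.
        return {"accepted": False, "reason": "no consent record on file"}
    # Documented approval: accept, but surface a clear synthetic-media label.
    return {"accepted": True,
            "label": f"AI-generated with consent of {subject}"}

# Example: one approved upload, one unapproved upload.
registry = {"3f1a9c...": "Jane Artist"}    # media hash -> approving subject
print(check_upload("3f1a9c...", True, registry))   # accepted, labeled
print(check_upload("9b2d44...", True, registry))   # rejected
```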

3. Early Detection and Reporting Systems

Even with preventive controls in place, new AI generated likenesses continue to surface. Detection systems that rely on watermarking, hashing, or provenance signals can help identify suspicious content earlier in its lifecycle. Flagging potential violations before widespread distribution reduces reliance on repeated takedown requests and limits long term harm.
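
One way such systems find reposted copies is perceptual hashing, which produces similar fingerprints for visually similar frames even after resizing or re-encoding. The sketch below uses a simple difference hash (dHash) with Pillow; the 10-bit match threshold is illustrative, and production systems typically combine several signals rather than relying on one hash.

```python
from PIL import Image  # pip install pillow

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: compares adjacent pixel brightness in a downscaled
    grayscale frame, yielding a 64-bit fingerprint that survives re-encoding."""
    gray = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(gray.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Example: a reference frame vs. a resized repost of the same frame.
reference = Image.new("RGB", (256, 256), "gray")
candidate = reference.resize((200, 200))       # simulates a reposted copy
distance = hamming(dhash(reference), dhash(candidate))
print("likely match" if distance <= 10 else "no match", distance)
```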

4. The Role of Regulation

Regulation also shapes how these safeguards are adopted. Laws such as the ELVIS Act are beginning to establish clearer expectations around consent and accountability for AI-generated voice and likeness. These efforts help influence platform behavior and provide a legal backdrop for prevention, even as standards continue to develop.

Why Licensing Likeness Is Essential for Control and Compensation

Once protections are in place, public figures and artists need a clear way to define how their likeness can be used when permission is granted. Licensing provides that structure by setting boundaries around approved use and establishing accountability from the outset.

By clarifying expectations early, licensing helps ensure the following (a brief sketch of how such terms might be recorded in software appears after the list):

  • Control and representation: Licensing gives public figures the ability to approve or deny specific uses of their likeness before content is created or shared. This helps ensure identity is represented accurately and reduces the risk of misleading or unauthorized appearances.
  • Clear consent and accountability: Licensing turns consent into a defined agreement that sets expectations for how likeness can be used. When terms are established in advance, it becomes easier to address responsibility if boundaries are crossed.
  • Fair compensation for approved use: When likeness is used commercially, licensing ensures creators are compensated under agreed conditions. This helps protect the long-term value of personal image and voice while reducing the risk of unapproved exploitation.
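
As an illustration of how these terms might be captured in software, the sketch below encodes scope, disclosure, compensation, and duration in a single record. The LikenessLicense class and its fields are hypothetical, not drawn from any real licensing platform.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class LikenessLicense:
    """Hypothetical license record mirroring the three points above."""
    licensor: str                    # the person whose likeness is licensed
    licensee: str                    # the party granted permission
    permitted_uses: tuple[str, ...]  # control: only these uses are approved
    requires_disclosure: bool        # accountability: synthetic use is labeled
    royalty_rate: float              # compensation: revenue share per use
    expires: date                    # consent is time-bound, not open-ended

    def permits(self, use: str, on: date) -> bool:
        """A use is allowed only if it is in scope and the term is active."""
        return use in self.permitted_uses and on <= self.expires

# Example: a narrow voice-model license with a defined end date.
license_record = LikenessLicense(
    licensor="Jane Artist",
    licensee="Example Studio",
    permitted_uses=("voice_model_track",),
    requires_disclosure=True,
    royalty_rate=0.5,
    expires=date(2027, 1, 1),
)
print(license_record.permits("voice_model_track", on=date(2026, 6, 1)))  # True
print(license_record.permits("advertisement", on=date(2026, 6, 1)))      # False
```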

How Public Figures Are Collaborating With AI on Their Own Terms

Licensing does more than set boundaries. When consent and accountability are clearly defined, it also creates space for approved collaboration. Some public figures are choosing to work with AI tools under specific conditions, using licensing to maintain control over how their likeness is accessed and applied.

These examples show how artists and public figures are engaging with AI in ways that preserve consent, oversight, and clarity.

1. Music: Setting Clear Standards for Voice Use

In music, AI voice tools raise direct questions about ownership and reuse. Musician Grimes partnered with TuneCore to launch GrimesAI, a platform that allows creators to generate music using her voice with explicit permission. Use of the voice is governed by defined terms, and approved tracks share royalties with the artist.

This approach matters because it replaces informal scraping with a structured agreement. Instead of voice models being trained or deployed without approval, access is limited to specific uses under clear conditions. For musicians, this model demonstrates how voice rights can be managed without relying on takedowns or after-the-fact enforcement.

2. Voice: Preserving Legacy Through Licensed Agreements

Voice presents a unique challenge because it is closely tied to personal identity and legacy. Actor James Earl Jones, known for voicing characters such as Darth Vader and Mufasa, licensed his voice for use in AI supported projects under agreed terms. The arrangement allows his voice to continue appearing in future productions while remaining aligned with his wishes.

This type of agreement shows how voice licensing can address long-term concerns. It provides a way to manage how a voice is used over time, including after an artist steps away from active performance, while maintaining consent and oversight.

3. Film: Using Digital Doubles with Actor Participation

In film and television, digital likeness has been used for years, but AI tools increase the speed and realism of replication. In Tron: Legacy, Jeff Bridges approved the use of a digitally altered version of himself for flashback scenes. The process involved his participation and consent, combining performance capture with visual effects.

This example highlights an important distinction. Digital doubles can be used responsibly when actors are involved in the process and approve how their likeness appears on screen. Without that involvement, similar techniques can easily cross into unauthorized use. Clear agreements and direct participation help set boundaries around acceptable use.

The Path Forward in Protecting Creativity in the AI Era

Protecting creativity in the AI era depends on systems that respect the people behind the work while providing clear signals about how content is created and used. Tools such as watermarking and labeling can help add context, but their value depends on consistent standards and enforcement.

Uncertainty around how AI systems are trained remains part of the challenge. Ongoing debate over fair use and permission affects not only creative works but also personal likeness, especially as voices and identities become easier to reproduce at scale.

Ultimately, protecting digital likeness comes down to accountability and design choices. When platforms and developers prioritize transparency and consent, individuals retain greater control over how they are represented, how their work is used, and how their legacy is preserved.
