Preventing Consumer Deception in the Age of AI: Tackling ‘Synthetic Performers’ in Digital Advertising
The rapid rise of synthetic media, including generative artificial intelligence (AI) and deepfakes, has revolutionized content creation. Generative AI has empowered advertisers to produce visually convincing images, voices, and even virtual characters inexpensively and at scale. However, this technological leap has also blurred the line between what is real and what is fabricated, creating new pathways for consumer deception. New York has led the charge in addressing this regulatory gap, enacting a law that requires conspicuous disclosure when advertisements contain “synthetic performers”: AI-generated individuals who appear human but are not real persons. This law represents a first step toward protecting consumers from AI-driven deception in commercial media. By clarifying the statutory language of the New York law and advocating for a federal disclosure standard, we can take the next step toward safeguarding consumer trust and transparency in the digital age.
Generative AI encompasses a family of machine learning models, including large language models and deep generative networks, capable of producing text, images, and video that mimic human creativity. In advertising, these technologies are becoming widespread: AI can automate ad copy, tailor visuals to micro-audiences, and even create “synthetic influencers” who never existed. Research has shown that AI can achieve parity with human work in personalization and outperform humans in persuasion, potentially enhancing advertisers’ competitive edge. [1] Despite these benefits, AI-generated content carries a risk of deception because it can produce fabricated representations indistinguishable from reality to lay audiences.
Historically, the regulation of false advertising in the United States has focused on preventing materially misleading commercial practices that would deceive reasonable consumers. The Federal Trade Commission Act (“FTC Act”), 15 U.S.C. § 45 (2024), prohibits “unfair or deceptive acts or practices in or affecting commerce,” which the Federal Trade Commission has interpreted to include deceptive advertising that misleads consumers about the nature, quality, or characteristics of products and services. [2] The FTC’s Deception Policy Statement further clarifies that “any qualifying information necessary to prevent deception must be disclosed prominently and unambiguously” to counteract misleading impressions created by an advertisement. [3] This enforcement framework has long served as the backbone of U.S. advertising law, shaping how advertisers substantiate claims and avoid deception. However, because the FTC Act focuses on material factual misrepresentation rather than on the authenticity of visual or audiovisual portrayals themselves, it does not explicitly require advertisers to disclose that an image or persona is generated by artificial intelligence. That gap became increasingly apparent as generative technologies advanced and deepfake-style media became more prevalent.
On December 11, 2025, New York Governor Kathy Hochul signed one of the first AI-specific advertising transparency statutes in the United States, amending General Business Law § 396-B to require conspicuous disclosure when AI-generated “synthetic performers,” defined as AI-generated figures that appear human to a reasonable viewer, appear in commercial advertisements and the advertiser has actual knowledge of their use. [4] The law will take effect on June 9, 2026, and penalizes non-compliance with civil fines of $1,000 for a first violation and $5,000 for each subsequent violation. [5] Expressive works, such as movies, television, or video games, along with audio-only ads and AI used exclusively for language translation, are largely exempt. [6] The law applies to any person, firm, corporation, or association creating or disseminating commercial advertisements containing synthetic performers in New York. It covers advertising across mediums, from print to online to digital displays, so long as the content appears in the state and the advertiser has knowledge of the synthetic performer’s use. The statute also establishes that media publishers are not automatically liable unless they have actual knowledge and fail to act within a reasonable timeframe.
The law’s central premise is that compelled disclosure can reduce consumer deception and enhance informed choice. By requiring advertisers to label AI-generated performers explicitly, the statute parallels other regulatory disclosures, like nutritional labeling in consumer products or sponsored-content disclosures in influencer marketing. However, despite its promise, the statute’s language presents ambiguities. One question is what constitutes “actual knowledge” of a synthetic performer, as advertisers increasingly outsource creative production to third parties and AI tools, blurring when actionable knowledge arises. Another is how to interpret “conspicuous disclosure,” which has no fixed template, leaving marketers to assess its sufficiency case by case. Additionally, the expressive-work exemption could produce unpredictable results if an ad tangentially relates to a work featuring AI yet has significant commercial goals. Overall, these interpretive uncertainties may lead to over-inclusion, capturing benign media with minimal deception risk, or under-inclusion, excluding subtle AI influences that still mislead consumers.
The implications of this law are immense. For consumers, the introduction of New York’s synthetic performer disclosure requirement promises a shift toward greater transparency and informed choice in an era of increasingly sophisticated AI-generated advertising. AI content can be highly convincing, often making it difficult for viewers to distinguish between real and synthetic representations, which in turn can undermine confidence in commercial media and distort perceptions of authenticity. This law can empower consumers to understand the nature of the content they encounter and to make purchasing decisions with fuller awareness of its origins. For advertisers and brands, the law will prompt reconsideration of creative workflows. Firms may need to audit AI use, adopt compliance protocols, and engage legal counsel to align advertising strategies with statutory obligations. The potential reputational upside of being transparent about AI use, especially in an era of increasing public concern about misinformation and deepfakes, may also influence behavior beyond mere avoidance of penalties. New York’s law is also part of a broader regulatory trend toward transparency mandates for AI content. For example, jurisdictions outside the United States, such as South Korea, have proposed requiring clear labeling of AI-generated advertisements to counter deceptive practices. [7]
To better ensure consumer transparency, one recommendation is to clarify and refine the statutory language of New York’s synthetic performer disclosure law to reduce compliance uncertainty and enhance enforceability. The current statute requires a “conspicuous disclosure,” but without specific guidance on placement, formatting, or standardized terminology, advertisers may struggle to interpret what constitutes sufficient disclosure in practice. This lack of specificity could lead to inconsistent implementation, uneven consumer experiences, and avoidable litigation. By specifying parameters such as minimum font size, timing (e.g., before user interaction), and language clarity, we can ensure disclosures meaningfully inform consumers about AI use. Moreover, clarifying when an advertiser is deemed to have “actual knowledge” of synthetic performers would strengthen compliance incentives and reduce loopholes that might otherwise undermine the statute’s transparency goals. Additionally, providing safe harbors or compliance checklists could help smaller advertisers avoid inadvertent violations while maintaining a consistent baseline of consumer notice.
Beyond these statutory clarifications, pushing for a federal disclosure standard can harmonize AI content transparency rules across jurisdictions. New York’s law is a landmark first step, but a patchwork of state regulations burdens advertisers with complex compliance obligations and increases the risk that consumers in some states receive less protection than others. A federal framework, whether enacted by Congress or implemented through a federal agency like the Federal Trade Commission, could establish uniform disclosure requirements for AI-generated content in advertising. Expanding federal involvement could promote legal clarity, market certainty, and equitable consumer protections nationwide, ensuring that the benefits of generative AI do not come at the expense of trust and fairness in advertising.
New York’s synthetic performer disclosure law represents a groundbreaking effort to adapt consumer protection law to the realities of generative AI advertising. By mandating transparency when AI-generated individuals appear in ads, the statute seeks to ensure that consumers understand what they see and make informed choices in the marketplace. Although challenges remain in interpretation and implementation, the law’s emphasis on disclosure is critical to safeguard trust and transparency in the digital age. As AI continues to evolve and influence content creation, regulatory frameworks like New York’s will play a pivotal role in shaping how society balances innovation with consumer protection.
Edited by Rylee Pachman
Endnotes
[1] Elyas Meguellati, Stefano Civelli, Lei Han, Abraham Bernstein, Shazia Sadiq & Gianluca Demartini, LLM-Generated Ads: From Personalization Parity to Persuasion Superiority, arXiv:2512.03373 (Dec. 3, 2025), online at https://arxiv.org/abs/2512.03373 (visited Feb. 15, 2026).
[2] Federal Trade Commission Act (“FTC Act”), 15 U.S.C. § 45 (2024).
[3] Enforcement Policy Statement on Deceptively Formatted Advertisements, 81 Fed. Reg. 22596 (Apr. 18, 2016), online at https://www.ftc.gov/system/files/documents/public_statements/896923/151222deceptiveenforcement.pdf (visited Feb. 15, 2026).
[4] N.Y. Gen. Bus. Law § 396-B (McKinney 2026), online at https://www.nysenate.gov/legislation/laws/GBS/396-B (visited Feb. 15, 2026).
[5] Ibid.
[6] Ibid.
[7] Kim Tong-Hyung, South Korea to require advertisers to label AI-generated ads, PBS NewsHour (Dec. 10, 2025), online at https://www.pbs.org/newshour/world/south-korea-to-require-advertisers-to-label-ai-generated-ads (visited Feb. 15, 2026).