Artificial Intelligence Regulations Impacting the Hollywood Industry in 2025
In the rapidly evolving world of artificial intelligence (AI), the use of AI to create digital replicas of people has become a contentious issue, raising unsettled legal questions at both the federal and state levels.
Federal Legislation
The principal federal legislation addressing this issue is the Take It Down Act, signed into law by President Trump in May 2025. The law criminalizes the nonconsensual creation and publication of deepfake images, particularly intimate visual depictions, and requires online platforms to remove such content upon notification by the victim. The Federal Trade Commission (FTC) is tasked with enforcement, and a platform's failure to comply may be treated as an unfair or deceptive practice.
The broader No FAKES Act, which would extend protection to all AI-created look-alikes made without consent, remains stalled in Congress and appears unlikely to pass anytime soon.
State Laws
In the absence of federal legislation, states have taken the lead in regulating AI-generated likenesses. Two notable examples are California and New York.
California
California, long known for its strong privacy and publicity rights protections, has emphasized consumer protection and privacy safeguards. The state criminalizes the nonconsensual creation or distribution of intimate deepfakes and provides civil remedies for victims. More recent legislation addresses performers' digital replicas directly: AB 2602 renders unenforceable contract provisions permitting the use of a performer's digital replica without informed consent and professional representation, and AB 1836 requires the consent of a deceased performer's estate before a digital replica may be used commercially.
New York
New York has enacted a digital replica law that requires written consent, clear contracts, and compensation for the use of AI-created likenesses. This means that before someone’s likeness can be digitally replicated or used via AI, explicit prior permission must be obtained under contract terms that typically include payment or other negotiated conditions.
Key Differences
The key differences between the New York and California approaches lie in consent requirements, compensation, and scope. New York requires written consent and a contract for any digital replica, while California emphasizes consent chiefly in intimate and privacy contexts and criminalizes nonconsensual deepfakes. New York explicitly requires compensation for the use of an AI likeness; California does not necessarily require compensation, focusing instead on rights violations and harm prevention. Finally, New York's law covers AI-created digital replicas broadly, including contractual terms, while California's statutes center on sexual and intimate image deepfakes, with newer provisions extending to nonconsensual use more generally.
Challenges Ahead
These developments reflect a fragmented legal landscape with varied protections and enforcement mechanisms, creating challenges for individuals and platforms dealing with AI-generated likenesses. The case for uniform federal legislation on the right of publicity, akin to the federal regimes that exist for copyright and trademark, is increasingly apparent.
In addition, the use of copyrighted works in creating AI-generated content has exposed a significant statutory gap, with the outcome of pending cases likely to turn on the fair use defense. Defamation claims have also been brought against AI companies over their tendency to "hallucinate" and report untrue statements about real people.
As AI continues to evolve, it is crucial that legislation keeps pace to ensure protection for individuals' rights and privacy while promoting innovation in technology.