Report on deepfakes: what the Copyright Office found and what comes next in AI regulation

The Copyright Office released part one of its Report on Copyright and Artificial Intelligence, specifically addressing the topic of digital replicas, or “deepfakes” — i.e., AI-generated video, image, or audio recordings that realistically but falsely depict an individual. The report is the result of a broad initiative to explore the intersection of copyright and AI, informed by a series of listening sessions and meetings with stakeholders, as well as more than 10,000 public comments from authors, artists, publishers, lawyers, academics, industry groups, and more.

The report’s conclusions are stark: It finds that existing laws, in copyright and other intellectual property areas, are vastly insufficient to redress the harm posed by unauthorized digital replicas, which have the potential to threaten not only those in entertainment and politics, but private individuals, too.

Digital replicas are not new; the technology to produce fake images or recordings has been around for years. But digital replicas of the generative AI era are nothing like their predecessors. They are more sophisticated, leveraging generative AI’s powerful technology to create uncannily convincing material.

As the report notes, AI-generated replicas can have positive applications; they can be “accessibility tools for people with disabilities, … support creative work, or allow individuals to license, and be compensated for, the use of their voice, image, and likeness.” Deepfakes, however, can offer “a potent means to perpetrate fraudulent activities with alarming ease and sophistication,” the report finds. Moreover, because AI tools are broadly accessible, the potential for wrongdoing increases.

The examples cited by the Copyright Office are sobering. In 2023, researchers concluded that sexually explicit videos — overwhelmingly depicting women — make up 98% of all deepfake videos online. Scams involving the use of deepfakes have featured fraudsters luring individuals into bogus financial transactions or into buying products that they falsely claimed were endorsed by celebrities. Most alarmingly, the report warns that digital replicas pose a danger to our political system and news reporting “by making disinformation impossible to discern.”

With these emerging problems in mind, the report surveyed existing laws, at both the federal and state level, to determine whether the necessary tools already exist or whether new legislation is needed.

The report found existing federal laws largely inapplicable. For example, it is black-letter law that copyright does not “protect an individual’s identity in itself, even when incorporated into a work of authorship.” Thus, while it might be a copyright violation to reproduce a copyrighted image or song that contains the copyright owner’s likeness or voice, merely replicating someone’s image or voice in a deepfake would not implicate copyright protections.