RPJ Partner Deena Merlen on AI, Deepfakes and the Copyright Office Recommendations
By Deena R. Merlen
In today’s digital age, the question of whether we can trust what we see or hear has become increasingly complex. As generative artificial intelligence (“AI”) technologies advance, distinguishing between reality and AI-generated fakes is more challenging than ever. This controversy touches various aspects of our lives, from news and politics to personal interactions and entertainment, with profound implications for trust and credibility. When audiences can no longer distinguish between genuine and artificial content, skepticism and confusion can erode confidence in all forms of media. The risk is not only that people will see something fake and believe it is real, but also that people will see something real and believe, or suspect, it was faked.
As AI has rocketed into our lives, the law is racing to catch up. In 2023, the United States Copyright Office (the “Copyright Office”) sought input from the public on the intersection of AI and copyright and received over 10,000 responses. Commentators from every U.S. state and 67 countries provided input, including authors, composers, performers, artists, publishers, producers, lawyers, academics, technology companies, libraries, sports leagues, trade groups, public interest organizations and more, representing a wide variety of perspectives. These responses, supplemented by additional research conducted by the Copyright Office and information it received from other government agencies, form the basis for Copyright and Artificial Intelligence: a Report of the Register of Copyrights (the “Copyright and AI Report”), which is being published by the Copyright Office in multiple parts.
On July 31, 2024, the Copyright Office published Part 1 of the Copyright and AI Report (“Part 1”). Part 1 addresses realistic AI-generated replicas of a person’s voice or appearance, commonly referred to as deepfakes. (Subsequent parts of the Copyright and AI Report will address the copyrightability of works created using generative AI, the training of AI models on copyrighted works, licensing considerations, and the allocation of any potential liability.)
The Copyright Office recognizes in Part 1 that while there are great benefits to be realized from AI’s digital replication of a person’s image or voice, this rapidly advancing technology also poses great risks, as discussed in the following excerpt (footnotes omitted):
Digital replicas may have both beneficial and harmful uses. On the positive side, they can serve as accessibility tools for people with disabilities, enable “performances” by deceased or non-touring artists, support creative work, or allow individuals to license, and be compensated for, the use of their voice, image, and likeness. In one noted example, musician Randy Travis, who has limited speech function since suffering a stroke, was able to use generative AI to release his first song in over a decade.
At the same time, a broad range of actual or potential harms arising from unauthorized digital replicas has emerged. Across the creative sector, the surge of voice clones and image generators has stoked fears that performers and other artists will lose work or income. There have already been film projects that use digital replica extras in lieu of background actors, and situations where voice actors have been replaced by AI replicas. . . . Beyond the creative sector, the harms from unauthorized digital replicas largely fall into three categories. First, there have been many reports of generative AI systems being used to produce sexually explicit deepfake imagery. In 2023, researchers concluded that explicit images make up 98% of all deepfake videos online, with 99% of the individuals represented being women. Instances of students creating and posting deepfake explicit images of classmates appear to be multiplying.
Second, the ability to create deepfakes offers a “potent means to perpetrate fraudulent activities with alarming ease and sophistication.” The media has reported on scams in which defrauders replicated the images and voices of a multinational financial firm’s CEO and its employees to steal $25.6 million; replicated loved ones’ voices to demand a ransom; and replicated the voice of an attorney’s son asking him to wire $9,000 to post a bond. Digital replicas of celebrities have been used to falsely portray them as endorsing products.
Finally, there is a danger that digital replicas will undermine our political system and news reporting by making misinformation impossible to discern. Recent examples involving politicians include a voice replica of a Chicago mayoral candidate appearing to condone police brutality; a robocall with a replica of President Biden’s voice discouraging voters from participating in a primary election; and a campaign ad that used AI-generated images to depict former President Trump appearing with former Director of the National Institute of Allergy and Infectious Diseases, Anthony Fauci. Deepfake videos were even used to influence a high-profile union vote by falsely showing a union leader urging members to oppose the contract that he had “negotiated and . . . strongly supported.”
Summarizing the challenges to the information ecosystem, one digital forensics scholar cautioned, “[i]f we enter a world where any story, any audio recording, any image, any video can be fake . . . then nothing has to be real.”
As AI technology continues to improve, it will foreseeably become ever more difficult to distinguish what is fake from what is real. In light of such concerns, the Copyright Office makes the following recommendations in Part 1:
- Congress should establish a federal right that protects all living people from the knowing distribution of unauthorized digital replicas.
- The right should be licensable, subject to guardrails, but not assignable, with effective remedies including monetary damages and injunctive relief.
- Traditional rules of secondary liability should apply, but with an appropriately conditioned safe harbor for online service providers that transmit, cache, host, or link to user content.
- The law should contain explicit First Amendment accommodations.
- Finally, in recognition of well-developed state rights of publicity, the Copyright Office recommends against full preemption of state laws.
Pursuant to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, each publication of a part of the Copyright and AI Report obligates the United States Patent and Trademark Office (the “PTO”) to consult with the Director of the Copyright Office and, within 180 days of that publication, to issue recommendations to the President on potential executive actions relating to the issues the Copyright Office has addressed. We anticipate that the PTO will largely echo the recommendations published in Part 1 by the Copyright Office.
We shall continue to monitor for developments in this important emerging area of law.
This article is intended as a general discussion of these issues only and is not to be considered legal advice or relied upon. For more information, please contact RPJ Partner Deena R. Merlen, who counsels clients in areas of employment and labor law, intellectual property, media and entertainment, general business law, commercial transactions and dispute resolution. Ms. Merlen is admitted to practice law in Connecticut and New York.