Jess Miers serves as senior counsel, legal advocacy at Chamber of Progress. Views are the author’s own.
Copyright protections are far from dead — just ask Andy Warhol's ghost.
This past spring, the Supreme Court grabbed headlines when it ruled against the Andy Warhol Foundation over its licensing of the late artist's famous Prince pop-art prints.
In a decisive 7-2 vote, the court flexed the power of U.S. copyright law, declaring that Warhol's colorful variation on another artist's photo wasn't transformative enough to meet fair use standards.
The Warhol Foundation case demonstrates that when applied correctly, fair use doctrine effectively protects artists, writers, and other creators from the wrongful use of their work, no matter how famous or beloved a derivative work may be. AI-generated content should be treated no differently.
In the last year, generative AI tools have exploded in popularity. Tools like DALL-E are now helping creators produce high-quality images from text prompts, and ChatGPT is assisting writers in penning works from sonnets to novels.
But as legal challenges against AI models and AI-generated writing and art arise, it's critical that courts and lawmakers resist piling on new legal frameworks.
Existing U.S. copyright law is well-equipped to handle AI cases, and heavy-handed regulations on the nascent industry will only hamstring competition and harm creators and consumers alike.
Concerns raised
Artists and other creators have raised legal concerns over two aspects of AI: the use of copyrighted works to train models and the seemingly derivative works created by these models. Both of these concerns are adequately covered under the law today.
When it comes to training AI, engineers take the same approach as any other teacher — they expose their student to examples of good work.
In the case of large language models, such as those underlying chatbots like Bard and ChatGPT, that means scraping publicly available data — some of which is copyrighted — to teach models how to write, code, illustrate and more.
While some creators challenge the use of their work as teaching tools, courts have consistently held that intermediate copying can qualify as fair use.
Courts have held that even if an entire copyrighted work is copied during an intermediate step in the transformative process, it may still be considered fair use if the eventual output doesn’t infringe on any rights.
In fact, we're already seeing this concept applied to cases of AI training. In the Northern District of California, a judge rejected copyright claims from plaintiffs who alleged that AI companies infringed their rights by scraping publicly available data.
As one videogame designer who uses AI tools put it in recent comments to the U.S. Copyright Office: "The referencing of known things, copyrighted and otherwise, is the beginning of the creative process within an artist's mind. No one has ever suggested to me that this process is a breach of copyright, only that the result cannot resemble the inspiration in a dramatic way."
The outputs of generative AI models are also, in many cases, dramatically different from their sources of inspiration.
Despite the futuristic hype around artificial intelligence, AI models don't function on their own. Human artists, writers, and creators often use AI as a tool to aid their work, and in many cases, AI outputs are nowhere near the final product.
Any amateur photographer with an expensive camera can tell you the same thing: sophisticated tools don't beget sophisticated results.
Journalism implications
As with any other tool, creators who use AI place their own unique fingerprint on the work through original prompts and by manipulating outputs.
While AI is revolutionizing many industries, there's no reason for creators to assume it will take their place. And nowhere is this more relevant than in journalism.
Already, AI is enhancing journalistic practices and streamlining operations by enabling faster content generation, aiding in complex data analysis, and providing new ways to engage audiences.
The integration of generative AI in journalism is not just an innovation; it's a leap forward in how newsrooms can operate more effectively and creatively.
At the same time, news publishers have raised concerns regarding generative AI's impact on news consumption and distribution. AI tools integrated into search engines are able to quickly aggregate and summarize information scraped from news articles and websites, diverting web traffic from traditional news outlets.
Just last month, The New York Times filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement for ChatGPT's ability to summarize information from the outlet's articles.
But in practice, AI is democratizing the field of journalism. By automating routine tasks and providing advanced tools for content creation and analysis, AI is equipping newsrooms of all sizes to produce high-quality content more efficiently.
This technology can level the playing field, allowing smaller outlets to compete more effectively with larger organizations.
Not to mention that generative AI can enhance the depth and breadth of news coverage, ensuring that important stories are not overlooked and that diverse perspectives are represented in the media landscape.
And despite publishers' concerns, the bottom line remains: AI-generated content, particularly in search engine contexts, is unlikely to replace the nuanced reporting and thought-provoking analysis provided by human journalists. Publications like The New York Times continue to offer in-depth content that AI simply can't compete with.
Legal options
However, in the instances where AI-assisted art or writing does resemble a source of inspiration too closely, creators have clear paths for legal recourse.
As with any tool, bad actors will find ways to skirt guardrails and produce infringing works. And as in any other copyright case, creators whose works are infringed upon have strong legal protections, with the fair use doctrine drawing a clear line between permissible borrowing and infringement.
In typical cases of direct copyright infringement, courts evaluate whether the defendant had access to the original work and whether the defendant's creation is "substantially similar" to it. These same criteria apply to AI models.
In evaluating an AI case, a court may investigate whether the AI model had access to the original work and assess similarities between the two pieces, just the same as if a human had created the piece of writing or art.
In this way, the fact-intensive and flexible nature of the fair use analysis makes it a powerful check on infringement. If the limits of fair use can be enforced against an artist as famous as Andy Warhol, they can certainly be enforced against infringing AI users, too.
With strong protections for artists against AI infringement already well established, additional regulations would be not just redundant but damaging to competition and consumers.
Calls for new, AI-focused copyright regulations fail to account for the real-life cost of litigation. The more complex it becomes for AI providers to comply with regulations, the costlier it becomes to produce and offer AI tools, weeding out smaller innovators in the industry.
For consumers, this means fewer AI tools on the market and fewer creative works from artists.
When it comes to AI, existing fair use doctrine precludes the need for lawmakers to pass additional copyright laws.
Current copyright law has protected against infringement without stifling innovation, and we can apply existing regulations in a way that fosters AI innovation while protecting creators.