In the same way that patent trolls hunt for companies to hit with infringement claims, armies of trolls will hunt for companies using AI-generated code, claiming the code includes proprietary content the AI was trained on, predicts Sean O'Brien, founder of the Yale Privacy Lab.
“There will soon be an entire sub-industry of trolling that mirrors patent trolls,” O’Brien told David Gewirtz of ZDNet. “As more authors use AI-powered tools to ship code under proprietary licenses, a feedback loop is created. There will be software ecosystems polluted with proprietary code that will be the subject of cease-and-desist claims by enterprising firms.”
The problem will be compounded as generative AI applications like ChatGPT and Copilot come into wider use.
The effectiveness of these applications depends on the breadth and depth of the content they're trained on, which opens the door to claims whenever their output reproduces proprietary material from that training data.
“The chances that AI prompts might output proprietary code are very high, if we're talking about tools such as ChatGPT and Copilot which have been trained on a massive trove of code of both the open source and proprietary variety,” O’Brien said.
Corrupt code
AI trolls are only the tip of the iceberg, says Gewirtz; expect troublemakers to go out of their way to corrupt the data these models learn from, for no other reason than to throw a wrench into the system.
“One of the things we humans do is find ways to game the system,” said Gewirtz. “So not only will there be armies of legal trolls trying to find folks to sue, there will be hackers, criminals, rogue nation states, high school students, and crackpots, all attempting to feed erroneous data into every AI they can find, either for the [fun of the prank] or for much more nefarious reasons.”
Sorting out liability risk in this new environment will be messy, Gewirtz says; because the technology is so new, courts may struggle to find precedents for deciding who is responsible for what.
"If the AI draws on incorrect, deficient or biased information, the output of the AI tool may give rise to various potential claims depending on the nature of the potential damage or harm that the output may have caused,” Robert Piasentin, a partner in the Technology Group at McMillan, a Canadian business law firm, told Gewirtz.
It’s not clear who would be at fault if code generated by an AI leads to a catastrophic outcome, Gewirtz said.
“If a product ships using a library that has known exploits, and that product causes something that results in tangible harm, who owns that failure?” he said. “The product maker, the library coder, or the company that chose the product?”
It could be all three, he said.
For now, expect liability for AI-created code to be treated the same as liability for human-created code, says Richard Santalesa of the SmartEdgeLaw Group.
"Until cases grind through the courts to definitively answer this question, the legal implications of AI-generated code are the same as with human-created code," Santalesa told Gewirtz.