Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms?

Before we all get too deep into using ChatGPT or other AI tools to create things for us, we need to address some of the questions raised around content custody, ownership, and attribution.

Some have breathlessly proclaimed ChatGPT to be the most important development since the invention of the printing press or the splitting of the atom. We’ll see. But there are issues with the accuracy and truthfulness of the materials that AI platforms such as ChatGPT generate. And since there is speculation that ChatGPT and other AI platforms could take over at least some of the work of writers, analysts, and other content creators, we also need to understand the legal ramifications.

There’s no issue around personal use of ChatGPT as a conversational assistant. And the rules around using ChatGPT to generate term papers seem pretty clear (don’t even think about it). But when it comes to applying AI-generated prose in content intended for wider distribution — say marketing materials, white papers, or even articles — the legalities get a little murky. When it comes to intellectual property, the model for ChatGPT “is trained on a corpus of created works and it is still unclear what the legal precedent may be for reuse of this content, if it was derived from the intellectual property of others,” according to Bern Elliot, analyst at Gartner.

To get a better sense of where things stand with the use of ChatGPT for distributed content, I sought opinions from legal experts on where the law, such as copyright or intellectual property law, currently stands. Here are their insights:

ChatGPT tends not to include citations or attributions to the original sources and IP used or synthesized. Is this an issue?

“From an intellectual property [IP] perspective, if the source material is not specifically quoted, there would be no requirement to include citations,” says Michael Kelber, co-chair of Neal Gerber Eisenberg’s IP practice. “If ideas are used but not copied, the use would not implicate copyright or other protected IP. That said, from a research standpoint, citation or attribution would be helpful for identifying biases and credibility, just like any other citation to authority.”

It’s unclear who can copyright or claim ownership of AI-generated works. Is it the requester, who simply used a tool to generate text, or OpenAI? Who?

For a work to enjoy copyright protection under current U.S. law, “the work must be the result of original and creative authorship by a human author,” says Margaret Esquenet, partner with Finnegan, Henderson, Farabow, Garrett & Dunner, LLP. “Absent human creative input, a work is not entitled to copyright protection. As a result, the U.S. Copyright Office will not register a work that was created by an autonomous artificial intelligence tool.”

This question is also “presently being litigated, e.g., in a case involving a photograph taken by a primate and also cases exploring the concept of inventorship of AI in the context of patents,” says Kelber. “So far, the courts have been hostile to the notion of non-humans claiming authorship or inventorship, and in both cases, thereby ownership of the IP.”

Things get more complicated if an active copyright or IP-infringement challenge to AI-generated content takes place. “Because a U.S. author needs to secure a registration or a refusal from the Copyright Office to enforce rights, a potential path to challenging the human authorship obligation is to either appeal a Copyright Office registration refusal or pursue an infringer after attempting to register rights with the Office,” says Esquenet. “This strategy will likely face strong headwinds in light of the legislative history of the human authorship prerequisite and subsequent court decisions affirming the requirement.”

As a result of the human authorship standard, “under U.S. current law, an AI-created work is likely either (1) a public domain work immediately upon creation and without a copyright owner capable of asserting rights or (2) a derivative work of the materials the AI tool was exposed to during training,” Esquenet continues. “Who owns the rights in such a derivative would likely be dependent on various issues, including where the dataset for training the AI tool originated, who, if anyone, owns the training dataset (or its individual components), and the level of similarity between any particular work in the training set and the AI work.”

Assuming the prior cases precluding authorship for non-humans are followed, “it could preclude anyone from owning the output, essentially dedicating such works to the public,” Kelber says. “And that result could also be supported, since the vast amount of source materials are at least publicly available.”

A compelling argument, Kelber adds, “may be made that AI is simply a tool and that the human who is directing the AI should be able to claim ownership of the output. For example, a graphic artist can claim artwork made through the use of drawing software. However, in the case of ChatGPT, the operator’s control of the output is limited, and perhaps a stronger argument could be made that the output is controlled more by the creator(s) of the ChatGPT software than the operator who initiates an input.”

What about when ChatGPT generates the exact same passages for someone else?

“Even assuming that the rightful copyright owner is the person whose queries generated the AI work, the concept of independent creation may preclude two parties whose queries generated the same work from being able to enforce rights against each other,” says Esquenet. “Specifically, a successful copyright infringement claim requires proof of copying—independent creation is a complete defense. Under this hypothetical, neither party copied the other’s work, so no infringement claim is likely to succeed.”

However, if the content created is damaging, who gets the blame? “An exact copy of protected work could create potential liability, which raises another question: who is liable, the creator of the AI — such as ChatGPT — or the user who posed the query?” Kelber asks.

If generated text is used in an article or paper, even partially, should ChatGPT be treated and cited as a source in itself? As in “ChatGPT-generated response, accessed 12/15/2022”?

Citing AI-generated content this way is acceptable. “From a Blue Book citation standpoint — citing in court briefs — as well as for literary purposes, a citation to the source as ChatGPT would be appropriate,” Kelber relates.
