Lawyer Blames ChatGPT For Fake Citations In Court Filing

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” the judge overseeing the case said.

A lawyer who relied on ChatGPT to prepare a court filing for his client is finding out the hard way that the artificial intelligence tool has a tendency to fabricate information.

Steven Schwartz, a lawyer representing a man who is suing the Colombian airline Avianca over a metal beverage cart that allegedly injured his knee, faces a sanctions hearing on June 8 after admitting last week that several of the cases he supplied to the court as precedent were invented by ChatGPT, a large language model created by OpenAI.

Lawyers for Avianca first brought the concerns to the judge overseeing the case.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” U.S. District Judge P. Kevin Castel said earlier this month after reviewing Avianca’s complaint, calling the situation an “unprecedented circumstance.”

The invented cases included decisions titled “Varghese v. China Southern Airlines Ltd.,” “Miller v. United Airlines Inc.” and “Petersen v. Iran Air.”

Schwartz ― an attorney with Levidow, Levidow & Oberman who’s been licensed in New York for more than 30 years ― then confessed in an affidavit that he’d used ChatGPT to produce the cases in support of his client and was “unaware of the possibility that its content could be false.”

Schwartz “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity,” he stated in the affidavit.

Peter LoDuca, another lawyer at Schwartz’s firm, argued in a separate affidavit that “sanctions are not appropriate in this instance as there was no bad faith nor intent to deceive either the Court or the defendant.”

Potential sanctions include ordering Schwartz to pay the attorneys’ fees Avianca incurred in uncovering the fabricated citations.

This isn’t the first time ChatGPT has “hallucinated” information, as AI researchers refer to the phenomenon. Last month, The Washington Post reported on ChatGPT putting a professor on a list of legal scholars who had sexually harassed someone, citing a Post article that didn’t exist.

“It was quite chilling,” the law professor, Jonathan Turley, said in an interview with the Post. “An allegation of this kind is incredibly harmful.”
