News

Lawyer Faces Sanctions After Admitting Using ChatGPT For ‘Bogus’ Legal Research

DailyWire.com

A New York attorney has to convince a judge that he doesn’t deserve sanctions after admitting his firm used “bogus” legal research obtained through ChatGPT for a personal injury case.

Steven Schwartz, an attorney with Levidow, Levidow & Oberman, submitted a brief containing several references to non-existent cases his legal team gathered through the artificial intelligence chatbot program.

Schwartz, who has been an attorney for more than 30 years, had helped prepare legal research for his colleague Peter LoDuca in a case involving a man suing Avianca Airlines for injuries he says he sustained in 2019 from a serving cart being pushed by an airline employee.

But U.S. Judge Kevin Castel of the Southern District of New York said in an order that the submission contained six cases that “appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

“The Court is presented with an unprecedented circumstance,” Castel said.

The non-existent cases in the filing included Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines.

In a written statement to Castel, Schwartz attached screenshots showing a conversation he had with ChatGPT.

“Is varghese a real case,” reads one message, referencing Varghese v. China Southern Airlines Co Ltd.

“Yes,” ChatGPT responded. It “is a real case.”

“What is your source,” the user replied.

“I apologize for the confusion earlier,” ChatGPT replied. “Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.”

The user asked ChatGPT to confirm that the other cases it provided were real, and the chatbot insisted the lawsuits could be found in legal databases.

Schwartz accepted responsibility for not confirming the sources, saying it was the first time he had used ChatGPT for legal research and that he “was unaware of the possibility that its content could be false.”

ChatGPT is an artificial intelligence chatbot developed by OpenAI and released in November 2022. The system, which has drawn widespread criticism over its impact on several industries, warns users that it can produce inaccurate information.

Schwartz said he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

His colleague LoDuca “had no reason to doubt the sincerity” of the research, Schwartz said, adding that LoDuca had no direct knowledge of how it was acquired.

At a hearing on June 8, LoDuca must show cause why the court shouldn’t sanction him “for the use of a false and fraudulent notarization.”
