Attorneys face order to show cause after AI makes up citations in court documents

May 31, 2023 Rob Abruzzese

Two attorneys from the established New York firm Levidow, Levidow & Oberman, P.C. are preparing to respond to an unusual order to show cause after a judge discovered they had used ChatGPT for their legal research.

They were only caught after a judge noticed the inclusion of fictitious case citations in a formal court document.

The saga began with an otherwise unremarkable personal injury lawsuit in which attorneys Peter LoDuca and Steven A. Schwartz turned to the artificial intelligence model ChatGPT to find legal precedents that would strengthen their case. Although designed to generate human-like text, the tool invented non-existent court cases and presented them as case law.

The problem came to light when defense counsel, during their due diligence, could not locate the cited cases in any standard legal database. They alerted Judge P. Kevin Castel, leading to a shocking realization: the supposed precedents were entirely fabricated.


In response, Judge Castel issued an ‘Order to Show Cause’ to both attorneys and their law firm. At a hearing scheduled for June 8 in the Southern District of New York, they must explain why they should not be sanctioned for their research methods and the resulting inclusion of bogus citations in their court filings.

The situation carries serious consequences for the implicated attorneys and their firm. If they are found to have violated professional conduct rules by submitting false citations generated by artificial intelligence, they could face stiff penalties.

Moreover, this incident represents a wake-up call for the legal community at large. As law practices increasingly integrate AI into their processes, it’s crucial to tread with caution. As this situation has demonstrated, even AI tools, despite their sophisticated capabilities, can yield incorrect information, potentially leading to severe professional and legal ramifications.

Many members of the local legal community have already begun making jokes about attorneys relying too heavily on the use of ChatGPT to assist them at their jobs.

When asked how prevalent the use of AI is in the legal community, one judge told the Brooklyn Eagle that, “There have likely already been judges who have used it to help them write decisions.”

This bizarre incident underscores the importance of verifying the accuracy of AI-generated content, especially in critical areas such as legal proceedings. It’s a stark reminder of the potential pitfalls of overreliance on technology and the need for due diligence, irrespective of how advanced or reliable the tool may appear.
