A 74-year-old man has apologized to a panel of New York State appellate judges after presenting an AI-generated digital avatar in a legal appeal, a move that has raised concerns about transparency and ethics in the courtroom.
At the March 26 hearing, plaintiff Jerome Dewald, representing himself in an employment-related appeal, played a video on a courtroom screen showing a youthful-looking man speaking directly to the court. After a few seconds, one of the judges asked who the man was, and Dewald admitted that he was not real but an AI-generated creation.
“It would have been nice to know that when you made your application,” said Justice Sallie Manzanet-Daniels, according to The New York Times. “I don’t appreciate being misled.”
Plaintiff issues a mea culpa
Dewald acknowledged that his actions had “inadvertently misled” the court, per the Times report. Having stumbled over his words in previous legal proceedings, Dewald said he hoped the AI would ease the pressure of presenting his case in the courtroom.
In his letter to the judges, Dewald wrote, “My intent was never to deceive but rather to present my arguments in the most efficient manner possible. However, I recognize that proper disclosure and transparency must always take precedence.”
Dewald, a self-described entrepreneur, was appealing a ruling in a contract dispute with a former employer. During an oral argument at the appellate hearing, Dewald stammered and frequently paused to regroup and read prepared remarks from his cellphone, the Times reported.
AI in the courtroom: A pattern of misuse emerges
This isn’t the first time AI has made its way into the courtroom.
In 2023, a New York lawyer was sanctioned by a federal judge for using ChatGPT to help draft a legal brief that contained several fake judicial opinions and legal citations. The case demonstrated that generative AI can fabricate material and cannot be relied upon without verification.
Also in 2023, President Trump’s former lawyer and fixer, Michael Cohen, gave his lawyer phony legal citations he had gotten from Google Bard (now Gemini), an AI chatbot. Cohen claimed he had not known the generative AI service could provide false information and ultimately pleaded for mercy from the federal judge presiding over his case.
Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, told the New York Post he wasn’t surprised to hear that Dewald introduced an AI-generated person to argue an appeals case, calling it “inevitable.”
Dewald’s case was still pending before the appeals court as of last week.