Jerome Dewald sat with his legs crossed and his hands folded in his lap before a panel of New York appellate judges, ready to argue for the reversal of a lower court's decision in a dispute with his former employer.
The court had allowed Mr. Dewald, who was representing himself without a lawyer, to present his argument in a pre-recorded video.
When the video began to play, it showed a man who appeared younger than the 74-year-old Dewald, wearing a collared shirt and a beige sweater and standing in front of what looked like a blurry virtual background.
Seconds into the video, one of the judges, confused by the image on the screen, asked Dewald whether the man was his lawyer.
“I generated it,” replied Dewald. “It’s not a real person.”
Justice Sallie Manzanet-Daniels of the Appellate Division's First Judicial Department paused the proceedings. It was clear she was unhappy with his answer.
“It would have been nice to know that when you made your application,” she snapped at him.
“I don't appreciate being misled,” she added, before calling for the video to be shut off.
What Dewald had not disclosed is that he had created the digital avatar using artificial intelligence software, the latest instance of AI creeping into the US legal system in potentially troubling ways.
The March 26 hearing was captured on video by the court system's cameras and was previously reported by The Associated Press.
Dewald, the plaintiff in the case, said in an interview Friday that he was overwhelmed by embarrassment at the hearing. He sent the judges a letter of apology soon afterward, expressing his deep regret and acknowledging that his actions had “inadvertently misled” the court.
He said he had turned to the software after stumbling over his words in previous legal proceedings, thinking that an AI presentation might ease the pressure he felt in the courtroom.
He said he had planned to make the avatar a digital version of himself, but encountered “technical difficulties” in doing so.
“My intent was never to deceive, but rather to present my arguments in the most efficient manner possible,” he said in the letter to the judges. “However, I recognize that proper disclosure and transparency must always take precedence.”
Dewald, a self-described entrepreneur, was appealing an earlier ruling in a contract dispute with his former employer. He eventually delivered his oral argument at the appeals court, pausing frequently to regroup and reading prepared remarks from his cellphone.
Embarrassed as he was, Dewald could take some comfort in the fact that actual lawyers have gotten into trouble for using AI in court.
In 2023, New York lawyers faced serious consequences after they used ChatGPT to create a legal brief filled with fake judicial opinions and legal citations. The incident showed the flaws of relying on artificial intelligence and reverberated through the legal profession.
That same year, Michael Cohen, former President Trump's onetime lawyer and fixer, gave his own attorney fake legal citations he had obtained from Google Bard, an artificial intelligence program. Cohen ultimately pleaded for mercy from the federal judge presiding over his case, emphasizing that he had not known the generative text service could produce false information.
Some experts say artificial intelligence and large language models can be helpful to people who have legal matters to deal with but cannot afford lawyers. Still, the technology's risks remain.
“They can still hallucinate, and we need to deal with that risk,” said Daniel Shin, assistant director of research at the Center for Legal and Court Technology at William & Mary Law School.
Source: www.nytimes.com