Lawyer Disciplined for Using AI-Generated False Citations in Australian Case

A Victorian lawyer has become the first in Australia to face professional sanctions over the misuse of artificial intelligence in court, losing his right to practise as a principal lawyer after submitting unverified AI-generated citations.

According to a report by Guardian Australia, at a hearing on 19 July 2024, an unnamed lawyer representing a husband in a marital dispute handed the court a list of prior cases that Justice Amanda Humphreys had requested concerning enforcement applications in the matter.

In a ruling published in October, Humphreys wrote that when she returned to her chambers, neither she nor her associates could locate any of the cases on the list. When the matter returned to court, the lawyer disclosed that the list had been generated using AI-powered legal software.

He admitted that he had not verified the accuracy of the information before submitting it to the court.

The lawyer offered the court an “unconditional apology” and asked not to be referred for investigation, saying he had taken the lessons of the episode to heart.

He acknowledged that he did not understand how the software worked, and recognized the need to verify the accuracy of AI-assisted research. He also agreed to pay the costs incurred by the opposing lawyer as a result of the adjourned hearing.


Humphreys accepted the apology and accepted that the conduct was unlikely to be repeated. However, given the increasing use of AI tools in the legal profession, she said a referral for investigation was warranted in light of the role of the Victorian Legal Services Board and Commissioner in examining professional conduct.

The lawyer was duly referred to the regulator for investigation, in one of the first reported cases in Australia of a lawyer putting fabricated AI-generated citations before a court.

The Victorian Legal Services Board confirmed on Tuesday that the lawyer’s practising certificate was varied on 19 August as a result of the investigation. He may no longer practise as a principal lawyer or handle trust money, and is restricted to practising as an employee solicitor.

He must also undergo two years of supervised legal practice, during which both he and his supervisor must report quarterly to the board.

A board spokesperson said: “The board’s regulatory actions in this matter reflect our commitment to ensuring that legal practitioners using AI in their practice do so responsibly and in line with their obligations.”

Since that incident, more than 20 further cases have been reported in Australian courts in which lawyers or self-represented litigants used artificial intelligence to prepare court documents that turned out to contain false citations.


A lawyer in Western Australia is also under scrutiny from that state’s legal regulator over practice standards.

In at least one Australian case, a court document was alleged to have been prepared using ChatGPT even though it had been produced before ChatGPT became publicly available.

Courts and legal professional bodies accept that AI has a role in legal proceedings, but continue to caution that it is no substitute for lawyers’ professional judgment.

Juliana Warner, president of the Law Council of Australia, told Guardian Australia last month: “If lawyers are using these tools, it must be done with the utmost care, always keeping in mind their professional and ethical obligations to the court and their clients.”

Warner added that while cases of AI-generated false citations coming before the courts raise “serious concerns”, a blanket ban on the use of generative AI in legal proceedings “is neither practical nor proportionate and risks hindering both innovation and access to justice”.

Source: www.theguardian.com

High Court Tells UK Lawyers to Halt AI Misuse After Fabricated Case Law Is Cited

The High Court has told senior lawyers to take urgent action to curb the misuse of artificial intelligence, after numerous fake authorities were put before the courts, either citing entirely fictitious cases or quoting passages that do not exist.

Lawyers are increasingly using AI systems to help draft legal arguments, but two cases this year were seriously marred by citations of fictitious precedents believed to have been generated by AI.

In an £89m damages claim against Qatar National Bank, the claimant’s filings cited 45 authorities. The claimant acknowledged using publicly available AI tools, and his legal team admitted citing non-existent authorities.

When Haringey Law Centre challenged the London Borough of Haringey over its alleged failure to provide a client with temporary accommodation, its lawyer cited phantom case law several times. Suspicions arose when counsel for the council repeatedly had to explain that they could not find any trace of the supposed authorities.

The episode led to a wasted-costs claim, with the court finding the Law Centre and its lawyer, a pupil barrister, negligent. The barrister denied using AI in that case, but said she might have done so inadvertently while preparing for a separate matter in which she had also cited fictitious authority, perhaps by relying on AI-generated summaries without realizing what they were.

In a regulatory ruling, Dame Victoria Sharp, president of the King’s Bench Division, warned: “If artificial intelligence is misused, it could seriously undermine public confidence in the justice system. Lawyers who misuse AI could face sanctions ranging from disciplinary action to contempt-of-court proceedings and referral to the police.”

She urged the Bar Council and the Law Society to treat the issue as a matter of urgency, and told the heads of barristers’ chambers and the managing partners of solicitors’ firms to ensure that all lawyers understand their professional and ethical obligations when using AI.

“While such tools can produce apparently coherent and plausible responses, those responses may turn out to be entirely incorrect,” she said. “They may make confident assertions that are simply untrue, cite sources that do not exist, or purport to quote passages from genuine sources that do not appear in them.”

Ian Jeffery, chief executive of the Law Society of England and Wales, said the ruling “highlighted the dangers of using AI in legal work”.

“AI tools are increasingly being used to support the delivery of legal services,” he added. “However, the real risk of inaccurate output from generative AI means that lawyers must check and ensure the accuracy of their work.”


These are not the first cases to be derailed by AI-generated inaccuracies. At a UK tax tribunal in 2023, an appellant who said she had been helped by “a friend in a solicitor’s office” submitted nine fabricated historic tribunal decisions as precedents. She admitted it was possible she had used ChatGPT, but argued that other cases supported her position in any event.

Earlier this year, in a Danish case worth 5.8m euros (£4.9m), the appellant narrowly avoided having the case thrown out after relying on a fabricated ruling that the judge identified. And a 2023 case in the US District Court for the Southern District of New York descended into chaos when lawyers cited seven apparently fictitious cases. Asked to verify them, ChatGPT simply summarized the cases it had already invented, and the judge went on to fine two lawyers and their firm $5,000.

Source: www.theguardian.com

Utah Lawyer Sanctioned After Using ChatGPT in a Court Filing

The Utah Court of Appeals has sanctioned an attorney after finding that he used ChatGPT for a filing that cited a non-existent case.

Earlier this week, the court took action against Richard Bednar over accusations that he had submitted a brief containing fabricated citations.

According to court documents reviewed by ABC4, Bednar and Douglas Durbano, another Utah attorney representing the petitioners, filed a “timely petition for interlocutory appeal”.

On reviewing the petition, which had reportedly been drafted by a law clerk, the respondent’s counsel found that it contained several false case citations.

“It appears that at least some portions of the petition may have been AI-generated,” the respondent’s counsel wrote, “including citations that do not exist in any legal database and can only be found in ChatGPT.”

The filing noted that the brief cited a case, “Royer v Nelson”, that does not appear in any legal database.

After the false citation was discovered, Bednar apologized for the “errors contained in the petition”, according to documents from the Utah Court of Appeals. At a hearing in April, Bednar and his counsel acknowledged that the petition contained fabricated legal authority obtained from ChatGPT, and accepted responsibility for its contents.

According to Bednar and his counsel, an unlicensed law clerk drafted the petition, and Bednar did not independently check its accuracy before filing. ABC4 further reported that Durbano was not involved in preparing the petition, and that the clerk responsible, a law school graduate, was subsequently dismissed from the firm.

The report added that Bednar offered to pay the relevant attorneys’ fees to “rectify” the situation.

In a statement published by ABC4, the Utah Court of Appeals said that while AI is “a developing legal research tool that continues to evolve alongside technological advancements”, every attorney remains responsible for ensuring the accuracy of their court filings, and the petitioners’ counsel had fallen short of that duty by submitting fictitious precedent generated by ChatGPT.

As a consequence of the false citations, ABC4 reports, Bednar was ordered to pay the respondent’s attorneys’ fees for the petition and the hearing, refund his clients for the time spent preparing the filing and attending the hearing, and donate $1,000 to “And Justice for All”, a Utah-based legal aid nonprofit.

Source: www.theguardian.com