A judge discovered that an immigration barrister had used AI in his court preparation after he cited cases that were “completely fabricated” or “entirely irrelevant”.
Chaudhry Rahman was found to have used tools such as ChatGPT when preparing the legal arguments he presented in court. The tribunal heard that Rahman not only relied on AI in his preparation but also “failed to perform necessary accuracy checks” on his work.
Upper tribunal judge Mark Blundell said Rahman had attempted to conceal his use of AI, thereby “wasting” the court’s time, and indicated he might refer Rahman to the Bar Standards Board. The Guardian has contacted Rahman’s chambers for comment.
The case concerned two Honduran sisters, aged 29 and 35, seeking asylum because of threats from criminal gangs in their home country. Rahman represented the sisters, and the matter reached the upper tribunal on appeal.
Blundell dismissed Rahman’s arguments, stating: “Nothing said by Mr. Rahman, orally or in writing, establishes any error of law on the part of the judge, and the appeal must be dismissed.”
In an unusual postscript, Blundell noted there were “significant problems” with the grounds of appeal as presented to him.
He said Rahman’s grounds cited 12 authorities, but on checking them he found that “some of these authorities did not exist, while others did not support the propositions of law for which they were cited”.
In his ruling, he listed 10 such instances and set out “what Mr. Rahman claimed about those authorities, whether real or fictitious”.
Blundell said: “Mr. Rahman appeared to know nothing about any of the authorities cited in the grounds of appeal, which were said to have been decided in July of this year. It was apparent that he had no intention of accepting my analysis of any of the judgments cited.”
“Some of the decisions did not exist. Not one of them supported the proposition of law advanced in the grounds.”
Blundell noted that Rahman’s claim to have used “various websites” for his research was therefore misleading.
Blundell said: “The most likely explanation…is that the grounds of appeal were drafted, in whole or in part, by generative artificial intelligence such as ChatGPT.”
“I am acutely aware that one of the cases cited in Mr. Rahman’s grounds was recently misattributed by ChatGPT in support of a similar argument.”
Rahman told the judge that the problems with his grounds were “a consequence of his drafting style” and accepted that there might have been some “confusion and ambiguity” in his submissions.
Blundell said: “The difficulty I have outlined above is not one of drafting style. The authorities cited in the grounds either did not exist or did not support the grounds as pleaded.”
He added: “In my judgment, it is overwhelmingly likely that Mr. Rahman used generative artificial intelligence to formulate the grounds of appeal in this case, and that he attempted to conceal that fact from me during the hearing.”

“Even if Mr. Rahman believed, for whatever reason, that these cases somehow supported the argument he wished to advance, there can be no justification for the wholly fictitious citations.”

“In my view, the only realistic possibility is that Mr. Rahman relied heavily on AI generation when preparing his grounds and sought to hide that fact from me at the hearing.”
The judge’s ruling was handed down in September and published on Tuesday.
Source: www.theguardian.com
