Lawyer Disciplined for Using AI-Generated False Citations in Australian Case | Legal News

A Victorian lawyer has become the first in Australia to face professional sanctions over the use of artificial intelligence in court, losing his right to practice as a principal lawyer after submitting AI-generated citations he had not verified.

According to a report by Guardian Australia, during a hearing on July 19, 2024, an unnamed lawyer representing the husband in a marital dispute provided the court with a list of prior cases that Judge Amanda Humphreys had requested concerning enforcement applications in the case.

Humphreys stated in her ruling that, upon returning to her chambers, neither she nor her associates could locate any of the cases on the list. When the issue was revisited in court, the lawyer disclosed that the list had been generated using AI-based legal software.

He confessed to not verifying the accuracy of the information before submitting it to the court.

The lawyer offered the court an “unconditional apology” and asked not to be referred for investigation, saying he had taken the lessons of the incident to heart.

He acknowledged that he had not understood how the software worked and recognized the need to verify the accuracy of AI-assisted research. He agreed to cover the costs the opposing party incurred as a result of the adjourned hearing.


Humphreys accepted the apology, noting the stress the incident had caused the lawyer, and accepted that the conduct was unlikely to be repeated. However, given the growing prevalence of AI tools in the legal profession, she said a referral for investigation was warranted because of the Victorian Legal Services Commission’s role in examining professional conduct.

The lawyer was subsequently referred to the Victorian Legal Services Commission for investigation, marking one of the first reported cases in Australia involving a lawyer using AI in court to produce fabricated citations.

The Victorian Legal Services Board confirmed on Tuesday that the lawyer’s practice certificate was varied on August 19 as a result of the investigation’s findings. He no longer has the right to practice as a principal lawyer, cannot handle trust money, and may work only as an employee lawyer.

The lawyer is required to undergo two years of supervised legal practice, with quarterly reports to the board from both him and his supervisor during this period.

A spokesman remarked, “The board’s regulatory actions on this matter reflect our commitment to ensuring that legal professionals using AI in their practices do so responsibly and in alignment with their obligations.”

Since this incident, more than 20 further cases have been reported in Australian courts in which lawyers or self-represented litigants used artificial intelligence to prepare court documents containing false citations.


A lawyer in Western Australia is also under scrutiny by that state’s legal regulator over practice standards.

In at least one Australian case, a document submitted to a court was claimed to have been prepared using ChatGPT, even though the document had been created before ChatGPT became publicly available.

Courts and legal associations acknowledge that AI has a role in legal proceedings but continue to caution that it is no substitute for lawyers’ professional judgment.

Law Council of Australia president Juliana Warner told Guardian Australia last month: “If lawyers are using these tools, it must be done with utmost care, always keeping in mind their professional and ethical obligations to the court and their clients.”

Warner further noted that while cases of AI-generated false citations coming before the courts raise “serious concerns”, a blanket ban on the use of generative AI in legal proceedings “is neither practical nor proportionate and risks hindering both innovation and access to justice.”

Source: www.theguardian.com

Election officials take action as AI chatbot spreads false information about voting | US election 2024

Following Joe Biden’s announcement that he would not seek reelection, misinformation surfaced online about whether a new candidate could still be added to state ballots.

Screenshots claiming that nine states could no longer add a new candidate to the ballot quickly went viral on Twitter (now X) and were widely viewed. The Minnesota secretary of state’s office received requests to fact-check the posts, which turned out to be completely false: ballot deadlines had not passed, and Kamala Harris had ample time to be added to the ballot.

The misinformation originated with X’s chatbot Grok, which gave an incorrect answer when asked whether new candidates could still be added to the ballot.

This incident served as a test case for the interaction between election officials and artificial intelligence companies in the 2024 US presidential election, amid concerns that AI could mislead or distract voters. It also highlighted the potential role Grok could play as a chatbot lacking strict guardrails to prevent the generation of inflammatory content.

A group of secretaries of state, along with the National Association of Secretaries of State, contacted X to report the misinformation. Initial attempts to have it corrected were ineffective, prompting Minnesota secretary of state Steve Simon to express disappointment at the lack of action.

While the impact of the misinformation was relatively minor and did not prevent anyone from voting, the secretaries of state took a strong stance to prevent similar incidents in the future.

The secretaries then made their effort public, signing an open letter to Grok’s owner, Elon Musk, urging that the chatbot redirect election-related queries to the trusted voter-information site CanIVote.org. Their efforts paid off: Grok now directs users to vote.gov when asked about the election.

Simon praised the company for eventually taking responsible action and emphasized the importance of early and consistent debunking of misinformation to maintain credibility and prompt corrective responses.

Despite the initial setbacks, Grok’s redirection of users offers some hope for combating misinformation, notwithstanding Musk’s stated opposition to centralized content moderation. It remains critical to prevent AI tools like Grok from deepening partisan divisions or spreading inaccurate information.

Grok’s availability to paying subscribers and its integration into a major social media platform make the risk of deceptive content harder to contain. Efforts to identify and correct misinformation remain crucial to safeguarding the integrity of elections and ensuring the responsible use of AI-based tools.

Source: www.theguardian.com

Elon Musk spreads false information about English rioters being relocated to the Falkland Islands

Elon Musk shared a fake Telegraph article claiming Keir Starmer is considering sending far-right rioters to “emergency detention camps” in the Falkland Islands.

Musk deleted the post about 30 minutes later. A screenshot captured by Politics.co.uk suggests the post had nearly 2 million views before it was removed.

In it, Musk shared an image posted by Ashlea Simon, co-leader of the far-right group Britain First, with the caption: “We will all be deported to the Falkland Islands.”

The fake article, purportedly written by a senior Telegraph news reporter and styled to resemble the paper, said that camps in the Falkland Islands would be used to hold prisoners from the ongoing riots because the UK prison system was already at capacity.

The Telegraph said on Thursday it had never published the story in question. A Telegraph Media Group spokesman said in a statement: “This is a fabricated headline for a story that doesn't exist. We have notified the relevant platforms and asked them to remove the story.”

In a post on X, the paper said: “We are aware of an image circulating on X purporting to be a Telegraph article about 'emergency detention centres'. The Telegraph has never published such an article.”

Musk has not apologized for sharing the fake report, but has continued to share material criticizing the UK government and law enforcement response to the riots.

The Guardian contacted X for comment but received an automated response saying: “We're busy at the moment, please check back later.”

On Thursday, Musk shared a Sky News interview in which Stephen Parkinson, the director of public prosecutions for England and Wales, said officers were scouring social media for content that incited racial hatred. “This is something that is really happening,” Musk said. In another post about the same clip, Musk called Parkinson a “woke Stasi.”


Musk has been embroiled in a spat with Prime Minister Keir Starmer and British police authorities after saying a “civil war is inevitable” in response to anti-immigration protests in England and Northern Ireland and claiming the police response had been “one-sided”.

A spokesman for the prime minister said this week there was “no justification” for the comments. In response, Musk has repeatedly attacked Starmer on his platform, branding him “two-tier Keir”.

Musk, the billionaire co-founder of Tesla, SpaceX and the payments platform X.com that later became PayPal, bought Twitter for $44 billion in 2022. Last year, he renamed it X. The direction Twitter has taken under his leadership has sparked a series of controversies, including accusations that it has not taken harmful content seriously enough.

The Royal National Orthopaedic Hospitals NHS Trust said in a post on Thursday that after 13 years running X's account it was closing it because the platform “no longer aligns with the trust's values”. The trust directed followers to Facebook, Instagram and LinkedIn.

This week, Musk announced he was suing a group of advertisers, accusing major corporations of illegally agreeing not to advertise on X.

Source: www.theguardian.com

Start-up founders allege that investors undermined their company, IRL, with false accusations of fake users

IRL founders Abraham Shafi and Genrik Khachatryan are suing investors for intentionally sabotaging the company.

At its peak, IRL was poised to become an alternative way to host events for Gen Z, who were using Facebook less and less.

Shafi was suspended as CEO by IRL in April pending an investigation into allegations of misconduct. In June, IRL’s board announced that the investigation had found 95% of the company’s 20 million users were fake. The founders now claim investors seized on the 95% figure “as an excuse to shut down the company and return capital to shareholders.”

The lawsuit specifically names Goodwater Capital’s Chi-Hua Chien, SoftBank’s Selina Dale, and Floodgate’s Mike Maples. The social calendar app raised more than $200 million from these investors at a $1.17 billion valuation; notably, SoftBank led IRL’s $170 million Series C round in 2021. Shafi and Khachatryan accuse the investors of wanting to shut down the company in order to recover a large portion of its $40 million in cash reserves.

Although IRL is defunct, the remaining board members deny the founders’ claims.

“Immediately after Shafi’s suspension, IRL experienced a significant drop in daily active users virtually overnight. This was not due to an outage,” reads a statement from IRL and its board, which spokesperson Elliott Sloan shared with TechCrunch. The statement said the same report that found 95% of users to be fake also cited suspicious user behavior, including “the existence of private groups with millions of users with duplicate names” and irregular sign-ups from Hotmail, Yahoo, and burner email addresses. It said forensic reports also showed extensive use of IP addresses from proxy servers, with individual accounts cycling through IP addresses and device types, indicating that the user activity was not genuine.

“Based on this, and on evidence of Shafi’s misappropriation of company funds and repeated obstruction of the investigation, the board, after several months of consideration, has concluded that the company’s future prospects are unsustainable,” the statement concluded.

As of December of last year, the SEC had an ongoing investigation into whether IRL misled investors and violated securities laws.

IRL is just one once-hot start-up to come under fire for potentially manipulated metrics. Investors raised concerns that Bolt, the one-click checkout giant, and its co-founder Ryan Breslow misrepresented the company’s financials while seeking to raise a $355 million Series E round, and the company faced an SEC investigation. But 15 months later, the SEC indicated the company would likely not be prosecuted. And earlier this year, the SEC charged Charlie Javice, founder of the student financial aid start-up Frank, with defrauding JPMorgan, which acquired the company for $175 million in 2021. JPMorgan has filed its own lawsuit accusing Javice of fabricating millions of customers to persuade the bank to buy her company.

The IRL lawsuit, as obtained by TechCrunch, is available on Scribd.

Source: techcrunch.com