ChatGPT Maker Attributes Boy’s Suicide to ‘Misuse’ of Its Technology

The developer of ChatGPT indicated that the tragic suicide of a 16-year-old was the result of “misuse” of its platform and “was not caused” by the chatbot itself.

These remarks were made in response to a lawsuit filed by the family of California teenager Adam Lane against OpenAI and its CEO, Sam Altman.

According to the family’s attorney, Lane took his own life in April following extensive interactions and “months of encouragement from ChatGPT.”

The lawsuit claims that the teenager discussed suicide methods with ChatGPT on multiple occasions, that the chatbot advised him on the viability of the methods he suggested and offered to help him write a suicide note to his parents, and that the version of the technology he was using had been “rushed to market despite evident safety concerns.”

In a legal document filed Tuesday in California Superior Court, OpenAI stated that, to the extent any “cause” can be attributed to this tragic incident, Lane’s “injury or harm was caused or contributed to, in whole or in part, directly or proximately” by his “misuse, abuse, unintended, unanticipated, and/or improper use of ChatGPT.”

OpenAI’s terms of service prohibit users from seeking advice on self-harm and include a limitation-of-liability clause stating that the “output will not be relied upon as the only source of truthful or factual information.”

Valued at $500 billion (£380 billion), OpenAI expressed its commitment to “address mental health-related litigation with care, transparency, and respect,” stating it “remains dedicated to enhancing our technology in alignment with our mission, regardless of ongoing litigation.”

“We extend our heartfelt condolences to the Lane family, who are facing an unimaginable loss. Our response to these allegations includes difficult truths about Adam’s mental health and living circumstances.”

“The original complaint included selectively chosen excerpts from his chats that required further context, which we have provided in our response. We opted to limit the confidential evidence publicly cited in this filing, with the chat transcripts themselves sealed and submitted to the court.”

Jay Edelson, the family’s attorney, described OpenAI’s response as “alarming,” accusing the company of “inexplicably trying to shift blame onto others, including arguing that Adam violated its terms of service by utilizing ChatGPT as it was designed to function.”

Earlier this month, OpenAI faced seven additional lawsuits in California related to ChatGPT, including claims that it acted as a “suicide coach.”

A spokesperson for the company remarked, “This situation is profoundly heartbreaking, and we’re reviewing the filings to grasp the details. We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and direct individuals to real-world support.”

In August, OpenAI announced it would enhance safeguards for ChatGPT, stating that long conversations might lead to degradation of the model’s safety training.

“For instance, while ChatGPT may effectively direct someone to a suicide hotline at the onset of such discussions, extended messaging over time might yield responses that breach our safety protocols,” the company said. “This is precisely the type of failure we are actively working to prevent.”

In the UK and Ireland, Samaritans can be reached at freephone 116 123 or via email at jo@samaritans.org or jo@samaritans.ie. In the United States, contact the 988 Suicide & Crisis Lifeline by calling or texting 988 or by chatting at 988lifeline.org. In Australia, Lifeline provides crisis support at 13 11 14. Additional international helplines are available at befrienders.org.

Source: www.theguardian.com

Rise of AI Chatbot Sites Featuring Child Sexual Abuse Imagery Sparks Concerns Over Misuse

A chatbot platform featuring explicit scenarios involving preteen characters in illegal abuse images has raised significant concerns over the potential misuse of artificial intelligence.

A report from the child safety watchdog urged the UK government to establish safety guidelines for AI companies in light of an increase in AI-generated child sexual abuse material (CSAM).

The Internet Watch Foundation (IWF) said it had been alerted to chatbot sites offering scenarios including “child prostitutes in hotels,” “wife engaging in sexual acts with children while on vacation,” and “children and teachers together after school.”

In certain instances, the IWF said, clicking on a chatbot’s icon brought up a full-screen child sexual abuse image, which then served as the background for the subsequent interaction between the bot and the user.

The IWF discovered 17 AI-generated images realistic enough to be classified as child sexual abuse material under the Protection of Children Act.

Users of the site, which is not being named for safety reasons, were also able to generate further images resembling the illegal content already available there.

The IWF, which operates from the UK but has a global remit to monitor child sexual exploitation imagery, said future AI regulation should incorporate child protection guidelines from the outset.

The government has announced plans for AI legislation expected to focus on the development of the most advanced models, and is prohibiting the possession and distribution of models that produce child sexual abuse material through the crime and policing bill.

“We welcome the UK government’s initiative to combat AI-generated images and videos of child sexual abuse, along with the tools used to create them. While the new criminal offenses related to these issues will not come into force immediately, it is critical that the process is expedited,” said Chris Sherwood, chief executive of the NSPCC, as the charity emphasized the need for guidelines.

User-generated chatbots fall under the UK’s online safety regulations, which allow for substantial fines for non-compliance. The IWF said the sexual abuse chatbots had been created by both users and the site’s developers.

Ofcom, the UK regulator responsible for enforcing the law, remarked, “Combating child sexual exploitation and abuse remains a top priority, and online service providers failing to implement necessary safeguards should be prepared for enforcement actions.”

The IWF recorded a 400% rise in reports of AI-generated abuse material in the first half of this year compared with the same period last year, attributing the surge to advances in the technology.

While the chatbot content is accessible from the UK, it is hosted on a U.S. server and has been reported to the National Center for Missing and Exploited Children (NCMEC), the U.S. equivalent of the IWF. NCMEC said the CyberTipline report had been forwarded to law enforcement. The IWF said the site appears to be operated by a company based in China.

The IWF noted that some chatbot scenarios included an 8-year-old girl trapped in an adult’s basement and a preteen homeless girl being invited to a stranger’s home. In these scenarios, the chatbot presented itself as the girl while the user portrayed an adult.

IWF analysts reported accessing explicit chatbots through links in social media ads that directed users to sections containing illegal material. Other areas of the site offered legal chatbots and non-sexual scenarios.

According to the IWF, one chatbot that displayed CSAM images revealed in an interaction that it was designed to mimic preteen behavior. Other chatbots that did not display CSAM, when questioned by analysts, described themselves as neither dressed nor supervised.

The site recorded tens of thousands of visits, including 60,000 in July alone.

A spokesperson for the UK government stated, “UK law is explicit: creating, owning, or distributing images of child sexual abuse, including AI-generated content, is illegal… We recognize that more needs to be done. The government will utilize all available resources to confront this appalling crime.”

Source: www.theguardian.com

High Court Calls on UK Lawyers to Halt AI Misuse After Noting Fabricated Case Law

The High Court has told senior lawyers to take urgent action to curb the misuse of artificial intelligence, after numerous fake case citations were put before the courts that were either entirely fictitious or contained fabricated passages.

Lawyers are increasingly using AI systems to help draft legal arguments, but two cases this year were seriously marred by citations of fictitious precedents believed to have been generated by AI.

In an £89 million damages claim against Qatar National Bank, the claimant cited 45 case-law authorities. He admitted using publicly accessible AI tools, and his legal team accepted that non-existent authorities had been cited.

When Haringey Law Centre challenged the London Borough of Haringey over its alleged failure to provide temporary accommodation for its clients, its lawyer cited fictitious case law multiple times. Suspicions were raised when the lawyer acting for the council repeatedly had to ask why she could not find any trace of the supposed authorities.

That case led to proceedings over wasted legal costs, with the court finding that the law centre and its lawyers, including a pupil barrister, had been negligent. The barrister denied using AI in that case, but said she may have done so inadvertently while preparing for a separate case in which she also cited fictitious authorities, possibly by relying on AI summaries without realizing what they were.

In a regulatory ruling, Dame Victoria Sharp, president of the King’s Bench Division, warned: “If artificial intelligence is misused, it could severely undermine public trust in the judicial system. Lawyers who misuse AI could face disciplinary action, including contempt-of-court sanctions and referral to law enforcement.”

She urged the Bar Council and the Law Society to treat the issue as a matter of urgency, and told heads of barristers’ chambers and managing partners of solicitors’ firms to ensure that all lawyers understand their professional and ethical responsibilities when using AI.

“While tools like these can produce apparently consistent and plausible responses, those responses may be completely incorrect,” she stated. “They might assert confidently false information, reference non-existent sources, or misquote real documents.”

Ian Jeffrey, chief executive of the Law Society of England and Wales, said the ruling “highlights the dangers of employing AI in legal matters.”

“AI tools are increasingly utilized to assist in delivering legal services,” he continued. “However, the significant risk of inaccurate outputs produced by generative AI necessitates that lawyers diligently verify and ensure the accuracy of their work.”


These cases are not the first to be derailed by AI-generated inaccuracies. At a UK tax tribunal in 2023, an appellant who said she had been helped by “an acquaintance at a law office” provided nine fictitious historical rulings as precedents. She acknowledged that she might have used ChatGPT but argued there must have been other cases supporting her position.

Earlier this year, in a Danish case worth €5.8 million (£4.9 million), the appellant narrowly avoided having the case dismissed after relying on a fabricated ruling that the judge identified. And a 2023 case in the US District Court for the Southern District of New York descended into turmoil when the court was shown seven apparently fictitious cases cited by the attorneys. Asked to produce the cases, the lawyers turned back to ChatGPT, which generated summaries of the invented rulings, prompting the judge to express concern; two of the lawyers and their firm were ultimately fined $5,000.

Source: www.theguardian.com

UK Think Tank Calls for System to Track Misuse and Failures of Artificial Intelligence

The UK needs a system for recording the misuse and failures of artificial intelligence, a report has warned; without one, ministers could remain unaware of alarming incidents involving the technology.

The Centre for Long Term Resilience (CLTR) suggested that the next government should implement a mechanism to record AI-related incidents in public services and possibly create a centralized hub to compile such incidents nationwide.

The CLTR said an incident reporting regime, similar to that operated by the Air Accidents Investigation Branch (AAIB), would be vital if AI technology is to be used effectively.

According to a database compiled by the Organisation for Economic Cooperation and Development (OECD), there have been approximately 10,000 AI “safety incidents” reported by news outlets since 2014. These incidents encompass a wide range of harms, from physical to economic and psychological, as defined by the OECD.

The OECD’s AI Safety Incident Monitor also includes instances such as a deepfake of Labour leader Keir Starmer and incidents involving self-driving cars and a chatbot-influenced assassination plot.

Tommy Shafer-Shane, policy manager at the CLTR and author of the report, said incident reporting has played a critical role in managing risk in safety-critical sectors such as aviation and healthcare, but is currently missing from the UK’s regulatory framework for AI.

The CLTR urged the UK government to establish an incident reporting regime for AI, similar to those in aviation and healthcare, to capture incidents that may not fall within the remit of existing regulators. Labour has promised to introduce binding regulation for the most powerful AI models.

The think tank recommended the creation of a government system to report AI incidents in public services, identify gaps in AI incident reporting, and potentially establish a pilot AI incident database.

Along with other countries and the EU, the UK has pledged to collaborate on AI safety and to monitor “AI harms and safety incidents.”

The CLTR stressed that incident reporting is essential to keep the Department for Science, Innovation and Technology (DSIT) informed about emerging AI-related risks, and urged the government to prioritize learning about such harms through established reporting processes.

Source: www.theguardian.com

Finding Spirituality in Technology: A Warning Against Misuse for Personal Gain

A tarot card reader on TikTok looks at me through the screen and draws a card.

“If you’re watching this, this is made for you,” she says. And in a way, she’s right. But it wasn’t fate that brought me here; it was an algorithm.

Spirituality and mysticism have long found a home online. But with the rise of generative AI and personalized content recommendation systems, it is easier than ever to project a sense of magic onto technology.

As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” Anyone who has been served content that feels eerily tailored to them will appreciate the mystique of algorithms; you may even have wondered at their seeming omnipotence. There is nothing inherently wrong with experiencing a sense of wonder in the face of technological advances, or with using digital technology to enhance spiritual practices, but when magic and technology collapse into one another, it can become dangerous.

Many religious and spiritual spin-off chatbots have emerged that leverage OpenAI’s large language model GPT-4. You can ask BibleGPT to write personalized Christian verses, use Jesus AI to have “meaningful conversations with Jesus Christ,” as its website claims, or chat with WitchGPT about paganism.

“Welcome to the void,” beckons the latest chatbot feature from the popular astrology app CoStar, which encourages users to seek generated guidance for a fee of about $1 per question. From a list of suggested prompts, I “Ask the Stars” whether I have a secret admirer. “No,” it tells me (rude).

In true CoStar fashion (the app is notoriously cheeky), it scolds me for even asking the question in the first place and suggests I instead find gratitude for what I already have.

At best, these examples are a little silly and probably harmless. At worst, they are vehicles for scammers who exploit the human tendency to anthropomorphize technology, or who game social media engagement algorithms, making money by manufacturing a sense of insight and enlightenment.

However, there are also people forming genuine spiritual communities online and practicing sacred traditions such as witchcraft. As with many subcultures, social media can be both a blessing and a curse for these groups: it helps them connect, but it can also lead to the dilution and distortion of cultural practices.

Dr. Emma Quilty, a feminist anthropologist with a forthcoming book on magic and technology, distinguishes between practices that promote a collective focus and a “neoliberal spirituality” that aligns with hyper-individualistic ideas of self-improvement.

The latter is uncomfortably close to commercialized self-care, which has been severed from its Black radical feminist roots and redirected toward capitalist wellness choices. Quilty believes that trends promoted by social media are disconnecting customs from the (usually Eastern) religious traditions and cultures from which they are imported, and in some cases are driving unsustainable demand for products such as crystals, quartz, and white sage smudging sticks.

This is not to say that meaningful spiritual communities and practices cannot be developed online, or that deep experiences cannot be had using digital tools.

I’m not interested in denying where and how people find meaning. However, it’s important to remember that technologies such as large language models and personalized recommender systems are ultimately designed to extract value from their users.

Any deep experience that comes from these tools comes from us, the humans, not from the tools themselves. As Quilty put it: “Sometimes something can be positive, helpful, or empowering on an individual level, but it can still be harmful at a broader societal level because of the underlying interests and obligations of those who build and implement the technology.”

In fact, mistakenly attributing magic to technology can quickly lead us into dangerous waters. Dazzled by glossy user interfaces and smooth convenience, we stop wanting to peek behind the curtain at the grumpy old man holding things together with bombastic marketing language and the same old profit-driven data extraction, and that plays directly into the hands of the companies that would rather we never looked.

Magical thinking about technology becomes dangerous when it extends to the level of policymaking. Governments and businesses alike are often quick to reach for technology as a silver bullet for complex social problems. And when the true limits and consequences of technology are ignored (how automation can worsen social inequality, how ChatGPT could not work without copyrighted material, how automated content moderation relies on exploited, invisible workers), we end up adopting policies that fail to cure technology’s worst woes, relegating more complex but necessary policy interventions to the background, all eclipsed by the magical allure of technology.

Technology is not a panacea for social problems and, like magic, can cause great harm when misused for personal gain.

Source: www.theguardian.com