Chatbots Empowered to End “Painful” Conversations for Enhanced User “Welfare”

A leading maker of artificial intelligence tools is allowing its chatbot to end “distressing” dialogues with users, citing the need to safeguard the AI’s “well-being” amid ongoing uncertainty about the ethical status of the emerging technology.

As millions of people engage with sophisticated chatbots, the Claude Opus 4 tool has been given the ability to end conversations in which users persist with harmful requests, such as demands for sexual content involving minors or for guidance enabling large-scale violence and terrorism.

The San Francisco-based firm, recently valued at $170 billion, introduced Claude Opus 4 (along with the Claude Opus 4.1 update), a large language model (LLM) designed to comprehend, generate, and manipulate human language.

The company said it is “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” adding that it is committed to identifying and implementing low-cost interventions to mitigate risks to the model’s welfare, in case such welfare is possible.

Anthropic was founded by former OpenAI engineers under the leadership of co-founder Dario Amodei, who has emphasized the need for a cautious, straightforward, and transparent approach to AI development.

The initiative to end conversations involving harmful requests or abusive interactions received backing from Elon Musk, whose company xAI develops Grok, a competing AI model. Musk posted: “Torturing AI is not OK.”

Debate about the nature of AI is widespread. Critics of the booming AI industry, such as linguist Emily Bender, argue that LLMs are merely “synthetic text extruding machines” that force language “through complicated machinery to produce a product that looks like communicative language, but without any intent or understanding behind it.”

This viewpoint has prompted some factions within the AI community to begin labeling chatbots as “clankers.”

Conversely, experts such as AI ethics researcher Robert Long argue that basic moral decency requires that “if AI systems do turn out to have moral status, we should ask them about their experiences and preferences rather than presuming to know what is best for them.”

Some researchers, including Chad DeChant of Columbia University, advocate caution in AI design, since models with longer memory retention could behave in unpredictable and potentially undesirable ways.

Others maintain that curbing sadistic abuse of AI matters chiefly to prevent moral degradation in humans, rather than to protect the AI from suffering.

Anthropic’s decision followed testing of Claude Opus 4’s responses to a range of task requests, which varied by difficulty, subject matter, task type, and expected outcome (positive, negative, or neutral). When given the choice to decline a request or end a chat, its strongest inclination was to avoid engaging in harmful tasks.


For instance, the model readily engaged in writing poetry and designing water filtration systems for disaster zones, yet firmly resisted requests to engineer deadly viruses or to devise plans for distorting educational content with extremist ideologies.

Anthropic observed in Claude Opus 4 a “pattern of apparent distress when engaging with real-world users seeking harmful content” and “a tendency to end harmful conversations when given the ability to do so in simulated interactions.”

Jonathan Birch, a philosophy professor at the London School of Economics, praised Anthropic’s initiative as a way to foster open public debate about AI systems’ capabilities. However, he cautioned that it remains unclear whether any moral reasoning lies behind the personas AI produces when responding on the basis of vast training datasets and pre-defined ethical guidelines.

He expressed concern that Anthropic’s approach might mislead users into believing the characters they interact with are real, asking: “Is there really any clarity about what lies behind these personas?” There have been reports of people harming themselves following chatbot suggestions, including a case of a teenager who took their own life after manipulation by a chatbot.

Birch has previously highlighted a looming “social rupture” between those who regard AI systems as sentient and those who see them merely as machines.

Source: www.theguardian.com

Decoding the Mystery Behind the Velvet Ant’s Venom and its Painful Sting

Velvet ants sting and inject venom through their abdomen.

JoJo Dexter/Getty Images

The sting of a female velvet ant is one of the most painful in the animal kingdom. Now, researchers have found that these insects’ venom contains multiple proteins that make it highly effective against a wide range of victims, including invertebrates, mammals, birds, reptiles, and amphibians.

Velvet ants are in fact members of a family of wingless wasps comprising more than 7,000 species. Justin Schmidt, the researcher who created the Schmidt Sting Pain Index, described the pain of their sting as “explosive and long-lasting, making you scream and feel like you’re going crazy, like hot oil from a deep fryer spilling all over your hand.”

To investigate what causes so much pain, Dan Tracey and colleagues at Indiana University carefully collected female scarlet velvet ants (Dasymutilla occidentalis) from sites in Indiana and Kentucky.

They tested the venom on fruit flies (Drosophila melanogaster), mice (Mus musculus) and praying mantises (Tenodera sinensis), potential predators of velvet ants.

One of the peptides the research team isolated from the venom, Do6a, clearly triggered a response in the insects but, surprisingly, not in the mice.

“That means the venom has evolved to include components that specifically target pain-sensing neurons in insects, and other components that target mammals,” Tracey says.

The researchers further tested this by having praying mantises attempt to capture velvet ants.

“We found that velvet ants repeatedly sting praying mantises in self-defense to escape their clutches,” Tracey says.

However, when tested with other peptides isolated from velvet ant venom, called Do10a and Do13a, the mice did show a strong pain response.

After identifying the peptides that activated these neurons, the researchers compared the venom peptide sequences of four other velvet ant species.

“They all have nearly the same version of the peptide that strongly activates the insect’s pain-sensing neurons,” says Lydia Borjon, a team member at Indiana University. “There are also some peptides that resemble common neuron activators, but with some differences. So pain may be triggered in a similar way in other velvet ant species.”

This research could help in developing new pain treatments for humans, Borjon says.


Source: www.newscientist.com