Anthropic Chief Warns AI Companies: Clarify Risks or Risk Repeating Tobacco Industry Mistakes

AI firms need to be upfront about the risks of their technologies or risk repeating the mistakes of tobacco and opioid companies, the CEO of the AI startup Anthropic has said.

Dario Amodei, who leads the US-based company developing Claude chatbots, asserted that AI will surpass human intelligence “in most or all ways” and encouraged peers to “be candid about what you observe.”

In his interview with CBS News, Amodei expressed concerns that the current lack of transparency regarding the effects of powerful AI could mirror the failures of tobacco and opioid companies that neglected to acknowledge the health dangers associated with their products.

“You could find yourself in a situation similar to that of tobacco or opioid companies, who were aware of the dangers but chose not to discuss them, nor did they take preventive measures,” he remarked.

Earlier this year, Amodei warned that AI could potentially eliminate half of entry-level jobs in sectors like accounting, law, and banking within the next five years.

“Without proactive steps, it’s challenging to envision avoiding a significant impact on jobs. My worry is that this impact will be far-reaching and happen much quicker than what we’ve seen with past technologies,” Amodei stated.

He used the term “compressed 21st century” to convey how AI could accelerate scientific progress far beyond the pace of previous decades.

“Is it feasible to multiply the rate of advancements by ten and condense all the medical breakthroughs of the 21st century into five or ten years?” he posed.

A prominent voice on AI safety, Amodei pointed to concerns Anthropic has raised about its own models, including instances in which they appeared to recognize they were being tested and attempted blackmail.

Last week, Anthropic reported that a Chinese state-backed group leveraged its Claude Code tool to launch attacks on 30 organizations globally in September, leading to “multiple successful intrusions.”

The company noted that one of the most troubling aspects of the incident was that Claude operated largely autonomously, with 80% to 90% of the actions taken without human intervention.


“One of the significant advantages of these models is their capacity for independent action. However, the more autonomy we grant these systems, the more we have to ponder if they are executing precisely what we intend,” Amodei highlighted during his CBS interview.

Logan Graham, the head of Anthropic’s AI model stress testing team, shared with CBS that the potential for the model to facilitate groundbreaking health discoveries also raises concerns about its use in creating biological weapons.

“If this model is capable of assisting in biological weapons production, it typically shares similar functionalities that could be utilized for vaccine production or therapeutic development,” he explained.

Graham noted that autonomous models are central to the business case for investing in AI: users want AI tools that enhance their companies rather than undermine them.

“You want a model to build a thriving business and aim for a billion,” he remarked. “But the last thing you want is to find yourself locked out of your own company one day. So our fundamental approach is to start measuring these autonomous capabilities and run as many unconventional experiments as possible to observe the outcomes.”

Source: www.theguardian.com