AI Firms “Unprepared” for Risks of Developing Human-Level Systems, Report Warns

A prominent AI safety group has warned that artificial intelligence firms are “fundamentally unprepared” for the consequences of developing systems with human-level cognitive abilities.

The Future of Life Institute (FLI) said that none of the firms assessed in its AI Safety Index scored higher than a D for their existential safety plans.

The five reviewers of the FLI report focused on the companies’ pursuit of artificial general intelligence (AGI), yet found that none of those examined presented “a coherent, actionable plan” to ensure such systems remain safe and manageable.

AGI denotes a theoretical stage of AI development at which a system can perform cognitive tasks at a level akin to humans. OpenAI, the creator of ChatGPT, has said its mission is to ensure AGI will “benefit all of humanity.” Safety advocates caution that an AGI could pose an existential risk by eluding human oversight and triggering a catastrophic event.

The FLI report indicated: “The industry is fundamentally unprepared for its own aspirations. While companies claim they will achieve AGI within a decade, their existential safety plans score no higher than a D.”

The index assesses seven AI developers—Google DeepMind, OpenAI, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek—across six categories, including “current harms” and “existential safety.”

Anthropic received the top overall safety grade of C+, followed by OpenAI with a C-, and Google DeepMind with a D.

FLI is a US-based nonprofit that advocates for the safer development of advanced technologies; it says its work is funded by “unconditional” donations, including from the crypto entrepreneur Vitalik Buterin.

SaferAI, another safety-focused nonprofit, also released a report on Thursday. It warned that advanced AI companies exhibit “weak to very weak risk management practices” and deemed their current strategies “unacceptable.”

FLI’s safety evaluations were conducted by a panel of AI experts, including the British computer scientist Stuart Russell and Sneha Revanur, founder of the AI regulation campaign group Encode Justice.

Max Tegmark, a co-founder of FLI and a professor at MIT, remarked that it was “quite severe” that leading AI firms continue to aim to build ultra-intelligent systems without publishing plans for dealing with the potential consequences.

Tegmark said the technology is advancing rapidly, countering earlier assumptions that experts would have decades to address the challenges of AGI. “Now, companies themselves assert it’s just a few years away,” he stated.

He pointed out that new models have consistently outperformed previous generations. Since the international AI summit in Paris in February, releases such as xAI’s Grok 4, Google’s Gemini 2.5, and Google’s video generator Veo 3 have demonstrated significant improvements over their predecessors.

A spokesperson for Google DeepMind asserted that the report overlooks “the entirety of Google DeepMind’s AI safety initiatives,” adding, “Our comprehensive approach to safety and security far exceeds what’s captured in the report.”

OpenAI, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek have also been contacted for comment.

Source: www.theguardian.com
