Transformative Choice: Jared Kaplan on Permitting Autonomous AI Learning




By the year 2030, humanity will face a critical decision regarding the “ultimate risk” of allowing artificial intelligence systems to self-train and enhance their capabilities, according to one of the foremost AI experts.

Jared Kaplan, chief scientist and co-founder of the $180bn (£135bn) US startup Anthropic, emphasized that crucial choices are being made concerning the level of autonomy granted to these evolving systems.

This could spark a beneficial “intelligence explosion” or mark humanity’s loss of control.

In a conversation addressing the intense competition to achieve artificial general intelligence (AGI), also referred to as superintelligence, Kaplan urged global governments and society to confront what he termed the “biggest decision.”

Anthropic belongs to a group of leading AI firms striving for supremacy in the field, alongside OpenAI, Google DeepMind, xAI, Meta, and prominent Chinese competitors led by DeepSeek. Its AI assistant, Claude, has gained significant traction among business clients.




Kaplan predicted that a decision to “relinquish” control to AI could materialize between 2027 and 2030. Photo: Bloomberg/Getty Images

Kaplan stated that aligning swiftly advancing technology with human interests has proven successful to date, yet permitting technology to recursively enhance itself poses “the ultimate risk, as it would be akin to letting go of AI.” He mentioned that a decision regarding this could emerge between 2027 and 2030.





Kaplan transitioned from a theoretical physicist to an AI billionaire in just seven years. During an extensive interview, he also conveyed:

  • AI systems are expected to handle “most white-collar jobs” in the coming two to three years.

  • His 6-year-old son is unlikely to outperform AI in academic tasks, such as writing essays or completing math exams.

  • It is natural to fear a scenario where AI can self-improve, leading humans to lose control.

  • The competitive landscape around AGI feels tremendously overwhelming.

  • In a favorable outcome, AI could enhance biomedical research, health, cybersecurity, and productivity, grant additional leisure time, and promote human well-being.

Kaplan met with the Guardian at Anthropic’s office in San Francisco, where the interior design, filled with knitted rugs and lively jazz music, contrasts with the existential concerns surrounding the technology being cultivated.




San Francisco has emerged as a focal point for AI startups and investment. Photo: Washington Post/Getty Images

Kaplan, a physicist educated at Stanford and Harvard, joined OpenAI in 2019 following research positions at Johns Hopkins University and in Cologne, Germany, and co-founded Anthropic in 2021.

He isn’t alone at Anthropic in expressing concerns. One of his co-founders, Jack Clark, remarked in October that he considers himself both an optimist and a “deeply worried” individual, describing AI as “not a simplistic and predictable mechanism, but a genuine and enigmatic entity.”

Kaplan conveyed his strong belief that AI systems can be aligned with human interests up to roughly the level of human cognition, although he harbors concerns about what lies beyond that boundary.

He explained: “If you envision creating this process using an AI smarter or comparable in intelligence to humans, it becomes about creating smarter AI. We intend to leverage AI to enhance its own capability. This suggests a process that may seem intimidating. The outcome is uncertain.”

The advantages of integrating AI into the economy are being scrutinized. Outside Anthropic’s headquarters, a sign from another tech corporation pointedly posed a question about returns on investment: “All AI and no ROI?” A September Harvard Business Review study indicated that AI “workslop”, subpar AI-generated work requiring human corrections, was detrimental to productivity.

The most overt benefit appears to be the application of AI to computer programming tasks. In September, Anthropic unveiled its most advanced model, Claude Sonnet 4.5, a coding-focused system that supports building AI agents and autonomous computer use.




Attackers exploited the Claude Code tool to target various organizations. Photo: Anthropic

Kaplan commented that the model can handle complex, multi-step programming tasks for 30 continuous hours and has, in specific instances, doubled the speed of Anthropic’s own programmers.

However, Anthropic revealed in November that it suspected a state-supported Chinese group of misusing the Claude Code tool, which not only assisted the humans orchestrating cyberattacks but also executed approximately 30 attacks independently, some of which were successful. Kaplan articulated that permitting an AI to train another AI is “a decision of significant consequence.”

“We regard this as possibly the most substantial decision or the most alarming scenario… Once no human is involved, certainty diminishes. You might begin the process thinking, ‘Everything’s proceeding as intended, it’s safe,’ but the reality is it’s an evolving process. Where is it headed?”

He identified two risks associated with recursive self-improvement, as the process is often called, if it is allowed to operate unchecked.

“One concern is the potential loss of control. Is the AI aware of its actions? The fundamental inquiries are: Will AI be a boon for humanity? Can it be beneficial? Will it remain harmless? Will it understand us? Will it enable individuals to maintain control over their lives and surroundings?”





The second risk pertains to the security threat posed by self-trained AI that could surpass human capabilities in scientific inquiry and technological advancement.

“It appears exceedingly unsafe for this technology to be misappropriated,” he stated. “You can envision someone wanting this AI to serve their own interests. Preventing power grabs and the misuse of technology is essential.”

Independent studies of cutting-edge AI models, including ChatGPT, have demonstrated that the length of tasks they can execute is expanding, doubling roughly every seven months.


Kaplan expressed his worry that the rapid pace of advancement might not allow humanity sufficient time to acclimatize to the technology before it evolves significantly further.

“This is a source of concern… individuals like me could be mistaken in our beliefs and it might all plateau,” he remarked. “The best AI might be the one we possess presently. However, we genuinely do not believe that is the case. We anticipate ongoing improvements in AI.”

He added, “The speed of change is so swift that people often lack adequate time to process it or contemplate their responses.”

In its pursuit of AGI, Anthropic is competing with OpenAI, Google DeepMind, and xAI to develop more sophisticated AI systems. Kaplan remarked that the atmosphere in the Bay Area is “certainly intense with respect to the stakes and competitiveness in AI.”

“Our perspective is that the trends in investments, returns, AI capabilities, task complexity, and so forth are all following this exponential pattern. [They signify] AI’s growing capabilities,” he noted.

The accelerated rate of progress increases the risk that one of the competitors makes an error and falls behind. “The stakes are considerable in terms of remaining at the forefront and not losing ground on [the curve of] exponential growth. You could quickly find yourself significantly behind, particularly regarding resources.”

By 2030, an anticipated $6.7 trillion will be needed for global data centers to meet increasing demand. Investors are eager to back the companies closest to the forefront.




Significant accomplishments have been made in utilizing AI for code generation. Photo: Chen Xin/Getty Images

At the same time, Anthropic advocates for AI regulation. The company’s mission statement emphasizes “the development of more secure systems.”

“We certainly aim to avoid a situation akin to Sputnik where governments abruptly realize, ‘Wow, AI is crucial’… We strive to ensure policymakers are as knowledgeable as possible during this evolution, so they can make informed decisions.”

In October, Anthropic’s stance led to a confrontation with the Trump administration. David Sacks, an AI adviser to the president, accused Anthropic of “fear-mongering” and of promoting state-level regulations that would benefit the company while harming startups.

After Sacks suggested the company was positioning itself as an “opponent” of the Trump administration, Kaplan, alongside Dario Amodei, Anthropic’s CEO, countered that the company had publicly supported Trump’s AI initiatives and was collaborating with Republicans to maintain America’s dominance in AI.

Source: www.theguardian.com

Labor Rules Out Permitting Tech Giants to Exploit Copyrighted Content for AI Training

In response to significant backlash from writers, artists, and media organizations, the Albanese government has definitively stated that tech companies will not be allowed to freely access creative content for training artificial intelligence models.

Attorney General Michelle Rowland is expected to announce the decision on Monday, effectively rejecting a contentious proposal from the Productivity Commission, which had support from technology companies.

“Australian creatives are not just top-tier; they are essential to the fabric of Australian culture, and we need to ensure they have robust legal protections,” Rowland said.

The commission faced outrage in August when its interim report on data usage in the digital economy suggested exemptions from copyright law, effectively granting tech companies free access to content for AI training.


Recently, Scott Farquhar, co-founder of Atlassian and chair of the Tech Council of Australia, told the National Press Club that revising existing restrictions could “unlock billions in foreign investment for Australia”.

The proposal triggered a strong backlash from creators, including Indigenous rapper Adam Briggs, who testified in September that allowing companies to utilize local content without fair remuneration would make it “hard to put the genie back in the bottle.”

Australian author Anna Funder argued that large-scale AI systems rely on “massive unauthorized appropriation of every available book, artwork, and performance that can be digitized.”

The same inquiry heard that the Productivity Commission did not engage with the creative community or assess the potential effects of its recommendations before releasing its report, leading Greens senator Sarah Hanson-Young to say the agency had “miscalculated the importance of the creative industries.”

The Australian Council of Trade Unions also cautioned against the proposal, asserting it would lead to “widespread theft” of creative works.

Senior government ministers had previously been dismissive of the proposal, but while a so-called “text and data mining” exemption had remained under consideration, Rowland’s statement marks the first time it has been specifically ruled out.

“While artificial intelligence offers vast opportunities for Australia and its economy, it’s crucial that Australian creators also reap the benefits,” she asserted.

The Attorney General plans to convene the government’s Copyright and AI Reference Group on Monday and Tuesday to explore alternative measures to address the challenges posed by advancing technology.

This includes discussions on whether a new paid licensing framework under copyright law should replace the current voluntary system.


The Australian Recording Industry Association (ARIA), one of the organizations advocating against the exemption, praised the announcement as “a substantial step forward.”

“This represents a win for creativity and Australian culture, including Indigenous culture, but more importantly, it’s a victory for common sense. The current copyright licensing system is effective,” stated ARIA chief executive Annabelle Herd.


“Intellectual property law is fundamental to the creative economy, digital economy, and tech industry. It is the foundation that technology companies rely on to protect and monetize their products, driving innovation.”

Herd emphasized that further measures are necessary to safeguard artists, including ensuring AI adheres to licensing rules.

“Artists have the right to determine how their work is utilized and to share in the value that it generates,” she stated.

“Safeguarding those frameworks is how we secure Australia’s creative sovereignty and maintain our cultural vitality.”

Media companies also expressed their support for the decision.

A spokesperson for Guardian Australia stated that this represents “a significant step towards affirming that Australia’s copyrighted content warrants protection and compensation.”

“Australian media, publishers, and creators all voiced strong opposition to the TDM (text and data mining) exception, asserting it would permit large-scale theft of the work of Australian journalists and creators, undermining Australia’s national interests,” the spokesperson added.

They also indicated that the Guardian seeks to establish a fair licensing system that supports genuine value exchange.

News Corp Australasia executive chairman Michael Miller remarked that the government made the “correct decision” to exclude the exemption.

“By protecting creators’ rights to control access, usage terms, and remuneration, we reinforce the efficacy of our nation’s copyright laws, ensuring favorable market outcomes,” he affirmed.

Source: www.theguardian.com