A significant wave of cyber threats is sweeping across the Internet, and it shows no signs of slowing down. According to the World Economic Forum, global cyberattacks surged by 58% over two years and are projected to reach alarming heights by 2025.
Much of this escalation is attributed to AI, with 89% of attacks in 2025 alone leveraging artificial intelligence.
While phishing attacks—where criminals disguise themselves in emails, calls, or texts to extract sensitive information—are predominantly responsible for the rise, a fundamental shift is underway. The announcement of the Claude Mythos Preview by Anthropic reverberated through the tech space, indicating significant advancements in AI capabilities.
This revolutionary model has raised concerns, as it can identify vulnerabilities in software that even seasoned analysts may overlook. In response, Anthropic launched Project Glasswing, uniting more than 40 leading software companies to use the Claude Mythos Preview to find and fix these flaws before malicious actors can exploit similar AI capabilities.
Reportedly, the model has already uncovered thousands of critical vulnerabilities across major operating systems and web browsers. Anthropic warned that a future in which such capabilities proliferate across AI models is "not too distant", posing severe risks to economic stability, public safety, and national security.
In essence, the Mythos Preview and similar models reveal that many widely trusted systems on which the Internet is built harbor longstanding vulnerabilities that AI can exploit faster than any hacker.
The pressing question remains: Can we address these security flaws and fortify the Internet in time?
The Open Source Gap
Irrespective of your stance on the tech giants leading the AI charge, one encouraging note is that the most advanced tools in safeguarding the Internet are currently in the hands of “good people.” However, this situation may not hold indefinitely.
The industry's most advanced AI systems, known as "frontier models," include closely guarded systems like the Mythos Preview.
Nevertheless, a new category known as "open source models" is on the rise, offering greater transparency and more room for innovation, but also new risks. Because these models can run on independent servers, malicious actors could modify them for illicit purposes.
"A few years ago, it wasn't so accessible, but now anyone can access tools to create AI agents," says Professor Peter Bentley of University College London in a discussion with BBC Science Focus.
“While it requires powerful hardware, criminals will undoubtedly invest to reap rewards. They’ll acquire robust systems and local models, making malicious deployment plausible. Pandora’s box is indeed open.”
Traditionally, open source models have been less advanced than state-of-the-art systems, but this gap is narrowing quickly. A recent report by the AI Security Research Institute indicates that the disparity is now about six months.
At this pace, it could be less than a year before capabilities like those of the Mythos Preview fall into malicious hands, further endangering the fundamental software of the web. The urgency is starting to sink in.
Filtering the Noise
Before succumbing to hysteria, it's worth acknowledging that the AI sector is prone to sensationalism.
Firms like Anthropic, OpenAI, and Google may exaggerate their models’ potential and dangers.
This hype is especially visible in the workplace. Despite years of claims that AI would revolutionize industries, many roles have seen minimal change.
"Significant investments have been made in AI," Bentley noted, "yet the landscape has shifted primarily toward efficiency rather than transformation."
While Anthropic hails the Mythos Preview as a “quantum leap,” others exhibit skepticism.
For instance, noted scientist and author Gary Marcus highlighted in a Substack post after the Project Glasswing announcement that the model is an incremental improvement rather than a groundbreaking leap forward.
An analysis from the AI cybersecurity firm Aisle indicated that a smaller, less expensive model could deliver performance nearly equivalent to that of Claude Mythos Preview.
Despite rising fears regarding malicious use of future AI models, the intent behind such misuse varies widely. “Criminals typically engage for financial gain,” Bentley explains. However, political adversaries might be more inclined to sow chaos than to profit.
“Once any nation acquires this technology, it’s likely they’ll employ it against others,” Bentley warns. “We are inadvertently weaponizing AI.”

The Race is On
Clearly, the race is underway to reinforce the Internet before this new generation of models gains traction.
But is simply patching every vulnerability the right strategy? And can we feasibly do so?
Using AI for code correction presents its own challenges. "AI-generated code is often convoluted and subpar," notes Bentley. "Attempting to patch existing code with AI can introduce further complications and new vulnerabilities."
Perhaps the solution lies in gaining an upper hand while defenders remain ahead of the curve.
A recent post from the UK's National Cyber Security Centre (NCSC) highlighted that defenders can "shape the battlefield," leveraging their control of the environment to their advantage while raising the risks for adversaries.
AI can also be used to monitor for malicious AI activity. For now, AI is clumsy at penetrating systems, generating noticeable signals that are easier to detect, as the NCSC post explains.
For Bentley, the situation resembles an arms race: “It’s akin to providing smart scientists with comprehensive blueprints for creating explosives and letting them loose,” he asserts.
The underlying question remains: which vulnerabilities will go up in smoke first?
Source: www.sciencefocus.com