Zuckerberg Introduces AI “Superintelligence” Amidst On-Stage Glitch with Smart Glasses

As we teeter on the brink of the AI apocalypse, a glimmer of hope remains: the technology doesn't always work as intended.

This was evident last week when Mark Zuckerberg attempted to show off his company's latest AI-powered smart glasses. "I'm not sure what to say, folks," he told the crowd after repeatedly failing to place a video call through the glasses.

The mishap followed an ambitious start to Meta Connect 2025, a developer conference held in Menlo Park, California. The keynote was set to unveil the Ray-Ban Meta Display, essentially a wearable iPhone for the modern age: ideal for anyone too lazy to dig a phone out of their pocket, and appealing to fans of both Buddy Holly and the Terminator. Yet despite the alluring design, the presentation was riddled with technical blunders, an ironic tribute, perhaps, to the latest round of meaningless iterations on our digital devices.

The event kicked off with quite the spectacle. Attendees watched as Zuckerberg made his way to the stage, bobbing to the beat and trading fist bumps along the way. The glasses' camera showed "Mark's POV" to the audience, while he no doubt received an avalanche of sincerely excited texts: "Let's Gooo" followed by rocket emojis, plus GIFs of two guys exclaiming "The audience is hyped" and "It's Time."

Zuckerberg eventually reached the stage, clad in his trademark baggy t-shirt and tousled hair. He spoke of the company's dedication to developing attractive eyewear, invoking the ironic notion that the technology "doesn't interrupt" human interactions, alongside the equally ironic claim that "personal superintelligence" is the cornerstone of our age. "AI must serve humanity, not just those in data centers automating our lives," he stated.

Things flowed smoothly enough until it was time to actually use the AI features. Zuckerberg attempted a video call with the chef Jack Mancuso, suggesting a dish inspired by "probably Korean-style, like, steak sauce."

“What should I do first?” he asked the Oracle.

"You've already combined the base ingredients," the AI mistakenly informed him, prompting an awkward silence.

“What do I do first?” Mankuso inquired again.

“You’ve already combined the base ingredients, so grate the pears and gently mix them into the base sauce,” the AI patiently reminded him.

“I think the Wi-Fi is acting up. Sorry. Back to you, Mark.” (Certainly the fault lay with the Wi-Fi, not the AI itself.)

To his credit, Zuckerberg maintained his composure. “It’s all good. What can you do? It’s all good,” he said. “The irony is that you can spend years crafting technology, only for the Wi-Fi of the day to trip you up.”

Failed AI demonstrations are nothing new; they have become something of a tradition. Last year at Google, a presenter tried to use the Gemini assistant to scan a poster for a Sabrina Carpenter concert and find her tour dates. The bot stayed silent when asked to open Gemini, take a photo, and "check my calendar for my availability when she's in San Francisco this year." It eventually worked on a third attempt, with a different device.

This year, Google demonstrated the translation features of its own smart glasses, which failed just 15 seconds into the presentation. To be fair, a blunder in a high-stakes tech demo doesn't mean a product won't work, as anyone who remembers a certain Tesla Cybertruck presentation can attest. That demo flopped when the designer threw metal balls at the truck's so-called "armor glass" and the windows cracked; the incident nonetheless presaged a bright future for a vehicle since dubbed "more deadly than the Ford Pinto."

At this point in the presentation, one might have expected Zuckerberg to play it safe. But when it came time to demonstrate the new wristband for the Ray-Ban Meta Display, he chose live trials over slides.

The wristband, which he dubbed a "neural interface," detects subtle hand gestures by picking up the electrical signals of muscle activity. "You can be among others, yet still type without drawing attention," Zuckerberg explained. In other words, the glasses-and-wristband combination is practically a stalker's fantasy.

At least, that is, when it works. Zuckerberg repeatedly attempted to call his colleague Andrew Bosworth, and each attempt failed. "What a letdown. I'm not sure what went wrong," he said after the first try. He tried again: "I'll pick it up with my neural band," he quipped, but still couldn't connect.

"I'm not sure what to tell you guys. We'll bring Boz out here, move on to the next part of the presentation, and hope that one works." A sign at the back of the room, briefly visible on-screen, read: "Live Demo – Good Luck."

If the aim was to humanize Zuckerberg, it succeeded: he gave his best effort in the face of disaster and smiled through it all, and for a moment it was almost possible to watch with something like childlike wonder.


Still, the overall event felt like a misfired millennial dream, a bizarre echo of an early-2000s optimism that only Silicon Valley billionaires still buy into. The spectacle mirrored Steve Jobs's iPhone unveiling in 2007, with two key differences: back then, the US hadn't yet crumbled behind the scenes, and it was obvious why people were eager to see the device launch. The internet! In your pocket! Can you believe this incredible human innovation?

This event, by contrast, was mired in hardware and software that barely functioned, an AI push hoping to summon the same energy without a comparable offering.

To the layperson, consumer technology seems to have entered an era of solutions in search of problems. And watching our high-tech overlords stumble on stage raises a broader question: is that such a bad thing?

Source: www.theguardian.com

Meta Unveils $15 Billion Investment to Develop Computerized “Superintelligence”

Reports indicate that Meta is preparing to unveil a substantial $15 billion (£11 billion) bid aimed at achieving computerized “Superintelligence.”

The competition in Silicon Valley to lead in artificial intelligence is intensifying, even as many current AI systems show inconsistent performance.

Meta's CEO, Mark Zuckerberg, is set to announce the acquisition of a 49% stake in Scale AI, which is led by Alexandr Wang, who co-founded the company with Lucy Guo. One Silicon Valley analyst has described the strategic move as a "wartime CEO" initiative.

Superintelligence refers to an AI that can outperform humans at all tasks. Current AI systems have not yet even matched human capabilities across the board, a milestone known as artificial general intelligence (AGI), and recent studies show that many prominent systems falter when tackling highly complex problems.

Following notable progress from competitors like Sam Altman’s OpenAI and Google, as well as substantial investments in the underperforming Metaverse concept, observers are questioning whether Meta’s renewed focus on AI can restore its competitive edge and drive meaningful advancements.

In March, the 28-year-old Wang signed a contract to develop the Thunderforge system for the US Department of Defense, which applies AI to military planning and operations, with an initial focus on the Indo-Pacific and European commands. Scale AI has also received early funding from Peter Thiel's Founders Fund.

Meta's initiative has sparked fresh calls for European governments to mount their own transparent research efforts, fostering public trust alongside robust technological development, along the lines of CERN, the European nuclear research organization based in Switzerland.

Michael Wooldridge, professor of the foundations of artificial intelligence at the University of Oxford, said: "They are going all in on AI. We cannot take for granted that we fully understand or trust the technology we are creating. It is crucial that governments collaborate to develop AI openly and rigorously, much as they did with CERN and its particle accelerators."

Wooldridge commented that the reported acquisition appears to be Meta’s effort to reclaim its competitive edge following the Metaverse’s lackluster reception, noting that the company invested significantly in that venture.

However, he pointed out that the state of AI development remains uneven, with AGI still a distant goal, and “Superintelligence” being even more elusive.

“We have AI that can achieve remarkable feats, yet it struggles with tasks that capable GCSE students can perform,” he remarked.

Andrew Rogoyski, director of innovation and partnerships at the University of Surrey's People-Centred AI Institute, observed: "Meta's approach to AI differs from that of OpenAI or Anthropic. For Meta, AI is not the core mission, but rather an enabler of its broader business strategy."

“This allows them to take a longer-term view, rather than feeling rushed to achieve AGI,” he added.

Reports indicate that Wang is expected to take on a significant role within Meta.

Meta declined to comment. Scale AI has been approached for comment.

Source: www.theguardian.com

AI Companies Warned: Calculate the Risks of Superintelligence or Face Losing Human Control

Before deploying an all-powerful system, AI companies are urged to replicate the safety calculations that underpinned Robert Oppenheimer's first nuclear test.

Max Tegmark, a prominent advocate for AI safety, carried out calculations akin to those performed by the American physicist Arthur Compton before the Trinity test and found a 90% probability that a highly advanced AI would pose an existential threat.

The US government went ahead with Trinity in 1945 after being reassured there was a vanishingly small risk of the atomic bomb igniting the atmosphere and endangering humanity.

In a paper written with three of his students at the Massachusetts Institute of Technology (MIT), Tegmark proposes calculating a "Compton constant": the probability that an all-powerful AI escapes human control. Compton said in a 1959 interview with the American writer Pearl S. Buck that he had approved the Trinity test after putting the odds of a runaway reaction at "slightly less" than one in three million.

Tegmark argued that AI companies must rigorously assess whether artificial superintelligence (ASI), a theoretical system that surpasses human intelligence in every dimension, can be kept under human control.

“Firms developing superintelligence ought to compute the Compton constant, which indicates the chances of losing control,” he stated. “Merely expressing a sense of confidence is not sufficient. They need to quantify the probability.”

Tegmark believes that achieving a consensus on the Compton constant, calculated by multiple firms, could create a “political will” to establish a global regulatory framework for AI safety.

A professor of physics at MIT and an AI researcher, Tegmark is also a co-founder of the Future of Life Institute, a nonprofit advocating for the safe development of AI. In 2023 the organization released an open letter calling for a pause in the development of powerful AI systems; it garnered over 33,000 signatures, including those of Elon Musk and the Apple co-founder Steve Wozniak.

The letter emerged a few months after the release of ChatGPT, which marked the dawn of a new era in AI development. It warned that AI laboratories were locked in an "out-of-control race" to deploy "ever more powerful digital minds."

Tegmark discussed these issues with the Guardian alongside a group of AI experts, including tech industry leaders, representatives from state-supported safety organizations, and academics.

The Singapore Consensus, set out in a report on global AI safety research priorities, was produced by Tegmark and the distinguished computer scientist Yoshua Bengio, with contributions from leading AI firms including OpenAI and Google DeepMind. It establishes three broad research priorities for AI safety: developing methods to measure the impact of existing and future AI systems; specifying how an AI system should behave and designing systems to achieve that; and managing and controlling systems' behavior.

Referring to the report, Tegmark noted that discussions around safe AI development have regained momentum since US Vice President JD Vance remarked that the AI future "is not going to be won by hand-wringing about safety."

Tegmark stated:

Source: www.theguardian.com