Zuckerberg Introduces AI “Superintelligence” Amid On-Stage Glitches with Smart Glasses

As we near the threshold of the AI apocalypse, one glimmer of hope remains: the technology doesn’t always work as intended.

This was evident last week when Mark Zuckerberg attempted to show off his company’s latest AI-powered smart glasses. “I’m not sure what to say, folks,” he told the audience after repeatedly failing to place a video call through the glasses.

The mishap came after an ambitious start to the event at Meta Connect 2025, the company’s developer conference in Menlo Park, California. The keynote was set to unveil the Meta Ray-Ban Display, essentially a wearable iPhone for the modern age: ideal for anyone too lazy to dig a phone out of their pocket, and appealing to fans of both Buddy Holly and the Terminator. Yet despite the alluring design, the presentation was riddled with technical blunders, an unintentionally fitting tribute to the latest, increasingly pointless iterations of our digital devices.

The event kicked off with quite the spectacle. Attendees watched as Zuckerberg made his way to the stage, bobbing to the beat and doling out fist bumps along the way. The glasses’ camera streamed “Mark’s POV” to the audience, all while he no doubt received an avalanche of genuinely excited texts: “Let’s Gooo” followed by rocket emojis, and GIFs of two guys exclaiming “The audience is hyped” and “It’s Time.”

Zuckerberg eventually reached the stage in his trademark baggy T-shirt, hair tousled. He spoke of the company’s devotion to making attractive eyewear, invoking the somewhat ironic notion of technology that “doesn’t interrupt” human interaction, alongside the equally ironic assertion that “superintelligence” is the cornerstone of our age. “AI must serve humanity, not just those in data centers automating our lives,” he said.

Things flowed smoothly enough until it was time to actually use the AI features. Zuckerberg attempted a video call with the chef Jack Mancuso, who set out to make a dish he described as “probably Korean-style, like steak sauce.”

“What should I do first?” Mancuso asked the oracle.

“You’ve already combined the base ingredients,” the AI mistakenly informed him, prompting an awkward silence.

“What do I do first?” Mancuso asked again.

“You’ve already combined the base ingredients, so grate the pears and gently mix them into the base sauce,” the AI patiently reminded him.

“I think the Wi-Fi is acting up. Sorry. Back to you, Mark.” (Certainly the fault lay with the Wi-Fi, not the AI itself.)

To his credit, Zuckerberg maintained his composure. “It’s all good. What can you do? It’s all good,” he said. “The irony is that you can spend years crafting the technology, only for the Wi-Fi on the day to trip you up.”

Failed AI demos are nothing new; they have become something of a tradition. Last year, a Google presenter tried to use the Gemini tool to scan a poster for a Sabrina Carpenter concert and find her tour dates. The bot remained silent when asked to take a photo of the poster and check the presenter’s calendar for availability when she visits San Francisco this year; it eventually worked on the third attempt, on a different device.

This year, Google demonstrated live translation on its own smart glasses, a demo that failed barely 15 seconds in. To be fair, a blunder in a high-stakes tech demonstration doesn’t mean the product won’t work, as anyone who remembers a certain Tesla Cybertruck presentation can attest: that demo flopped when the designer threw metal balls at the truck’s so-called “armor glass,” yet the vehicle still went on to a bright future, along with the dubious distinction of being deemed deadlier than the Ford Pinto.

At this point in the presentation, one might have expected Zuckerberg to play it safe. Instead, when it came time to demonstrate the new wristband for the Meta Ray-Ban Display, he opted for live demos rather than slides.

The wristband, which he dubbed a “neural interface,” detects subtle hand gestures by reading the electrical signals of muscle activity. “You can be among others, yet still type without drawing attention,” Zuckerberg explained. In essence, the combination of glasses and wristband is practically a stalker’s fantasy.

At least, that is, when it works. Zuckerberg tried repeatedly to take a call from his colleague Andrew Bosworth, and every attempt failed. “What a letdown. I’m not sure what went wrong,” he said after the first miss. He tried again: “I’ll pick it up with my neural band,” he quipped, but still couldn’t connect.

“I’m not sure what to tell you guys,” he said, before deciding to press on: “We’ll bring Boz out here, move to the next presentation and hope it works.” A sign at the back of the room, briefly caught on camera, read: “Live Demo – Good Luck.”

If the aim was to humanize Zuckerberg, it succeeded: he gave his best effort in the face of disaster and smiled through it all, with something approaching childlike wonder.


However, the overall event felt like an off-kilter millennial dream, a strange echo of early-2000s optimism that only Silicon Valley billionaires could still buy into. The spectacle mirrored Steve Jobs’s iPhone unveiling in 2007, with two key differences: back then, the US hadn’t yet crumbled behind the scenes, and it was obvious why people were eager to see the device launch. The internet! In your pocket! Can you believe this incredible human innovation?

This event, by contrast, was mired in hardware and software that most people seem to function perfectly well without, with much of the AI push hoping to harness that same energy without anything comparable to offer.

To the lay observer, consumer technology seems to have entered an era of solutions in search of problems. Watching our high-tech overlords stumble on stage, it is hard to conclude otherwise.

Source: www.theguardian.com
