Academy Award-winning actors Matthew McConaughey and Michael Caine have entered into an agreement with AI audio firm Eleven Labs.
The New York-based company is now authorized to produce AI-generated voice replicas as part of its initiative to tackle “significant ethical challenges” at the intersection of artificial intelligence and Hollywood.
McConaughey, who has invested in the company and collaborated with it since 2022, will allow Eleven Labs to produce a Spanish-language audio version of his newsletter “Lyrics of Livin'” in his voice.
In a statement, the Dallas Buyers Club star expressed his admiration for Eleven Labs and hoped this collaboration would enable him to “reach and connect with an even broader audience.”
Eleven Labs is launching the Iconic Voices Marketplace, which lets brands license officially sanctioned celebrity voices for AI-generated applications. Under his new agreement, Caine’s voice joins the lineup.
“For years, I have lent my voice to stories that inspire people—tales of bravery, ingenuity, and the human experience,” Caine stated. “Now, I am helping others to discover their voice. With Eleven Labs, I can save and share everyone’s voice, not just mine.”
He further mentioned that the company “leverages innovation to celebrate humanity, not to replace it,” asserting that it “does not replace voices, it amplifies them.”
Caine has also revealed plans to return from retirement to co-star with Vin Diesel in The Last Witch Hunter 2.
Other voices featured in the marketplace include legendary Hollywood figures like John Wayne, Rock Hudson, and Judy Garland, alongside living stars such as Liza Minnelli and Art Garfunkel. The list also features notable figures like Amelia Earhart, Babe Ruth, J. Robert Oppenheimer, Maya Angelou, and Alan Turing.
Recently, Eleven Labs was valued at approximately $6.6 billion.
This news follows a series of partnerships between celebrities and AI companies, including several stars who have agreed to let Meta use their voices. Last year, the company released a list that featured Judi Dench, John Cena, and Kristen Bell.
Other stars, including Ashton Kutcher and Leonardo DiCaprio, have also made investments in AI enterprises.
As we near the threshold of the AI apocalypse, glimmers of hope remain. The technology may not always function as intended.
This was evident last week when Mark Zuckerberg attempted to showcase his company’s latest AI-powered smart glasses. “I’m not sure what to say, folks,” he told the audience after multiple failed attempts to make a video call through the glasses.
This mishap came after an ambitious start to the event at Meta Connect 2025, a developer conference held in Menlo Park, California. The keynote was set to unveil the Ray-Ban Meta Display, essentially a modern version of the wearable iPhone—ideal for those too lazy to dig their devices out of their pockets, and appealing to fans of both Buddy Holly and the Terminator. Yet, despite its alluring design, the presentation was riddled with technical blunders, perhaps serving as an ironic tribute to the latest, meaningless iterations of digital devices.
The event kicked off with quite the spectacle. Attendees watched as Zuckerberg made his way to the stage, hitting the beat while sharing numerous fist bumps along the way. The camera on the glasses displayed “Mark’s POV” for the audience, all while he undoubtedly received an avalanche of texts filled with genuine excitement: “Let’s Gooo” followed by rocket emojis, accompanied by GIFs of two guys exclaiming, “The audience is hyped,” and “It’s Time.”
Zuckerberg eventually reached the stage, clad in his trademark baggy t-shirt and tousled hair. He expressed the company’s dedication to developing attractive eyewear, all while referencing the ironic concept that technology “doesn’t interrupt” human interactions, alongside the equally ironic assertion that “serious Super Intelligence” is the cornerstone of our age. “AI must serve humanity, not just those in data centers automating our lives,” he stated.
Things seemed to flow smoothly until it was time to actually utilize the AI features. Zuckerberg attempted a video call with chef Jack Mankuso, suggesting a dish inspired by “probably Korean-style, like steak sauce.”
“What should I do first?” Mankuso asked the Oracle.
“You’ve already combined the basic ingredients,” the AI mistakenly informed him, leading to an awkward silence.
“What do I do first?” Mankuso inquired again.
“You’ve already combined the base ingredients, so grate the pears and gently mix them into the base sauce,” the AI patiently reminded him.
“I think the Wi-Fi is acting up. Sorry. Back to you, Mark,” Mankuso said. (Certainly the fault lay with the Wi-Fi, not the AI itself.)
To his credit, Zuckerberg maintained his composure. “It’s all good. What can you do? It’s all good,” he said. “The irony is that you can spend years crafting technology, only for the Wi-Fi of the day to trip you up.”
Failing AI demonstrations are nothing new; they have become something of a tradition. Last year at Google, a presenter attempted to use the Gemini tool to scan a poster for Sabrina Carpenter’s concert and find her tour dates, asking it to “check out the calendar for my availability when she visits San Francisco this year.” The bot stayed silent, and the demo only worked on the presenter’s third attempt, on another device.
This year, Google demonstrated its translation features with its own smart glasses, which failed only 15 seconds into the presentation. To be fair, a blunder in a high-stakes tech demonstration doesn’t equate to a non-functioning product, as anyone familiar with a certain Tesla Cybertruck presentation will remember: that demo flopped when the designer threw metal balls at the truck’s so-called “armor glass,” yet the vehicle went on to a bright future, earning the dubious distinction of being “more fatal than the Ford Pinto.”
At this juncture in his presentation, one might assume Zuckerberg would play it safe. However, when it came time to demonstrate the new wristbands for the Ray-Ban Meta display, he chose to rely on live trials instead of slides.
The wristband, which he dubbed a “neural interface,” detects subtle hand gestures by picking up electrical signals from muscle activity. “You can be among others, yet still type without drawing attention,” Zuckerberg explained. In essence, the combination of glasses and wristbands is practically a stalker’s fantasy.
At least, that is, when it operates correctly. Zuckerberg repeatedly attempted to call his colleague Andrew Bosworth, but each attempt was met with failure. “What a letdown. I’m not sure what went wrong,” he said after the first unsuccessful attempt. He tried again: “I’ll pick it up with my neural band,” he quipped, but still couldn’t connect.
“I’m not sure what to tell you guys. It’s impressive. We’ll bring Boz out here, move on to the next presentation, and hope it works.” A sign at the back of the room, caught on screen, read: “Live Demo – Good Luck.”
If the aim was to humanize Zuckerberg, it succeeded: he put forth his best effort in the face of disaster and smiled through it all, even if the childlike wonder of past product launches never materialized.
However, the overall event felt like a misaligned millennial dream, a bizarre echo of early 2000s optimism that only Silicon Valley billionaires could buy into. The spectacle mirrored Steve Jobs’ iPhone unveiling in 2007, with two key contrasts: back then, the U.S. hadn’t crumbled behind the scenes—not yet—and it was clear why people were eager to see the devices’ launch. They were on the internet! In your pocket! Can you believe this incredible human innovation?
This event, by contrast, was mired in hardware and software that barely seemed to function, one of many AI pushes hoping to harness that same energy without a comparably compelling offering.
To the layperson, it appears consumer technology has entered an era of solutions in search of problems. Watching our high-tech overlords stumble on stage raises a broader question: what problem, exactly, is any of this solving?