Matthew McConaughey and Michael Caine Secure Voice Agreement with AI Firm

Academy Award-winning actors Matthew McConaughey and Michael Caine have entered into an agreement with AI audio firm Eleven Labs.

The New York-based company is now authorized to produce AI-generated voice replicas as part of its initiative to tackle “significant ethical challenges” in the intersection of artificial intelligence and Hollywood.


McConaughey, who has also invested in the company and collaborated with them since 2022, will allow Eleven Labs to produce a Spanish audio version of his newsletter “Lyrics of Livin'” using his voice.

In a statement, the Dallas Buyers Club star expressed his admiration for Eleven Labs and hoped this collaboration would enable him to “reach and connect with an even broader audience.”

Eleven Labs is launching the Iconic Voices Marketplace, allowing brands to collaborate with and use officially licensed celebrity voices for AI-generated applications. Caine’s new agreement adds his iconic voice to the lineup.

“For years, I have lent my voice to stories that inspire people—tales of bravery, ingenuity, and the human experience,” Caine stated. “Now, I am helping others to discover their voice. With Eleven Labs, I can save and share everyone’s voice, not just mine.”

He further mentioned that the company “leverages innovation to celebrate humanity, not to replace it,” asserting that it “does not replace voices, it amplifies them.”


Caine has also revealed plans to return from retirement to co-star with Vin Diesel in The Last Witch Hunter 2.

Other voices featured in the marketplace include legendary Hollywood figures like John Wayne, Rock Hudson, and Judy Garland, alongside contemporary stars such as Liza Minnelli and Art Garfunkel. The list also encompasses notable figures like Amelia Earhart, Babe Ruth, J. Robert Oppenheimer, Maya Angelou, and Alan Turing.

Recently, Eleven Labs was valued at approximately $6.6 billion.

This news follows a series of celebrity and AI partnership agreements, including various celebrities who have consented to allow Meta to utilize their voices. Last year, the company released a list that featured Judi Dench, John Cena, and Kristen Bell.

Other stars, including Ashton Kutcher and Leonardo DiCaprio, have also made investments in AI enterprises.

Source: www.theguardian.com

Maternal Voice Enhances Language Development in Premature Infants

Premature babies may face language challenges later, but simple interventions can assist.

Photo: BSIP SA/Alamy

The first randomized controlled trial of this straightforward intervention suggests that playing recordings of a mother’s voice to premature infants could expedite their brain maturation processes. This method may eventually enhance language development in babies born prematurely.

Premature birth alters brain structure, leading to potential language disorders and affecting later communication and academic success. A mother’s voice and heartbeat can foster the development of auditory and language pathways. Unfortunately, parents may not always be able to physically be with their infants in the neonatal units.

To explore whether this absence could be compensated for through recordings, Katherine Travis and her team at Weill Cornell Medicine in New York conducted a study with 46 premature infants born between 24 and 31 weeks gestation, all situated in the neonatal intensive care unit.

The researchers recorded mothers reading from children’s books, including selections from A Bear Called Paddington. Half of the infants listened to a ten-minute audio segment twice every hour overnight, between 10pm and 6am, increasing their daily exposure to their mother’s voice by an average of 2.7 hours until they reached their original due date. The other infants received the same medical care but were not played any recordings.

Upon reaching their due date, these infants underwent two MRI scans to evaluate the organization and connectivity of their brain networks. The results indicated that those who heard their mother’s voice at night exhibited more robust and organized connections in and around the left arcuate fasciculus, a crucial area for language processing. “The structure appeared notably more developed,” said Travis. “The characteristics matched what one might expect to find in older, more mature infants.”

The scans also suggested that this maturation could be linked to increased myelination – the creation of a fatty sheath that insulates nerve fibers, enhancing the speed and efficiency of signal transmission within the brain. “Myelination is crucial for healthy brain development, especially in pathways that support communication and learning,” noted Travis.

Previous studies have indicated that delayed development of these brain areas correlates with language delays and learning challenges. The latest findings imply that targeted speech exposure could improve these outcomes.

However, is it truly vital for infants to hear their mother’s voice rather than others? While this study did not address that, earlier research explains the phenomenon. Babies start hearing around the 24th week of pregnancy, and continue to recognize their mother’s voice after birth due to early exposure in the womb. Travis explained, “This voice is biologically significant and may be especially appealing to the developing brain.”

Nonetheless, Travis emphasizes that language exposure from other caregivers is also critical for language development, and future studies will explore this aspect further.

The intervention is straightforward and can easily be integrated into care protocols. However, David Edwards from Evelina London Children’s Hospital cautioned against overinterpreting the findings. “Given the small sample size, additional control groups, including different audio sources and forms of auditory stimulation, should be evaluated,” he suggested.

Travis and her research team aim to validate these results in larger trials involving medically vulnerable infants. They will continue to monitor current participants to determine if the observed brain differences result in tangible improvements in language and communication skills as these infants grow.


Source: www.newscientist.com

AI Can’t Capture the Sound of Orgasms: The Growing Demand for Voice Actors Amidst Robot Narrators in Audiobooks

The reasons audiobooks resonate are deeply human: the catch in a narrator’s throat, the audible smile as a word is spoken.

Melbourne actor and audiobook narrator Annabelle Tudor refers to narration as a storyteller’s innate ability, a fundamental and priceless skill. “The voice easily reveals our true feelings,” she explains.

However, this art form might be facing challenges.

In May, Audible, the Amazon-owned audiobook service, revealed plans to let authors and publishers produce audiobooks narrated by more than 100 AI-generated voices in English, Spanish, French, and Italian.

With a dwindling number of audiobook companies, emerging talents like Tudor are increasingly reliant on these opportunities, sparking concerns regarding job security, transparency, and overall quality.

Having narrated 48 books, Tudor is uncertain whether AI can replicate her work, yet fears that a dip in quality may alienate listeners.

“I once narrated a particularly explicit scene. The AI lacks understanding of how an orgasm sounds,” she remarks. “I’m curious to know how they plan to address such nuances, including more delicate scenes like childbirth.”

Audiobook giant Audible says it aims to use AI to enhance human narration rather than replace it. Photo: M4OS Photo/Alamy

The Audiobook Boom

A 2024 report from NielsenIQ BookData indicates that over half of Australian audiobook consumers have increased their listening in the past five years. Internationally, US audiobook sales rose 13% from 2023 to 2024, while UK audiobook revenues soared to £268 million in 2023, a 31% increase, according to the Publishers Association.

As demand for audio content surges, companies are seeking quicker and cheaper production methods. In January 2023, Apple unveiled a catalog of AI-narrated audiobooks. Later that year, Amazon introduced a feature allowing self-published authors to convert their Kindle ebooks into audiobooks using “virtual voice” technology, and tens of thousands of Audible titles are now available in AI-narrated versions.

In February, Spotify announced support for AI-narrated audiobooks, making it easier for authors to reach wider audiences. Audible claims its goal is not to supersede human narrators but to enable more authors and titles to connect with larger audiences. In the US, Audible is testing voice replicas of professional audiobook narrators, allowing them to clone their own voices and expand their capacity to produce high-quality audiobooks.

“In 2023 and 2024, Audible Studios has hired more [human narrators],” a spokesperson shared with the Guardian. “We continually engage with creators eager to have their work available in audio format and reach new audiences across languages.”

Yet, robot narrators remain a more economical choice than human talent, raising fears among industry professionals about potential job threats.

Volume vs. Quality?

Dorje Swallow, who launched his narration career on the books of bestselling Australian crime author Chris Hammer, believes AI narration is a tool designed by people who fail to grasp the intricacies, techniques, and skills necessary for quality audiobook production.

“Some assume we just press a button for a similar or sufficient quality result,” he notes.

Simon Kennedy, president of the Australian Association of Voice Actors, points to a long-standing struggle in Australia over fair remuneration for narrators. Recording an audiobook can mean spending up to three times the length of the finished product in the studio, not counting the initial read-through to understand the narrative and characters.


“In my view, AI narrators prioritize volume over quality and aim to cut costs,” he asserts.

Kennedy founded the Australian Association of Voice Actors in 2024 in response to AI’s looming threat. In a submission to a parliamentary committee last year, the organization warned that 5,000 Australian voice acting jobs were at stake.

While not surprised by Audible’s recent announcement, he dismisses it as a “foolish decision.”

“Audiobook narrators hold a truly special and intimate connection with their listeners; pursuing an approach that lacks this connection is misguided,” he suggests.

Regarding voice cloning opportunities, he states that voice actors should be involved in the process, but warns that it may lead to a homogenized robotic voice that listeners quickly tire of.

“If a monotonous, emotionless narration suffices for ‘high quality,’ then perhaps,” he counters. “However, if you seek an evocative, captivating listening experience, don’t expect to find it there.”

Another pressing concern is the absence of AI regulation in Australia. The EU has its AI Act, and China and Spain have measures in place, whereas Australia has no rules even on the labeling of AI-produced content.

“No laws exist to prevent data scraping, voice cloning, or deepfake creation,” Kennedy explains. “There’s no labeling or transparency requirement for AI-generated material or its origins, nor any regulation governing the proper use of AI-generated deepfakes, audio clones, or text.”

Author Hannah Kent expresses concern that AI will “devalue creativity” in the arts. Photo: Carrie Jones/Guardian

This year, Hannah Kent, author of Burial Rites and Devotion, was astonished to discover that pirated copies of her work had been used to train Meta’s AI systems. Despite her initial resistance and frustration at AI’s infiltration of creative spaces, she is curious about Audible’s AI developments and the prospective trials for translating texts into other languages.

“It’s evident that the primary motive behind AI adoption is cost-efficiency. Its effect is to devalue art and creative storytelling,” Kent reflects.

Both Tudor and Swallow doubt that large corporations can fully replace human narration, noting that many Australian authors oppose the idea.

Yet, it remains unclear whether audiences can discern the difference.

“We are rushing straight into a dystopia,” Tudor warns. “Will we listen to humans or robots?”

Source: www.theguardian.com

“My AI Voice was Cloned and Used by the Far-Right. Can I do anything to stop it?” – Georgina Findlay

My brother held his phone to my ear. “You’re going to think this is creepy,” he warned, playing an Instagram reel. The footage, which showed teenage boys attending a rally, included a news broadcast-style narration. “The recent protests by British students have become a powerful symbol of the deepening crisis in Britain’s education system,” said a soft female voice with barely a hint of a Mancunian accent. I opened my eyes wide and sat up straight.

As a presenter on a YouTube news channel, I was used to hearing my voice on screen. But this wasn’t me, even though the voice was unmistakably mine.
“They force us to learn about Islam and Muhammad in school,” the voice continued. “Listen, this is disgusting.” It was horrifying to hear my voice associated with far-right propaganda, but more horrifying still to learn how this kind of fraud is perpetrated. As I dug deeper, I discovered just how far-reaching the effects of faked voices can be.

AI voice cloning is an emerging form of audio “deepfake” and was the third fastest-growing scam of 2024. Unwitting victims find that their voices have been cleverly duplicated without their consent or even their knowledge, a phenomenon that has already led to bank security checks being bypassed and to people being deceived into sending money to strangers they believed to be relatives. My brother was sent the clip by a friend who recognized my voice.

After some research, I was able to find a far-right YouTube channel with about 200,000 subscribers. Although it claimed to be an American channel, many of the misspellings in its videos were typical of misinformation accounts run by non-native English speakers. I was shocked to find my voice featured in eight of the channel’s 12 most recent videos; scrolling back, I found one video using my voice from five months earlier with 10m views. The voice was almost identical to mine, except that the pacing of my speech was slightly odd – the telltale sign of AI generation.


This increasing sophistication of AI voice cloning software is a cause for serious concern. In November 2023, an audio deepfake of London Mayor Sadiq Khan allegedly making inflammatory remarks about Armistice Day circulated widely on social media. The clip almost caused “serious disorder”, Mr Khan told the BBC. “If you’re looking to sow disharmony and cause trouble, there’s no better time.” At a time when confidence in Britain’s political system is already at a record low, with 58% of Britons saying they have “little trust” in politicians to tell the truth, the ability to manipulate public discourse is more harmful than ever.

The legal right to one’s own voice falls into a murky grey area of poorly regulated AI issues. TV naturalist David Attenborough became the center of an AI voice cloning scandal in November, saying he was “deeply disturbed” to learn that his voice was being used to deliver partisan news in the United States. In May, actor Scarlett Johansson clashed with OpenAI over a text-to-speech voice in ChatGPT that she described as “eerily similar” to her own.

In March 2024, OpenAI postponed the release of a new voice replication tool, deeming it “too risky” to make publicly available in a year with a record number of global elections. Some AI startups that let users clone their own voices have introduced preventive policies against creating voice clones that imitate politicians actively involved in election campaigns, including in the US and UK.

However, these mitigation measures are not enough. In the United States, concerned senators have proposed legislation to crack down on those who copy voices without consent. In Europe, the European Identity Theft Observatory System (EITHOS) is developing four tools to help police identify deepfakes, with plans to have them ready by the end of this year. But tackling the audio crisis is no easy task. Dr Dominic Lees, an expert on AI in film and television who advises a UK parliamentary committee, told the Guardian: “Our privacy and copyright laws are not prepared for what this new technology will bring.”

If declining trust in institutions is one problem, creeping distrust within communities is another. The ability to trust is central to human cooperation as globalization advances and personal and professional lives become increasingly intertwined, yet it has never been undermined to this extent. Hany Farid, a professor of digital forensics at the University of California, Berkeley and an expert in deepfake detection, told the Washington Post that the consequences of this voice crisis could be as extreme as mass violence or “stolen elections”.

Is there any benefit to this new ability to easily clone voices? Maybe. AI voice clones could allow people to seek solace by connecting with deceased loved ones, or help give a voice to people with medical conditions. American actor Val Kilmer, who underwent treatment for throat cancer, returned in Top Gun: Maverick in 2022 with a voice restored by AI. Our ability to innovate may serve those with evil intentions, but it also serves those working for good.

When I became a presenter, I happily shared my voice on screen, but I never agreed to sign away this essential and precious part of myself to anyone who wants to use it. As broadcasters, we sometimes worry about how colds and winter viruses will affect our recordings. My recent experience, though, has given the concept of losing one’s voice a different, far more sinister meaning.

Source: www.theguardian.com

Richard III’s voice recreated using new technology, complete with Yorkshire accent

The voice of medieval king Richard III has been recreated using technology, complete with a distinctive Yorkshire accent.

An digital avatar of the monarch was unveiled at York Theater Royal, with experts assisting in replicating his voice.

Richard III reigned as King of England from 1483 until his death in 1485 at the age of 32. His remains were discovered under a car park in Leicester in 2012 as part of Philippa Langley’s Looking For Richard project.

Through various scientific methods, including DNA analysis, his skeleton was identified and now his voice has been successfully recreated.

Langley, speaking about the recreation, stated to Sky News: “We have leading experts who have been working tirelessly on this research for a decade, ensuring that every detail is meticulously researched and presented with evidence. Thus, we have the most accurate portrayal of Richard III.”

Yvonne Morley-Chisholm, a voice teacher and vocal coach, joined the project over 10 years ago, initially providing after-dinner entertainment comparing Shakespeare’s Richard III with the real-life king.

The project took an unexpected turn when Morley-Chisholm was prompted to create a performance following the discovery of Richard III’s remains under the car park in Leicester.

The voice re-creation project quickly gained momentum, with experts from various fields coming together to piece together the puzzle.

The reconstructed voice of Richard III has a strong Yorkshire accent, distinct from the English accents typically heard in portrayals by actors like Ian McKellen and Laurence Olivier in Shakespeare’s plays.

Richard III met his end at the Battle of Bosworth on 22 August 1485, marking the close of the House of York and the Plantagenet dynasty. His defeat was a significant event in the Wars of the Roses.

Source: www.theguardian.com

Elwood Edwards, famously known as the voice behind AOL’s “You’ve got mail” greeting, passes away at 74 years old

Elwood Edwards, the iconic voice behind AOL’s famous “You’ve got mail” greeting, passed away at the age of 74.

Edwards died at his residence in New Bern, North Carolina, on Tuesday, following complications from a stroke last year, as confirmed by his daughter Heather.

In 1989, Edwards recorded the greetings for AOL in his living room. The phrase “You’ve got mail” became widely recognized in the late 1990s, even inspiring a movie starring Tom Hanks and Meg Ryan in 1998.

Elwood Edwards. Photo: Social Media

“He always blushed when people mentioned it,” shared his daughter. “He enjoyed the attention but never quite got used to it.”

Apart from “You’ve got mail,” Edwards also lent his voice to AOL’s “Welcome,” “Goodbye,” and “File’s done” messages, earning $200 for the recordings.

He landed the gig after his wife Karen, who worked at AOL as a customer service rep, heard about the voiceover opportunity and recommended him for the job.

Despite being unseen by most, Edwards’ voice resonated with millions daily. “For a while, America Online [AOL] kept me a secret, turning me into a bit of a mystery figure. But it’s out now, that’s that,” Edwards stated in 1999.

He made a memorable appearance on The Tonight Show Starring Jimmy Fallon in 2015, delighting the audience by delivering his famous phrase, and even made a guest voice appearance on The Simpsons in 2000.

Before his AOL fame, Edwards worked in radio and later transitioned to television. His daughter fondly recalled his self-deprecating humor and cheerful demeanor.


Transitioning to TV, Edwards worked as a “graphics guru, camera operator, and all-around talent” at WKYC-TV in Cleveland, where he also did voice-over work for commercials in addition to freelancing for radio.

He is survived by his daughter Sally, granddaughter Abby, and brother Bill. The family plans to hold a memorial service in New Bern on Monday.

Source: www.theguardian.com

Cambridge exhibition showcases AI technology that gives voice to deceased animals

Don’t worry if the salted bodies, partial skeletons, and taxidermied carcasses that fill the museum seem a little, well, quiet. In the latest coup in artificial intelligence, dead animals will be given a new lease of life, sharing their stories and even their experiences of the afterlife.

More than a dozen exhibits, from American cockroaches and dodo remains to a stuffed red panda and a fin whale skeleton, will be given the gift of conversation on Tuesday for a month-long project at the University of Cambridge Museum of Zoology.

Dead creatures and models with personalities and accents can communicate by voice or text through visitors’ mobile phones. This technology allows animals to describe their time on Earth and the challenges they have faced in the hope of reversing apathy towards the biodiversity crisis.

“Museums use AI in many ways, but we think this is the first application where we’re speaking from the object’s perspective,” said Jack Ashby, the museum’s assistant director. “Part of the experiment is to see if giving these animals their own voices makes people think differently about them. Could giving a cockroach a voice change the public’s perception of it?”




A fin whale skeleton hangs from the museum’s roof. Photo: University of Cambridge

The project was conceived by Nature Perspectives, a company building AI models to strengthen the connection between people and the natural world. For each exhibit, the AI is given specific details about where the specimen lived, its natural environment, how it arrived in the collection, and all available information about the species it represents.

The exhibits adjust their tone and vocabulary to suit the age of the person they are talking to, and can converse in more than 20 languages, including Spanish and Japanese. The platypus speaks with an Australian accent, the red panda’s is subtly Himalayan, and the mallard’s is British. Through live conversations with the exhibits, Ashby hopes visitors will learn more than the specimen labels could ever convey.

As part of the project, the conversations visitors have with exhibits will be analyzed to better understand the information visitors are looking for in specimens. The AI suggests a variety of questions for the fin whales, such as “Tell me about life in the open ocean,” but visitors can ask whatever they like.

“When you talk to these animals, you really get a sense of their personalities. It’s a very strange experience,” Ashby said. “I started by asking questions like ‘Where did you live?’ and ‘How did you die?’, but eventually found myself asking far more human questions.”




Mallard ducks have a British accent due to AI. Photo: University of Cambridge

The museum’s dodo, one of the world’s most complete specimens, explained how it fed on fruit, seeds and the occasional small invertebrate in Mauritius, and how its strong, curved beak was perfectly suited to splitting the tough fruit of the tambalacoque tree.

The AI-enhanced exhibit also shared views on whether humans should try to revive the species through cloning. “Even with advanced technology, the dodo’s return would require not only our DNA, but also the delicate Mauritian ecosystem that supported our species,” it said. “This is a poignant reminder that the essence of all life goes beyond our genetic code and is intricately woven into our natural habitats.”

A similar level of evident care was given to the fin whale skeleton that hangs from the museum’s roof. Asked about the most famous person it had ever met, it admitted that in its lifetime it never had the chance to meet anyone “famous” in the way humans understand the term. “But,” the AI-powered skeleton continued, “I would like to think that anyone who stands below me and feels awe and love for the natural world is important.”

Source: www.theguardian.com

BBC Presenter’s Voice Cloned by AI and Used in an Advertisement Without Consent: A Portrait of the Incident

Her voice seemed off, not quite right, and it meandered in unexpected ways.

Viewers familiar with science presenter Liz Bonnin’s Irish accent were puzzled when they received an audio message seemingly from her endorsing a product from a distant location.

It turned out the message was a fake, created by artificial intelligence to mimic Bonnin’s voice. After spotting her image in an online advertisement, Bonnin’s team investigated and found out it was a scam.

Bonnin, known for her work on TV shows like Bang Goes the Theory, expressed her discomfort with the imitated voice, which she described as shifting from Irish to Australian to British.

The person behind the failed campaign, Incognito CEO Howard Carter, claimed he had received convincing audio messages from someone posing as Bonnin, leading him to believe it was the real presenter.

The fake Bonnin provided contact details and even posed as a representative of the Wildlife Trusts charity, negotiating a deal for the advertising campaign. Carter only realized he had been scammed after transferring money and receiving the image for the campaign.

AI experts confirmed that the voice memos were likely artificially generated, given the inconsistencies in accent and pacing. Bonnin warned about the dangers of AI misuse and stressed the importance of caution.

Incognito reported the incident to authorities and issued a statement cautioning others about sophisticated scams involving AI. It apologized to Bonnin for any unintended harm caused by the deception.

Neither the BBC nor the Wildlife Trust responded to requests for comments on the incident.

Source: www.theguardian.com

OpenAI deems its voice cloning tool too risky for public release due to safety concerns.

OpenAI’s latest tool can create a convincing replica of someone’s voice from just 15 seconds of recorded audio. But the AI lab is withholding it from public release, judging the risk of misinformation too great in a critical global election year.

Voice Engine was first developed in 2022, and an early version was integrated into ChatGPT’s text-to-speech functionality. Despite its capabilities, OpenAI has refrained from publicizing it extensively, taking a cautious approach toward a broader release.

Through discussions and testing, OpenAI aims to make informed decisions about the responsible use of synthetic speech technology. Selected partners have access to incorporate the technology into their applications and products after careful consideration.

Various partners, like Age of Learning and HeyGen, are utilizing the technology for educational and storytelling purposes. It enables the creation of translated content while maintaining the original speaker’s accent and voice characteristics.

OpenAI showcased a study where the technology helped a person regain their lost voice due to a medical condition. Despite its potential, OpenAI is previewing the technology rather than widely releasing it to help society adapt to the challenges of advanced generative models.

OpenAI emphasizes the importance of protecting individual voices in AI applications and educating the public about the capabilities and limitations of AI technologies. Voice Engine’s output is watermarked so that generated audio can be traced, and agreements with partners require consent from the original speakers.

While OpenAI’s tools are known for their simplicity and efficiency in voice replication, competitors like Eleven Labs offer similar capabilities to the public. To address potential misuse, precautions are being taken to detect and prevent the creation of voice clones impersonating political figures in key elections.

Source: www.theguardian.com

Artificial Intelligence (AI) is Leaving Job Seekers Feeling Excluded: “The Interviewer’s Voice Resembled Siri”

When Ty landed a phone interview with a finance company last month, they expected nothing more than a quick chat with a recruiter. When Ty answered the phone, they assumed the recruiter, named Jamie, was human. But things quickly turned robotic.

“The voice sounded like Siri,” said Ty, 29, who lives in the D.C. metropolitan area. “It was creepy.”

Ty realized they weren’t talking to a living, breathing human being. Their interviewer was an AI system, and a rather rude one. Jamie asked Ty all the right questions – what is your management style? Why are you suited to this role? – but wouldn’t let Ty finish answering.

“It would cut me off, respond with ‘Great! Sounds good! Perfect!’, and move on to the next question,” Ty said. “After the third or fourth question, the AI paused for a moment and said the interview was complete and someone from the team would contact me later.” (Ty asked that their last name not be used because their current employer does not know they are looking for work.)

A survey by Resume Builder released last summer found that by 2024, four in 10 companies would be using AI to “converse” with candidates during interviews. Of these companies, 15% said hiring decisions would be made without any human input.

As Laura Michelle Davis wrote for CNET: “Today, it’s not uncommon for applicants to be rejected by a robot in the HR department before they even connect with a real human.” Making the grueling hiring process even more discouraging, many worry that generative AI – which uses datasets to create text, video, audio, and images, and even powers robot recruiters – will take over our jobs entirely. But can AI help us search for new gigs in the meantime?

Source: www.theguardian.com

AI Voice Messages of Shooting Victims Call for Gun Reform in the US

Six years ago today, Joaquin Oliver was murdered in the hallway outside his Florida classroom. He was one of 17 students and staff killed in America’s deadliest high school shooting. On Wednesday, lawmakers in Washington DC will hear his voice, recreated by artificial intelligence, on the phone, asking them why they haven’t done more about the gun violence epidemic.

“It’s been six years and you’ve done nothing. You haven’t stopped the shootings that have happened since then,” reads the message from Oliver, who was 17 when he died in the Valentine’s Day 2018 shooting at Marjory Stoneman Douglas High School in Parkland.

“I came back today because my parents used AI to recreate my voice to call you. Other victims like me will be calling too, demanding action. How many calls will it take for you to care? How many dead voices will you hear before you finally listen?”

Oliver is one of six victims of gun violence whose voices are about to be heard again, issuing a call to action through The Shotline, an innovative online gun reform campaign launched today.

Parkland victim Joaquin Oliver

“How many dead voices will you hear before you finally listen?”

Audio: https://uploads.guim.co.uk/2024/02/13/TheShotline_AI_JoaquinOliver_Call_to_Congress.mp3

A project by two activist groups formed in the wake of the Parkland shooting, together with creative communications agency MullenLowe, The Shotline leverages AI technology to generate direct messages from shooting victims themselves.

The voices are “trained” using deep machine learning from audio clips provided by family members. The resulting recordings are ready to go directly to the people in Congress who have the power to take action against gun violence. Website visitors enter their zip code and choose the message they want to send to their elected representatives.

“We all hear children’s voices in our heads. Why shouldn’t lawmakers have to hear them too?” said Mike Song, whose 15-year-old son Ethan died in an accident involving an unsecured gun.

Ethan’s message, like Oliver’s, is straightforward: “Children like me die every day. It’s time to act. It’s time to pass laws that protect children from unsafe guns. It’s your job to pass responsible gun laws, or we’ll find someone who will.”

Other voices recreated for the Shotline project include that of 10-year-old Uziyah Garcia, killed in the 2022 Robb Elementary School shooting in Uvalde, Texas; Akilah DaSilva, 23, killed in the 2018 Waffle House shooting in Tennessee; Jaycee Webster, 20, shot and killed by an intruder in his Maryland home in 2017; and Mike Vaughn, who died by suicide in 2014 with a gun he was able to buy in 15 minutes.

Vaughn, who suffered from depression, sparked a movement with his death that led to passage of Maryland’s first red flag gun law.

Six years after his murder, it is by design that Oliver’s voice is at the forefront of the campaign. One of the two groups behind the effort is March for Our Lives, an activist group formed by Stoneman Douglas students that sparked global protests after Parkland.

The Shotline campaign uses AI to generate audio messages from gun violence victims. Photo: The Shotline

The other is Change the Ref, founded by the teenager’s parents, Manny and Patricia Oliver, who have been relentlessly advocating for gun reform since their son was murdered.

“We wanted this to be a powerful message,” Patricia Oliver said. “Joaquin has his own energy, his own image, and that's what keeps him alive. I'm so proud of Joaquin, he's the driving force that drives us forward.”

She admits the process of recreating 56 seconds of her son’s voice was mentally taxing. The Olivers searched their phones and computers for videos containing Joaquin’s speech, and asked his sister Andrea, other relatives, and his girlfriend Tori to do the same.

“It was difficult to make out his exact voice because of the noise in the background,” she said. “In one video, he was in the pool and we were talking and the sound of the water was distracting.”

Eventually, the family assembled enough clips for the engineers to work with, and after a long period of fine-tuning, they received the final “draft.”

“When I played it, it was incredibly shocking and a lot of different emotions came up. We had been listening to videos of Joaquin talking about things in the past, and now here he was talking about things happening today. It was very emotional,” she said.

“I know this is just a fantasy and not real. But in that moment, you forget what you’re listening to and why you’re listening, and you just hope from the bottom of your heart to hear him say, ‘Hello, Mom, how are you?’ once again.”

Ethan’s mother, Kristin Song, said she felt the same painful emotions when she heard her son “talk” again six years after his death.

“It brings you back to that day, the last words your child said to you before leaving your life,” she said.

“Honestly, I just sat there and sobbed, because I knew he would never come back. But the Olivers, and my husband, and people like us all have one thing in common: we go out every day and fight to honor our children, and in doing so we’re actually fighting for your children and grandchildren.”

The Songs are pressuring federal lawmakers to pass a national version of Connecticut’s Ethan’s Law, which requires safe storage of firearms in the home.

“We have promised that we will not stop until we create a cultural shift in this country where safe storage of weapons is second nature to gun owners,” said Kristin Song. “You would think the piling coffins of our dead children would be enough, but Republicans in Congress just aren’t listening.”

To create the voices and calls, MullenLowe – the agency behind the talking baby in E*Trade’s Super Bowl commercials – partnered with AI specialist Edisen, with teams in the US and Sweden working on the project.

Snippets of audio were used to “train” models on each victim’s speech patterns and tonality through Eleven Labs’ generative voice AI platform, and the reconstructed voices generated the calls from text-to-speech scripts.
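The article doesn’t publish the campaign’s actual integration, but the final step it describes – a cloned voice reading a prepared script – corresponds to an ordinary request against Eleven Labs’ public text-to-speech endpoint. A minimal sketch, assuming a hypothetical API key and the ID of an already-trained voice clone:

```python
# Minimal sketch of a text-to-speech request to Eleven Labs' public API.
# Not the Shotline's actual pipeline: ELEVEN_API_KEY and VOICE_ID are
# placeholders standing in for a real key and a previously trained voice clone.
import requests

ELEVEN_API_KEY = "your-api-key"        # placeholder
VOICE_ID = "trained-voice-clone-id"    # placeholder

script = "How many dead voices will you hear before you finally listen?"

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": ELEVEN_API_KEY, "Content-Type": "application/json"},
    json={
        "text": script,
        "model_id": "eleven_multilingual_v2",  # assumed model choice
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
resp.raise_for_status()

# The endpoint responds with raw audio bytes (MPEG by default).
with open("message.mp3", "wb") as f:
    f.write(resp.content)
```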

“There’s a lot of talk about AI right now, but this is a beautiful example of what AI can actually achieve, and a very human achievement,” said Mirko Lempert, AI creative designer at Stockholm-based Edisen.

“This project was very moving and showed me how different our worlds are, because in my country we are not exposed to [gun violence] in that way. It was a wake-up call.”

Last week, the Federal Communications Commission banned robocalls using AI-generated voices after Joe Biden's voice was imitated in a fake phone call to voters in New Hampshire.

MullenLowe said Shotline calls are exempt because they are not auto-dialed, are made to landlines, and include a callback number.

Source: www.theguardian.com

Concerned about AI voice fraud? Don’t worry, I have a guaranteed solution – Zoe Williams

A friend of mine was recently fooled by a fraudulent email purporting to be from her 19-year-old daughter, and transferred £100 to her account to cover a mysterious, very time-sensitive and inconvenient event.

You can imagine how the scammers managed to pull it off. Remember the everyday low-level anxiety of parents, braced for bad news whenever their children are further away than the kitchen table? What’s more, any bad-news story that begins with a 19-year-old’s email saying “I broke my phone” is completely believable. All the scammer has to do is lean into it.

Still, the story didn’t hang together: it begged basic questions, such as “if your phone is broken, why does the money need to go to someone else’s bank account?”, the kind of lapse we would call anyone else a fool for, for years afterwards. She didn’t even call her daughter’s number to see if she could reach her. Ending up £100 lighter was probably the best place to land; if someone had gone after her life savings, she would have concentrated.

But what happens when you hear what sounds exactly like your own child begging for money? Who has defenses strong enough to withstand voice cloning? A member of Stop Scams UK explained this to me last year: scammers can lift a child’s voice from their TikTok account, and then all they have to do is find the parent’s phone number. I got the wrong end of the stick, imagining they would have to piece the message together from recorded words available on social media. Good luck assembling some believable havoc out of football tips and K-pop, I thought. It hadn’t occurred to me that AI could infer speech patterns from a 10-second sample. In fact, it can.

I think it’s still pretty easy to get around, though. The machine-kid asks for urgent assistance. You say: “Precious and perfect being, I love you with all my heart.” The machine-kid will surely reply “I love you too,” because why wouldn’t it? A real child would claim to have been a bit sick in their mouth. You can’t build an algorithm for that.

Zoe Williams is a columnist for the Guardian




Source: www.theguardian.com

Grok, the AI stuffed animal with Grimes’ voice, was trademarked before Elon Musk’s Grok

Grimes is getting into the toy business with “Grok,” the character she voices for Curio’s new line of screenless AI plush toys.

The toy is not affiliated with Grok, the AI chatbot backed by Grimes’s ex, Elon Musk, who described xAI’s Grok as having a “rebellious personality” and a willingness to answer “tough questions that most other AI systems would refuse.” That sounds vulgar, if you ask me.

Grok, Gabbo, and Grem, on the other hand, are designed to encourage play. In a conversation with Curio founders Misha Sallee and Sam Eaton, published on Curio’s blog, Grimes said the toys encourage children’s creativity early on through dynamic conversations rather than a static list of prompts.

“I like the idea of bringing more imagination, or making imagination easier to access within one’s current existence, rather than just observing it in other things such as screens, movies, and books,” she said.

In Curio’s announcement video, Grimes said she doesn’t want her kids to be “in front of a screen,” but she is “really busy.”

Image credits: Curio

Curio says the toys can hold full conversations, allowing children (or adults) to practice their communication skills. Grok is an anthropomorphic rocket ship voiced by Grimes. There’s Gabbo, who looks like a Game Boy stuffed animal with arms and legs. And then there’s Grem, a cyan rabbit with hearts on its cheeks. The beta version of the toys is available for pre-order through Sunday at $99 each, recommended for children aged 3 to 7. Grimes and Musk’s oldest child, X Æ A-Xii, is 3 years old.

The stuffed animals answer questions about how rocket ships are made and play games with the user, encouraging the development of children’s listening and conversation skills. Inside each stuffed animal is a rechargeable, Wi-Fi-connected speaker and microphone, linked to an app where parents can set up the toy and monitor its interactions with their children.

“When I think about kids, my goal is to keep as many screens out of this as possible. Basically, how many iPads can we replace?” Grimes said in the conversation with Eaton and Sallee.

She later added: “I think the more we verbalize things, the more we’re forcing people to use their working memory. You know, there are little things here and there that make our brains just a little bit better.”

Grimes became involved with Curio after replying to a post about the future of AI-integrated toys that imagined “children’s teddy bears that talk to children and give them peace of mind at night.” Grimes answered that “it would be great if it was safe,” and that she would be happy if children could have “a Culture ship Mind in a teddy bear.”

The line launches about a week after Musk’s competitor to ChatGPT, also named Grok, began rolling out to X Premium Plus subscribers.

“Grimes is the voice of the toy, and this rocket just so happens to be named Grok, and was made before the announcement of the Grok AI, so there’s some interesting overlap,” Sallee said in the conversation.

As Business Insider reported, Grimes’ Grok was the first to be trademarked.

Curio filed a trademark for Grok on September 12 this year; xAI filed its trademark for Grok on October 23. Curio’s Grok is short for “Grocket,” the Washington Post reported, as Grimes’ children spend a lot of time around rockets – their father owns SpaceX.

Grimes and Musk are currently in a custody battle, having filed lawsuits in California and Texas over their three children.

In a post about the name, Grimes said that by the time Curio realized xAI’s Grok team was also using it, “it was too late for either AI to change its name.”

“I currently have two AIs named Grok, and I can’t wait for them to be friends,” she said. “I can’t believe that even an AI can’t avoid showing up at school and meeting other kids with the same name lol.”

Source: techcrunch.com

YC-Backed Productivity App Superpowered Pivots to Vapi, a Voice API Platform for Bots

Calendar apps are essential to productivity, but it’s difficult to differentiate them enough from their core use cases to sustain growth. Superpowered, a Y Combinator-backed, AI-powered meeting note-taking tool that does not require a recording bot, hit this obstacle and has now pivoted to Vapi, an API provider that makes it easy for anyone to create natural, voice-based, AI-powered assistants.

Superpowered was founded in 2020 by Jordan Dearsley and Nikhil Gupta. But Dearsley said that after three years of work, the team wanted to tackle a more challenging product. Superpowered is profitable, the startup said, and the company has no intention of shutting down its first product; it is in the process of hiring someone to run it. Y Combinator said in June that more than 10,000 people were using the product each week, but the company did not provide updated numbers.

Image credits: Vapi

To date, Superpowered/Vapi has raised $2.1 million in seed funding from investors including Kleiner Perkins and Abstract Ventures.

Pivot to Vapi

The company offers Vapi as an API that lets developers create a bot from nothing more than a prompt and put it behind a phone number. SDK integrations are also provided, allowing developers to embed bots in their websites and mobile apps.

Dearsley told TechCrunch via email that the idea for Vapi stemmed from a personal problem. He had moved to San Francisco and begun to miss friends and family in different time zones, so he built an AI bot, reachable at a phone number, that he could call to talk through and organize his thoughts.

“I liked it, but I was constantly annoyed by how unnatural it was. It didn’t feel like talking to a person. The audio cut out, it took a long time to respond, and when I was talking it would interrupt me,” he said.

“So I kept working on it and went for walks with it. Eventually, we fell in love with this conversation problem. It’s really hard to make something feel human. Voice assistants today are clunky and turn-based, so I wanted to create something with a human touch,” he said.

Technically, Vapi currently integrates a number of third-party APIs to build its voice conversation platform: Twilio for phone calls, Deepgram for transcription, Daily for audio streaming, OpenAI for responses, and PlayHT for text-to-speech.
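Vapi’s internal orchestration isn’t public, but the division of labor it describes can be illustrated with a single conversational turn wired through two of the named vendors. A rough sketch, with placeholder keys and the telephony and streaming layers (Twilio, Daily) omitted entirely:

```python
# Toy version of one STT -> LLM turn in a voice-bot pipeline, using two of the
# vendors named above: Deepgram for transcription, OpenAI for the reply.
# Keys, model names, and file paths are placeholder assumptions, not Vapi code.
import requests

DEEPGRAM_KEY = "your-deepgram-key"  # placeholder
OPENAI_KEY = "your-openai-key"      # placeholder

def transcribe(audio_wav: bytes) -> str:
    """Speech-to-text via Deepgram's pre-recorded audio endpoint."""
    r = requests.post(
        "https://api.deepgram.com/v1/listen",
        headers={"Authorization": f"Token {DEEPGRAM_KEY}",
                 "Content-Type": "audio/wav"},
        data=audio_wav,
    )
    r.raise_for_status()
    return r.json()["results"]["channels"][0]["alternatives"][0]["transcript"]

def reply(transcript: str) -> str:
    """Generate the assistant's answer with OpenAI's chat completions API."""
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "You are a concise phone assistant."},
                {"role": "user", "content": transcript},
            ],
        },
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

# One turn: caller audio in, reply text out (a TTS step would follow).
with open("caller_turn.wav", "rb") as f:
    print(reply(transcribe(f.read())))
```

A production system streams audio and begins transcribing before the caller finishes speaking; the batch-style calls above are only meant to show which vendor handles which stage.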

ScaleConvo, a startup from YC’s Winter 2024 batch, is already using Vapi to launch conversational bots for sales teams and property management companies, though Vapi did not reveal other customers. The company is making its API available through its Vapi Phone and Vapi Web products.

Vapi challenges

According to Magnus Revang, a former Gartner analyst and chief product officer at multimodal conversation startup Openstream.ai, one of the startup’s biggest challenges is reducing latency.

“OpenAI models take between 2 and 10 seconds to generate a response. On the phone, the gold standard is 700 milliseconds between when the user finishes speaking and when the ‘bot’ begins speaking. And with capable models (high parameter count open source models like LLaMA2 70B) it is very difficult to achieve sub-second latencies,” he says.

Currently, Vapi’s latency is between 1.2 and 2 seconds, depending on various factors. Dearsley expects that Vapi’s own efforts, along with improvements from OpenAI, will bring latency below one second in the next month.
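Those figures are end-to-end sums across stages like the ones sketched earlier, so the first step in shaving them down is measuring where each turn’s budget goes. A toy timing harness, reusing the hypothetical transcribe and reply helpers from the sketch above:

```python
# Time each stage of the turn loop to see where the latency budget goes.
# transcribe() and reply() are the placeholder sketches from earlier.
import time

def timed(label, fn, *args):
    """Run fn(*args), print how long it took, and return its result."""
    start = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {time.perf_counter() - start:.3f}s")
    return result

with open("caller_turn.wav", "rb") as f:
    audio = f.read()

text = timed("stt", transcribe, audio)  # speech-to-text stage
answer = timed("llm", reply, text)      # response-generation stage
# A real bot adds TTS plus network and telephony overhead on top, which is
# how a 700 ms budget gets eaten before the bot starts speaking.
```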

Mohamed Musbah, an angel investor in Vapi, also said the startup’s solution will improve with overall advances in the underlying APIs.

“As OpenAI and other companies improve their models, Vapi’s platform will become more powerful, with a better knowledge base, code execution capabilities, and a larger context window. As demand grows, Vapi’s focus on solving the biggest friction areas in voice communications will be a strength for the company,” he said.

However, this puts the burden of improvement on other companies’ solutions, not on Vapi itself. Dearsley acknowledged that the reliance on third-party APIs makes Vapi less defensible if large companies start moving into the space, but said the team has an advantage in having built infrastructure that handles thousands of calls simultaneously. He emphasized that with the general availability of Vapi’s web and phone APIs, the team will also look to build proprietary models for audio-to-audio solutions.

Source: techcrunch.com