AI Decodes Brain Waves of Paralyzed Individuals into Real-Time Audio

A man with paralysis is connected to a brain-computer interface system

Lisa E. Howard/Maitreyee Wairagkar et al. 2025

A man who lost his ability to speak can hold real-time conversations and even sing using a brain-controlled synthetic voice.

The brain-computer interface captures neural activity through electrodes implanted in the brain and instantly generates sounds that match his intended pitch, intonation, and emphasis.
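
The article does not describe the software involved, but the key to such low latency is causal, frame-by-frame decoding: each short bin of neural data is converted to audio as soon as it arrives, with no lookahead. The Python sketch below is purely illustrative; the bin width, sample rate, and the `decode_bin` stub are our assumptions, not details from the study.

```python
# Hypothetical streaming decode loop (illustrative only; not the study's code).
# Neural features arrive in short bins from 256 implanted electrodes; each bin
# is decoded into one audio frame immediately, keeping end-to-end delay at
# tens of milliseconds.
import numpy as np

N_CHANNELS = 256              # electrodes, per the article
BIN_MS = 10                   # assumed feature-bin width
SAMPLE_RATE = 16_000          # assumed audio sample rate
FRAME_SAMPLES = SAMPLE_RATE * BIN_MS // 1000

def decode_bin(features: np.ndarray) -> np.ndarray:
    """Stand-in for the trained decoder: one 256-channel feature vector
    in, one frame of synthesized audio samples out."""
    rng = np.random.default_rng(abs(int(features.sum() * 1e6)) % 2**32)
    return 0.01 * rng.standard_normal(FRAME_SAMPLES).astype(np.float32)

def stream(neural_bins):
    """Decode each bin as soon as it arrives -- no buffering ahead."""
    for features in neural_bins:
        yield decode_bin(features)

# Simulate one second of input: 100 bins of 256-channel features.
fake_bins = (np.random.rand(N_CHANNELS) for _ in range(100))
audio = np.concatenate(list(stream(fake_bins)))
print(audio.shape)  # (16000,) -- one second of audio, produced bin by bin
```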

“This is a breakthrough in instantaneous speech synthesis: converting brain signals into audible speech within 25 milliseconds,” says Sergey Stavisky at the University of California, Davis.

While further work is needed to improve speech clarity, Maitreyee Wairagkar, also at UC Davis, says the man, who lost his speech to amyotrophic lateral sclerosis (ALS), is happy with the system and feels it has given him back his real voice.

Existing speech neuroprostheses that use brain-computer interfaces typically take a few seconds to convert brain activity into sound. That delay disrupts natural conversation, Stavisky says, and can make talking feel like a poor-quality phone call.

To create a more seamless speech experience, Wairagkar, Stavisky, and their team implanted 256 electrodes in the areas of the man's brain that control the facial muscles used for speech. In subsequent sessions, they displayed thousands of sentences on a screen, asked him to attempt to speak them with specific intonations, and recorded his brain activity.

“For instance, shifting the intonation or emphasis in a phrase like ‘How are you today?’ can significantly alter its meaning,” explains Stavisky. “This allows for richer, more natural dialogue, a significant advance over previous technologies.”

The researchers trained an AI model to link particular patterns of neural activity with the words and tonal variations the man was attempting to produce, yielding synthetic speech that reflects both the content and the emotional delivery he intends.
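
The article does not specify the model architecture, so the following is only a sketch of the general approach: a causal recurrent network maps each bin of neural features to acoustic parameters (pitch, voicing, spectral envelope) that a vocoder would render as sound. Because the network only looks backward in time, it can run incrementally in real time. All layer sizes and output choices below are assumptions for illustration.

```python
# Illustrative decoder (an assumption; the study's architecture is not
# described in the article). A causal GRU maps 256-channel neural features
# to per-frame acoustic parameters.
import torch
import torch.nn as nn

class SpeechDecoder(nn.Module):
    def __init__(self, n_channels=256, hidden=512, n_mels=80):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.pitch = nn.Linear(hidden, 1)     # intended f0 (intonation)
        self.voiced = nn.Linear(hidden, 1)    # voiced/unvoiced probability
        self.mel = nn.Linear(hidden, n_mels)  # spectral envelope per frame

    def forward(self, x, state=None):
        # x: (batch, time, channels). The GRU is causal: each output frame
        # depends only on past bins, which is what real-time use requires.
        h, state = self.rnn(x, state)
        return self.pitch(h), torch.sigmoid(self.voiced(h)), self.mel(h), state

decoder = SpeechDecoder()
one_bin = torch.randn(1, 1, 256)           # one bin of neural features
f0, voiced, mel, state = decoder(one_bin)  # decode incrementally, bin by bin
print(f0.shape, voiced.shape, mel.shape)   # per-frame acoustic outputs
```

Explicit pitch and voicing outputs are one plausible way to carry intonation and emphasis through to the synthesized voice, which is what distinguishes this system from text-only decoders.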

The AI was trained on audio recordings made before the man's condition progressed, using voice-cloning technology so that the synthetic speech resembles his original voice.

In another phase of the study, the researchers asked him to attempt to sing simple melodies with varying pitches; the model interpreted the intended pitch in real time and adjusted the synthesized singing voice accordingly.

He also uses the system to communicate spontaneously, producing interjections such as “hmm” and “eww” and forming words of his own, Wairagkar notes.

“He’s a remarkably articulate and intelligent individual,” says David Brandman from UC Davis. “Despite his paralysis, he has continued to participate actively in work and engage in meaningful conversations.”

Source: www.newscientist.com

Physicists witness real-time movement of electrons in liquid water for the first time

A research team led by physicists at Argonne National Laboratory isolated the energetic motion of electrons while “freezing” the motion of the much larger atoms they orbit in a sample of liquid water.

Synchronized attosecond X-ray pulse pairs (pictured here in pink and green) from an X-ray free-electron laser were used to study the energetic response of electrons (gold) in liquid water on the attosecond time scale, while the hydrogen (white) and oxygen (red) atoms are effectively “frozen” in time. Image credit: Nathan Johnson, Pacific Northwest National Laboratory.

“The radiation-induced chemical reactions we want to study are the result of the electronic response of the target, which occurs on the attosecond time scale,” said study lead author Professor Linda Young, a researcher at Argonne National Laboratory.

Professor Young and colleagues combined experiment and theory to reveal, in real time, what happens when ionizing radiation from an X-ray source hits matter.

Accessing the time scales on which these events occur will provide a deeper understanding of the complex radiation-induced chemistry that follows.

In fact, researchers originally came together to develop the tools needed to understand the effects of long-term exposure to ionizing radiation on chemicals found in nuclear waste.

“Attosecond time-resolved experiments are one of the major R&D developments at the Linac Coherent Light Source (LCLS),” said study co-author Dr. Ago Marinelli, a researcher at the SLAC National Accelerator Laboratory.

“It's exciting to see these developments applied to new types of experiments and moving attosecond science in new directions.”

The scientists developed a technique, X-ray attosecond transient absorption spectroscopy in liquids, that allowed them to “watch” electrons energized by X-rays move into an excited state before the larger atomic nuclei have time to respond.
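
Concretely, in transient absorption spectroscopy the measured quantity is the pump-induced change in the probe pulse's absorbance as a function of photon energy and pump-probe delay. In the standard convention (a general formula, not one quoted from the paper):

```latex
% Standard transient-absorption observable (general convention, not
% quoted from the paper): the pump-induced change in absorbance at
% probe photon energy E and pump-probe delay \tau.
\Delta A(E,\tau) = -\log_{10}
  \frac{I_{\mathrm{probe}}^{\mathrm{pumped}}(E,\tau)}
       {I_{\mathrm{probe}}^{\mathrm{unpumped}}(E)}
```

Scanning the delay τ in attosecond steps traces how the excited electrons evolve while the nuclei are still effectively stationary.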

“In principle, we have tools that allow us to track the movement of electrons and watch newly ionized molecules form in real time,” Professor Young said.

The discovery resolves a long-standing scientific debate about whether the X-ray signals seen in previous experiments are the result of two distinct structural motifs in liquid water or of the motion of hydrogen atoms.

These experiments conclusively demonstrate that these signals are not evidence of two structural motifs in the surrounding liquid water.

“Essentially, what people were seeing in previous experiments was a blur caused by the movement of hydrogen atoms,” Professor Young explained.

“By recording everything before the atoms moved, we were able to eliminate that movement.”
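
A back-of-envelope estimate (ours, not the paper's) shows why attosecond pulses freeze the hydrogen atoms: at room temperature a hydrogen atom's thermal speed is a few kilometres per second, so even over a full femtosecond it travels only hundredths of an ångström, a tiny fraction of the roughly 1 Å O-H bond length.

```latex
% Illustrative estimate (ours, not the paper's): thermal speed of a
% hydrogen atom at T = 300 K, and how far it travels in one femtosecond.
v \approx \sqrt{\frac{3 k_B T}{m_{\mathrm{H}}}}
  = \sqrt{\frac{3 \times (1.38\times10^{-23}\,\mathrm{J\,K^{-1}})
                \times (300\,\mathrm{K})}
               {1.67\times10^{-27}\,\mathrm{kg}}}
  \approx 2.7\,\mathrm{km\,s^{-1}},
\qquad
d \approx v \times 10^{-15}\,\mathrm{s} \approx 2.7\times10^{-12}\,\mathrm{m}
% i.e. about 0.03 angstrom, far less than the ~1 angstrom O-H bond.
```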

To make this discovery, the authors used a technique developed at SLAC to spray an ultrathin sheet of pure water across the path of the X-ray pump pulse.

“We needed a clean, flat, thin sheet of water where we could focus the X-rays,” said study co-author Dr. Emily Nienhuis, a chemist at Pacific Northwest National Laboratory.

Once the X-ray data had been collected, the researchers applied their expertise in interpreting X-ray signals to reproduce the signals observed at SLAC.

They modeled the response of liquid water to attosecond X-rays and verified that the observed signal was indeed confined to the attosecond timescale.
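
The article gives no computational detail, but the logic of such a check can be illustrated with a toy model: treat the measured trace as an intrinsic electronic response convolved with the instrument's pump-probe cross-correlation, and confirm that the observed rise is as fast as the instrument allows. The numbers below (a 0.2 fs response, a 0.7 fs cross-correlation) are invented for illustration.

```python
# Toy consistency check (our construction, not the study's code): a sub-fs
# electronic response convolved with the finite X-ray pulse duration.
import numpy as np

t = np.linspace(-5, 5, 2001)   # pump-probe delay axis, femtoseconds
dt = t[1] - t[0]

def gaussian(t, fwhm):
    sigma = fwhm / 2.3548      # FWHM -> standard deviation
    return np.exp(-t**2 / (2 * sigma**2))

# Assumed intrinsic response: switches on in ~0.2 fs (200 attoseconds).
response = np.where(t >= 0, 1.0 - np.exp(-np.clip(t, 0, None) / 0.2), 0.0)

# Assumed instrument cross-correlation of the two attosecond pulses.
irf = gaussian(t, fwhm=0.7)
irf /= irf.sum()

measured = np.convolve(response, irf, mode="same")

# If the 10-90% rise of the convolved trace matches the instrument limit,
# the underlying dynamics are confined to the attosecond scale.
i10 = np.argmax(measured > 0.1 * measured.max())
i90 = np.argmax(measured > 0.9 * measured.max())
print(f"10-90% rise time: {(i90 - i10) * dt:.2f} fs")
```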

“Using the Hyak supercomputer, we developed a cutting-edge computational chemistry technique that enabled detailed characterization of the transient high-energy quantum states in water,” said study co-author Professor Xiaosong Li, a researcher at the University of Washington and Pacific Northwest National Laboratory.

“This methodological breakthrough represents a pivotal advance in our quantum-level understanding of ultrafast chemical transformations, with extraordinary precision and atomic-level detail.”

The team worked together to peer into the real-time movement of electrons in liquid water.

“The methodology we have developed enables the study of the origin and evolution of reactive species produced by radiation-induced processes encountered in space travel, cancer treatment, nuclear reactors, legacy waste, and more,” Professor Young said.

The team's results were published in the journal Science.

_____

Shuai Li et al. 2024. Attosecond-pump attosecond-probe X-ray spectroscopy of liquid water. Science, published online February 15, 2024; doi: 10.1126/science.adn6059

Source: www.sci.news

OpenAI Introduces Sora, an Artificial Intelligence (AI) Tool that Generates Videos from Text

OpenAI on Thursday announced a tool that can generate videos from text prompts.

The new model, called Sora after the Japanese word for “sky,” can create up to a minute of realistic footage that follows the user’s instructions for both subject matter and style. The model can also create videos based on still images or enhance existing footage with new material, according to a company blog post.

“We teach AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” the blog post says.

One video included among the company’s first examples was based on the following prompt: “A movie trailer featuring the adventures of a 30-year-old spaceman wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”

The company announced that it has opened access to Sora to a number of researchers and video creators. According to the blog post, these experts will “red-team” the product, testing whether it can be made to evade OpenAI’s terms of service, which prohibit “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.” For now, access is limited to researchers, visual artists, and filmmakers, but after the announcement CEO Sam Altman took to Twitter to answer users’ prompt suggestions, posting videos he said were created by Sora. The videos carry a watermark indicating that they were created by AI.

The company debuted its still-image generator Dall-E in 2021 and its generative AI chatbot ChatGPT in November 2022; the latter quickly gained 100 million users. Other AI companies have also debuted video-generation tools, but those models could only produce a few seconds of footage that often bore little relation to the prompt. Google and Meta have said they are developing generative video tools, though neither has made one publicly available. On Wednesday, OpenAI also announced an experiment giving ChatGPT deeper memory, allowing it to remember more of a user's chats.

OpenAI did not tell the New York Times how much footage was used to train Sora, saying only that the corpus includes publicly available videos and videos licensed from copyright holders, nor did it reveal the sources of the training videos. The company has been sued multiple times for alleged copyright infringement in the training of generative AI tools that digest vast amounts of material collected from the internet and mimic the images and text contained in those datasets.

Source: www.theguardian.com