How Noise Reduction Technology May Subtly Alter Your Brain Function

Noise-canceling headphones function by utilizing a microphone that detects external sounds. Through sophisticated electronics, these sounds are ‘cancelled’ by playing an inverted wave to the listener, which diminishes the audio signal reaching the eardrum.

This mechanism is akin to how a car’s active suspension mitigates vibrations from uneven roads.
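The cancellation step can be sketched in a few lines: summing a sound wave with its sign-inverted copy leaves (ideally) nothing for the eardrum. This is a minimal illustration of the principle, not real-time DSP code; the signal names and values are made up.

```python
import numpy as np

# Toy demonstration of destructive interference: the anti-noise wave is
# the ambient noise flipped in sign, so their sum at the eardrum is zero.
t = np.linspace(0.0, 1.0, 1000)            # one second of samples
noise = 0.5 * np.sin(2 * np.pi * 100 * t)  # a 100 Hz ambient hum
anti_noise = -noise                        # the inverted wave the headphones play
residual = noise + anti_noise              # what actually reaches the eardrum

print(np.max(np.abs(residual)))  # 0.0: perfect cancellation in this ideal case
```

In real headphones the microphone, electronics, and speaker all add latency and error, so cancellation is only partial and works best on low-frequency, steady sounds.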

The outcome is that listeners enjoy crystal-clear audio with almost no interference from background noise.

Moreover, these headphones help safeguard your ears from high volume levels. By reducing background noise, your device doesn’t need to produce sound as loudly. Hence, parents globally often encourage their children to wear headphones.

Sounds advantageous, right? But then I began hearing stories about young people facing increasing challenges, such as Auditory Processing Disorder (APD).

These individuals frequently struggle to comprehend sounds and speech amidst distracting background noise.

The underlying causes may be linked to a notable rise in young people using noise-canceling headphones and relying on subtitles while watching videos.

Instead of their brains developing typically and learning to filter the noisy environment, they wear noise-canceling headphones for extended periods, regardless of their location, thereby not allowing their brains to adapt properly.

Our brains function like muscles; they evolve in response to external stimuli.

Just as biking 100 miles a day will sculpt your thighs, exposing yourself solely to pure audio without any background noise may weaken your auditory processing skills, leaving you less able to process multiple sounds simultaneously.

Auditory therapy can be beneficial in retraining the brain, but the optimal approach is to engage more with the world around you before complications develop. Over-isolating ourselves may lead to greater issues.


This article addresses the question (submitted by Mary Watkins): “Can noise-canceling headphones harm your ears?”




Source: www.sciencefocus.com

Is “Brain Rot” a Reality? Researchers Highlight Emerging Risks Linked to Short-Form Videos

Short-form videos are dominating social media, prompting researchers to explore their impact on engagement and cognitive function. Your brain may even be changing.

From TikTok to Instagram Reels to YouTube Shorts, short videos have spread even to platforms like LinkedIn and Substack. However, emerging research links heavy short-form video consumption to problems with concentration and self-control.

The initial findings resonate with concerns about “brain rot,” defined by Oxford University Press as “the perceived deterioration of a person’s mental or intellectual condition.” This term has gained such popularity that it was named the word of the year for 2024.

In September, a review of 71 studies involving nearly 100,000 participants found that extensive short-form video use was correlated with cognitive decline, especially in attention span and impulse control. Published in the American Psychological Association’s journal Psychological Bulletin, the review also connected heavy consumption to heightened symptoms of depression, anxiety, stress, and loneliness.

Similarly, a paper released in October summarized 14 studies that indicated frequent consumption of short-form videos is linked to shorter attention spans and poorer academic performance. Despite rising concerns, some researchers caution that the long-term effects remain unclear.

James Jackson, a neuropsychologist at Vanderbilt University Medical Center, noted that fear of new technologies is longstanding, whether over video games or earlier forms of entertainment. He acknowledges legitimate concerns but warns against overreacting. “It’s naive to dismiss worries as just grumpy complaints,” he said.

Jackson emphasized that research indicates extensive short-form video consumption could adversely affect brain function, yet further studies are needed to identify who is most at risk, the long-lasting impact, and the specific harmful mechanisms involved.

ADHD diagnoses in the U.S. are on the rise, with about 1 in 9 children diagnosed by 2022, according to the CDC. Keith Robert Head, a doctoral student at Capella University, suggests that the overlap between ADHD symptoms and risks from short videos deserves attention. “Are these ADHD diagnoses truly ADHD, or merely effects of short video use?” he questioned.

Three experts noted that research on the long-term effects of excessive short-form video use is in its early stages, with international studies revealing links to attention deficits, memory issues, and cognitive fatigue. However, these studies do not establish causation, often capturing only a snapshot in time.

Dr. Nidhi Gupta, a pediatric endocrinologist focused on screen-time effects, argues that more research is necessary, particularly concerning older adults, who may be more vulnerable. Gupta cautions that the cognitive changes associated with short-form media may amount to a new addiction, likening it to “video games and TV on steroids.” She speculated that, just as research on alcohol and drugs took decades to mature, a clearer picture of short-form video’s effects could emerge within the next 5 to 10 years.

Nevertheless, Jackson contends that short-form videos can be beneficial for online learning and community engagement: “The key is balance. If this engagement detracts from healthier practices or fosters isolation, then that becomes a problem.”

Source: www.nbcnews.com

Neuroscientists Discover Five Key Phases of Brain Structure Development Across the Human Lifespan

Recent findings from neuroscientists reveal that the brain’s structure divides into five main stages throughout a typical person’s life, marked by four significant turning points from birth to death where the brain undergoes reorganization. Brain topology in children evolves from birth up to a crucial transition at age 9, then shifts into adolescence, which generally lasts until around age 32. In your early 30s, the neural wiring transitions to adult mode, marking the longest phase that extends for over 30 years. The third turning point occurs at about age 66, indicating the start of an early aging phase of brain structure, while the late brain phase begins around age 83.

Using a dataset of MRI diffusion scans, Mousley et al. compared the brains of 3,802 individuals aged 0 to 90 years. The scans map neural connections by tracking the movement of water molecules through brain tissue. Image credit: Mousley et al., doi: 10.1038/s41467-025-65974-8.

“While we know brain wiring plays a crucial role in our development, we still lack a comprehensive understanding of how and why it fluctuates throughout life,” explained Dr. Alexa Mousley, a researcher at the University of Cambridge.

“This study is the first to pinpoint essential stages in brain wiring throughout the human lifespan.”

“These epochs offer vital insight into our brain’s strengths and vulnerabilities at different life stages.”

“Understanding these changes could shed light on why certain developmental challenges arise, such as learning difficulties in early childhood or dementia later in life.”

During the transition from infancy to childhood, strengthened neural networks emerge as the excess of synapses (the connections between neurons) in a baby’s brain diminishes, allowing only the most active synapses to thrive.

The brain rewires in a consistent pattern from birth until approximately age 9.

In this timeframe, the volumes of gray and white matter grow swiftly, resulting in maximal cortical thickness (the distance from the outer gray matter to the inner white matter), with the cortical folds stabilizing.

By the first turning point at age 9, cognitive abilities begin to evolve gradually, and the likelihood of mental health issues becomes more pronounced.

The second stage, adolescence, is characterized by an ongoing increase in white matter volume, leading to an enhancement in the sophistication of the brain’s communication networks, measurable through water diffusion scans.

This phase is marked by improved connectivity efficiency across specific regions and swift communication throughout the brain, correlating with enhanced cognitive performance.

“As expected, neural efficiency is closely linked to shorter pathways, and this efficiency increases throughout adolescence,” Mousley notes.

“These advancements peak in your early 30s, representing the most significant turning point in your lifetime.”

“Around age 32, the change in wiring direction is the most pronounced, and the overall trajectory alteration is greater than at any other turning points.”

“Although the onset of puberty is clearly defined, the conclusion is far harder to identify scientifically.”

“Based solely on neural structure, we found that puberty-related changes in brain structure conclude by the early 30s.”

Post age 32, adulthood enters its longest phase, characterized by a more stable brain structure with no significant turning points for three decades. This aligns with findings indicating an “intellectual and personality plateau.”

Additionally, the researchers observed a greater degree of “segregation” during this phase, indicating a gradual fragmentation of brain regions.

The turning point at age 66 is more gradual, lacking dramatic structural shifts; however, notable changes in brain network patterns were found around this age on average.

“Our findings indicate a gradual reconfiguration of brain networks that peaks in the mid-60s,” stated Dr. Mousley.

“This is likely linked to aging, as white matter begins to decline, reducing connectivity further.”

“We are currently facing an era where individuals are increasingly at risk for various health conditions impacting the brain, such as high blood pressure.”

The final turning point arises around age 83, ushering in the last stage of brain structure.

Data from this stage is scarce, but a key characteristic is the shift from global to local connectivity as interactions across the brain diminish while reliance on specific regions intensifies.

Professor Duncan Astle of the University of Cambridge remarked: “On reflection, many of us recognize that our lives encompass distinct stages.”

“Interestingly, the brain also navigates through these phases.”

“Numerous neurodevelopmental, mental health, and neurological conditions are tied to the brain’s wiring.”

“In fact, variations in brain wiring can predict challenges with attention, language, memory, and a wide array of other behaviors.”

“Recognizing that structural transformations in the brain occur not in a linear fashion but through several major turning points can assist us in identifying when and how brain wiring may be vulnerable to disruptions.”

A paper detailing the study was published in the journal Nature Communications on November 25.

_____

A. Mousley et al. 2025. Topological turning points across the human lifespan. Nat Commun 16, 10055; doi: 10.1038/s41467-025-65974-8

Source: www.sci.news

Nightmares Could Signal Brain Health Issues

Many people experience unusual bad dreams. If you often wake up feeling anxious and sweaty, you might be concerned whether it’s simply stress or if there’s a deeper issue at play.

Recent research has indicated a link between frequent nightmares and a heightened risk of dementia.

A 2022 study published in the Lancet journal eClinicalMedicine revealed that middle-aged individuals who have weekly nightmares are more prone to cognitive decline.

Furthermore, older adults with recurrent nightmares showed an increased likelihood of developing dementia. While this may seem alarming, should it genuinely be a cause for concern?

Individuals with mental health conditions, such as anxiety and depression, are more prone to experiencing bad dreams – Image courtesy of Getty Images

Not necessarily. The study suggests a correlation but does not establish causation. It remains uncertain whether nightmares are early indicators of existing changes in the brain or if sleep disturbances contribute to disease progression.

Other factors could also be at play—individuals suffering from anxiety, depression, and poor sleep (which themselves have ties to elevated dementia risk) are more likely to encounter bad dreams.

What we do know is that sleep is vital for brain health. Regardless of the underlying cause, there’s evidence that chronic sleep disruption or low-quality sleep may elevate the long-term risk of cognitive decline.

The takeaway? Experiencing regular nightmares alone does not serve as a dependable early warning of Alzheimer’s disease.

For now, practicing good sleep hygiene is the most effective initial step—not just for pleasant dreams, but for a healthy brain. Aim for a consistent bedtime, minimize caffeine and alcohol intake, and limit screen time before sleeping.


This article addresses the query (from Aaron Martin of Stoke-on-Trent): “I keep having nightmares. Should I be worried?”




Source: www.sciencefocus.com

Study identifies five distinct ‘eras’ of brain development throughout human life.

As we grow older, our brains undergo significant rewiring.

Recent studies indicate that this transformation takes place in various stages, or “epochs,” as our neural structures evolve, altering how we think and process information.

For the first time, scientists have pinpointed four key turning points in the typical aging brain: ages 9, 32, 66, and 83. During each of these phases, our brains display distinctly different structural characteristics.

The findings, published Tuesday in Nature Communications, reveal that human cognitive ability does not merely peak and then decline with age. In fact, the research suggests that the interval between ages 9 and 32 is the sole period in which our neural networks grow increasingly efficient.

In adulthood, from 32 to 66 years, the structure of the average brain stabilizes without significant modifications, leading researchers to believe that intelligence and personality tend to plateau during this time.

Following another turning point, from age 83 and beyond, the brain increasingly relies on specific regions as connections between them slowly deteriorate.

“It’s not a linear progression,” comments lead author Alexa Mousley, a postdoctoral researcher at the University of Cambridge. “This marks an initial step in understanding how brain changes differ with age.”

These insights could shed light on why certain mental health and neurological issues emerge during specific rewiring phases.

Rick Betzel, a neuroscience professor at the University of Minnesota and not a part of the study, remarked that while the findings are intriguing, further data is necessary to substantiate the conclusions. He cautioned that the theory might face challenges over time.

“They undertook a very ambitious effort,” Betzel said about the study. “We shall see where things stand in a few years.”

For their research, Mousley and colleagues examined MRI diffusion scans (images illustrating water molecule movement in the brain) of around 3,800 individuals, ranging from newborns to 90-year-olds. Their objective was to map neural connections at varying life stages.

In the brain, bundles of nerve fibers that convey signals are encased in fatty tissue called myelin, analogous to wiring or plumbing. Water molecules diffusing through the brain typically travel along these fibers, allowing researchers to identify neural pathways.

“We can’t open up the skull…we depend on non-invasive techniques,” Betzel mentioned, discussing this form of neuroscience research. “We aim to determine the location of these fiber bundles.”

The study used these MRI scans to chart the neural networks of an average individual across the lifespan, pinpointing where connections strengthen or weaken. The five “eras” discussed in the paper reflect the neural connections observed by the researchers.

They propose that the initial stage lasts until age nine, during which both gray and white matter rapidly increase. This phase also involves the pruning of redundant synapses as the brain reorganizes itself.

Between ages 9 and 32, there is an extensive period of rewiring. The brain is characterized by swift communication across its regions and efficient connections.

Most mental health disorders are diagnosed during this interval, Mousley pointed out. “Is there something about this second phase of life that might predispose individuals to mental health issues?”

From ages 32 to 66, the brain reaches a plateau. It continues to rewire, but this process occurs at a slower and less dramatic pace.

Subsequently, from ages 66 to 83, the brain undergoes “modularization,” where neural networks split into highly interconnected subnetworks with diminished central integration. By age 83, connectivity further declines.

Betzel expressed that the theory presented in this study is likely reflective of people’s experiences with aging and cognition.

“It’s something we naturally resonate with. I have two young kids, and I often think, ‘They’re transitioning out of toddlerhood,'” Betzel remarked. “Science may eventually uncover the truth. But are they precisely at the correct age? I’m not sure.”

Ideally, researchers would gather MRI diffusion data on a large cohort, scanning each individual across their lifespan, but that was unfeasible decades ago due to technological constraints.

Instead, the team amalgamated nine diverse datasets containing neuroimaging from prior studies, striving to harmonize them.

Betzel noted that these datasets vary in quality and methodology, and attempts to align them may obscure essential variations and introduce bias into the findings.

Nonetheless, he acknowledged that the paper’s authors are “thoughtful” and proficient scientists who did their utmost to mitigate that risk.

“Brain networks evolve throughout life; that’s undeniable. But are there five precise moments of transition? I hope you’ll take note of this intriguing notion.”

Source: www.nbcnews.com

Four Major Turning Points in Brain Development from Birth to Age 90

The wiring of our neurons evolves over the decades

Alexa Mousley, University of Cambridge

Our brain’s functionality isn’t static throughout our lives. We know that our capacity for learning and the risk of cognitive decline fluctuate from infancy to our 90s. Recently, scientists may have uncovered a possible reason for this change. The wiring of our brains seems to experience four key turning points at ages 9, 32, 66, and 83.

Previous studies indicate that our bodies undergo three rapid aging cycles around the ages of 40, 60, and 80. However, the complexity of the brain complicates our understanding.

The brain consists of distinct regions that communicate through white matter tracts. These tracts are wire-like structures formed by long, slender projections known as axons, which extend from neurons, or brain cells. These connections significantly influence cognitive functions, including memory. Nevertheless, it was uncertain if this substantial change in wiring transpires throughout one’s life. “No one has combined multiple metrics to characterize stages of brain wiring,” states Alexa Mousley from Cambridge University.

In an effort to bridge this knowledge gap, Mousley and her team examined MRI scans of roughly 3,800 individuals from the UK and US, primarily white, spanning ages from newborn to 90 years. These scans were previously gathered as part of various brain imaging initiatives, most of which excluded individuals with neurodegenerative diseases or mental health issues.

The researchers discovered that the brain wiring of individuals reaching 90 years old typically progresses through five significant stages, separated by four primary turning points.

In the initial stage, from birth to age nine, the white matter tracts between brain areas seem to become longer, more intricate, and less efficient. “It takes time for information to travel between regions,” explains Mousley.

This may be due to the abundance of connections in our brains as young children. As we age and gain experiences, we gradually eliminate unused connections. Mousley notes that the brain prioritizes making broader connections, beneficial for activities like piano practice, though at the expense of efficiency.

However, during the second stage, from ages 9 to 32, this trend appears to reverse, potentially driven by the onset of puberty and hormonal shifts affecting brain development. “Suddenly, your brain’s connections become more efficient. Connections become shorter, allowing information to traverse more swiftly,” says Mousley. This could enhance skills such as planning and decision-making, along with improved cognitive abilities like working memory.

The third stage, which spans from 32 to 66 years, is the longest phase. “During this stage, the brain continues to change, albeit at a slower rate,” Mousley explains. Specifically, she notes that connections between regions tend to become less efficient over time. “It’s unclear what exactly triggers this change; however, the 30s often involve significant lifestyle alterations, like starting a family, which may play a role,” she adds. This inefficiency might also stem from general physical wear and tear, as noted by Katia Rubia from King’s College London.

From ages 66 to 83, the connections between neurons in the same brain area tend to remain more stable than those among different regions. “This is noteworthy, especially as the risk of developing conditions like dementia increases during this period,” Mousley remarks.

In the final stage, from ages 83 to 90, connections between brain regions weaken, and the brain relies more heavily on “hubs” that link multiple areas. “This indicates that there are fewer resources available to maintain connections at this age, leading the brain to depend on specific areas to serve as hubs,” Mousley explains.

Understanding these alterations in the brain could provide insights into why mental health issues arise, typically before the age of 25, and why individuals over 65 are particularly vulnerable to dementia, she states.

“It’s vital to comprehend the normal stages of structural changes in the brain throughout the human lifespan, so future research can explore deviations that occur in mental health and neurodegenerative disorders,” Rubia notes. “Grasping the causes of these deviations can assist us in pinpointing treatment strategies. For instance, we might examine which environmental factors or chemicals are responsible for these differences and discover methods to counteract them through treatments, policies, and medications.”

Nevertheless, Rubia emphasizes the need for further research to determine whether these findings apply to a more ethnically and geographically diverse population.


Source: www.newscientist.com

Ultrasound Could Enhance Stroke Survival by Clearing Brain Debris

Ultrasound can penetrate the skull and reach the brain

Shutterstock/peterschreiber.media

Recent research suggests that pulsed ultrasound waves directed at the brain may enhance survival rates following a specific stroke type by promoting the removal of inflammatory dead blood cells, based on findings from a study involving mice. This technique, which boosts lymphatic drainage efficiency, could also have applications for treating Alzheimer’s disease, with clinical trials anticipated to commence next year.

Hemorrhagic stroke constitutes around 15% of all strokes and occurs when a blood vessel in the brain bursts. The resulting bleeding disrupts the brain’s oxygen supply and damages cells, which can lead to motor and cognitive problems.

Treatments typically involve sealing the ruptured blood vessel with small metal clips and extracting dead red blood cells via a catheter or similar device; neglecting this can exacerbate inflammation and cause further tissue damage. However, the procedure is highly invasive, posing risks of brain damage and infection, as noted by Raag Airan at Stanford University in California.

After unexpectedly observing the effects of prolonged ultrasound application while activating a drug in mouse brains, Airan wondered whether pulsed ultrasound could also help clear “debris” from the brain. “When I observed the drug’s effects, it appeared to spread throughout the brain, almost as if it were being ‘painted’ over,” he recounted.

To probe this idea, the research team simulated a hemorrhagic stroke by injecting blood drawn from the mice’s tails into their brains. For three consecutive days, they administered pulsed ultrasound to the skulls of half the mice for 10 minutes each day, while the others received no treatment.

Subsequently, all mice underwent a three-minute test in a water tank divided into four corners, with healthy mice typically turning in either direction 50% of the time. The team discovered that mice treated with ultrasound turned left 39% of the time, compared to 27% for the control group. Additionally, treated mice exhibited stronger grips on a metal bar than their untreated counterparts, indicating they suffered less brain damage, a conclusion that was later substantiated through brain slice analyses conducted post-euthanasia.

One week after the blood injection, around half of the control-group mice had died, compared with only one-fifth of the ultrasound-treated group. That improvement in survival of roughly 30 percentage points was achieved with just three 10-minute ultrasound treatments, according to Airan.

Further insights revealed that the ultrasound pulses triggered pressure-sensitive proteins in microglia, the brain’s immune cells, reducing their inflammation and enhancing their ability to clear dead red blood cells. Additionally, this technique improved the flow of cerebrospinal fluid, facilitating the removal of dead cells to lymph nodes in the neck, which are part of the lymphatic system responsible for eliminating metabolic waste.

While more investigation is necessary, the method might also have potential against other brain disorders. “If ultrasound can efficiently remove larger red blood cells from the brain, it stands to reason it could also eliminate smaller toxic proteins, such as the misfolded tau associated with Parkinson’s and Alzheimer’s diseases,” Airan explained.

Experts are impressed with this promising research due to its non-invasive nature. Kathleen Caron from the University of North Carolina at Chapel Hill noted that the lymphatic systems in mice and humans show considerable similarities, indicating this approach could be applicable in human cases as well.

Ultrasound at these intensities is considered safe, and while research is ongoing to confirm the findings, Airan is optimistic that the treatment carries no unforeseen side effects.

Ultimately, the research team aspires to test this technique on individuals suffering from hemorrhagic strokes that necessitate urgent intervention. They aim to gather additional data on its safety and efficacy for Alzheimer’s patients, with trials projected to begin next year, according to Airan.


Source: www.newscientist.com

This Small Worm Brain Could Revolutionize Artificial Intelligence: Here’s How.

Contemporary artificial intelligence (AI) models are vast, relying on energy-hungry server farms and operating on billions of parameters trained on extensive datasets.

Is this the only way forward? It seems not. One of the most exciting prospects for the future of machine intelligence began with something significantly smaller: the minute worm.

Inspired by Caenorhabditis elegans, a tiny creature measuring just a millimeter and possessing only 302 neurons, researchers have designed a “liquid neural network,” a radically different type of AI capable of learning, adapting, and reasoning on a single device.

“I wanted to understand human intelligence,” Dr. Ramin Hasani, co-founder and CEO of Liquid AI, the company pioneering this mini-revolution, told BBC Science Focus. “However, we found that there was minimal information available about the human brain, or even the brains of rats and monkeys.”

At that point, the most thoroughly mapped nervous system belonged to C. elegans, providing a starting point for Hasani and his team.

The appeal of C. elegans lay not in its behavior, but in its “neurodynamics,” or how its cells communicated with one another.

The neurons in this worm’s brain transmit information through analog signals rather than the sharp electrical spikes typical of larger animals. As nervous systems developed and organisms increased in size, spiking neurons became more efficient for information transmission over distances.

Nonetheless, the origins of human neural computation trace back to the analog realm.

For Hasani, this was an enlightening discovery. “Biology provides a unique lens to refine our possibilities,” he explained. “After billions of years of evolution, every viable method to create efficient algorithms has been considered.”

Instead of emulating the worm’s neurons one by one, Hasani and his collaborators aimed to capture their essence of flexibility, feedback, and adaptability.

“We’re not practicing biomimicry,” he emphasized. “We draw inspiration from nature, physics, and neuroscience to enhance artificial neural networks.”


What characterizes them as “liquid”?

Conventional neural networks, like those powering today’s chatbots and image generators, tend to be very static. Once trained, their internal connections are fixed and not easily altered through experience.

Liquid neural networks, however, offer a different approach. “They have a fluidity that enhances adaptability,” said Hasani. “These systems can remain dynamic throughout computation.”

To illustrate, he referenced self-driving cars. When driving in rain, adjustments must be made even if visibility (or input data) becomes obscured. Thus, the system must adapt and be sufficiently flexible.

Traditional neural networks operate in a strictly feedforward, deterministic fashion: the same input always produces the same output, and data flows one way through the layers. While this is a simplified view, it captures the essential point.

Liquid neural networks function differently: neurons can influence one another bidirectionally, resulting in a more dynamic system. Consequently, these models behave stochastically. Providing the same input twice may yield slightly varied responses, akin to biological systems.
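The contrast can be sketched roughly in code. This toy example is not Liquid AI’s implementation (all names, weights, and update rules here are illustrative); it simply shows a fixed feedforward pass next to a continuous-time-style update whose hidden state keeps shifting with each input.

```python
import numpy as np

# Toy contrast between a static feedforward pass and a "liquid"
# (continuous-time, recurrent) neuron whose state evolves with each input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))  # shared, fixed weights

def feedforward(x):
    # Static network: no internal state, same input -> same output, always.
    return np.tanh(W @ x)

def liquid_step(state, x, dt=0.1, tau=1.0):
    # Liquid-style update: the hidden state decays toward zero (-state/tau)
    # and is nudged by each new input, so history matters.
    dstate = (-state / tau + np.tanh(W @ x + state)) * dt
    return state + dstate

x = np.ones(3)
s = np.zeros(3)
s1 = liquid_step(s, x)    # first presentation of x
s2 = liquid_step(s1, x)   # same input again -> a different internal state
print(np.allclose(feedforward(x), feedforward(x)))  # True: static
print(np.allclose(s1, s2))                          # False: dynamic
```

The design difference is the state: the feedforward function has none, while the liquid step carries its state forward, so identical inputs arriving at different moments produce different internal responses.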

C. elegans is a small worm, about 1 mm long, that thrives in moist, nutrient-rich settings like soil, compost piles, and decaying vegetation. – Credit: iStock / Getty Images Plus

“Traditional networks take input, process it, and deliver results,” stated Hasani. “In contrast, liquid neural networks perform calculations while simultaneously adjusting their processing methods with each new input.”

The mathematics behind these networks is complex. Earlier versions were slow due to the reliance on intricate equations requiring sequential resolution before yielding an output.

In 2022, Hasani and his team published a study in Nature Machine Intelligence introducing a closed-form approximation that lets these equations be handled without heavy sequential computation.

This innovation significantly enhanced the liquid model’s speed and efficiency while preserving the biological adaptability that conventional AI systems often lack.
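The speed-up idea can be illustrated with the simplest possible case. For a linear equation of the form dx/dt = -x/τ + I, step-by-step numerical integration and a one-shot closed-form expression give the same answer; the sketch below (values are illustrative, and the paper’s actual networks are far more complex) shows why replacing the loop with a formula saves computation.

```python
import numpy as np

# Why a closed-form solution is faster: earlier liquid networks integrated
# an ODE step by step; for this simple linear case the solution can be
# written directly. Parameters here are arbitrary illustrative values.
tau, I, x0, T = 1.0, 2.0, 0.0, 1.0

# Sequential numerical integration (10,000 small Euler steps):
x, dt = x0, 1e-4
for _ in range(int(T / dt)):
    x += (-x / tau + I) * dt

# Closed-form evaluation (one expression, no loop):
x_closed = I * tau + (x0 - I * tau) * np.exp(-T / tau)

print(abs(x - x_closed) < 1e-3)  # True: same answer, far less work
```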

More compact, eco-friendly, and intelligent

This adaptability allows liquid models to store considerably more information within smaller infrastructures.

“Ultimately, what defines an AI system is its ability to process vast amounts of data and condense it into this algorithmic framework,” Hassani remarked.

“If your system is constrained by static parameters, your capabilities are limited. However, with dynamic flexibility, one can effectively encapsulate greater intelligence within the system.”

He referred to this as the “liquid method of calculation.” Consequently, models thousands of times smaller than today’s large language models can perform comparably or even exceed them in specific tasks.

Professor Peter Bentley, a computer scientist at University College London specializing in biologically inspired computing, noted that this shift is vital: “AI is presently dominated by energy-intensive models relying on outdated ideas about simulating networks of neurons.”

“Fewer neurons translate to a smaller model, which reduces computational demand and energy consumption. The capacity for ongoing learning is crucial, something current large models struggle to achieve.”

As Hasani put it, “You can essentially integrate one of our systems into your coffee machine.”

“If it can operate within the smallest computational unit, it can be hosted anywhere, opening up a vast array of opportunities.”

Liquid models are compact enough to run directly on devices like smart glasses or self-driving cars, with no need for cloud connectivity. – Credit: iStock / Getty Images Plus

AI that fits in your pocket and on your face

Liquid AI is actively developing these systems for real-world applications. One collaboration involves smart glasses that operate directly on users’ devices, while others focus on self-driving cars and language translators that run on smartphones.

Hasani, a regular glasses wearer, pointed out that although smart glasses sound appealing, users may not want every detail of their surroundings sent to a server for processing (consider bathroom breaks).

This is where Liquid Networks excel. They can operate on minimal hardware, allowing for local data processing, enhancing privacy, and reducing energy consumption.

This also promotes AI independence. “Humans don’t depend on one another to function,” Hasani explained. “Yet they communicate. I envision future devices that maintain this independence while being capable of sharing information.”

Hasani dubbed this evolution “physical AI”: intelligence that extends beyond cloud settings to engage with the physical world. Realizing it could make the sci-fi vision of robots a reality without the need for constant internet access.

However, there are some limitations. Liquid systems work only with “time series” data: they cannot process the static images traditional AI excels at, and instead require continuous streams such as video.

According to Bentley, this limitation is not as restrictive as it appears. “Time series data may sound limiting, but it’s quite the opposite. Most real-world data has a temporal component or evolves over time, encompassing video, audio, financial exchanges, robotic sensors, and much more.”

Hasani also acknowledged that these systems aren’t designed for groundbreaking scientific advances, such as identifying new energy sources or treatments; that research domain will likely remain with larger models.

Yet, that isn’t the primary focus. Instead, this technology aims to render AI more efficient, interpretable, and human-like while adapting it to fit various real-world applications. And it all originated from a small worm quietly moving through the soil.


Source: www.sciencefocus.com

Lead Exposure Could Have Shaped Human Brain Evolution, Behavior, and Language Development

Several hominid species — Australopithecus africanus, Paranthropus robustus, early Homo species, Gigantopithecus blacki, Pongo, Papio, Homo neanderthalensis, and Homo sapiens — have undergone significant lead exposure over the past two million years, as revealed by a new analysis of fossilized teeth collected from Africa, Asia, Oceania, and Europe. The finding challenges the notion that lead exposure is merely a contemporary issue.

Lead exposure affecting modern humans and their ancestors. Image credit: J. Gregory/Mount Sinai Health System.

Professor Renaud Joannes-Boyau from Southern Cross University remarked: “Our findings indicate that lead exposure has been integral to human evolution, not just a byproduct of the industrial revolution.”

“This suggests that our ancestors’ brain development was influenced by toxic metals, potentially shaping their social dynamics and cognitive functions over millennia.”

The team analyzed 51 fossil samples from around the world using a carefully validated laser-ablation microspatial sampling technique, encompassing species including Australopithecus africanus, Paranthropus robustus, early Homo variants, Gigantopithecus blacki, Pongo, Papio, Homo neanderthalensis, and Homo sapiens.

Signs of intermittent lead exposure were evident in 73% of the specimens analyzed (compared with 71% in Homo sapiens specimens), including samples of Australopithecus, Paranthropus, and Homo.

Some of the oldest samples, from Gigantopithecus blacki specimens believed to be around 1.8 million years old (early Pleistocene) and 1 million years old (mid-Pleistocene), displayed recurrent lead-exposure events interspersed with periods of little to no lead uptake.

To further explore the impact of ancient lead exposure on brain development, researchers also conducted laboratory studies.


Australopithecus africanus. Image credit: JM Salas / CC BY-SA 3.0.

Using human brain organoids (miniature brain models grown in the lab), the researchers examined the effects of lead on a crucial developmental gene called NOVA1, which regulates gene expression during neurodevelopment.

The modern iteration of NOVA1 has undergone changes distinct from those seen in Neanderthals and other extinct hominins, with the reasons for this evolution remaining unclear until now.

In organoids carrying ancestral versions of NOVA1, lead exposure significantly altered neural activity related to FOXP2 — a gene involved in the function of brain regions critical for language and speech development.

This effect was less pronounced in organoids carrying the modern NOVA1 variant.

“These findings indicate that our variant of NOVA1 might have conferred a protective advantage against the detrimental neurological effects of lead,” said Alysson Muotri, a professor at the University of California, San Diego.

“This exemplifies how environmental pressures, such as lead toxicity, can drive genetic evolution, enhancing our capacity for survival and verbal communication while also affecting our susceptibility to contemporary lead exposure.”


An artistic rendition of a Gigantopithecus blacki herd in the forests of southern China. Image credit: Garcia / Joannes-Boyau, Southern Cross University.

Genetic and proteomic analyses in this study revealed that lead exposure in archaic variant organoids disrupts pathways vital for neurodevelopment, social behavior, and communication.

Alterations in FOXP2 activity point to a possible link between ancient lead exposure and the advanced language abilities of modern humans.

“This research highlights the role environmental exposures have played in human evolution,” stated Professor Manish Arora from the Icahn School of Medicine at Mount Sinai.

“The insight that exposure to toxic substances may confer survival advantages in the context of interspecific competition introduces a fresh perspective in environmental medicine, prompting investigations into the evolutionary origins of disorders linked to such exposures.”

For more information, refer to the study published in the journal Science Advances.

_____

Renaud Joannes-Boyau et al. 2025. Effects of intermittent lead exposure on hominid brain evolution. Science Advances 11 (42); doi: 10.1126/sciadv.adr1524

Source: www.sci.news

Ancient Lead Exposure Could Have Shaped Brain Evolution

Homo sapiens may have developed greater tolerance to lead exposure compared to other hominids

frantic00/Shutterstock

Research on fossilized teeth indicates that ancient humans were exposed to harmful lead for over two million years, suggesting that modern humans might have adapted to handle this toxic metal more effectively than their predecessors.

Traditionally, lead poisoning was associated with modern issues such as industrialization, poor mining techniques, and lead additives in fuels. Fortunately, efforts to phase out lead exposure have been underway since the 1980s.

This toxin is particularly harmful to children, hindering physical and cognitive growth, while adults may experience a range of serious physical and mental health issues.

Dr. Renaud Joannes-Boyau and colleagues from Southern Cross University in Lismore, Australia, aimed to investigate whether our ancient ancestors faced similar lead exposure.

They examined 51 fossilized hominin teeth, representing species such as Australopithecus africanus, Paranthropus robustus, Gigantopithecus blacki, Homo neanderthalensis, and Homo sapiens. The fossils were sourced from regions including Australia, Southeast Asia, China, South Africa, and France.

The research team utilized laser ablation techniques to identify lead concentrations in the teeth, revealing layers of lead that accumulated during the growth of these hominids. This exposure could be attributed to environmental contaminants, such as polluted water, soil, or volcanic eruptions.

Dr. Joannes-Boyau noted the surprising levels of lead discovered within the teeth — for instance, in those of Gigantopithecus, a massive ancient relative of today’s orangutans that lived mainly in what is now China. “If modern humans exhibited similar lead levels, we would assume considerable exposure from industrial activities,” he remarked.

The research then shifted focus to understanding how both modern humans and Neanderthals managed lead exposure. The team created lab-grown brain models called organoids to analyze differences in the NOVA1 gene in both species, subsequently assessing the effects of lead neurotoxicity on these organoids.

“Our findings indicate that modern NOVA1 is significantly less impacted by lead neurotoxicity,” says Joannes-Boyau.

Crucially, when organoids expressing the archaic NOVA1 variant were exposed to lead, another gene, FOXP2, showed notable differences in activity.

“These genes are linked to cognitive functions, language, and social bonding,” explains Joannes-Boyau. “The diminished neurotoxicity in modern humans compared to Neanderthals could provide a crucial evolutionary advantage.” This suggests that lead exposure has influenced our evolutionary history.

However, Dr. Tanya Smith from Griffith University in Brisbane, Australia, remains cautious about the conclusions drawn by the researchers regarding lead exposure levels or potential evolutionary benefits inferred from their organoid studies.

“This paper is complex and makes speculative claims,” Smith emphasizes. “While it seems logical that ancient humans and wild primates faced some level of lead exposure, the limited scope and variety of fossils studied do not necessarily demonstrate that our ancestors were consistently exposed to lead over two million years.”



Source: www.newscientist.com

Explore an Excerpt from “Our Brains, Ourselves” by Award-Winning Neurologist Masud Husain

Marcel Proust, photographed in 1905

Photo 12 / Alamy

As dawn broke, a peaceful calm enveloped the city. The shadows along the roads gradually receded, leading us into a radiant morning. It was June, and the few early risers setting up market stalls relished the serene, gentle light, even with the enemy only 50 miles away. Many who had fled the metropolitan area clung to the hope that the defense line would hold after nearly four years. Hope remained alive.

On Boulevard Haussmann, a handful of cars headed east, but otherwise the street was quiet, most residents still asleep. The inhabitants of the second-floor apartment at No. 102, however, had been awake for quite some time; indeed, all night. The window shutters remained tightly drawn, as they had been for months. A green bedside lamp glowed in the otherwise darkened room, amid furniture shrouded in shadow and air thick with stramonium fumes burned for asthma, creating a stifling atmosphere. The cork-lined walls shut out the sounds of the street, adding to the sense of suffocating confinement that visitors undoubtedly felt.

Sitting up in bed, propped against large cushions, he usually lost himself in his manuscript. But today felt different. Overwhelming fear consumed him. One side of his face seemed to sag. When he spoke to Céleste, his housekeeper, he worried his words lacked clarity, his speech sliding into an almost incomprehensible ramble. Convinced he was on the brink of a major stroke, the same fate that had befallen both his parents, he could find no alternative explanation. It was hereditary, surely. And had his beloved mother, Jeanne, not been reduced to complete frailty? Her stroke had robbed her of language, leaving her unable to communicate with her cherished sons.

In the summer of 1918, as the Germans launched their final offensives of World War I towards Paris, the renowned novelist Marcel Proust sat on a blue satin chair, engulfed in fear of potential brain damage. Now in his late 40s, he was all too familiar with aphasia; his mother had suffered from it, and his father, Dr. Adrien Proust, had authored an entire book on the subject before his own stroke.

Young Marcel had also befriended many of the city’s most distinguished neurologists. At that time, Paris stood as a prominent hub for neurology, with pioneering experts making significant advancements in understanding language disorders following strokes. Without such insights, where would Proust find himself?

On that June morning in 1918, he anticipated a meeting with Joseph Babinski, a well-known neurologist. Babinski, unaware of the reasons behind Proust’s visit, simply inquired, “Do you have any symptoms?”

Proust’s intention was to persuade Babinski to perform a trepanation (drilling holes in his skull), driven by his profound belief that such a drastic step was necessary to halt the looming stroke. Babinski, however, an expert in his field, reassured Proust that there was no evidence he was having a stroke and declined to operate. It is difficult to imagine how the trajectory of Proust’s monumental novel would have shifted had he suffered one. Marcel Proust never did have a stroke, but the shadow of that fear haunted him for the rest of his life; even when he lay near death from pneumonia, it was Babinski he called for.

Proust’s anxieties surrounding brain-related illnesses resonate with many. While diseases can afflict anyone in various ways, our deepest fears often lie in disorders that impact our minds. Why is that? Because neurological conditions can transform individuals dramatically. Some may struggle with communication, as Proust feared, while others could experience memory loss, distorted perceptions, or hallucinations. Some might exhibit socially inappropriate behavior, a lack of empathy, or rudeness. Others could become impulsive or withdrawn, developing new addictions or suffering from pathological indifference.

Such behavioral shifts can be distressing and terrifying for both individuals and their loved ones. Yet, they reveal profound insights into our very nature. By examining the consequences of certain brain functions being impaired, we glean understanding about our own normality, how cognitive functions shape our identities—personal and social, formed through our connections with others.

For someone like Marcel Proust, losing the ability to communicate would have been devastating. Not only would he lose his gift for writing, but the social identity he had labored to cultivate would effectively disintegrate. Proust had invested years nurturing relationships with key figures in French society and was acutely perceptive about his connections with influential individuals. As a gay man from a Jewish background, he adeptly navigated the complexities of prejudice and societal expectation in Paris.

Through keen observation and emulation, he became an integral part of the circles he thought he belonged to. Some observers suggested that Proust was a master manipulator, indicating that even while isolated in his dimly lit bedroom, he was unwilling to relinquish control over those around him. However, without language, the intricate web he had worked to weave would no longer be accessible; he would no longer “belong.”

This excerpt is from Masud Husain’s Our Brains, Ourselves (published by Canongate), winner of the Royal Society Trivedi Science Book Prize and the latest selection from the New Scientist Book Club. Join us to read together.


Source: www.newscientist.com

Why Our Brains, Ourselves Won the 2025 Royal Society Trivedi Science Book Prize

The human brain is one of the most intricate entities ever to exist.

Andriy Onofriyenko/Getty Images

Science books, particularly those authored by scientists, are often perceived as dry and difficult — mere textbooks meant for structured learning. The shortlist for the Royal Society’s Trivedi Science Book Prize proves that perception wrong, as does the judges’ choice of this year’s winner: Our Brains, Ourselves by neurologist Masud Husain.

I was fortunate to chair a panel of six dedicated readers and book enthusiasts, including New Scientist’s Jacob Aron, who faced the daunting challenge of whittling down the list of nominees. Our discussions, led by passionate advocates for science, were diverse and engaging, reflecting the love we all share for both literature and science.

We frequently engaged in respectful debates, as I was usually in the company of individuals willing to consider opposing viewpoints. Our varied backgrounds and experiences enriched our understanding of the privileges associated with reading and the act of reading itself.

This year’s submissions featured numerous outstanding scientific works, yet Our Brains, Ourselves stood out for blending exquisite storytelling with rigorous, cutting-edge science and a deeply humanistic approach. Husain is a neuroscientist and a clinician; seven personal narratives from his patients run throughout the book.

The experiences shared are diverse. One individual feels an overwhelming apathy post-stroke, while another believes she has a connection with her husband. Each story illustrates profound transformations. This book is a poignant exploration of how neurological disorders can radically alter one’s identity and breed societal alienation.

A recurring theme in the book is the concept of “self” and how our brains shape our identities, conveyed empathetically and personally. The scientific elements are firmly grounded in Husain’s own research, presented in an easily digestible manner while acknowledging the unknowns. I appreciate this transparency; real science encourages the pursuit of further questions.

While case studies in clinical practices might seem commonplace, the unique personal touch here makes a significant difference. Have you ever felt a disconnect from personal attributes? The narratives of patients with brain disorders provoke thought regarding identity, selfhood, and our social roles.

This notion resonated strongly with our panel. What constitutes belonging? Some individuals encountered in the book are members of immigrant communities, facing prejudice and violence to carve out their place in society. It would seem that as our world becomes increasingly interconnected, our fear of differences should diminish—but this isn’t always the case.

Our Brains, Ourselves encourages readers to reflect on how neurological disorders can profoundly disrupt one’s sense of belonging while illustrating how cognitive function influences one’s identity. Ultimately, our brains substantially define who we are. This compassionate narrative not only educates readers about science but also showcases extraordinary human kindness.

Sandra Knapp is a plant taxonomist at the Natural History Museum in London and chaired this year’s Royal Society Trivedi Science Book Prize judging panel. The winner of the prize is Our Brains, Ourselves, the latest addition to the New Scientist Book Club.


Source: www.newscientist.com

Uncovering the Role of Brain Organoids in Defining Human Uniqueness

100-day-old brain organoids

Madeline Lancaster

Since Madeline Lancaster created the first brain organoids in 2013, these structures have become invaluable in brain research worldwide. But what are they really? Are they simply miniaturized brains? Could implanting them into animals yield a super-intelligent mouse? Where do we draw the ethical line? Michael Le Page explored these questions at Lancaster’s lab at the MRC Laboratory of Molecular Biology in Cambridge, UK.

Michael Le Page: Can you clarify what a brain organoid is? Is it akin to a mini brain?

Madeline Lancaster: Not at all. There are various types of organoids, and they are not miniature brains. We focus on specific parts of the human brain, and our organoids are small and immature; they don’t function like developed human brains with memories. In scale they are closer to insect nervous systems, though they lack even the organized structure of an insect brain.

What motivated you to create your first brain organoid?

I initiated the process using mouse embryonic brain cells, cultivating them in Petri dishes. Some cells didn’t adhere as expected, leading to a fascinating outcome where they interconnected and formed self-organizing cell clusters indicative of early brain tissue development. The same technique was then applied to human embryonic stem cells.

Why is the development of brain organoids considered a significant breakthrough?

The human brain is vital to our identity and remained enigmatic for a long time. Observing a mouse brain doesn’t capture the intricacies of the human brain. Brain organoids have opened a new perspective into this complex system.

Can you provide an example of this research?

One of our initial ventures involved modeling a condition called microcephaly, in which the brain is undersized. In mice, the equivalent mutations don’t alter brain size. We tested whether we could replicate the size reduction in human brain organoids, and we succeeded, enabling further insights into the disease.

Madeline Lancaster in her lab in Cambridge, UK

New Scientist

What has been your most significant takeaway from studying brain organoids?

We are gaining a better understanding of what distinguishes the human brain. I’m fascinated by the finding that human stem cells which generate neurons behave differently from those in mice and chimpanzees. One key difference is that human development is notably slower, allowing for more neurons to be produced as our stem cells proliferate.

Are there practical outcomes from this research?

Much of our foundational biology research has crucial implications for disease treatment. My lab primarily addresses evolutionary questions, particularly genetic differences between humans and chimpanzees. The specific genes that emerge are often linked to human disorders, implying that mutations in genes essential for brain development can cause significant damage.

What types of treatments might emerge from this work in the future?

We’re already utilizing brain organoids for drug screening. I’m especially optimistic about their potential in treating mental health conditions and neurodegenerative diseases, where novel therapies are lacking. Currently, treatments for schizophrenia utilize medications that are five decades old. Brain organoid models could unveil new approaches. In the longer term, organoids might even provide therapeutic options themselves. While not for all brain areas, techniques have already been developed to create organoids of dopaminergic neurons from the substantia nigra, which are lost in Parkinson’s, for potential implantation.

Are human brain organoids already being implanted in animal brains?

Yes, but not for treatment purposes; rather, these practices enhance human organoid research. Organoids usually lack vascularity and other cell types from outside the brain, especially microglia, which serve as the brain’s immune cells. Thus, to examine how these other cells interact with human brain matter, various studies have implanted organoids into mice.

Should we have concerns regarding the implantation of human organoids in animals?

Neurons are designed to connect with one another. So, when a human brain organoid is inserted into a mouse brain, the human cells will bond with mouse neurons. However, they aren’t structured coherently. These mice exhibit diminished cognitive performance after implantation, akin to a brain malfunction; hence, they won’t become super-intelligent.

Microscopy images of brain organoids, showing their neural connections

MRC Laboratory of Molecular Biology

Is cognitive enhancement a possibility?

We’re quite a distance from that. Higher-level concepts relate to how different brain regions interlink, how individual neurons connect, and how collections of neurons communicate. Achieving an organized structure like this could be possible, but challenges like timing persist. While mice have a short lifespan of about two years, human development toward advanced intelligence takes significantly longer. Furthermore, the sheer size of human brains presents challenges; a human-sized brain cannot fit within a mouse. Because of these factors, I don’t foresee such concerns emerging in the near future.

Regarding size, the main limitation is the absence of blood vessels. Organoids start to die off when they exceed a few millimeters. How much headway has been made in addressing this issue?

While we’ve made strides and should acknowledge our accomplishments, generating brain tissue is relatively straightforward as it tends to develop autonomously. Vascularization, however, is complex. Progress is being made with the introduction of vascular cells, but achieving fully functional blood perfusion remains a significant hurdle.

When you reference ‘far away’…

I estimate it could take decades. It may seem simple, given that the body accomplishes this naturally. However, the challenges arise from the body’s integrated functioning. Successfully vascularizing organoids requires interaction with a whole organism; we can’t replicate this on a plate.

If we achieve that, could we potentially create a full-sized brain?

Even if we managed to grow a large, vascularized human brain in the lab, without communication or sensory input it would lack meaningful function. For instance, if an animal’s eyes are kept shut during development and opened later, they may appear functional, but the brain cannot interpret the visual input, rendering the animal effectively blind. The same principle applies to all senses and interactions with the world. I believe an organism’s body must have sensory experiences for awareness to develop. Some patients who lose sensory input end up with locked-in syndrome, an alarming condition, but these are individuals who have previously engaged with the world. A brain that has never engaged lacks that context.

As brain organoid technology progresses, how should we define the boundaries of ethical research?

The field closely intersects with our understanding of consciousness, which is complex and difficult to measure. I’m not even certain I have the definitive answer about consciousness for myself. However, we can undoubtedly assess factors relevant to consciousness, like organization, sensory inputs and outputs, maturity, and size. Mice might meet several of these criteria but are generally not recognized to possess human-like consciousness, largely due to their size. Even fully interconnected human organoids won’t achieve human-level consciousness if they remain small. Establishing these kinds of standards offers more practical methods than attempting to directly measure consciousness.

https://www.youtube.com/watch?v=xa82-7txy50


Source: www.newscientist.com

Babies’ Brains Run at a Slower Rhythm Than Adults’, Which May Help Them Learn

The infant’s brain functions at a distinct rhythm compared to that of adults

Goodles/Aramie

When infants try to make sense of their surroundings, their brain activity shows slower rhythms than adults’, which may help them grasp new concepts.

Our brains utilize a network of neurons to interpret sensory input. When a neuron receives a sufficiently strong signal from its neighbor, it transmits that signal to other neurons, generating synchronized waves of electrical activity that alternate between activated and silent states.

These brain waves occur at various frequencies, and a given brain area can exhibit several at once, with a larger share of its neurons synchronized to one frequency than to the others. For instance, prior research indicates that when adults observe visual stimuli, the visual cortex displays a range of frequencies, but the greatest proportion of neurons synchronize with waves at 10 hertz.

To determine whether the same holds true for infants, Moritz Köster at the University of Regensburg in Germany and his colleagues recruited 42 eight-month-olds via their parents. The researchers recorded the infants’ brain activity with electrodes affixed to the scalp while showing them dozens of friendly cartoon monsters for about 15 minutes, each monster flashing for two seconds.

The team relied on the fact that brain waves tend to oscillate in sync with rapidly flickering images, enabling them to assess the number of neurons synchronized to various frequencies within the infants’ visual cortex. Each monster was toggled on and off at eight different frequencies ranging from 2 to 30 hertz.

Analysis of the brain activity data revealed that the visual cortex produces waves of synchronized activity in response to the flickering cartoons. However, the most prominent signals emerged at four hertz, indicating greater synchronization with this flicker frequency than with others.
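This kind of “frequency tagging” analysis boils down to comparing spectral power at each candidate frequency. A minimal Python sketch with a simulated EEG trace (the sampling rate, amplitudes, and noise level are all invented for illustration):

```python
import numpy as np

fs = 250                      # sampling rate in Hz (assumed, for illustration)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of signal
rng = np.random.default_rng(1)

# Toy EEG trace: a strong intrinsic 4 Hz oscillation, a weaker response at a
# 15 Hz flicker frequency, plus background noise.
eeg = (2.0 * np.sin(2 * np.pi * 4 * t)
       + 0.5 * np.sin(2 * np.pi * 15 * t)
       + rng.normal(scale=1.0, size=t.size))

# Frequency tagging: compare spectral power at each candidate frequency.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

def power_at(f_hz):
    """Power in the FFT bin closest to f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# The dominant non-DC frequency: here the 4 Hz rhythm wins even though the
# stimulus flickers at 15 Hz, mirroring the finding described above.
peak_freq = freqs[1:][np.argmax(spectrum[1:])]
```

Real analyses involve artifact rejection, averaging across trials and electrodes, and statistics, but the core comparison of power at tagged frequencies is as shown.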

Moreover, this 4-hertz signal was consistently present even when the brain was exposed to flickering at higher frequencies, such as 15 hertz. “What’s particularly intriguing is that regardless of the frequency presented, a response at 4 hertz was always observed,” comments Köster.

This rhythm falls within a frequency band known as theta, which is associated with the formation of new concepts, potentially facilitating learning for young children as they observe their environment. “It suggests that infants are in a specific learning mode,” Köster explains.

The researchers further discovered that the 4-hertz signal, unlike the signals at other frequencies, was not confined to the visual cortex, suggesting the involvement of a broader neural circuit spanning brain areas related to concept formation.

Repeating the experiment with seven adults confirmed prior findings that adult visual brain circuits respond most strongly at 10 hertz, a signal that also persisted in the background regardless of how fast the cartoons flickered.

Given adults’ extensive experience of the world, the visual regions of their brains appear tuned to respond to higher-frequency stimuli, blocking irrelevant information rather than concentrating on acquiring conceptual knowledge, says Köster.

Further research is needed to establish whether exposure to 4-hertz flickering images can enhance infants’ capacity to learn new concepts, says Emily Jones at Birkbeck, University of London. The team hopes to gain further insight from an ongoing study, Köster adds.

Source: www.newscientist.com

Brain Structure Mapping Alone Fails to Capture Its Functionality

The human brain contains trillions of connections

Sherbrooke Connectivity Imaging Lab/Connect Images/Alamy

Is it possible to fully comprehend brain function if we can accurately map its structures? Researchers aim to develop a wiring diagram, or connectome, of our neural pathways, yet the task of unveiling the brain’s mysteries is proving to be complex.

The connectome serves as a roadmap for nerve signal pathways, but Sophie Dvali at Princeton University and her team have found notable gaps between this map and how signals actually flow.

Researchers analyzed the connectome of the nematode worm, Caenorhabditis elegans, and compared it to recorded neural signals. They accomplished this by stimulating each neuron and observing how signals flowed through the connectome. This method is feasible with nematodes due to their relatively simple nervous system, composed of roughly 300 neurons.

Nematode worms are significantly simpler than humans, with approximately 300 neurons depicted in green

Heiti Paves / Alamy Stock Photo

By treating these two datasets as mathematical networks, the researchers could ask whether densely connected groups of neurons also exchange signals frequently. They found that this correlation is not always present.
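One minimal way to make such a comparison concrete is to score every neuron pair in both networks and correlate the two sets of edge weights. The matrices below are invented toy values for four neurons, not C. elegans measurements:

```python
import numpy as np

# Toy comparison of an anatomical connectome with measured signal flow.
# Entry (i, j) scores the link from neuron i to neuron j: synapse counts
# for the anatomy, evoked-response strength for the signalling network.
# The numbers are invented for illustration only.

anatomy = np.array([
    [0, 3, 0, 1],
    [2, 0, 4, 0],
    [0, 1, 0, 2],
    [1, 0, 3, 0],
], dtype=float)

signals = np.array([
    [0.0, 2.5, 0.2, 0.1],
    [1.8, 0.0, 0.3, 1.9],  # neuron 1 -> 2 is wired strongly but signals weakly,
    [0.1, 0.9, 0.0, 1.7],  # while 1 -> 3 signals strongly with no direct wire
    [0.8, 0.2, 2.4, 0.0],
])

# Correlate the off-diagonal edges of the two networks.
mask = ~np.eye(4, dtype=bool)
r = np.corrcoef(anatomy[mask], signals[mask])[0, 1]
print(f"anatomy/function edge correlation: r = {r:.2f}")
```

A correlation well below 1 would indicate, as in the Princeton results, that wiring alone does not determine where signals actually travel.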

Dvali notes that in some cases, such as the groups of neurons that control feeding, dense anatomical connections and frequent signal exchanges did correspond well. In other cases, however, neurons that appeared strongly connected showed little matching activity, leaving a gap between the two networks. Overall, these findings suggest that the biological connectome alone is insufficient to predict all neural behavior.

Team member Andrew Leifer, also from Princeton University, points out that signals do not always follow the shortest paths between neurons; some may communicate beyond their direct connections. “While we typically leverage connectomes for research, the multitude of useful connections calls for deeper comprehension,” he explains.

According to Albert-László Barabási at Northeastern University in Massachusetts, criticism of connectomics often centers on its inability to yield functional insights from structural data alone, a challenge this new paper seeks to address.

Looking forward, researchers aim to delve deeper into how signals disseminate through the connectome when multiple neurons are activated simultaneously, with aspirations to study more complex organisms, such as fruit fly larvae, recognized for their intricate neural networks. “We are on the verge of a revolution in brain mapping,” Barabasi concludes.

Source: www.newscientist.com

Books vs. Audiobooks: Is Reading Always More Beneficial for the Brain Than Listening?

Engaging with information can alter how your brain processes and reacts to it

Tony Anderson/Getty Images

As a passionate reader and writer, I often find myself disheartened. Recent reports indicate a decline in reading for enjoyment among younger generations. When a friend asked if her use of audiobooks provided the same cognitive benefits as traditional reading for her daughter, my initial thought was, “Enjoyment matters more than the format.” However, exploring the science revealed that the medium indeed influences our cognitive process in important and distinct ways.

The Advantages of Reading

Reading unquestionably benefits us. Aside from the knowledge it imparts and the opportunities it opens up, numerous studies link childhood literacy with both physical and mental well-being – and even extended lifespans.

It is believed that reading exercises three crucial cognitive functions. Firstly, it promotes “Deep Reading,” allowing us to connect different sections of text, reflect on their relevance to our lives, and engage with the material critically.

Secondly, reading fosters empathy and bolsters our emotional intelligence—traits that are essential for coping with stress and navigating real-life difficulties. Thirdly, reading correlates with cognitive development, in particular “theory of mind”: our ability to recognize that others may hold thoughts and beliefs different from our own.

However, discerning the impact of reading from other influences can be challenging. Moreover, readers may enjoy advantages like having leisure time, financial resources, or even genetic traits which can affect overall health, cognition, or longevity.

Nevertheless, a study involving over 3,500 participants attempted to account for these factors, revealing that individuals who read for approximately 30 minutes each day were 20% less likely to pass away in the subsequent 12 years. This benefit was more pronounced among book readers compared to those who primarily read newspapers and magazines.

Digital Media: E-readers and Audiobooks

Comparing print reading to digital formats complicates the research landscape. Some studies suggest that screen reading promotes a more superficial understanding compared to reading printed material.

There are also subtle differences. In a series of experiments, Anne Mangen at the University of Stavanger, Norway, and Frank Hakemulder at Utrecht University in the Netherlands found that frequent readers of short texts on screens tended to seek meaning less diligently than their paper-reading counterparts. Greater exposure to screen reading also correlated with diminished persistence in tackling longer literary works.

Concerning audiobooks, while the evidence remains limited, I find some solace in the findings. Research indicates that comprehension levels are very similar whether one is reading or listening. However, some nuances exist—meta-analyses of 46 studies discovered that reading provides a slight advantage for interpreting the emotions of characters, among other interpretative skills. Thus, drawing inferences from the text is somewhat more effective when reading.

Diverse Cognitive Approaches

Interestingly, listening rather than reading seems to engage different cognitive aspects. For instance, research shows that individuals tended to think more intuitively when responding to verbal questions compared to written ones.

Listening to an audiobook provides the experience of hearing another person’s voice, with its unique intonations, rhythms, and emotions, which can greatly influence interpretation. Reading, by contrast, relies on our inner voice, allowing a more individualized and self-paced experience, notes Janet Geipel at the University of Exeter. These distinctions may significantly affect how information is perceived and used.

Nonetheless, “listening to audiobooks is not somehow detrimental,” Geipel argues. “The real issue lies in how you allocate your attention. Listening can be just as effective as reading if you are fully focused, but multitasking while listening can compromise depth compared to immersive reading without distractions.”

Finding What Works for You

Listening alongside reading introduces yet another layer. A meta-analysis conducted by Virginia Clinton-Lisell at the University of North Dakota found that while combining listening with reading might yield slight improvements in understanding, this is likely only true for those who struggle with decoding text, such as individuals with low literacy or those learning to read in a non-native language. For skilled readers, this dual engagement may instead diminish comprehension, a result predicted by cognitive load theory: presenting the same information in two formats creates redundancy and can overwhelm cognitive resources.

Ultimately, various factors—dyslexia, visual impairments, lengthy commutes, or personal preference—can drive the choice for audiobooks rather than print. As for whether you derive equivalent benefits, “there’s no straightforward answer,” Geipel acknowledges.

If options are available, I suggest reserving an engaging podcast or a thought-provoking book for moments of focus, rather than while preparing dinner. However, if someone is immersed in a story purely for enjoyment, as in the case of my friend’s daughter, choosing an audiobook appears to be a far better alternative than missing out altogether.

Source: www.newscientist.com

How Fat Storage Locations Impact Brain Function

Our brain activity and health is influenced by various bodily events

Cavallini James/BSIP/Getty Images

The impact of body fat on our movements, emotions, and even the likelihood of developing Alzheimer’s disease varies based on its location within the body.

While many studies emphasize abdominal fat due to its correlation with cognitive decline and heart disease, few have explored fat distribution in other regions, usually with limited participant numbers.

To broaden understanding, Qiu from the Hong Kong Polytechnic University and her team investigated the effects of fat in the arms, legs, torso, and around internal organs on brain health.

The research team analyzed body composition scans and brain imaging data from more than 18,000 adults, with an average age of 62, enrolled in the UK Biobank project. After accounting for age and other factors, they associated excess fat in different body regions with specific brain changes.

For instance, higher fat levels in the arms and torso correlated with thinning in the sensorimotor cortex, which is involved in movement. Increased arm fat was also linked to reduced hippocampal volume, a crucial area for memory that is traditionally affected in Alzheimer’s disease, potentially explaining why arm fat is associated with a higher risk of neurodegenerative disorders.

The researchers found that excess leg fat correlated with diminished connectivity in the brain’s limbic network, which is responsible for emotion and reward processing. This may be due to the fact that fat in the lower body releases leptin, a hormone that regulates hunger, with higher leptin levels connected to reduced limbic connectivity.

Conversely, visceral fat (around the internal organs) was the type most strongly linked to functional changes in the brain, and the only one associated with degradation of white matter, a characteristic of Alzheimer’s disease.

This may arise from the fact that visceral fat produces more inflammatory substances compared to fat located elsewhere in the body, according to Sonia Anand from McMaster University in Canada, which can contribute to brain inflammation.

It remains unclear why arm fat appeared to have both protective and detrimental effects on the brain. “Observing such divergent impacts was intriguing,” noted Michal Schnaider Beeri at Rutgers University, New Jersey. This complexity underscores the intricate relationship between body fat and brain health, she added.

The research only established correlations between body fat and brain function, so “we can’t infer any causal connections,” Beeri cautions. Some brain changes may themselves influence how body fat is distributed. Moreover, the findings might not apply to the broader population, given the notable lack of diversity among participants.

Nevertheless, the study emphasizes that different types and locations of fat exert varying effects, according to Anand. This suggests that treatments focusing on reducing visceral fat could have a more significant impact on brain health than generalized weight loss approaches.

Source: www.newscientist.com

Scientists May Need to Destroy the Brain to Upload It: Here’s Why

Can scientists transfer animal brains to computers? The answer hinges on how we define “transfer” and “brain.” If we’re a bit flexible in our interpretation, it’s essentially already taking place.

Caenorhabditis elegans are minuscule worms found in soil and decaying plant matter. As multicellular eukaryotes, they technically qualify as animals.

This tiny worm never surpasses 1 mm (0.04 inches) in length, yet it is one of the most thoroughly studied organisms on Earth.

We have sequenced its genome and mapped its entire cell lineage, encompassing roughly 1,000 cells, including 302 neurons. Variation between individual worms is minimal, and what differences do exist have been mapped.

Thus, scientists have been able to model the entire nervous system on a computer, reproducing not just the reflex behaviors found in nature but even training the virtual worms to perform new tasks, such as balancing virtual poles (and yes, that’s true).

However, even if we liberally interpret our definitions, this scenario doesn’t entirely hold up.

The C. elegans brain was not uploaded in the conventional sense. Instead, it was replicated using data gathered from years of experiments involving thousands of these worms. There hasn’t been a method to accurately record and transfer the thoughts and memories of an individual creature to a computer.

Caenorhabditis elegans are tiny worms that thrive in soil and decaying vegetation – Image credit: Science Photo Library

Many believe brain uploads represent the future of humanity, viewing it as an “inevitable consequence” of advancements in neuroscience and artificial intelligence (AI), potentially leading to the ultimate solution to death.

Nevertheless, several significant challenges must be addressed before this can become a reality.

As our conscious minds are intricately constructed from the cells and chemicals within our skulls and nervous systems, we must find a way to fully interpret our brain states in exquisite detail.

Next, we need to create a software model that can accurately mimic brain behavior at the molecular, or perhaps even atomic, level.

Over a decade ago, scientists demonstrated that it was feasible to identify neurons and their connectivity in meticulously prepared mouse brains. These brains were stained, sliced to 70 nanometers thick, and then reconstructed into a 3D format using a computer. As expected, the mouse did not survive.

Many believe that brain uploads are the future of humanity – Image credit: Aramie

This serves as an example of a destructive scan. The methods many suggest as necessary for recording a brain in sufficient detail may lead to its destruction.

As medical imaging technology achieves higher resolutions, some speculate that we could one day scan all cell states non-destructively. However, such scans must be instantaneous; otherwise, parts of your brain could be considering new things before the scan finishes.

Could this be achievable with a recently deceased brain? Scientists indicate that it might be essential to scan the brain while it’s actively functioning to ensure all cells accurately model the intended behavior.

Today’s computers are remarkable, yet even the most optimistic futurists predict we may need a century before we can simulate at the atomic scale required.

Moreover, there’s a final profound question. If you can upload your brain non-destructively in 500 years…what happens next? You would exist in a virtual world as computer software, while the original version of you continues to think in your biological form, likely with a slight headache from the scan.

But if you are still alive, did you genuinely cheat death? Clearly not. Instead, you’ve allowed virtual duplicates of yourself to be created that could be put to use regardless of their will. That’s a disconcerting thought.


This article answers the question posed by Darcie Walsh from Preston: “Can scientists upload animal brains to a computer?”

We invite you to send your questions to questions@sciencefocus.com or message us on Facebook, Twitter, or Instagram (don’t forget to include your name and location).

Explore more of our ultimate fun facts and other fascinating science pages.


Source: www.sciencefocus.com

Study: Brain Signals in the Visual Area Can Indicate the Colors Observers Are Viewing

Do different observers experience similar neural activity in response to the same color? Does color produce distinct response patterns in specific brain areas? To explore these questions, researchers at the University of Tübingen used knowledge of color responses from other observers’ brains to predict the colors an individual was perceiving from their brain activity. By estimating each brain’s retinotopic organization using achromatic, spatial stimuli, the authors aligned disparate brain responses within a common retinotopic reference frame. Within this frame, derived independently of any color responses, the perceived color could be decoded across individuals, revealing distinct spatial color biases between visual regions.

Using a sample of male and female volunteers, Michael M. Bannert & Andreas Bartels examined whether spatial color biases are shared among human observers and whether these biases differ among various regions. Image credit: Vat Loai.

Employing functional MRI, researchers Michael Bannert and Andreas Bartels of the University of Tübingen captured images of subjects’ brains while they viewed visual stimuli, identifying distinct signals related to red, green, and yellow.

Remarkably, the patterns of brain activity were similar enough across individuals that the colors perceived by a new subject, who had not participated before, could be accurately predicted by comparison with other participants’ brain images.

The representation of color in the brain proves to be much more consistent than previously believed.

While it was already feasible to identify the colors an individual observed using functional magnetic resonance imaging (fMRI), this was only applicable to the same brain.

“We aimed to investigate whether similar colors are encoded similarly across different brains,” Dr. Bannert stated.

“In other words, if we only have neuronal color signals from another person’s brain, can we predict the colors they’re perceiving?”

“It’s well established that different brains exhibit roughly similar functional structures.”

“For instance, specific areas are more active when viewing faces, bodies, or simply colors.”

During the color experiment, the researchers employed classification algorithms to analyze fMRI data, training them to distinguish by color the signals recorded from one group of individuals.

Subsequently, data from new subjects were utilized to ascertain the colors they were perceiving using neuronal signals.
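Conceptually, this cross-brain decoding can be illustrated with a toy nearest-centroid classifier: learn each color's response pattern from some subjects, then classify trials from a brain the model has never seen. Every detail below, from voxel counts to noise levels, is a synthetic stand-in, not the study's actual pipeline:

```python
import numpy as np

# Cross-subject colour decoding in miniature. Each simulated "brain"
# produces a voxel pattern per colour trial: a colour template shared
# across subjects (the hypothesis under test) plus subject-specific noise.

rng = np.random.default_rng(1)
n_voxels = 50
colors = ["red", "green", "yellow"]
templates = {c: rng.normal(0.0, 1.0, n_voxels) for c in colors}

def simulate_subject(n_trials=20, noise=0.8):
    """Return (trials x voxels) data and colour labels for one subject."""
    X, y = [], []
    for ci, c in enumerate(colors):
        for _ in range(n_trials):
            X.append(templates[c] + rng.normal(0.0, noise, n_voxels))
            y.append(ci)
    return np.array(X), np.array(y)

# Learn one centroid per colour from two "training" subjects...
X_train = np.vstack([simulate_subject()[0] for _ in range(2)])
y_train = np.tile(np.repeat(np.arange(3), 20), 2)
centroids = np.array([X_train[y_train == k].mean(axis=0) for k in range(3)])

# ...then classify trials from a third subject never seen in training.
X_test, y_test = simulate_subject()
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == y_test).mean()
print(f"cross-subject decoding accuracy: {accuracy:.2f} (chance = 0.33)")
```

Decoding far above chance here only works because the colour templates are shared across the simulated subjects; if each brain had its own arbitrary patterns, centroids learned from one group would say nothing about the next.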

To bring each brain into a common reference frame, the scientists used fMRI to map how it responded to stimuli at different locations within the visual field.

“At this stage, we did not incorporate colors to avoid any bias in our results—only black and white patterns,” Professor Bartels explained.

“By merging this mapping data with color information from another person’s brain, we could correctly identify which color the ‘new’ brain was perceiving at that moment.”

“I was surprised to discover that even subtle variations in individual colors show remarkable similarity across brain activity patterns in specific visual processing regions, something previously unknown.”

Spatial color coding in the brain is domain-specific and organized consistently among individuals.

“There must be functional or evolutionary factors contributing to this uniform development, but further clarification is needed,” the authors noted.

The study was published this week in the Journal of Neuroscience.

____

Michael M. Bannert and Andreas Bartels. Large-scale color biases in the retinotopic functional architecture are region specific and shared across human brains. Journal of Neuroscience, published online September 8th, 2025. doi: 10.1523/jneurosci.2717-20.2025

Source: www.sci.news

Research Suggests Artificial Sweeteners May Accelerate Brain Aging

New research suggests that artificial sweeteners may have unexpected risks for brain health.

In a study published in Neurology, researchers analyzed the diets of over 12,700 adults in Brazil, revealing that individuals who consumed higher amounts of calorie-free sweeteners experienced a more rapid decline in memory and cognitive abilities over an eight-year period.

This decline was especially notable among diabetic patients and those under the age of 60.

The study examined seven sweeteners commonly found in diet sodas, flavored waters, yogurt, and low-calorie desserts: aspartame, saccharin, acesulfame-K, erythritol, xylitol, sorbitol, and tagatose.

All except tagatose were linked to cognitive decline, particularly affecting memory and verbal fluency.

Participants were categorized into three intake groups. Those with the highest consumption, approximately 191 milligrams daily (for aspartame, roughly the amount in a single can of diet soda), showed cognitive aging equivalent to 1.6 additional years, a decline about 62% faster than that of the lowest-consumption group.

“Low- and no-calorie sweeteners are often regarded as healthier alternatives to sugar, but our findings indicate that certain sweeteners may negatively impact brain health over time,” stated Professor Claudia Kimie Suemoto of the University of São Paulo.

“Prior research linked artificial sweeteners to conditions such as diabetes, cancer, cardiovascular disease, and depression, but the effects on cognition were previously unexplored.”

Consumption of artificial sweeteners similar to daily cans of diet soda was associated with accelerated cognitive decline, akin to 1.6 years of brain aging – Credit: Getty

Interestingly, the link was primarily observed in adults under 60 years old.

“We anticipated that the association would be more pronounced in older adults, given their increased risk of dementia and cognitive decline,” Suemoto noted. “Instead, our findings suggest that exposure to sweeteners during middle age could be particularly detrimental, which is crucial because this period is vital for establishing long-term brain health.”

The findings do not conclusively prove that sweeteners directly cause cognitive decline; limitations include reliance on self-reported dietary habits and the observational design, which could not control participants’ sweetener use.

Nevertheless, Suemoto emphasized the need for further investigation, including brain imaging and studies examining gut health and inflammation.

Her team is already conducting neuroimaging studies to better understand these associations, although results are not yet available.

“More research is essential to validate our findings and to explore whether alternatives such as applesauce, honey, maple syrup, and coconut sugar are effective options,” Suemoto concluded.

About our experts

Claudia Suemoto is an assistant professor at the University of São Paulo, Brazil. She is a trained physician with research published in journals such as The Lancet, Nature Neuroscience, and the Journal of Alzheimer’s Disease.

Source: www.sciencefocus.com

The Pioneering Map of Mammalian Brain Activity Might Have Revealed Intuition

Map of mouse brain showing 75,000 neurons

Dan Birman, International Brain Laboratory

The initial comprehensive activity map of the mammalian brain has unveiled groundbreaking revelations regarding decision-making processes.

For many years, neuroscientists aspired to capture neuronal activity throughout the brain at an individual level. However, challenges persist, including the limitations on the number of neurons an electrode can record, the number of electrodes deployable in a single brain, and the number of animals that a solitary lab can study.

To address these hurdles, 12 laboratories collaborated, each conducting identical experiments, with some recordings duplicated across labs to check the consistency of the collected data. This joint effort, tracking the activity of over 650,000 neurons, has produced the first comprehensive brain-wide activity map related to complex behavior.

“This research exemplifies a novel approach to addressing intricate inquiries in contemporary neuroscience,” stated Benedetto de Martino of University College London, who was not a part of this study. “Similar to CERN, which unites physicists to tackle profound issues in particle physics, this project will bring together global laboratories to confront challenges too expansive for individual teams.”

In each facility, mice were trained to maneuver a small LEGO steering wheel to direct a striped target towards the center of the display. The target was easily distinguishable when the stripes contrasted sharply. As contrast dwindled, the target nearly vanished, compelling the mice to rely on prior knowledge to respond accurately for a reward.

The experiment also built in a bias that shaped the mice’s expectations about the target’s location: in blocks of trials, the target was more likely to appear on one side of the screen than the other. When the bias was reversed, the mice adjusted their expectations accordingly.

The resulting activity map indicates that decision-related processes are distributed throughout the brain, rather than localized in one specific area. “Many studies have claimed, ‘this region is responsible for this function.’ However, our findings reveal that decision-making involves numerous regions collaborating to reach a consensus,” remarked team member Alexandre Pouget at the University of Geneva, Switzerland.

Furthermore, the findings support earlier research indicating that decision-related signals form long before an action is executed. Even before individual trials began, signals linked to the forthcoming decision were evident, Pouget noted. These signals accumulate once the target is presented, and the mice move the wheel when a threshold is reached.

The second study reveals that beliefs about the target’s position are encoded very early in the brain’s processing. The researchers found that by the time signals from the eye reached the thalamus, the brain’s relay center, expectations about whether the target would appear on the left or right were already established.

This suggests that from the moment sensory information is processed by our brains, it is inherently influenced by knowledge, altering the conscious decision-making process unconsciously, according to Pouget. “While speculative, this may align with what we interpret as intuition,” he added.

Interestingly, this encoding captures not only recent sensory experience but also the recent history of choices made. Laurence Hunt at the University of Oxford pointed out: “This indicates that our actions and subjective experiences shape our perceptions, rather than objective reality alone.”

Does this imply our decisions are predestined? “The brain and its environment operate as a deterministic system. People often resist this idea, but it is accurate,” Pouget stated. “This means one can predict, to an extent, what actions will be taken before a decision is made. Nevertheless, because we cannot know how the surrounding world will evolve, expectations must constantly be recalibrated as new information arises,” he explained.

Looking ahead, the researchers are optimistic that the findings and the collaborative methodology will enhance understanding of conditions such as autism: a mouse model of autism suggests these animals struggle to update prior expectations with new information, according to Pouget, which in turn shapes their behavior and perception.

Source: www.newscientist.com

The Hidden Ways a Lack of Sleep Affects Your Brain

In 1964, a San Diego high school student named Randy Gardner stayed awake for an astounding 11 days for a science fair project.

By the second day of the experiment, Gardner began to experience memory lapses. By the seventh day, he suffered from intense hallucinations, and by the 11th day, he exhibited inconsistencies, paranoia, and muscular tremors.

Fortunately, the 17-year-old fully recovered without any lasting effects. No one has officially surpassed his record since: due to health concerns, Guinness World Records stopped recognizing sleep deprivation attempts in 1997.

However, cognitive decline can occur without an 11-day deprivation; even a few nights of poor sleep can lead to diminished functioning, memory recall, and emotional regulation.

Now, let’s explore the science behind sleep and its impact on brain performance.

What happens to your brain while you’re sleeping?

Photo credit: Getty

During sleep, our brains engage in essential repairs and various tasks, including removing waste and detoxifying itself.

Short-term memories are organized, long-term memories in the neocortex are solidified, and REM sleep plays a crucial role in problem-solving and emotional regulation.

But it’s not just all activity; there are restorative phases during non-REM sleep stages 1, 2, and 3, which slow the heartbeat, relax the muscles, and reduce brain wave activity—with brief bursts during stage 2.

In REM sleep, brain activity intensifies, resembling the state of wakefulness. The amygdala and hippocampus are highly active, aiding in memory processing and emotion regulation. This dream phase supports creative thinking when you wake up.

Brain impacts of poor sleep




Lack of sleep or poor sleep quality can impact your brain’s performance in several ways.

The prefrontal cortex, responsible for decision-making and problem-solving, becomes less effective. This leads to reduced attention, cognitive flexibility, and working memory.

An overactive amygdala can hinder the emotional contextualization of information, and difficulties in storing information in the cortex weaken memory integration.

Other short-term effects of inadequate sleep include:
• Impaired judgment
• Slowed reaction times
• Declined risk assessment

When sleep deprivation becomes normal




For individuals with chronic sleep disorders, these short-term consequences are part of their everyday reality.

Moreover, chronic sleep deprivation has serious ramifications. Research available through the National Library of Medicine reveals a link between chronic sleep deprivation and Alzheimer’s disease.

“Studies indicate that sleep performs essential housekeeping, such as clearing potentially harmful beta-amyloid proteins,” states the Sleep Foundation.

“In Alzheimer’s disease, the aggregation of beta-amyloid leads to cognitive decline. Even one night of sleep deprivation can increase the accumulation of beta-amyloid in the brain.”

According to one study, individuals with sleep disorders have a significantly elevated risk of developing Alzheimer’s, with an estimated 15% of cases linked to lack of sleep.

Maintaining brain health and cognitive function heavily relies on regular, quality sleep, making it essential to optimize your sleep environment.

Optimizing sleep quality




Hästens, a bed maker based in Sweden, recognizes the vital importance of quality sleep. Since 1852, Hästens has crafted handmade beds in the Swedish town of Köping, taking up to 600 hours per bed and using only natural materials.

“A good night’s sleep will enhance your performance,” notes Hästens. “In today’s fast-paced world, sleep may feel like a luxury, but from a medical standpoint, it’s crucial for a strong immune system and overall health.”

Explore the full range of Hästens beds and accessories, and learn more about the benefits of quality sleep here.




Local Hästens Sleep Spa bed tests can be booked online at www.hastens.com or at your nearest certified retailer.

Source: www.sciencefocus.com

Groundbreaking Discovery: Scientists May Have Uncovered a Method to Halt Brain Aging

As we grow older, our learning and memory capabilities decline. Recent research has identified a protein that helps drive this phenomenon.

Researchers at UC San Francisco have pinpointed the culprit: an iron-associated protein called FTL1. Its build-up impairs cognition as the brain ages, and understanding this may allow researchers to target it when treating neurodegenerative diseases such as Parkinson’s and Alzheimer’s.

“It’s essentially a reversal of the impairments of natural aging,” said Saul Villeda, PhD, senior author of the paper, published in Nature Aging, and associate director of the UCSF Bakar Aging Research Institute. “It’s about more than just slowing or preventing symptoms.”

The hippocampus, a brain region essential for learning and memory, is particularly susceptible to the effects of aging. Researchers observed an increase in neuronal FTL1 in the hippocampus of older mice, correlating with cognitive decline and reduced intercellular connections.

The hippocampus, shown here, is vital for the formation of new memories (credit: Getty Images)

In an experiment, scientists artificially increased FTL1 levels in young mice, leading to brain and behavior changes reminiscent of older mice. Elevated FTL1 levels hinder synaptic connections, ultimately resulting in poorer memory performance.

Interestingly, their motor skills and anxiety levels remained stable, indicating that the cognitive impairments were specifically linked to memory and synaptic functions.

When researchers reduced FTL1 levels in the hippocampus of older mice, they noted improved neuronal connections and enhanced performance in memory tests, effectively reversing some signs of aging.

The FTL1 protein is involved in iron storage and metabolism, regulating long-term iron levels in the brain. As we age, alterations in iron metabolism lead to increased FTL1 in neurons.

By reversing aspects of cognitive aging in mice, this discovery could pave the way for treatments that counteract the effects of FTL1 in the brain, potentially restoring cognitive function in older adults.

“Identifying factors that appear to drive aging is crucial for keeping your brain youthful, and for overall health and activity as you age. FTL1 looks like a promising anti-aging target,” said Andrew Steele in BBC Science Focus.

“This is an intriguing preliminary study, but as this research was conducted on mice, we must observe whether the same effects occur in humans.”


Source: www.sciencefocus.com

A Single Dose of Psilocybin May Be Sufficient to Rewire the Brain

Psilocybin, a hallucinogen, is found in many species of magic mushroom

Microgen

A single dose of the psychedelic compound psilocybin may suffice to alter the connections within specific brain networks.

Psychedelic substances like psilocybin, found in various species of magic mushroom, alter individuals’ perceptions of time, space, and self. They also show potential in addressing mental health issues such as depression and anxiety, largely thanks to their capability to enhance brain plasticity, though the mechanisms remain unclear.

Now, Alex Kwan at Cornell University in Ithaca, New York, and his team have conducted experiments in which mice received injections of either psilocybin or saline. The following day, the animals were injected with a genetically modified rabies virus, which crosses synapses and can therefore be used to trace connections between neurons.

Scans and dissections allowed the team to visualize the virus’s spread throughout the brain, revealing neuronal connections. This showed that, compared with mice receiving saline, mice treated with psilocybin strengthened the links between the brain’s retrosplenial cortex, which integrates imagination, memory, and sensory data, and areas of the prefrontal cortex tied to planning and social behavior.

Moreover, psilocybin seems to reduce connections involved in the cortical recurrent loops, which, while valuable for holding onto important memories, can, in some mental health conditions, perpetuate negative thoughts and behaviors. It is theorized that disrupting these loops is vital in addressing various mental health conditions.

“I believe this is the next phase we need to clarify,” stated Michael Wheeler from Brigham and Women’s Hospital in Boston. “Understanding these circuits that connect associative regions could be pivotal to unlocking mechanisms.”

“The modifications brought about by psilocybin treatment likely play a significant role in its effects on mood disorders,” said Eero Castrén from the University of Helsinki, Finland. He added, however, that psilocybin merely offers an opportunity for remodeling; which circuits actually strengthen or weaken may depend on the animal’s behavior.

This line of research suggests that in the future, one might be able to select which brain connections to alter based on the mental health condition being treated. “Our findings present an exciting pathway for future work that combines neuroregulation with psychedelics to precisely target specific neuroplastic circuits,” the researchers noted in their publication.

Psilocybin use illustrates the “set and setting” phenomenon linked with psychedelic substances, exploring how various activities and environments influence brain alterations. The user’s mindset and surroundings can significantly impact drug effectiveness, resulting in “good” or “bad” trips.

Although this study was conducted on mice, it remains uncertain if the same connectivity changes occur in humans after consuming psilocybin. Nevertheless, Wheeler suggests the mechanisms might be comparable. This mouse study parallels findings in human brain scan research from 2024, which indicates that psilocybin may enhance connectivity in specific brain areas.


Source: www.newscientist.com

Tailored Brain Implants Offer Lasting Relief from Chronic Pain

Deep brain stimulation is already utilized for Parkinson’s disease

Living Art Enterprise/Science Photo Library

Brain implants capable of detecting pain and responding with deep brain stimulation may provide relief for individuals with treatment-resistant chronic pain.

Chronic pain affects approximately 20% of the population in the United States, and many find little relief through traditional pain management approaches. This could be due to underlying changes in brain circuitry.

Deep brain stimulation (DBS) involves using tiny electrodes to stimulate the brain, showing potential but also yielding inconsistent outcomes. The conventional method has typically applied a one-size-fits-all targeting of brain regions, despite indications that pain can stem from varying circuits in different individuals.

Thus, Prasad Shirvalkar and his team at the University of California, San Francisco, explored whether a personalized system might yield better results. In their study, six individuals with treatment-resistant chronic pain had their intracranial brain activity recorded and stimulated across 14 brain locations over ten days.

In five of the six participants, the researchers pinpointed specific sites and stimulation frequencies that produced the most significant pain relief. One participant reported no substantial change in his pain scores, but was able to hold his wife for the first time in years, a notable improvement in his physical capabilities.

The research team employed machine learning to distinguish the electrical patterns associated with high and low pain levels. They then implanted permanent DBS electrodes, personalized for each participant, programmed to monitor brain activity, deliver stimulation when high pain was detected, and switch off during sleep.
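For readers curious what such pattern classification involves, here is a minimal sketch. It is not the study’s actual pipeline: the features, labels, and model below are invented for illustration, with a simple logistic regression learning to separate synthetic “neural feature” patterns labeled high versus low pain.

```python
import numpy as np

# Toy stand-in for classifying neural recordings by pain state.
# All data here is synthetic; none of it comes from the study.
rng = np.random.default_rng(0)
n, d = 200, 5
low_pain = rng.standard_normal((n, d))         # features during low pain
high_pain = rng.standard_normal((n, d)) + 1.0  # shifted features during high pain
X = np.vstack([low_pain, high_pain])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = high pain

# Logistic regression fitted by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of high pain
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print(np.mean(pred == y))  # training accuracy, well above chance
```

Once a model like this distinguishes “high pain” patterns, a device can, in principle, trigger stimulation only when those patterns appear.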

After six months of adjustments, each device underwent a trial in which participants received real, personalized stimulation for three months, followed by sham stimulation for another three months, or vice versa. The sham stimulation targeted non-ideal locations at very low frequencies, and pain metrics were recorded multiple times daily throughout the trial.

On average, real stimulation led to a 50% reduction in daily pain intensity, compared with an increase during sham stimulation. Daily step counts were also 18% higher during real stimulation. Participants likewise reported fewer depressive symptoms and less interference of pain with daily life when undergoing real stimulation. These improvements persisted for over 3.5 years post-trial.

“This is a significant study that employs the latest tools,” remarks Tim Denison from Oxford University.

A previous challenge with DBS technology involved habituation: the brain would adapt to continuous stimulation, diminishing its effectiveness. Denison suggests that the extended benefits here may arise from stimulating participants only when pain levels are elevated. The next phase will involve comparing adaptive versus constant stimulation to evaluate differences in outcomes.

“The other major hurdle lies in the economic feasibility and scalability of this method,” Denison notes.


Source: www.newscientist.com

Unlocking Emotions: Uncovering Hidden Indicators of Brain Aging

As we age, it’s common to perceive others as more content, as revealed by a recent study.

Researchers have discovered that older adults often exhibit a “positive bias” in interpreting facial expressions. This suggests they are more inclined to classify neutral or negative faces as happy rather than sad or angry.

“This indicates they tend to interpret vague or ambiguous expressions as ‘happy’ instead of ‘sad’ or ‘angry,'” noted Dr. Noham Wolpe in an interview with BBC Science Focus. “Crucially, this bias correlates with subtle cognitive decline and alterations in the specific brain circuits responsible for emotional processing and decision-making.”

Using data from over 600 adults, the research team examined this phenomenon through emotion recognition tasks along with brain imaging techniques.

They found structural variations in the hippocampus and amygdala—key regions for memory and emotion—and changes in connectivity with the orbitofrontal cortex, which plays a role in weighing emotional information and guiding decisions.

“These regions form crucial networks that aid in interpreting emotional signals and informing decisions,” Wolpe explained, highlighting how the relationship between the orbitofrontal cortex and amygdala strengthens in adults facing cognitive decline.

“This enhancement may lead them to perceive ambiguous or neutral emotional signals as positive, a phenomenon known as positive bias,” he remarked, noting that the reason behind this increased connectivity associated with cognitive decline is still unknown.

Researchers remain uncertain why connectivity between the amygdala and the orbitofrontal cortex, highlighted in red, strengthens in individuals with cognitive decline (credit: Getty)

Although this research is in its nascent stages, its implications are significant. Positive biases might one day serve as early indicators of dementia, as changes in emotional processing frequently precede memory impairment.

“While emotion recognition tests are not yet ready to replace current cognitive assessments, in the future they could be combined with standard screening methods to enhance early detection,” Wolpe stated.

Wolpe and his team are already investigating innovative approaches, such as immersive virtual reality tasks, to better understand how people instinctively respond to emotional signals.

The next objective is to determine whether this positive bias can actually forecast cognitive decline. The team has recently concluded a follow-up evaluation of participants in the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study, approximately 12 years after the original assessments. Participant data is also being linked to GP records to monitor dementia diagnoses.

“A crucial takeaway,” Wolpe mentioned, “is that subtle biases in how we perceive others’ expressions can signal early brain changes, long before the typical signs of dementia manifest.”

“Grasping these connections could pave the way for quicker detection and ultimately more effective interventions.”


About our experts

Noham Wolpe is a senior lecturer at the Sagol School of Neuroscience at Tel Aviv University. His research focuses on understanding the interplay between cognition, mental health, and behavior, both in health and disease.

Source: www.sciencefocus.com

Innovative Perspectives on Brain Health Could Lead to Essential New Treatments

Sumaid Pal Singh Bakshi/Unsplash

Healthcare has witnessed remarkable advancements over the past few decades. In high-income nations, the survival rate for certain types of pediatric leukemia has increased from about 10% to over 90%. HPV vaccinations have decreased the incidence of cervical cancer, and early detection of HIV can lead to life expectancies similar to that of the general population.

In contrast, progress in mental health treatment has been far less pronounced. Psychiatry has long struggled with a perception of stagnation, remaining heavily reliant on psychopharmaceuticals developed in the mid-20th century.

This stagnation is not due to a lack of effort. In the 1970s, molecular psychiatry emerged, focusing on the molecular basis of mental health conditions through proteins, genes, and signaling pathways. The goal was to anchor diagnostics and treatments to biological mechanisms instead of merely interpreting subjective symptoms. Despite advances in genetic research, including exploring the genetic links to schizophrenia, we have yet to see significant improvements in mental health treatment paralleling those in physical health.

The new approach is targeting chronic inflammation.

Given that approximately 8-16% of individuals in high-income countries like England experience anxiety and depression, a fresh perspective is crucial. Current innovative approaches focus on chronic inflammation, a phenomenon linked not just to heart disease and type 2 diabetes, but also to mental health.

For many, chronic low-grade inflammation results from the pace of modern life, often fueled by factors such as stress, obesity, and poor dietary choices. Promising developments suggest that certain anti-inflammatory medications may have potential benefits for the brain, alleviating issues associated with depression and dementia (“Chronic inflammation harms your mind. Here’s how to calm it down”).

These findings also clarify that managing mental health can be approached through actions such as regular exercise, relaxation techniques, and nutritious eating.

While this path may not work for everyone, given that antidepressants fail to help approximately 30% of those treated for depression, any progress is welcomed.


Source: www.newscientist.com

Brain Implant Restores Joy to a Man After Decades of Depression

A man who underwent brain stimulation had previously tried 20 treatments for his depression

Damien Fair et al./cc-by 4.0

A man who suffered from severe depression for over 30 years has seemingly found relief through a personalized brain “pacemaker” designed to selectively stimulate various brain regions.

“He’s felt joy for the first time in years,” states Damien Fair from the University of Minnesota.

Treatment-resistant depression is typically defined as depression showing minimal improvement after trying at least two antidepressants. Procedures like electroconvulsive therapy (ECT) may provide some benefit, but they don’t always yield relief. “They’re one-size-fits-all; you target the same brain area in everyone,” Fair explains. Yet because every brain is unique, such approaches often miss the exact target needed for individual relief.

Fair and his team have now created a tailored method for a 44-year-old man who had experienced severe depression for more than 30 years. He had attempted 20 different treatments, including antidepressants, therapy, and ECT, all without lasting success. “It’s one of the most severe depression cases I’ve seen; he had attempted suicide three times,” Fair notes.

Initially, the researchers conducted a 40-minute MRI scan to delineate the boundaries of four brain activity networks linked to depression. One of these networks, the salience network, was found to be around four times more expansive in the man than in individuals without depression, potentially exacerbating his symptoms, according to Fair.

The team then surgically implanted clusters of four electrodes at these defined boundaries, entering through two small openings in the skull. Just three days later, they began sending weak electrical pulses through wires attached to the electrodes, stimulating each brain network separately.

Upon stimulating the first network, the default mode network, which is related to introspection and memory, the man cried tears of joy and said he felt much better, Fair recalls.

Stimulation of the action mode and salience networks also reduced feelings of anxiety, while the team noticed enhanced focus when targeting parietal networks involved in decision-making.

Using the man’s feedback, the team connected the electrode wires to tiny batteries placed just beneath the skin near the collarbone, allowing him to maintain these benefits outside the hospital. This setup acts like a “brain pacemaker,” as Fair describes it, stimulating various networks for a minute each day.

For six months, the man utilized an app linked to the pacemaker to alternate between different stimulation patterns crafted by the team every few days. He also documented his depression symptoms daily. The team optimized the stimulation based on this data during the first six months post-surgery.

Within seven weeks of surgery, the man reported no more suicidal thoughts. By the nine-month mark, he was in remission according to the Hamilton Depression Rating Scale. This improvement has persisted for over two and a half years, apart from a brief period when his symptoms partially recurred after he contracted Covid-19.

“This is an incredible outcome,” states Mario Juruena from King’s College London. “It serves as a crucial proof of concept for patients who cannot tolerate traditional depression treatments.”

Researchers have noted that compared to previous attempts at personalized brain stimulation, their method required fewer computational resources and led to shorter hospital stays.

It’s plausible that the man’s expanded salience network played a role in the treatment’s success. Such expansion is often present in depression; however, it’s premature to conclude whether individuals with less salience-network expansion would respond similarly, Juruena states.

To confirm the safety and effectiveness of this approach, randomized controlled trials assigning various individuals with depression to either stimulation or placebo will be necessary, according to Juruena. The team aims to conduct these trials within two years after testing the method on additional individuals, according to Fair.

If you need someone to listen, reach out: Samaritans in the UK on 116 123 (Samaritans.org); US 988 Suicide & Crisis Lifeline: 988 (988lifeline.org). Visit bit.ly/suicidehelplines for resources in other countries


Source: www.newscientist.com

What Déjà Vu (or Its Absence) Reveals About Your Brain Health

Stop us if you think we’ve already asked, but do you know precisely what Déjà Vu is? If you’re among the wise, you’ll say it’s the peculiar sensation of having experienced something before.

However, many neuroscientists argue that this definition misses a key part of the mystery. Experts like Dr Akira O’Connor, a senior psychology lecturer at the University of St Andrews, say that Déjà Vu (French for ‘already seen’) is not just a feeling of familiarity, but also the metacognitive realization that the feeling is misleading.

“Déjà Vu essentially represents a conflict between the sensation of familiarity and the awareness that this familiarity is incorrect. It’s this deception that makes Déjà Vu unique compared to other memory phenomena,” he explains.

“Most healthy individuals recognize a sense of familiarity but do not tend to alter their behavior, even when they know something feels logically off.”

So, what occurs in the brain during Déjà Vu? And why do some individuals experience this phenomenon more frequently than others? Dive into the complete guide below for more insight.

What Does Neuroscience Say About Déjà Vu?

As far as we understand, around 60% of individuals report having experienced Déjà Vu at least once in their lifetime, so there’s more to it than mere glitches in the matrix.

Neuroscientists have also determined that this memory illusion does not signify an unhealthy brain. Far from being a memory error, it reflects how the brain checks its memories. According to O’Connor, Déjà Vu surfaces when the frontal lobe attempts to rectify inaccurate memory signals.

“For most individuals, experiencing Déjà Vu is likely a positive sign that the brain regions responsible for fact-checking are functioning effectively, preventing you from misremembering events.

“In healthy individuals, such false memory signals can emerge daily, because memory involves billions of neurons. It’s quite intricate,” he states.

Regrettably, there isn’t a universally accepted model that clarifies what transpires in the brain during Déjà Vu. Nevertheless, most leading theories converge on the idea that Déjà Vu arises when a brain area (like the temporal lobe) provides the frontal region with signals that past experiences are being replayed.

“Afterward, the decision-making region at the front checks if this signal aligns with reality. It’s essentially asking, ‘Have I been here before?'”

“If you have actually been in that location before, you might strive to recall more memories. Otherwise, the realization of Déjà Vu kicks in.”

Why Do Some Individuals Experience Déjà Vu More Frequently?

O’Connor estimates that the average healthy person feels Déjà Vu around once a month, but certain factors can heighten the chances of feeling this sensation.

First, your level of fatigue and stress plays a significant role. “When your brain is exhausted, it hasn’t had the chance to recover and regulate itself. Consequently, your neurons may be slightly misaligned, making you more prone to experiencing Déjà Vu,” he explains.

Research also highlights the connection between dopamine (a well-known mood-enhancing neurotransmitter) and Déjà Vu.

“Dopamine is what we label as an excitatory neurotransmitter. When discussing brain areas that signal familiarity, there’s a dopaminergic influence on those neurons, indicating dopamine’s involvement,” O’Connor adds.

“This might explain why recreational drugs, which affect dopaminergic activity, frequently lead to experiences of Déjà Vu.”

©Getty

There are additional factors as well. Have you ever wondered why your Déjà Vu experiences have diminished over the years? According to research, it could simply be because you’re getting older.

“Unfortunately, like many memory phenomena, this is a natural consequence of aging: as the brain’s fact-checking declines, you may simply stop noticing the discrepancies,” O’Connor clarifies.

“It’s intriguing that younger individuals experience more Déjà Vu, when you’d expect older adults, who generally have more memory issues, to experience it more. This may be because younger people have heightened excitatory activity in their brains.

“When I embarked on my research into Déjà Vu nearly 20 years ago, I experienced it regularly, but now it’s much less frequent!”

Is Déjà Vu Ever Unhealthy?

Now that we know Déjà Vu is a healthy cognitive mechanism—far from dangerous—what if you find yourself experiencing it constantly? What could it mean if all new encounters feel familiar?

Interestingly, this can happen to some individuals. “In Finland, there are intriguing cases of individuals who took a combination of flu medications known to overstimulate certain dopamine neurons, and who began experiencing near-constant Déjà Vu.

“They found it particularly fascinating and continued taking those medications for a while.”

Not everyone can step back and marvel at the experience, however. Those who suffer from ‘Déjà vécu’ (French for ‘already experienced’) have an ongoing sensation of having already lived through their current situation. Essentially, nothing feels novel to them.

“What’s particularly captivating about individuals with Déjà vécu is that they often lose their ability to fact-check these feelings. Many cease watching television because they feel they’ve already seen every episode,” O’Connor observes.

“It sounds fascinating and innovative, but it’s genuinely distressing because it can often occur in individuals with dementia and may signal worsening degeneration.”


Jamais Vu (French for ‘never seen’) is another curious relative of Déjà Vu. It refers to the sensation that a logically familiar situation feels unfamiliar. Though often linked with amnesia, it goes beyond mere memory lapses.

“This isn’t a typical form of forgetfulness,” O’Connor elaborates. “It’s when you recognize the task at hand but are puzzled because something you know should be familiar doesn’t feel that way. The crucial aspect is the perception element: you recognize that the feeling is fundamentally incorrect.”

“It occurs more frequently than Déjà Vu, yet likewise tends to happen when individuals are fatigued and is more common among younger people than older adults.”

Some laboratory experiments appear to induce Jamais Vu in participants. For instance, one study from the University of Leeds instructed 93 participants to write down the word ‘door’ as many times as possible within two minutes.

At the end of the interval, more than 70% of subjects questioned whether the word “door” was spelled correctly, despite logically knowing that it was entirely accurate.

What’s particularly intriguing about this study is that it can be replicated anywhere. So, if you have two minutes and a pen handy, we encourage you to repeat after us: door, door, door, door, door…

About Our Expert – Dr. Akira O’Connor

Akira O’Connor is a senior lecturer at the School of Psychology and Neuroscience at St. Andrews University. His primary focus is on how memories influence decision-making and how we perceive them.


Source: www.sciencefocus.com

Brain Activity May Indicate Future Friendships Among Strangers

Movie nights may have deeper significance

South_agency/Getty Images

Research indicates that individuals are more inclined to forge friendships if their brains react similarly to movie clips, implying that neural responses can forecast relationships.

Humans typically gravitate toward others with similar mindsets, a phenomenon that helps to explain why prior studies have identified neural parallels among friends. However, the question remained whether these similarities emerged because friends experienced similar upbringings or were attracted to those with comparable thought processes.

Carolyn Parkinson and her team at UCLA gathered brain scans from 41 students before they entered a graduate program. During the scan, participants viewed 14 diverse film clips, ranging from documentaries to comedies, covering topics like food, sports, and science. The researchers then assessed neural activity across 214 regions of each participant’s brain.

Two months later, participants completed a friendship survey along with an additional 246 students in the program. The findings showed that participants who were closer friends tended to display more similar neural responses than those further apart in the social network, particularly in areas of the left orbitofrontal cortex associated with processing subjective value. This correlation held true even after accounting for personal tastes, based on each individual’s enjoyment of and interest in the clips.

After the two months, the neural similarity between friends remained consistent, suggesting that like-minded people are drawn together rather than friends simply growing alike over time. This was further supported when the researchers analyzed changes in friendships over the interim: pairs of participants who grew closer during this period exhibited greater neural similarity, across 42 brain regions, than pairs who drifted apart. These connections remained significant even after considering variables such as age, gender, and hometown. “Sociodemographic factors don’t seem to account for the similarities observed, at least in terms of the factors we could measure,” stated Parkinson.
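At its core, the similarity measure used in studies like this is a correlation between two people’s response profiles. Here is a minimal sketch with synthetic data standing in for responses across 214 brain regions; none of the numbers come from the study itself.

```python
import numpy as np

def neural_similarity(a, b):
    # Pearson correlation between two participants' responses
    # across the same set of brain regions.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(1)
shared = rng.standard_normal(214)                    # a common response profile
friend_a = shared + 0.3 * rng.standard_normal(214)   # two similar viewers
friend_b = shared + 0.3 * rng.standard_normal(214)
stranger = rng.standard_normal(214)                  # an unrelated viewer

print(neural_similarity(friend_a, friend_b) >
      neural_similarity(friend_a, stranger))  # similar viewers score higher
```

In the real study, profiles like these come from fMRI responses to the film clips rather than random numbers, and similarity is then compared against distance in the friendship network.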

Many of these brain regions are part of networks that facilitate understanding narratives, which may explain the similarity in how individuals perceive the world around them. “Individuals with like-minded thought processes find it easier to connect,” noted Robin Dunbar from Oxford University. “When they communicate, they intuitively grasp what others are thinking because it’s aligned with their own thought patterns.”

Dunbar, who did not participate in the study, expressed that these results resonate with long-held assumptions. “It’s akin to random groups of people unintentionally forming bonds based on compatibility; they are inherently attracted to one another,” he explained. “In essence, close friendships are not merely coincidental; they are composed and cultivated.”


Source: www.newscientist.com

Ibogaine: A Psychedelic Drug That May Alleviate PTSD by Slowing Brain Waves

Ibogaine, a psychedelic substance, is derived from the roots of the iboga plant

Farmer Dods / Alamy

The psychedelic substance ibogaine has been shown to slow brain wave activity in individuals with traumatic brain injuries, potentially accounting for its effectiveness in treating symptoms of post-traumatic stress disorder (PTSD).

A study conducted last year revealed that ibogaine, sourced from the African Iboga plant, significantly enhanced the overall mental and physical well-being of military veterans suffering from traumatic brain injuries. Yet, the precise mechanisms behind these effects were previously unknown.

To investigate further, Jennifer Lismore from Stanford University and her team examined brain imaging of 30 individuals involved in the initial study. During a 5-day treatment session at a facility in Mexico, participants received a dosage of 12 milligrams of ibogaine per kilogram of body weight and participated in supportive activities like yoga, meditation, and therapy.

As part of the study, the researchers collected EEG data that recorded participants’ brain electrical activity. These scans were taken 2-3 days prior to and 3.5 days following the ibogaine treatment.

By comparing the EEG findings, Lismore and her team observed an overall deceleration in brain wave activity post-treatment, particularly in the gamma waves—the fastest brain waves—which exhibited nearly a 16% reduction in strength in the occipital region after ibogaine therapy. While gamma wave intensity saw a slight rebound after one month, levels remained significantly below those recorded prior to treatment.

Additionally, the intensity of slow theta waves rose by approximately 17% in the back of the brain and 13% at the front 3.5 days post-treatment. However, this increase lost its significance after one month.
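Band “strength” figures like these are typically derived from the power spectral density of an EEG recording. The sketch below shows the general idea on a synthetic signal; the sampling rate and the band limits (theta 4–8 Hz, gamma 30–100 Hz) are illustrative assumptions, not the study’s parameters.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    # Estimate the power spectral density with Welch's method,
    # then integrate it over the frequency band of interest.
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])

fs = 256  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": a strong 6 Hz theta rhythm, a weaker 40 Hz gamma
# rhythm, and a little noise. Real EEG is far messier.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)
eeg += 0.1 * rng.standard_normal(t.size)

theta = band_power(eeg, fs, 4, 8)     # theta band: 4-8 Hz
gamma = band_power(eeg, fs, 30, 100)  # gamma band: 30-100 Hz
print(theta > gamma)  # the dominant theta rhythm carries more power
```

Comparing band power before and after a treatment, as the researchers did, amounts to computing values like these from each recording session and measuring the percentage change.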

Lismore suggests that the observed reduction in brain wave activity may clarify why ibogaine is effective for alleviating PTSD symptoms in many patients. “The deceleration of brain function has allowed patients, particularly those experiencing hyperawareness and sensitivity associated with PTSD, to find relief,” she noted. “One way to understand this slowing process is as a mitigation of the heightened distress often seen in PTSD cases.”

The temporary spike in slow theta waves could also indicate that ibogaine promotes neuroplasticity—the brain’s ability to adapt and rewire itself. Previous studies in animals have associated theta wave activity with brain adaptability, Lismore explains. By inducing a short-term increase in theta wave presence, ibogaine may create conditions conducive to improving mental health.

“Ibogaine essentially addresses the chaotic, restless nature of the brain, facilitating a sort of normalization,” remarked Conor Murray from the University of California, Los Angeles. “Ultimately, it instills a sense of security for participants, reassuring the brain.”

However, he cautions that these findings don’t fully reveal the mechanisms through which ibogaine effects these brain changes.

Another challenge is the absence of control measures, complicating the assessment of the influence from other treatment components, points out Lismore. Nonetheless, she asserts that these insights represent “a significant first step toward understanding why this treatment is so impactful.”


Source: www.newscientist.com

Research Discovers Unusual Glow Emitted by the Human Brain

Our brains are glowing. While this phenomenon isn’t visible to the naked eye, scientists have the ability to detect faint light that permeates the skull. Recent studies indicate that this light varies based on our activities.

All living tissues generate a subtle light known as Ultraweak Photon Emissions (UPE). This emission ceases once the organism dies. The human brain, however, emits a considerable amount of this light due to its high energy consumption, accounting for around 20% of the body’s total energy.

“Ultraweak photon emissions, or UPE, are extremely faint light signals produced by all types of cells throughout the body—trillions of times weaker than the light from bulbs,” Dr. Nirosha Murugan, an assistant professor of health sciences at Wilfrid Laurier University in Ontario, Canada, told BBC Science Focus.

“Although UPE is a weak signal, the energy expenditure of the brain generates more light than other organs,” she explained. “Consider the hundreds of billions of brain cells; each one emits a weak light signal, but together they create a measurable collective glow outside the head.”

Murugan’s research team aimed to explore whether this glow fluctuated with brain activity and if it could be utilized to assess brain functions.

To investigate, scientists equipped participants with caps containing electrical sensors to track both electrical impulses and light emitted from the brain. Twenty adults were invited to sit in a darkened room.

Participants were directed to open and close their eyes and follow simple audio instructions.

Comparisons were made between the captured electrical signals and UPEs, revealing notable correlations.

“We discovered that the optical signals detected around the head correlate with electrical activity in the brain during cognitive tasks,” Murugan noted. “These patterns of light emission from the brain are dynamic, intricate, and informative.”
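One simple way to quantify the kind of relationship the team reports, between an optical trace and an electrical trace, is a Pearson correlation. The sketch below uses synthetic stand-in signals (a random "EEG" trace and a partially coupled "UPE" trace); it illustrates the idea only, not the study's actual statistics.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equally sampled signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

rng = np.random.default_rng(1)
electrical = rng.normal(size=1000)                        # stand-in EEG trace
optical = 0.7 * electrical + 0.5 * rng.normal(size=1000)  # partially coupled UPE trace

print(round(pearson_r(electrical, optical), 2))
```

A correlation well above zero, as produced here by construction, is the sort of signature that would suggest the light emissions track underlying electrical activity.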

The brain emitted this light in a slow, rhythmic pattern, pulsing less than once per second, and the pattern remained broadly stable throughout the two-minute tasks.

All living cells emit ultraweak light as a byproduct of chemical reactions such as energy metabolism – Credit: Sean Gladwell via Getty

Murugan indicated that measuring this brain light could offer scientists and medical professionals a novel method for brain imaging, potentially identifying conditions like epilepsy, dementia, and depression.

This light is not merely a by-product; it might also play a functional role in the brain. Murugan emphasized that examining it could “uncover hidden dimensions” of our cognitive processes.

“I hope that the possibility of detecting and interpreting light signals from the brain will inspire new questions previously deemed unfathomable,” she stated. “For instance, can UPEs permeate the skull and influence other brains within the vicinity?”

This study serves as a preliminary exploration, suggesting that plenty remains to be uncovered about our illuminating brains.

Nonetheless, Murugan expressed hope that the team’s discoveries will “ignite a new discussion regarding the significance of light in brain functionality.”


About our experts

Dr. Nirosha Murugan is an assistant professor in the Department of Health Sciences at Wilfrid Laurier University, Ontario, Canada. She was recently appointed Tier 2 Canada Research Chair in Biophysics at Algoma University in Ontario.

Source: www.sciencefocus.com

Breakdown of Protein Production May Contribute to Brain Aging

Ribosome (center) responsible for synthesizing protein (red) from mRNA. Dark purple strands illustrate transfer RNAs involved in protein production.

The underlying factors contributing to cellular senescence may have been uncovered, revealing insights into various aging processes at the cellular level.

Studies on the brains of a type of freshwater fish known as Killifish reveal that as these fish age, their internal protein factories begin to malfunction, leading to critical protein classes being synthesized abnormally and creating a damaging feedback loop.

This revelation could pave the path for innovative approaches to addressing cognitive decline in aging; Alessandro Cellerino from the Leibniz Institute on Aging in Germany states, “Our focus is more on enhancing cognitive function and preventing cognitive impairment, rather than merely extending life span.”

Within cells, the templates for protein synthesis are encoded in DNA. When proteins are required, these instructions are transcribed into mRNA molecules.

This mRNA is then processed and transported to ribosomes, the cellular factories responsible for protein assembly. Ribosomes attach to and traverse mRNA strands, interpreting the three-letter codons and translating them into amino acid sequences, ultimately forming proteins.
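The decoding step can be illustrated with a toy translator. The codon table below is a hand-picked fragment for illustration, not the full 64-codon genetic code, and it returns three-letter amino acid names rather than a real protein sequence.

```python
# Minimal codon table fragment (illustrative, not the full genetic code).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "AAA": "Lys", "CGU": "Arg",
    "GGC": "Gly", "UAA": "STOP",
}

def translate(mrna):
    """Walk an mRNA 5'->3' in three-letter codons, as a ribosome does."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":   # stop codon: release the finished chain
            break
        protein.append(residue)
    return protein

print(translate("AUGAAACGUUAA"))  # → ['Met', 'Lys', 'Arg']
```

In the killifish findings described below, it is precisely at codons like "AAA" (lysine) and "CGU" (arginine) that the aging ribosome tends to stall rather than read through.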

Typically, a greater quantity of mRNA leads to increased protein synthesis. However, numerous studies indicate that this relationship falters in aging human cells, suggesting that protein output may diminish even if mRNA levels remain unchanged.

Through their investigation of aging ribosomes in the brains of Killifish, Cellerino and his team may have identified the cause of this phenomenon. Employing advanced imaging techniques, the researchers captured dynamic movements of ribosomes on constrained mRNA.

The findings revealed that, as the Killifish brain aged, an unexpected buildup of ribosomes occurred, particularly at codons for the amino acids arginine and lysine, leading to stalled ribosome activity and incomplete protein synthesis.

Arginine and lysine are abundant in many biomolecules that associate with DNA and RNA, and because both amino acids are positively charged, stalling at their codons could significantly disrupt the supply of RNA- and DNA-binding proteins.

These protein malfunctions pose a serious issue, as they are integral to crucial cellular processes such as RNA synthesis, splicing, and DNA repair.

“Aging is associated with increased DNA damage, reduced RNA production, decreased splicing efficiency, and diminished protein synthesis,” explains Cellerino. “We propose that this ribosome stalling binds these diverse senescence phenomena together.”

Moreover, Cellerino notes that ribosomes themselves harbor RNA-binding proteins, creating a detrimental cycle of stalling that further reduces ribosome availability and, accordingly, protein production.

The pressing question remains whether ribosomal stalling also occurs in the human brain. Recent work by Jean Yeo at UC San Diego indicates that RNA-binding proteins diminish in aging human neurons, echoing Cellerino’s findings, although the underlying causes are still uncertain. “This ribosome stalling could explain their declining levels,” Yeo states.

If these observations hold true for humans, it could herald new strategies for treating age-associated cognitive disorders. Additionally, in Killifish, ribosomal stalling triggers stress signals that instigate inflammatory responses. “The persistent activation of this pathway leads to chronic inflammation,” warns Cellerino. “Chronic inflammation is a significant factor in brain aging.”

Experimental drugs that may mitigate this condition by blocking the associated signaling pathways are on the horizon, according to Cellerino.

“However, it is premature to draw definitive conclusions regarding their potential impact on longevity,” he cautions. This uncertainty arises from the lack of understanding regarding the initiation of ribosomal stalling at specific amino acids, as well as whether the same stalling mechanism exists across all organs.


Source: www.newscientist.com

The Role of Brain Mitochondria in Initiating Sleep

Mitochondria may have more functions than just energy production

CNRI/Science Photo Library

The energy-producing organelles in cells, known as mitochondria, may also influence sleep patterns. Research on fruit flies indicates that these organelles in the brain can promote sleep after prolonged wakefulness.

Scientists have begun to unravel the brain’s response to sleep deprivation. Findings include alterations in neuronal firing, changes in cell structure, and gene expression patterns. They have also pinpointed specific neurons triggered during sleep onset, yet the complexities of how these neurons act remain unclear.

“Sleep presents one of biology’s significant mysteries,” notes Gero Miesenböck of Oxford University. To delve deeper, he and his research team employed gene sequencing and fluorescent markers to observe gene activity in sleep-related neurons from around 1,000 female fruit flies (Drosophila melanogaster), which typically sleep for 13-16 hours, mainly during daylight hours.

The group allowed half the flies to rest overnight while keeping the others awake by gently agitating their containers or through genetic modifications that activated wake-promoting neurons with temperature increases.

Among the sleep-deprived flies, the researchers noted a surge in activity from sleep-inducing neurons that regulate genes tied to mitochondrial function and upkeep. The mitochondria displayed signs of stress as well, like fragmentation, damage repair efforts, and increased connections to nearby cellular structures.

This stress is likely due to the mitochondria continuing to generate energy even when neurons are inactive. The research indicates this can cause electron accumulation, leading to the formation of free radicals (unstable molecules capable of damaging DNA), thereby contributing to sleep pressure, according to Miesenböck. Once the flies were permitted to sleep, they repaired the mitochondrial damage.

Further findings showed that fragmented mitochondria in sleep-inducing neurons resulted in flies feeling less sleepy than usual and unable to recover after prolonged wakefulness. Conversely, flies engineered to facilitate mitochondrial fusion demonstrated superior repair capabilities, sleeping more than normal and bouncing back more effectively from sleep deprivation. This reinforces the hypothesis that mitochondria play a role in sleep regulation.

In another phase of the study, flies were genetically altered to enhance mitochondrial activity in response to light. This led to a 20-25% increase in sleep duration after just one hour of artificial light compared to the control group.

While this research focused on fruit fly neurons rather than human cells, mitochondria are notably similar across species. According to Ryan Mailloux at McGill University in Quebec, Canada, this lends credence to the idea that the energy-production processes of mitochondria could underlie sleep pressure in humans as well.

This newfound insight could pave the way for novel treatments for sleep disorders. “This presents exciting possibilities for targeting these pathways to develop effective therapies for individuals struggling with sleep issues,” states Mailloux.

Michele Bereshi of Camerino University in Italy remarked, “This paper is certainly impactful and thought-provoking,” though he expresses concerns regarding the experimental design. “Sleep deprivation does not merely prolong wakefulness; it may introduce additional stressors that elicit cellular responses unrelated to the accumulation of sleep pressure.”

In response, Miesenböck explained that his team used several different methods to keep the flies awake, including the low-stress approach of genetically modified wake-promoting neurons activated by mild temperature increases, and all produced similar effects on mitochondrial activity. “What this study illustrates is that sleep homeostasis actively employs its own mitochondria to assess the need for sleep,” he asserts.


Source: www.newscientist.com

The pandemic might have accelerated brain aging, even before we contracted Covid-19.

Changes in brain structure over time

Temet/Getty Images

The Covid-19 pandemic may have hastened brain aging, even prior to infection. Studies indicate that early in the outbreak, the brain may have undergone changes equivalent to 5.5 months of aging, potentially attributed to stress and shifts in lifestyle.

Many individuals suffering from long Covid report experiencing brain fog. However, the wider neurological implications of the pandemic are not completely understood a few years post-Covid-19’s emergence.

To investigate this, Ali-Reza Mohammadi-Nejad at the University of Nottingham, along with his team, trained machine learning models using 15,000 brain scans to analyze structural changes related to aging.

A model was then applied to brain scans from 996 volunteers participating in the UK Biobank Study. This comprised 564 individuals who underwent both scans prior to March 2020, which acted as the control group. The remaining 432 volunteers had one scan before March 2020 and another later, with scans averaging three years apart and a minimum gap of two years.
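The general "brain-age" recipe behind such studies, predict age from imaging features, then compare predicted with chronological age, can be sketched with toy data. Everything here is illustrative: the two features, their relationship to age, and the plain least-squares model are stand-ins, not the study's far richer imaging features or machine learning models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "imaging features" for a training cohort: two numbers that drift
# linearly with age plus noise (stand-ins for, e.g., tissue volumes).
n = 500
age = rng.uniform(40, 70, n)
X = np.column_stack([
    80 - 0.4 * age + rng.normal(0, 2, n),   # feature that shrinks with age
    10 + 0.2 * age + rng.normal(0, 1, n),   # feature that grows with age
])

# Fit predicted_age = X @ w + b by ordinary least squares.
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, age, rcond=None)

def predicted_brain_age(features):
    return np.append(features, 1.0) @ w

# "Brain-age gap" = predicted brain age minus chronological age; a widening
# gap between two scans of the same person suggests accelerated aging.
scan1 = predicted_brain_age(np.array([60.0, 20.0]))  # features typical of ~50
scan2 = predicted_brain_age(np.array([58.0, 21.0]))  # more age-like features
print(scan2 > scan1)
```

The 5.5-month figure reported below is, in this framing, the average extra growth of that gap in people scanned across the pandemic compared with the control group.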

The research revealed that the pandemic may have induced an acceleration of brain aging by 5.5 months, as evidenced by structural changes in both white and gray matter. This effect was observed even in individuals with no recorded Covid-19 infection in the Biobank data.

This accelerated aging effect was notably more significant among men and those from lower socioeconomic backgrounds. However, the results may not be generalizable, as biobank participants typically exhibit better health, higher income, and less ethnic diversity than other demographics within the UK.

Researchers propose that these alterations might have been driven by the isolation and stress of lockdowns, alongside changes in lifestyle factors like physical activity and alcohol use during that period.

In their study, the authors indicate that these structural brain changes could be “at least partially reversible,” while also acknowledging limitations stemming from the study’s UK-based participant pool, suggesting that the findings may not accurately represent lockdowns’ impact elsewhere. “Our conclusions may actually underestimate the pandemic’s effects on more vulnerable populations,” Mohammadi-Nejad asserts.


Source: www.newscientist.com

Study: Common Sweetener Erythritol May Impact Brain Cells and Elevate Stroke Risk

A recent study from the University of Colorado Boulder indicates that erythritol, a widely used non-nutritive sweetener, may be linked to a higher risk of cardiovascular and cerebrovascular events.



Berry et al.: “Our study demonstrates that erythritol, at concentrations commonly found in standard-size sugar-free beverages, negatively impacts cerebral microvascular endothelial cell oxidative stress, eNOS activation, NO production, ET-1 expression, and t-PA release in vitro.” Image credit: Tafilah Yusof.

Erythritol is a popular non-nutritive alternative to sugar, owing to its minimal effect on blood glucose and insulin levels.

This four-carbon sugar alcohol is virtually calorie-free yet 60-80% as sweet as sucrose, and it commonly replaces sugar in baked goods, confections, and beverages.

Authorized by the FDA in 2001, erythritol is recommended for individuals with obesity, metabolic syndrome, and diabetes, as it aids in regulating calorie consumption, sugar intake, and minimizing hyperglycemia.

Found naturally in small amounts in certain fruits, vegetables, and fermented foods, erythritol is quickly absorbed in the small intestine through passive diffusion.

In humans, erythritol is produced endogenously from glucose and fructose by erythrocytes, liver, and kidneys via the pentose phosphate pathway, making its levels dependent on both endogenous production and external intake.

“Our findings contribute to the growing evidence that non-nutritive sweeteners, often considered safe, could pose health risks,” stated Professor Christopher Desouza from the University of Colorado.

A recent study involving 4,000 participants from the US and Europe revealed that individuals with elevated erythritol levels are at a significantly increased risk of experiencing a heart attack or stroke within three years.

Professor Desouza and his team sought to determine what factors were contributing to this heightened risk.

They exposed human cells lining blood vessels in the brain to erythritol for three hours, using concentrations similar to those found in standard sugar-free beverages.

The treated cells exhibited several alterations.

Notably, they produced significantly less nitric oxide, a molecule critical for dilating blood vessels, while increasing the expression of endothelin-1, which constricts blood vessels.

Furthermore, when challenged with thrombin, a clot-promoting enzyme, the treated cells were significantly slower to release t-PA, a naturally occurring compound that dissolves clots.

Cells treated with erythritol also generated more reactive oxygen species, or free radicals, which can lead to cellular damage and inflammation.

“We’ve been diligently working to share our findings with the broader community,” noted Auburn Berry, a graduate student at the University of Colorado in Boulder.

“Our research indicates that erythritol may indeed heighten the risk of stroke.”

“Our study looked only at the amount of erythritol in a single serving,” emphasized Professor Desouza.

“For individuals consuming multiple servings daily, the potential impact could be even more pronounced.”

The researchers caution that their findings are based on lab research conducted on cells, necessitating larger-scale studies involving human subjects.

Nonetheless, they advise consumers to check product labels for erythritol or “sugar alcohol.”

“Considering the epidemiological evidence informing our research, along with our cellular discoveries, monitoring the intake of such non-nutritive sweeteners seems wise,” Professor Desouza remarked.

The study was published today in the Journal of Applied Physiology.

____

Auburn R. Berry et al. 2025. The non-nutritive sweetener erythritol negatively affects brain microvascular endothelial cell function. Journal of Applied Physiology 138(6):1571-1577; doi:10.1152/japplphysiol.00276.2025

Source: www.sci.news

Scientists Say Learning Music Can Reverse Brain Aging, Even in Older Adults

Recent research indicates that older adults who play musical instruments tend to have healthier brains.

One investigation examined the impacts of decades of music practice, while another focused on learning new instruments later in life.

In both studies, engaging in music was linked to better brain health and a decrease in age-related cognitive decline.

The first study was published in PLOS Biology and involved collaboration between Canadian and Chinese researchers. They recruited 50 adults with an average age of 65, half of whom had been playing instruments for at least 32 years, while the others had no musical experience.

Additionally, they included 24 young adults with an average age of 23 who had no musical training.

The researchers utilized magnetic resonance imaging (MRI) to assess blood flow in the brains of the participants.

During the scans, participants listened to a recording of a speaker embedded in background babble of 50 other voices and were tasked with identifying what the main speaker was saying.

The scans revealed that older musicians’ brains responded to challenges similarly to those of the younger participants.

Nonetheless, the older adults still showed signs of age-related decline. Specifically, older non-musicians exhibited strong neural connections on the right side of the brain that musicians lacked, which could reflect additional compensatory strain on their brains.

“The brains of older musicians remain finely tuned thanks to years of training, like a well-maintained instrument that doesn’t need to be played louder to be heard,” stated co-author Dr. Yi from the Chinese Academy of Sciences.

“Our findings suggest that musical experience helps mitigate the additional cognitive strain typically associated with age-related challenges, particularly in noisy environments.”

A 2025 YouGov poll revealed that 25% of UK adults can play at least one instrument, with the guitar being the second most favored instrument after the piano.

As individuals age, cognitive functions such as memory, learning, and perception often deteriorate, eventually contributing to dementia.

However, researchers posit that cognitive reserve—the brain’s capability to manage damage and decline—can enhance resilience against this deterioration.

The precise mechanisms remain unclear, as noted by Morten Scheibye-Knudsen, Associate Professor of Aging at the University of Copenhagen, Denmark, in an interview with BBC Science Focus.

Some studies suggest that “exercising” the brain through activities like playing instruments, learning new languages, and solving puzzles can improve brain health, but results from other research have been inconsistent.

“Overall, we advocate for brain training, but the evidence is not conclusive,” Scheibye-Knudsen remarked.

Conversely, another recent study, published in Imaging Neuroscience, indicated that musical practice can enhance brain health, even when individuals start playing in later life.

According to a 2024 poll from the University of Michigan, 17% of US adults aged 50-80 engage in playing instruments at least several times a year – Credit: DMP via Getty

Researchers at Kyoto University in Japan followed up an earlier study in which 53 older adults (average age 73) took music lessons for four months. The initial findings showed no significant differences in brain health among participants.

Four years later, the same participants underwent MRI scans (13 of whom had maintained their music practice).

Those who ceased playing their newly learned instruments showed declines in memory performance, with a noticeable reduction in the volume of the putamen—a brain region associated with motor function, learning, and memory.

However, those who continued playing music over the four years exhibited no cognitive decline.

Scheibye-Knudsen noted that the study demonstrates that “playing an instrument not only helps preserve cognitive function as we age, but it may also directly contribute to maintaining the structural integrity of the brain.”

He added, “Engaging in music beyond what this study covered offers additional advantages, such as enhanced social interaction.”

“I encourage people to start making music; it’s never too late to learn.”


About Our Experts

Morten Scheibye-Knudsen is an associate professor of aging at the University of Copenhagen, Denmark, and leads the Scheibye-Knudsen Research Group. He also serves as the president of the Nordic Aging Association.

Source: www.sciencefocus.com

Brain Changes from Eating Disorders Mirror Those Seen in OCD and Autism

False-colored nuclear magnetic resonance images of children’s brains

CNRI/Science Photo Library

New research indicates that children with anorexia nervosa are undergoing significant brain changes that go beyond what starvation can explain. This insight helps clarify the neurological mechanisms behind the disorder, potentially paving the way for improved treatment strategies.

Anorexia nervosa is marked by severe dietary restriction and a distorted body image, yet it remains poorly understood. While previous studies have shown that the brain’s outer layer, or cortex, is notably thinner in these individuals, it has been uncertain whether such changes stem from malnutrition or are intrinsic to anorexia.

Clara Moreau from the University of Montreal, Canada, examined brain scans of children with anorexia alongside those with Avoidant/Restrictive Food Intake Disorder (ARFID). Although both conditions involve significant food restriction and weight loss, ARFID lacks the body image concerns that characterize anorexia. Instead, individuals with ARFID may avoid food due to sensory sensitivities, disinterest in eating, or fear of adverse consequences like choking, vomiting, or gastrointestinal distress. The comparison could reveal which brain changes are unique to each condition and which stem from malnutrition, according to Moreau.

The study analyzed brain scans from 124 children diagnosed with anorexia, 50 with ARFID, and 116 without eating disorders. All participants were under 13 years old and resided in France. Researchers examined the extent of brain differences between those with and without eating disorders.

On average, children diagnosed with anorexia exhibited a significantly thinner cortex compared to those without eating disorders. Once body mass index (BMI) was taken into account, anorexia correlated with cortical thinning across 32 brain regions, particularly in the superior parietal lobule, an area involved in processing sensory information. “This aligns with our understanding, as we know anorexic patients often struggle with their perception of weight and size,” stated team member Anael Ayrolles from the University of Paris.


These alterations are akin to those observed in older adolescents and adults with anorexia, notes Moreau. “The effect size is among the largest in psychiatry,” she comments. “It looks as if they have experienced accelerated brain aging or early Alzheimer’s disease, though they show no symptoms of Alzheimer’s. If their BMI is normalized, however, the brain often recovers, though not in every case.”

In contrast, no significant differences in cortical thickness were observed between children with ARFID and those without any eating disorders. “We anticipated some overlap with anorexia potentially reflective of BMI,” explains Moreau. “However, our findings did not reveal many similarities between the two conditions.” The reason for this remains unclear, especially since this is the inaugural brain imaging study focused on ARFID. Given that ARFID typically manifests before the age of five, the brain may have adapted to limited food intake, suggests Moreau.

The researchers subsequently contrasted these brain differences with findings from previous studies of other conditions, including obsessive-compulsive disorder (OCD), ADHD, and autism. They found a notable correlation between anorexia and OCD, whereas ARFID displayed brain changes similar to those associated with autism. This fits, Moreau says, because sensory sensitivity is common to both autism and ARFID, while OCD and anorexia share obsessions, rituals, and preoccupations.

Nevertheless, individuals with OCD and anorexia frequently present other mental health challenges, notes Joanna Steinglass from Columbia University in New York. Approximately 14% of those diagnosed with anorexia also meet the criteria for OCD. This complicates the understanding of whether a genuine neurological resemblance exists between the two conditions or if other mental health challenges underpin this correlation.

“We were cautious not to over-interpret our results,” said Ayrolles. However, these discoveries imply that malnutrition alone may not account for all the brain changes observed in anorexia. “Mental illness is fundamentally a brain-based illness, and understanding this helps us address patient experiences more effectively, often leading to less blame,” remarks Steinglass. “This insight could drive the development of more effective treatments.”


Source: www.newscientist.com

Life-Saving Treatments for Fatal Genetic Disorders Through Brain Immune Cell Replacement

Microglia are specialized immune cells in the brain

Science Photo Library/Alamy

The process of replacing immune cells in the brain halts the advancement of a rare and terminal brain disorder known as ALSP. This also paves the way for future clinical trials targeting other neurological ailments.

Extensive research indicates that impaired microglia—specialized immune cells within the brain—play a role in various neurological disorders, including Alzheimer’s disease and schizophrenia. The term ALSP stands for adult-onset leukoencephalopathy with axonal spheroids and pigmented glia, characterized by mutations in genes responsible for the survival of these cells, resulting in a reduced number of microglia and leading to progressive cognitive decline. Currently, no effective treatment exists for this fatal illness.

To address this, Bo Peng from Fudan University in China and his team employed a novel treatment called microglia replacement therapy. Prior experiments in rodents have shown that implanted stem cells—capable of developing into different cell types—can effectively replace microglia. However, the existing microglia in the brain must first be eliminated to make room. This can be achieved with drugs that block CSF1R, a protein that microglia depend on for survival.

Pursuing this avenue, Peng and his colleagues conducted initial tests on five mice with genetic mutations analogous to those associated with ALSP. Because these mutations already impair microglial survival, the researchers did not need to deplete the existing microglia with drugs first. They then transplanted stem cells from healthy mice into the affected mice. Fourteen months later, treated mice exhibited approximately 85% more microglia in their brains compared to six untreated mice harboring the same mutation. Notably, the treated mice also demonstrated improvements in motor function and memory.

Encouraged by these promising findings, the researchers extended the treatment to eight individuals diagnosed with ALSP, using donor stem cells without preconditions. One year post-treatment, brain scans revealed minimal changes in participants compared to scans taken before the procedure. In contrast, four untreated individuals displayed significant brain deterioration and lesions over the same period. This implies that microglial replacement therapy effectively halted the progression of the disease.

At the study’s outset, all participants underwent cognitive assessments using a 30-point scale, where a decrease in score indicated cognitive decline. Reassessments a year later showed that, on average, scores remained stable for those who received the microglia replacements.

These results point to microglial replacement therapy being a potentially effective solution for ALSP. However, since this represents the inaugural human trial, “we remain unaware of any potential side effects,” comments Peng. “Given the rapidly progressive and lethal nature of this disease, prioritizing benefits over possible side effects might be crucial.”

Chris Bennett from the University of Pennsylvania cites the historical use of stem cell transplants for treating neurological disorders. “It has demonstrated effectiveness, particularly through microglia replacement,” he states. Recent FDA approvals for two similar therapies addressing other rare brain conditions further support this. “While prior studies may not have used this exact terminology, they effectively addressed similar conditions,” Bennett elaborates. “I’d describe this as a smart and innovative application of stem cell transplants. Nonetheless, microglia replacement therapy has been evolving for decades.”

Despite this, the results underscore the broader promise of microglia replacement therapy. Experts believe the strategy could one day address more common brain disorders. For example, certain genetic mutations that significantly raise the risk of Alzheimer’s disease also affect microglial function, and replacing the malfunctioning cells with healthy counterparts could offer a promising avenue for treatment.


Source: www.newscientist.com

The Role of Your Young Brain and Immune System in Longevity

Not all organs seem to be equally important for longevity

Westend61 GmbH / Alamy

In the quest for a long life, it appears that not all organs hold equal significance. Research indicates that maintaining a youthful brain and immune system is crucial, overshadowing even the aging of the heart or lungs.

We already know that different organs age at varying rates, but which ones most significantly affect lifespan has remained unclear, says Hamilton Oh at the Icahn School of Medicine at Mount Sinai, New York, who led the work.

To explore this, his team measured the levels of around 3,000 proteins in blood samples from more than 44,000 participants aged between 40 and 70, all part of the UK Biobank study.

Leveraging genetic data from earlier studies, the researchers mapped where in the body these proteins are produced, identifying sets that were notably concentrated in 11 organs and systems, including the immune system, brain, heart, liver, lungs, muscles, pancreas, kidneys, intestines and adipose tissue. High concentrations of these proteins in a given organ suggest they play important roles in its functioning.

The team then trained machine learning models on half of the data to estimate participants’ ages from their protein levels, building a distinct model for each of the 11 organs and systems. These predictions generally matched participants’ actual ages, though the models sometimes over- or underestimated them, supporting the idea that organs do indeed age at different rates, says Oh.

Using the trained models, the researchers then predicted the organ and immune system ages of the other half of the participants, who were monitored for an average of 11 years after their blood samples were taken.
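
The workflow described above (per-organ models trained on half the cohort, then used to compute each organ’s “age gap” for the other half) can be sketched in a few lines. This is a hedged illustration, not the study’s actual code: the protein data here is synthetic, the organ-to-protein groupings are invented placeholders, and plain least-squares regression stands in for whatever models the researchers used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: protein levels and
# chronological ages for a few hundred "participants" (the real study
# used ~3,000 proteins from more than 44,000 UK Biobank participants).
n_people, n_proteins = 400, 30
ages = rng.uniform(40, 70, n_people)
proteins = rng.normal(size=(n_people, n_proteins)) + 0.05 * ages[:, None]

# Hypothetical mapping of protein columns to organ systems (the study
# grouped its proteins into 11 organs and systems).
organ_proteins = {"brain": slice(0, 10), "immune": slice(10, 20), "heart": slice(20, 30)}

# Train on one half of the cohort, predict on the other, as in the study.
train = np.arange(n_people) < n_people // 2
test = ~train

def fit_predict(X_train, y_train, X_test):
    """Ordinary least squares with an intercept column (a simple
    stand-in for the study's machine learning models)."""
    A = np.c_[np.ones(len(X_train)), X_train]
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.c_[np.ones(len(X_test)), X_test] @ coef

# One model per organ; the "age gap" (predicted minus actual age) is the
# quantity the study linked to mortality risk.
age_gaps = {}
for organ, cols in organ_proteins.items():
    pred = fit_predict(proteins[train][:, cols], ages[train], proteins[test][:, cols])
    age_gaps[organ] = pred - ages[test]

for organ, gap in age_gaps.items():
    print(f"{organ}: mean age gap {gap.mean():+.2f} years")
```

A positive mean gap for an organ would mark it as aging faster than the person’s chronological age suggests; in the study, such gaps in the brain and immune system carried the strongest link to mortality.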

They discovered that having even one prematurely aged organ, or an aging immune system, correlated with a 1.5 to 3 times higher risk of death during follow-up, with the risk climbing as the number of aging organs increased.

Interestingly, having a heart or lungs that appeared considerably younger than expected was not linked with a lower risk of death during the study period. Having a youthful brain or immune system, however, was associated with a roughly 40% reduction in the risk of death, and having both together cut the risk by 56%.

“The brain and immune system influence numerous other bodily functions, so it’s expected that their deterioration could significantly impact life expectancy,” remarked Alan Cohen from Columbia University in New York.

Nonetheless, Cohen cautions that protein markers may not entirely encapsulate the aging process. “There may be gaps in our understanding of the exact origins of these proteins. Certain organs may release their proteins into the bloodstream more readily than others, skewing perceptions of their importance,” he notes.

Moreover, the findings need to be confirmed in more ethnically and economically diverse populations, as the current participants were predominantly affluent people of European ancestry, says Richard Shiou at King’s College London. Oh and his team are planning further studies to address this.

Even if the findings hold up, there are as yet no proven ways to slow aging of the brain and immune system. Oh says that pinpointing markers of aging in these organs could pave the way for drugs that target them.


Source: www.newscientist.com