Vitamin K, a crucial nutrient found primarily in green vegetables, may play a vital role in safeguarding the brain from cognitive decline.
Recent research suggests that vitamins, particularly vitamin K, could help in preserving the cells of the hippocampus, which is the brain’s memory center.
In a recent study, scientists fed 60 middle-aged mice diets containing either low or regular amounts of vitamin K for six months. Subsequent behavioral tests revealed the vitamin’s impact on mouse learning and memory.
The study showed that mice lacking vitamin K struggled with memory and learning tasks. Compared to mice on a regular diet, those deficient in vitamin K had difficulty recognizing familiar objects, indicating memory loss. They also faced challenges in spatial learning tasks, as evidenced by their performance in a water maze.
Green vegetables like spinach, kale, lettuce, Brussels sprouts, broccoli, and cabbage are excellent sources of vitamin K. Avocados and kiwi fruits also contain high levels of this nutrient – Credit: Mediterranean via Getty
Further analysis of the mice’s brain tissue revealed reduced neurogenesis in the hippocampus of vitamin K-deficient mice. Neurogenesis, the process of generating new neurons, is essential for maintaining brain health and protecting against damage.
“Neurogenesis is believed to be crucial for learning and memory functions, and its impairment may contribute to cognitive decline,” stated Tong Zheng, a research scientist at the Human Nutrition Research Center on Aging (HNRCA) at Tufts University.
In addition to reduced neurogenesis, the brains of vitamin K-deficient mice also showed signs of inflammation, further linking vitamin K deficiency to cognitive decline.
While the study highlights the importance of vitamin K, researchers emphasize the significance of obtaining nutrients from a balanced diet rather than relying on supplements.
“It’s essential for people to consume a healthy diet rich in vegetables,” advised Professor Sarah Booth, senior author of the study and director of the HNRCA.
Most individuals typically obtain sufficient vitamin K from their diet, with sources like spinach, kale, peas, Brussels sprouts, broccoli, cabbage, parsley, avocados, and kiwi. However, older adults are more prone to vitamin K deficiency.
The human brain is so complex that even the scientists who study it struggle to understand it. A piece of nerve tissue the size of a grain of sand can be packed with hundreds of thousands of cells connected by miles of wiring. In 1979, the Nobel Prize-winning scientist Francis Crick concluded that mapping the anatomy and activity of even a cubic millimeter of brain matter would forever exceed our grasp.
“It is no use asking for the impossible,” Dr. Crick wrote.
Forty-six years later, a team of more than 100 scientists has achieved that impossible feat, recording the activity of cells in a cubic millimeter of a mouse’s brain and mapping its structure. In doing so, they accumulated 1.6 petabytes of data, the equivalent of 22 years of non-stop high-resolution video.
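As a rough sanity check on that comparison (my own arithmetic, not a figure from the study): 1.6 petabytes is about 1.28 × 10^16 bits, and 22 years is roughly 6.9 × 10^8 seconds, which works out to a sustained stream of about 18 megabits per second, a plausible bitrate for high-resolution video.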
“This is a milestone,” said Davi Bock, a neuroscientist at the University of Vermont. The study was published in the journal Nature on Wednesday. Dr. Bock said that the advances that made it possible to chart a cubic millimeter of brain should enable researchers to map the wiring of an entire mouse brain.
“It’s completely doable and I think it’s worth doing,” he said.
More than 130 years have passed since the Spanish neuroscientist Santiago Ramón y Cajal first spied individual neurons under a microscope, revealing their unique branching shapes. Scientists of subsequent generations worked out many of the details of how a neuron sends voltage spikes down a long arm called an axon. Each axon makes contact with the small branches, or dendrites, of neighboring neurons. Some neurons excite their neighbors into firing voltage spikes of their own. Others quiet other neurons.
Human thought somehow emerges from this combination of excitation and inhibition. But how that happens has remained a deep mystery, because scientists could study only a small number of neurons at a time.
Over the past few decades, technological advances have allowed scientists to begin mapping brains in their entirety. In 1986, British researchers published the circuit of a small worm, comprising 302 neurons. Researchers have since charted bigger brains, including the roughly 140,000 neurons of the fly brain.
So is Dr. Crick’s impossible dream in fact possible? In 2016, the US government began a $100 million effort to scan a cubic millimeter of mouse brain. The project, called MICrONS (short for Machine Intelligence from Cortical Networks), was led by scientists at the Allen Institute for Brain Science, Princeton University, and Baylor College of Medicine.
The researchers zeroed in on a part of the mouse brain that receives signals from the eyes and reconstructs what the animal sees. In the first phase of the study, the team recorded neuronal activity in that area as they showed mice videos of different landscapes.
In the dimly lit room, broken symphonies of rattles, hums, and wobbles danced off the walls. However, the musicians responsible were nowhere to be seen.
Upon closer inspection, fragments of performers could be discerned, although their presence was not palpable.
In the middle of the room, spectators hovered around an elevated pedestal, craning their necks to glimpse the brains behind the operation. Beneath the magnifying lens lay two white masses resembling miniature jellyfish. Together, they constituted a “mini-brain” grown in the lab from the cells of the late American composer Alvin Lucier.
“You’re peering into the abyss”: the central pedestal of Revivification, housing the “mini-brain” grown from Lucier’s cells. Photo: Rift Photography
Lucier, a trailblazer of experimental music, died in 2021. But here in the Art Gallery of Western Australia, his legacy has been resurrected through cutting-edge neuroscience.
“Gazing down at its central pedestal, one pierces the veil,” remarks Nathan Thompson, the project’s artist and creator. “You peer deep within, observing what is alive. Unlike yourself.”
The four “monsters” who orchestrated the resurrection: Guy Ben-Ary, Matt Gingold, Nathan Thompson, and Stuart Hodgetts. Photo: Rift Photography
The revival is the handiwork of a self-proclaimed “four monsters”, a tight-knit team of scientists and artists who have spent decades pushing the boundaries of biological art: Thompson; fellow artists Guy Ben-Ary and Matt Gingold; and neuroscientist Stuart Hodgetts.
Lucier proved an ideal collaborator. In 1965, he became the first artist to use brain waves to produce live sound, in a groundbreaking solo performance. In 2018, the team, long-time admirers of Lucier’s work, brainstormed ideas with him. By 2020, aged 89 and battling Parkinson’s disease, Lucier consented to give blood for the resurrection.
Dear reader, I have a confession: I suffer from the condition that young people call “brain rot.” That is, I can’t think deeply after scrolling too much on my phone. These days, it’s difficult for me to finish a book.
Many people have this problem, and it has spawned a category of minimalist tech products that strive to free us from distraction, from artificially intelligent lapel pins that take notes to phones that offer only basic features.
The latest example is the $600 Light Phone III, a stripped-down mobile phone from a Brooklyn startup that does very little. The latest version, which began shipping in March and is set for a wider release in July, can call, text, take photos, show map directions, and play music and podcasts, and not much more.
There is no web browser and no app store. That means no hailing an Uber, no Slack, no social media. There isn’t even email.
“You use it when you need it, and when you put it back, it disappears from your life,” said Kaiwei Tang, chief executive of Light, the startup that has developed multiple iterations of the Light Phone over the past nine years. “Many customers tell us that they feel less stressed and that they’ve become more productive and creative.”
I used it as my main phone for a week because I wanted to know whether the Light Phone could cure my brain rot. There were moments when I enjoyed it. I no longer stared at a screen while waiting for the train, resting at the gym, or eating alone. Phone calls sounded wonderfully clear. The maps app did an admirable job navigating me around town.
It reminded me of a simpler time, when we mostly used our phones for conversation before putting them away to focus on other tasks.
But over the week, the dumb phone’s shortcomings chipped away at my enjoyment. I found I couldn’t enter the train station, look up the name of a new restaurant, or control my garage door.
Part of this has nothing to do with the Light Phone itself, which is a decent product, and everything to do with how society as a whole has come to rely on advanced smartphone capabilities.
Here is how my week of running errands, commuting, and going out on a lower-tech phone unfolded.
Getting started
When I set up the Light Phone review unit over the weekend, the device, a black rectangular slab, was quite bare-bones. The phone’s menu was a black screen showing a white text list of functions: phone, camera, photo album, and alarm. To add more tools, I had to log in to a dashboard using the web browser on my computer. There, I was able to install features such as a maps app, a notepad, and a timer.
I was ready to go, so I decided to live without my iPhone for at least a while.
Commuting
On Monday morning, I began my commute by taking the train from Oakland, California, to San Francisco. When I arrived at the station, I realized I couldn’t get in without my iPhone. That’s because, years ago, I had converted my physical transit pass, a Clipper card, into a virtual card stored in my smartphone’s mobile wallet.
The Light Phone has no mobile wallet to load a virtual transit card onto, so I trudged back home to get my iPhone and ended up arriving at the office 30 minutes late.
Going to the gym
One night, I hit a similar snag at a rock-climbing gym. To enter, members use their phones to log in to the gym’s website and generate a temporary barcode that is scanned at the entrance. The Light Phone had no web browser and couldn’t create the barcode, so I had to wait in line at the front desk.
Texting a friend and taking photos
I added some of my closest friends to the Light Phone’s address book and texted them to explain my experiment. Typing on the device’s keyboard felt slow, and with no autocorrect to fix typos, my side of the conversation stayed concise.
The teasing continued when I sent people photos. The dim, grainy images looked as if they had been taken with a phone camera from at least 15 years ago.
“Retro!” said one friend in response to a blurry photo of my daughter.
“Wow, that’s bad,” another friend said of the dimly lit photo of my corgi Max.
A photo of the author’s corgi, Max, taken on the Light Phone, looked dim and grainy. Credit: Brian X. Chen/The New York Times
The founders of Light said they are proud of the Light Phone camera, which has a nostalgic feel to it.
Running errands
One afternoon, I had to drop off an Amazon return at the UPS Store. I had selected the most convenient return option, which involved displaying a QR code for scanning.
The problem? The Light Phone had no email app or web browser with which to pull up the code. Instead, I loaded the code on my computer screen and snapped a mediocre photo of it with the phone.
When I brought the package to UPS and presented the photo, I held my breath, hoping the image was clear enough. The UPS employee held up the scanner and, after three attempts, it beeped and the shipping label printed.
What a relief. And what a hassle.
Lunch
Another afternoon, my wife and I went out for an impromptu lunch. As I backed out the car, I had to ask my wife to use her iPhone to close the garage door with the MyQ app. (Our physical garage door opener stopped working years ago.)
After that, I tried to remember the name of a new sushi restaurant I had recently read about on a food blog. There was no way to dig up the blog post on the Light Phone. In the end, we guessed and went to the wrong restaurant. Still, it was nice to have lunch together without the temptation to check my email.
Conclusion
I admire the goal of the Light Phone, but my experience showed that there is no product we can realistically buy that will bring us back to a simpler era. Too many aspects of our lives revolve around highly capable smartphones: getting around town, working, paying for things, controlling home appliances.
The Light Phone experiment reminded me of glamping.
I can’t think of many people who could realistically make the Light Phone work as their only mobile phone. Too many of us rely on tools like Slack and email to communicate.
The Light Phone may be a good option for unplugging while you’re off work, as a secondary leisure phone akin to a weekend car. But even then, the camera quality may be a deal-breaker for some.
Mr. Tang, Light’s chief executive, acknowledged that the Light Phone is not for everyone, but added that some parents are considering buying it for their children so they won’t be distracted at school. The company is also working on adding more tools, including the ability to make mobile payments and request Lyft rides.
An investigation is underway at a Boston-area hospital where five nurses who worked in the same department developed brain tumors.
Newton-Wellesley Hospital, part of Mass General Brigham, reported that a total of 11 employees in its fifth-floor obstetrics department have raised health concerns, with five of them diagnosed with benign brain tumors. Two of the tumors are meningiomas, the most common type of benign brain tumor.
“The investigation did not find any environmental risks associated with the development of brain tumors,” said Jonathan Sonis, the hospital’s chief medical officer, in a statement issued jointly with associate chief nursing officer Sandy Muse.
The hospital conducted the investigation in collaboration with government health and safety officials, ruling out disposable masks, water supplies, nearby X-rays, and chemotherapy treatments as possible sources of the issue.
“Based on these findings, we can assure our staff and patients that there are no environmental risks within our facilities,” the administrator assured.
Exterior of Mass General Brigham Newton Wellesley Hospital in Newton, Massachusetts. Google Maps
The Massachusetts Nurses Association, which is currently negotiating nurse compensation at the hospital, said it remains committed to an ongoing investigation.
The union noted that it was nurses’ concerns about workplace health that led to the discovery of the individuals with tumors.
“The hospital’s environmental tests were not comprehensive, and they only spoke to a few nurses,” stated MNA spokesman Joe Markman. “The hospital cannot sweep this issue under the rug.”
The state health agency and the federal Occupational Safety and Health Administration have yet to provide conclusive information on the matter.
According to the American Cancer Society, a cancer cluster would involve an unusually high number of cancer cases within a specific area sharing common characteristics.
“Four out of ten people in the US develop cancer during their lifetime,” stated the association, emphasizing the frequency of cancer occurrences.
Ralph Holloway, a pioneering anthropologist who emphasized the importance of changes in brain structure in human evolution, died on March 12 at his Manhattan home at the age of 90.
His death was announced by the Department of Anthropology at Columbia University, where he had been a professor for nearly 50 years.
Holloway’s theory challenged the notion that brain size alone distinguished humans from apes and early ancestors, highlighting the significance of brain organization.
Although no brains from millions of years ago survive, Dr. Holloway overcame this limitation by creating latex endocasts of fossil skulls.
In a 2008 paper, he detailed how he extracted information from these casts, which preserve impressions of the brain’s outer surface and so provide insight into brain structure.
Using endocasts, Dr. Holloway concluded that the fossil skull from a quarry at Taung, South Africa, known as the Taung Child, belonged to an early human ancestor, supporting Raymond Dart’s controversial claim.
His meticulous research included studying natural endocasts found in the quarry to validate his conclusions, emphasizing the importance of independent investigation in scientific discovery.
Dr. Holloway’s focus on the lunate sulcus, a groove toward the back of the endocast, provided evidence that its position matched that seen in human brains, confirming the accuracy of Dr. Dart’s initial findings.
The contentious debate surrounding the Taung Child has subsided, with Dr. Holloway’s and Dr. Dart’s conclusions about the lunate sulcus now widely accepted in the scientific community.
Dr. Holloway’s emphasis on brain structure over volume played a pivotal role in validating human ancestry, highlighting the significance of reorganization in evolutionary development.
Throughout his career, Dr. Holloway’s dedication to studying brain evolution through three-dimensional modeling remained unwavering, emphasizing the importance of understanding the human brain’s journey to its current complexity.
His contributions, such as his work on the Taung Child, continue to shape our understanding of human origins and evolution.
Dr. Holloway’s legacy extends beyond his scientific achievements, as he leaves behind a lasting impact on the field of anthropology and evolutionary studies.
His commitment to rigorous research, innovative methods, and interdisciplinary collaboration sets a standard for future generations of scientists.
Dr. Holloway’s contributions will continue to inspire and guide anthropologists, researchers, and educators in their quest to unravel the mysteries of human evolution.
His impact will be felt for generations to come, shaping the future of evolutionary studies and advancing our understanding of human origins.
Ralph Holloway’s legacy lives on through his groundbreaking research and profound influence on the field of anthropology.
His work continues to shape our understanding of human evolution and the complexities of brain development.
Scientists have created a device that can translate thoughts of speech into spoken words in real time.
Although still experimental, the goal is to develop a brain-computer interface that can give voice to people who are unable to speak.
In a recent study, the device was tested on a 47-year-old woman with quadriplegia who had been speech-impaired for 18 years since experiencing a stroke. The device was implanted in her brain during surgery as part of a clinical trial.
According to Gopala Anumanchipalli, co-author of the study published in Nature Neuroscience, the device “translates the intent to speak into fluent sentences.”
Most brain computer interfaces for speech experience a delay between thought and speech, which can disrupt conversations and cause misunderstandings. However, this new device is considered a significant advancement in the field.
The device works by recording brain activity using electrodes and generating speech based on this activity. An AI model is then trained to translate this neural activity into spoken words.
A UCSF clinical research coordinator connects a neural data port to the head of Ann, a study participant, in El Cerrito, California, on May 22, 2023. Noah Berger/UCSF via AP
Anumanchipalli of the University of California, Berkeley, explains that the device operates similarly to existing systems used for transcribing meetings and phone calls in real time.
The implant sits in the brain’s speech center and translates signals into speech as the sentences are formed. This “streaming approach” delivers a constant flow of audio, without waiting for a sentence to finish.
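To make the distinction concrete, here is a minimal sketch of the streaming idea, not the study’s actual model: rather than buffering neural activity until an utterance ends, a streaming decoder emits an audio chunk for each short window of samples as it arrives. The window size and the `decode_window` function are hypothetical stand-ins for the trained AI.

```python
from collections import deque

WINDOW = 80  # hypothetical: decode one chunk per 80 samples of neural data

def decode_window(samples):
    """Stand-in for the trained AI model: maps one window of
    multi-electrode activity to a short chunk of synthesized audio."""
    return b""  # placeholder audio bytes

def streaming_decode(neural_stream, play_audio):
    """Emit speech continuously instead of waiting for sentence boundaries."""
    buffer = deque()
    for sample in neural_stream:      # samples arrive in real time
        buffer.append(sample)
        if len(buffer) == WINDOW:
            play_audio(decode_window(list(buffer)))  # speak immediately
            buffer.clear()            # then start on the next window
```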
This rapid decoding lets the device keep up with the pace of natural speech, making the resulting language sound more natural, according to Brumberg.
Funded in part by the National Institutes of Health, further research is necessary before the technology can be widely available. Anumanchipalli suggests that with sustained investment, the device could potentially be accessible to patients within the next decade.
Chinese researchers have reported progress in animal-to-human organ transplantation, announcing a successful pig kidney transplant on Wednesday. They believe pig livers may also prove useful in the future.
The Chinese patient is the third person worldwide known to be living with a gene-edited pig kidney. The research team has also experimented with implanting a pig liver into a brain-dead individual.
Scientists are genetically modifying pigs to make their organs more human-like in the hopes of addressing the shortage of organ transplants. While previous xenografts in the US were short-lived, two recipients of pig kidneys – an Alabama woman in November and a New Hampshire man in January – have shown promising results. Clinical trials in the US are now commencing.
Nearly three weeks after the kidney transplant, the Chinese patient is reported to be doing “very well” with the pig kidneys functioning effectively, according to Dr. Lin Wang of Xijing Hospital. The patient is a 69-year-old woman who has been suffering from kidney failure for eight years.
The next challenge for xenotransplantation is learning to transplant pig livers. In an experiment reported on Wednesday, pig liver was successfully transplanted into a brain-dead individual for 10 days. While the pig liver produced bile and albumin, essential for basic organ function, it did not perform as well as a human liver.
Dr. Wang believes that a pig liver could potentially support a failing human liver to some extent. In the US, a similar approach is being studied with pigs developed by eGenesis, in which a pig’s liver is attached externally to support a brain-dead individual’s liver function.
In China, the team led by Dr. Wang did not remove the deceased person’s own liver but instead implanted the pig liver nearby.
Dr. Parsia Vagefi, a liver transplant surgeon, commented on the experiment, stating that while it shows promise, there are still many questions that need answers. Dr. Wang’s team plans to analyze the results of another brain-dead individual who received a pig liver transplant.
Last year, another Chinese hospital reportedly transplanted a pig liver into a living patient after removing part of their cancerous liver, but the outcome of the experiment is unclear.
Budgerigars are among the most talkative of birds, and it is reflected in their brains. Budgie brains contain maps of vocal sounds similar to those found in the human brain, something not seen in other birds.
Budgerigars (Melopsittacus undulatus), also known as parakeets, are small parrots native to Australia. They are epic vocal learners and can mimic a variety of sounds, including human speech. One budgie, named Puck, had a vocabulary of about 1,728 words, according to Guinness World Records. “The ability to mimic phonetically is very rare in the animal kingdom,” says Michael Long at New York University.
Long and Zetian Yang, also at NYU’s medical school, used silicon probes to record electrical activity in the budgies' brains over extended periods. They focused on a part of the forebrain, the central nucleus of the anterior arcopallium, which was known to be involved in the motor control of vocalization. As the budgies made calls, Long and Yang tracked how the electrical activity changed.
“Our research was the first to measure parrot brain activity during vocalization,” Long says.
The pair discovered specialized neurons in the central nucleus of the anterior arcopallium. “There are cells that are active for consonants,” Long says. Others fire for vowels, and some are active for high-pitched sounds, others for low ones.
Long compares this brain structure to a keyboard. “There's this kind of key, or in this case a set of brain cells, that can represent each of these vocal outcomes, and the bird can play whatever it wants,” he says. “What the parrot has arrived at is this beautiful and elegant solution to creating vocal sounds.” The human brain has a similar vocal map.
Long and Yang repeated the experiment with zebra finches (Taeniopygia guttata), which are not vocal mimics. “They have one song they learn,” Long says. “It's about two seconds, sometimes less.” Even so, it takes a zebra finch several months to perfect it.
Unlike the budgerigars, the zebra finches showed no sign of a “map” of vocal sounds in their brains. Instead, says Long, a zebra finch develops an activity code for its song that is almost impossible to decipher. The budgie brain, he says, uses a simple, intuitive system to generate complex calls, while the zebra finch brain uses a complex system to make something simple.
“It shows that neural activity and the associated vocal behavior are closer between parrots and humans than between songbirds and parrots,” says Erich Jarvis at Rockefeller University in New York.
“Almost everything we know about the detailed mechanistic basis of learned vocalization comes from a few species of songbirds singing relatively simple songs,” says Jesse Goldberg at Cornell University in New York. “Parrots therefore offer an incredible opportunity to study both the mechanisms and evolution of complex vocal learning and production.”
There are several reasons why this mimicry may have evolved, says Zhilei Zhao at Cornell University. One is courtship. “Females actually prefer males with the ability to copy,” he says, and a male who loses that ability becomes less appealing. Budgies also have a very dynamic social life, forming small groups for a few days at a time. Once a group is established, its members begin to create unique “contact calls”. “People think it might be something like a password for this group,” says Zhao.
Other skilled mimics may have similar vocal maps in their brains. “My very strong speculation is that other parrots have the same functionality, but they simply haven't been explored,” says Zhao. He also suspects something similar in lyrebirds, phenomenal mimics that can imitate even artificial sounds such as camera shutters.
In the long run, Long hopes that studying how budgies produce sounds will help people understand language disorders. People who have had strokes often experience aphasia, finding themselves unable to summon the right words. “You reach for those words and it’s not there,” Long says. “Now we have a fighting chance to understand what we think is at the root of many communication disorders that affect people in devastating ways.”
We share 98.8% of our DNA with our closest living relatives, chimpanzees. Yet despite this almost identical genetic blueprint, chimpanzees have not built civilizations, fought wars, or mastered the art of the TikTok dance routine.
But what exactly makes us stand out? Now, neuroscientists may finally have the answer.
New research published in The Journal of Neuroscience (JNeurosci) examines new data from the brains of humans, chimpanzees, and macaques.
“We were interested in finding what makes different brains tick,” study co-author Professor Rogier Mars told BBC Science Focus. “And the human brain is something we were particularly interested in, for obvious reasons.”
According to Mars, most studies comparing human brains with other animal brains tend to focus on factors such as overall size, the size of a particular region, or the number of neurons. “But our philosophy is that if we really want to understand what is going on, we need to look into how our brains are organized,” he said.
With that approach in mind, Mars and his team set out to investigate. Using published MRI data, from scans similar to those used in hospitals, they created a “connectivity blueprint” for the brains of each of the three species. These blueprints essentially map out which regions of the brain communicate with each other.
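At its simplest, a connectivity map of this kind can be thought of as a region-by-region matrix of how strongly activity in one area tracks activity in another. The toy sketch below is my own illustration of that general idea, not the study’s pipeline:

```python
import numpy as np

# Toy data: activity time series for 5 brain regions, 200 time points each.
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((5, 200))

# Connectivity matrix: pairwise correlation between every pair of regions.
connectivity = np.corrcoef(timeseries)  # shape (5, 5), values in [-1, 1]

# A high |r| between rows i and j suggests regions i and j "talk" to each other.
print(np.round(connectivity, 2))
```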
One area the team expected to find a difference was in the prefrontal cortex. This region is related to complex thinking, planning, and decision making.
This area, often referred to as the “personality center” of the brain, plays an important role in regulating emotions and teaching behaviors. At first glance, it seems to be an obvious place to search for the essence of what makes us human. In fact, this study revealed that this region exhibited more connectivity than in other species.
But was that the whole story?
This image highlights in red the behavioural domains of the left and right hemispheres that showed high divergence in the comparison. - Bryant et al., JNeurosci 2025
“The prefrontal cortex is where researchers tend to look when they search for something unique about humans,” Mars said. “But we found differences in many places, including in the temporal cortex, just above your ears.”
The temporal cortex plays an important role in processing sensory information, especially vision, sound, and language. Given our highly social and cooperative nature, it is perhaps not surprising that these areas are more intricately connected in the human brain.
“We are a very social and cooperative species,” explained Mars. “So these properties are likely the driving force behind the changes we observe.”
All of this suggests that there is no single defining switch that makes humans human. Some believe that a single dramatic evolutionary event propelled us to dominance, but the reality may be more complicated.
Like our not-too-distant relatives in the trees, we are the result of gradual, widespread evolutionary changes that have shaped us over time.
Or, as Mars said, “There’s nothing big that makes us different.”
About our experts
Rogier Mars is a professor of neuroscience at the University of Oxford. His work focuses on the differences between primate brains, especially the human brain. Mars’s research has been published in journals including Nature Communications, Frontiers in Human Neuroscience, and Science.
Have you ever measured your waist-to-hip ratio? Chances are, you probably haven’t. However, there is an important reason why you should start.
Recent research suggests that this measurement may be linked to cognitive decline. The study found that individuals with lower waist-to-hip ratios have a significantly lower risk than those with higher ratios.
Feeling concerned about your numbers? Don’t worry too much just yet – researchers emphasize that your risk is not set in stone. Making healthier dietary choices can actively reduce the risk of cognitive decline and support long-term brain health.
BMI and Waist-to-Hip Ratio
While most scientists use Body Mass Index (BMI) to measure body size by comparing weight to height, this system has faced criticism for its inaccuracies. For instance, muscular individuals may be categorized as overweight even if they are not at risk for diseases like type 2 diabetes or heart disease.
As a result, researchers are increasingly turning to alternative measurements such as the waist-to-hip ratio, which the study’s authors consider a more reliable indicator of size-related health risks than BMI.
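The measurement itself is simple arithmetic: waist circumference divided by hip circumference. A quick sketch, using the WHO’s commonly cited cutoffs for central fat accumulation (roughly above 0.90 for men and 0.85 for women; these thresholds are general guidance, not figures from this study):

```python
def waist_to_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

# Example: a 94 cm waist with 104 cm hips gives a ratio of about 0.90,
# right at the WHO's commonly cited cutoff for men.
print(f"WHR = {waist_to_hip_ratio(94, 104):.2f}")
```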
“We found a connection between healthier waist-to-hip ratios and better cognitive function scores,” Dr Daria Jensen told BBC Science Focus.
The recently published study examined the relationship between diet, body size, and brain health over several decades: 664 British civil servants had their waists and hips measured multiple times while in their 50s and 60s, over a period of approximately 21 years.
Comparing waist and hip sizes indicates the amount of central fat accumulation, which is associated with a higher risk of diseases like type 2 diabetes and heart disease. – Credit: FluxFactory via Getty
Diet Evaluation and Brain Health Measurement
A group of 512 civil servants completed three dietary surveys between the ages of 48 and 60. Scientists assessed dietary quality based on various components including vegetables, fruits, whole grains, nuts, legumes, fats, sugary drinks, meat, salt, and alcohol.
When participants reached about 70 years of age, brain scans were conducted to measure cognitive performance. The findings revealed that middle-aged individuals with healthier diets and lower waist-to-hip ratios had better brain health later in life.
Brain imaging techniques such as magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) were used to analyze the brain structure of the participants, with a particular focus on the hippocampus.
“The hippocampus is crucial in dementia research, and numerous studies have highlighted its importance in memory and learning,” explained Jensen. While previous studies emphasized the significance of hippocampal volume, this study explored its associations with other brain regions.
“We observed a strong link between better diet, functional connectivity of the hippocampus with other brain regions, and waist-to-hip ratio,” Jensen added. Improved white matter connections associated with a slim waist indicated better communication between brain regions.
This suggests that individuals who follow healthier diets and maintain slimmer waists in middle age are at a reduced risk of cognitive decline and diseases like dementia later in life.
“If you’re looking to improve your brain health, it’s never too late to start, but the earlier, the better,” Jensen advised.
The study had limitations: only about 20 per cent of participants were women, as the cohort was recruited from civil servants in the 1980s. Still, Jensen called the study “exciting” and believes it will aid in understanding the link between midlife dietary health and future brain health.
The Alzheimer’s Society estimates that 982,000 people in the UK currently live with dementia. Jensen hopes the study will encourage a shift towards preventive healthcare.
About our experts:
Dr Daria Jensen is a postdoctoral researcher at the Clinic for Cognitive Neurology at the University of Leipzig Medical Center and the Max Planck Institute for Human Cognitive and Brain Sciences in Germany. She also serves as a visiting researcher at the University of Oxford’s Department of Psychiatry and is the corresponding author of the study.
Did you see the orange circle on the left as smaller than the orange circle on the right?
Radoslaw Wincza et al. (2025)
Optical illusions may make you feel like a fool, but you may be able to train your brain to resist them.
“People in the general population can very likely be trained to see through illusions and gain the ability to perceive the world more objectively,” says Radoslaw Wincza at Lancaster University, UK.
Wincza and his colleagues recruited 44 radiologists, with an average age of 36, who had spent years, in some cases more than a decade, spotting small details such as fractures in medical scans. They also recruited 107 university students, with an average age of 23, studying medicine and psychology.
Each participant was shown four illusions, one at a time, on a screen. For each illusion, they had to look at pairs of shapes or lines and choose the larger or longer one.
In three of the illusions, surrounding objects made the larger shape or longer line appear smaller or shorter. The team found that radiologists were less susceptible to these illusions than the students.
“Radiologists have this ability to really home in on the key elements of a visual scene, ignoring unrelated context; they have tunnel vision,” says Wincza. “By zeroing in on their targets, they don’t experience the illusion as much.”
In the fourth illusion, one of the shapes was vertical and its pair horizontal, which made the latter look wider even though it was actually narrower. Both groups were equally susceptible to this illusion, probably because it contained no surrounding objects and so required no tuning out of background distraction.
“It suggests that if people train themselves, they can become less susceptible to illusions,” says Carla Evans at the University of York, UK. Focusing on certain parts of a photograph, for example, could build this ability, but she says more work is needed to see how quickly it develops. “It could take years or weeks.”
Are you looking to keep your mind sharp as you age? One secret could be as simple as regularly exercising your brain with mathematics and reading comprehension, whether it’s at work, at home, or even while reading this article.
A groundbreaking new study led by Professor Eric Hanushek of Stanford University suggests that engaging in mathematics and reading can help prevent age-related cognitive decline. The research shows that individuals who regularly practice these skills do not experience significant declines in cognitive abilities over time, whether they are reading emails, doing calculations, or reading articles like this one. This challenges the notion that cognitive decline is inevitable with age.
The study reveals that cognitive skills typically peak in a person’s 40s before slowly declining. However, those who consistently engage in mathematics and reading, whether at work or in their daily activities, show no decline at all.
“Our findings indicate a significant increase in average skills in literacy and numeracy until the 40s. After this point, there is a slight decrease in literacy skills and a more noticeable decline in numeracy,” the study notes.
This study may inspire you to tackle your taxes – Credit: Skynesher
These findings challenge previous studies that suggested cognitive decline begins in early adulthood. Unlike past research that compared different age groups at one point in time, this study followed the same individuals over several years, providing a more accurate understanding of how cognitive abilities change with age.
Hanushek and his team propose that what was previously thought to be age-related cognitive decline might actually be due to differences in skill levels between generations rather than the natural effects of aging itself.
By analyzing data from literacy and numeracy assessments of Germans aged 16 to 65 and retesting the same group 3.5 years later, the researchers found that women’s math skills declined significantly more over time, pointing to potential gender disparities. Further exploration of these differences is planned for future studies.
“Brain rot” was named Oxford’s word of the year for 2024. It is defined as the “deterioration of a person’s mental or intellectual state” that arises from consuming “trivial” content online, such as TikTok videos.
It’s a term that is often joked about, but what if there is a grain of truth to it? That is the seemingly scary implication of a new study published by a large team of brain scientists based at Tianjin Normal University in China.
What did this study find?
The researchers scanned the brains of over 100 undergraduates, who also completed a survey on their habits around watching short online videos. Participants indicated how much they agreed with statements such as “My life would be empty without short videos” and “Not being able to watch short videos would be as painful as losing a friend.”
Interestingly, the researchers found that those who felt most hooked on short videos had significant differences in brain structure. These participants had more gray matter in the orbitofrontal cortex (OFC), an area near the front of the brain involved in decision-making and emotional regulation. They also had more gray matter in the cerebellum, the small cauliflower-shaped structure at the back of the brain that plays a role in movement and emotion.
The team read this as bad news for TikTok enthusiasts: an enlarged OFC could be a sign of what they describe as “an increased sensitivity to rewards and stimuli associated with short video content.” They speculated that watching too many TikTok videos could have driven this neural expansion.
Similarly, they suggested that an enlarged cerebellum could help the brain process short video content more efficiently, perhaps the result of frequent bingeing. This could create a reinforcement cycle, in which watching more videos strengthens these brain pathways and the habit becomes ever more ingrained.
Over 23 million videos are uploaded to TikTok every day – Photo credit: Getty
But that’s not all. The team also performed a second brain scan, this time tracking participants’ brain activity while they rested with their eyes closed.
Among the most avid viewers, they found greater synchronization of activity across multiple regions of the brain. These included the dorsal prefrontal cortex (an area involved in self-control), the posterior cingulate cortex (an area involved in thinking about oneself), the thalamus (a kind of relay station for brain signals), and the cerebellum.
The researchers suggested that these functional brain differences could reflect a variety of issues among the hooked participants, including a tendency toward excessive social comparison and trouble tearing themselves away from the videos.
They also asked participants to fill out a survey on envious temperament, a trait measured by agreement with statements about other people’s outstanding achievements.
Interestingly, the scientists found that many of the links between video addiction and brain differences were also linked to higher levels of envy. This suggests that feelings of envy may make someone more likely to binge on short videos, and that over time this habit could lead to potentially harmful changes in the brain.
Does TikTok cause brain rot?
If you are an avid consumer of fun online videos, or a concerned parent, the idea that viewing habits can reshape brain structure may come as no surprise.
However, it is important to place this study in a broader historical context, in which new technologies and media have long provoked exaggerated neurological claims. It is also important to understand the study’s deep limitations.
It’s been nearly 20 years since The Atlantic ran a cover story asking, “Is Google Making Us Stupid?” In a nutshell, the answer it asserted was “yes.” The author, Nicholas Carr, lamented that he was once “a scuba diver in the sea of words,” but now, thanks to Google, he zipped “along the surface like a guy on a Jet Ski.”
Countless brain-imaging studies of questionable quality were published in the same era, many aiming to demonstrate the disastrous effects of the World Wide Web on our vulnerable minds.
A few years later, Baroness Susan Greenfield, a neuroscientist, launched a media campaign claiming that “mind change”, the impact of the internet and video games on the brain, was as serious a threat to humanity as climate change.
She even wrote a dystopian novel about the dehumanizing effects of the internet, which received mixed reviews; one critic questioned whether it was one of the worst science fiction books ever written.
Scientists still don’t know how much TikTok affects the brains of young people, but research is underway. – Photo credit: Getty
Almost 20 years later, we’re fine. At the very least, our brains have not been turned to mush. But of course, those earlier scares predate TikTok. Perhaps there is something uniquely damaging about the short, scrollable, throwaway content available today.
I asked Professor Pete Etchells, an expert on the psychological impact of digital technology at Bath Spa University, whether this is plausible. “As far as I know, there is no good science to support the idea that short videos are tangibly or uniquely bad in terms of their impact on the brain,” he says.
So is the short-video brain study good science? The evidence suggests not.
What is wrong with this research?
Let’s take a look at some of the study’s limitations. If the goal was to prove that watching TikTok harms the brain, a more effective approach would have been to scan participants’ brains before and after they consumed different amounts of the supposedly harmful content.
However, this study was entirely cross-sectional, meaning it captured only a single snapshot in time. There was no before-and-after comparison from which to infer cause and effect.
Or, as Etchells says: “[From this study] we can’t say anything about whether watching short videos causes brain changes, or whether certain types of brain structure precede certain types of video consumption.”
“This research doesn’t really add anything that will help us understand how digital technology affects us.”
Even if we accept the researchers’ speculative leap that TikTok videos may have caused the brain changes they observed, there are still several issues to consider.
First, the researchers searched the entire brain for differences that correlated with scores on their video-addiction scale. This approach is a well-known problem in brain-imaging studies because it inflates the risk of false positives: the more comparisons you make, the more likely you are to stumble on random differences that look important but are actually just coincidence.
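That multiple-comparisons problem is easy to demonstrate with a simulation: correlate a made-up “addiction score” with pure noise in enough regions, and some will pass p < 0.05 by chance alone. The numbers below are illustrative, not from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_regions, n_subjects = 1000, 100

# A made-up "addiction score" with no real relationship to anything.
scores = rng.standard_normal(n_subjects)

false_hits = 0
for _ in range(n_regions):
    noise_region = rng.standard_normal(n_subjects)  # pure noise "brain data"
    _, p = stats.pearsonr(scores, noise_region)
    false_hits += p < 0.05

# Expect ~5% of pure-noise regions to look "significant" without correction.
print(f"{false_hits} of {n_regions} noise regions passed p < 0.05")
```

Standard corrections, such as Bonferroni or false-discovery-rate control, exist precisely to guard against this.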
Second, even if we accept that the observed brain differences are real and were caused by watching TikTok, interpreting them involves a lot of speculation. The researchers observed an increase in brain synchronization, known as regional homogeneity (ReHo). But ReHo itself is not inherently a good or bad thing. In fact, other studies have associated increased ReHo in certain brain regions with positive outcomes, such as those observed during meditation training.
Perhaps the biggest flaw is that the study relies on a questionable survey-based measure of short-video addiction that lacks strong scientific validity.
As Etchells put it, “Short video addiction is essentially an invented term, not a formal diagnostic clinical disorder.”
Taken together, these issues suggest that we should not be overly concerned that Tiktok fundamentally shapes the brains of young people in harmful ways.
That said, spending excessive amounts of time watching frivolous videos can still be a problem for some people. Even so, it is more productive to focus on developing healthy media habits than to worry about brain changes or addiction.
“In many cases, when research like this hits the news, it’s a good opportunity to pause and reflect on whether we’re happy with our use of the technology,” says Etchells.
“If there are concerns there, it’s worth thinking about what you can do to address them, while knowing that you also get a lot of benefit from these technologies.”
About our expert, Professor Pete Etchell
Pete is a professor of psychology at Bath Spa University. His research focuses on how playing video games and using social media affects our mood and behavior. He is the author of Lost in a Good Game and is currently investigating whether game mechanics can promote gambling behavior in other parts of our lives.
If you’re like Khloe Kardashian, who recently turned 40, you may have considered testing your biological age to see whether your body is younger than your years. But while these tests can tell you a lot about your body’s aging, they often overlook the aging of your brain. Researchers have now developed a new method to determine how quickly your brain is aging, which could help in predicting and preventing dementia.
Unlike your chronological age, which is based on the number of years since you were born, your biological age is determined by how well your body functions and how your cells age. This new method uses MRI scans and artificial intelligence to estimate the biological age of your brain, providing valuable insights for brain health tracking in research labs and clinics.
Traditional methods of measuring biological age, such as DNA methylation, do not work well for the brain due to the blood-brain barrier, which prevents blood cells from crossing into the brain. The new non-invasive method developed at the University of Southern California combines MRI scans and AI to accurately assess brain aging.
Using AI to analyze MRI brain scans, researchers can now predict how quickly the brain is aging and identify areas of the brain that are aging faster. This new model, known as a 3D Convolutional Neural Network, has shown promising results in predicting cognitive decline and Alzheimer’s disease risk based on brain aging rates.
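To give a flavor of the architecture class involved: a 3D convolutional neural network treats an MRI scan as a three-dimensional grid of voxels and regresses a single number, the predicted brain age. The sketch below is a generic, minimal PyTorch example under my own assumptions; the actual USC model is far larger and trained on real scans.

```python
import torch
import torch.nn as nn

class BrainAge3DCNN(nn.Module):
    """Minimal 3D CNN mapping an MRI volume to a predicted brain age."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                  # halve each spatial dimension
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # pool to one value per channel
        )
        self.regressor = nn.Linear(16, 1)     # single output: predicted age

    def forward(self, volume):                # volume: (batch, 1, D, H, W)
        return self.regressor(self.features(volume).flatten(1))

model = BrainAge3DCNN()
fake_scan = torch.randn(1, 1, 64, 64, 64)     # stand-in 64^3-voxel MRI
print(model(fake_scan).shape)                 # torch.Size([1, 1])
```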
Researchers believe that this innovative approach can revolutionize the field of brain health and provide valuable insights into the impact of genetics, environment, and lifestyle on brain aging. By accurately estimating the risk of Alzheimer’s disease, this method could potentially lead to the development of new prevention strategies and treatments.
Overall, this new method offers a powerful tool for tracking brain aging and predicting cognitive decline, bringing us closer to a future where personalized brain health assessments can help prevent and treat neurodegenerative diseases.
Could scientists use parasites in your brain to treat diseases? The concept of utilizing parasites as a medical tool may sound unconventional, but it offers hope for conditions like Parkinson’s and Alzheimer’s. Researchers believe that if parasites can transport drugs directly to the brain, it could aid in treating these ailments.
An international team of scientists is doing just that, using a single-celled parasite called Toxoplasma gondii, which causes the infection toxoplasmosis. The parasite naturally travels from the human gut to the central nervous system and delivers proteins to host cells along the way. In their experiment, bioengineers manipulated the internal machinery of T. gondii cells to produce and release proteins outside the cell, creating a secretion system.
The team explained that delivering medications to the brain is challenging because of the blood-brain barrier, which shields the brain from harmful substances. T. gondii has evolved the ability to cross this barrier, which could make it useful here. They first tested whether T. gondii could cross the blood-brain barrier in mice, then tested delivery to human brain cells, specifically neurons, before moving on to intact mouse brains, with a view to eventually applying the findings to humans.
Their T. gondii-mediated drug delivery system relies on protein fusions: proteins encoded by regions of at least two different genes, joined and translated as a single unit. They fused a therapeutic protein to a secreted T. gondii protein, toxofilin, to carry the medicine into the brain.
Initially, scientists faced challenges in determining the appropriate dilution factor for the drug compound. They had to find a balance between allowing the proteins to pass through the blood-brain barrier while ensuring they were still therapeutically effective. Through trial and error, they found the correct dilution factor and successfully administered the treatment to the targeted brain area.
The next step involved delivering therapeutic proteins to brain cells via T. gondii. The researchers used lab-grown mouse brain cells and specific proteins that regulate the movement of molecules across the cell membrane, known as vesicle transport proteins. They demonstrated that the engineered T. gondii successfully delivered therapeutic proteins into lab-grown mouse brain cells.
The researchers then tested the treatment process on human brain cells cultivated outside of the body. They confirmed that the therapeutic proteins delivered by T. gondii could bind to the DNA of human brain cells. This binding altered gene expression in the laboratory-grown brain cells.
Finally, the engineers demonstrated the approach in whole mouse brains. After confirming that the therapeutic proteins could cross the blood-brain barrier in live mice, they examined the animals’ brains post-euthanasia. Using 3D imaging, they illuminated specific neural pathways and markers in the mouse brain, confirming that the engineered parasites had delivered their therapeutic proteins to the brain.
The researchers concluded that their findings show progress in drug delivery via engineered parasites but emphasized the need for further research to determine the potential advantages and drawbacks of this method. With the success of this study, they proposed that utilizing engineered parasites for drug delivery could offer new treatment options for brain-related diseases.
The human brain is like an electrified sphere, considered the most complex object in the universe. Sometimes, don’t you wish it would just quiet down, especially at night?
One common symptom of insomnia is a racing mind when trying to sleep. Our brains bounce thoughts around like a pinball machine due to caffeine, anxiety, or stress.
This mental perturbation includes repetitive negative thoughts that we may not even be aware of, focusing on mistakes and worrying about the future.
While this is challenging for those with mental health conditions, it affects everyone when overwhelmed with tasks or life challenges.
Insomnia affects about 1 in 3 people – Photo Credit: Getty
Psychologist Dr. Luc Beaudoin believes that cognitive shuffle, a technique that mimics the brain’s natural processes during sleep onset, can help control runaway thoughts while trying to sleep.
The cognitive shuffle involves imagining a sequence of unrelated images, like a slideshow, to impose a light structure on racing thoughts. By focusing on picturing images related to a chosen word, it aims to quiet the mind before falling asleep.
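In practice, the technique works roughly like this: pick a seed word, then for each of its letters call to mind a few unrelated, easy-to-picture words beginning with that letter, imagining each one briefly. The toy sketch below is my own illustration of that procedure, not Beaudoin’s software; the word pool is made up.

```python
import random

# Tiny made-up pool of concrete, easy-to-visualize words, keyed by first letter.
POOL = {
    "b": ["banana", "bridge", "bicycle"],
    "e": ["elephant", "engine", "envelope"],
    "d": ["dolphin", "drum", "daisy"],
}

def cognitive_shuffle(seed_word, per_letter=2):
    """Yield unrelated images to picture, one letter of the seed at a time."""
    for letter in seed_word.lower():
        options = POOL.get(letter, [])
        for word in random.sample(options, min(per_letter, len(options))):
            yield word  # the sleeper pictures each item for a few seconds

for image in cognitive_shuffle("bed"):
    print("Picture a:", image)
```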
Counting sheep may not be as effective as cognitive shuffle – Photo credit: Getty
Cognitive shuffling strikes a balance between conscious and unconscious thoughts, keeping unwanted thoughts at bay without overwhelming executive functioning.
Dr. Beaudoin’s broader theory of sleep onset, somnolent information processing, explores the factors that help or hinder the process, with mental perturbation a common hindrance.
Cognitive shuffling is still under research, but promising results suggest it can aid sleep initiation along with other scientifically proven techniques.
About our experts
Dr. Luc Beaudoin is an adjunct professor at Simon Fraser University and the founder of CogZest. He has published books on cognitive productivity and is recognized in various academic journals.
Manipulating a type of neuron could make snacks easier to resist
5M3Photos/Getty Images
Neurons in the mouse brain tell mice to stop eating when they have had enough food. Since people probably have the same cells, it may one day be possible to manipulate them to help treat obesity.
“The main question we were trying to answer was how the brain senses and responds to different signals,” says Alexander Nectow at Columbia University in New York.
To learn more, he and his colleagues used a kind of molecular profiling to distinguish between different cell types in the mouse brain. In the dorsal raphe nucleus, a part of the brainstem associated with functions such as feeding, mood, and sleep, they encountered cells that produce a hormone called cholecystokinin, which helps regulate appetite.
To work out what these cells sense that makes them fire, the researchers measured their activity as the mice went about their day. “Every time the animal takes a bite, the activity goes up and then it decays,” says Nectow. The neurons appear to sense and integrate information such as the smell and sight of food, its taste, the sensation of food in the gut, and the hormones released in response to food in the gut, allowing the animal to judge when to finish a meal.
Next, the researchers used a technique called optogenetics, which involves engineering neurons so they can be switched on and off with light. When the team activated the cells with light, the mice slowed their eating; the more intense the activation, the sooner the animals stopped.
Because the neurons sit in the brainstem, which shares ancestral features across vertebrates, Nectow thinks we probably have them too. “We didn’t confirm that, but my guess is that humans do have these neurons.”
The team also discovered that the mouse neurons can be activated by compounds called glucagon-like peptide-1 (GLP-1) agonists, the basis of weight-loss drugs sold under brand names such as Ozempic and Wegovy.
If these neurons have the same function in people, they could in theory be tuned to control the feeding habits of people with obesity, or the approach could be combined with GLP-1-based drugs to achieve greater weight loss, says Nectow.
“Understanding the circuits governing meal termination is particularly important in an environment of near-ubiquitous food availability,” says Jeff Davies at Swansea University, UK. “The authors used elegant methods to identify these important cell populations.”
A virtual drone was steered through an obstacle course by imagining moving a finger.
Willsey et al.
A paralyzed man with electrodes implanted in his brain can pilot a virtual drone through an obstacle course just by imagining moving his fingers. His brain signals are interpreted by an AI model and used to control a simulated drone.
Research on brain-computer interfaces (BCIs) has made great progress in recent years, allowing people with paralysis to produce speech or text on a computer, precisely control a mouse cursor, or write by imagining forming words with a pen. So far, though, BCIs have shown less promise in complex applications that demand multiple simultaneous inputs.
Now, Matthew Willsey and his colleagues at the University of Michigan have created an algorithm that lets users produce four distinct signals by imagining moving their fingers and thumb.
The anonymous man who tried the technique is a quadriplegic due to a spinal cord injury. He was already fitted with Blackrock Neurotech's BCI, which consists of 192 electrodes implanted in the area of the brain that controls hand movements.
An AI model was used to map the complex neural signals received by the electrodes onto the user's intentions. The participant learned to imagine moving the first two fingers of one hand to generate a signal that could be made stronger or weaker; the next two fingers generated a second signal, and the thumb a further two.
These four signals are enough to let the user control the virtual drone with thought alone, and with practice he could expertly maneuver it through obstacle courses. Willsey says the experiment could have been done with a real drone, but was run virtually for simplicity and safety.
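To make the setup concrete, here is a minimal Python sketch of the final control step, after the AI decoder has done its work. Everything here (the function name, the 2 m/s speed cap, the mapping of signals to axes) is invented for illustration and is not taken from the study.

```python
import numpy as np

# Hypothetical sketch: mapping four decoded "finger group" signals to
# quadcopter velocity commands. Each decoded value is assumed to arrive
# as a number between -1 (imagined relaxation) and +1 (strong imagined
# flexion); the real study's decoder and control scheme may differ.

def signals_to_drone_command(finger_pair_1, finger_pair_2, thumb_x, thumb_y):
    """Map four decoded neural signals to drone velocities (m/s)."""
    max_speed = 2.0  # assumed speed limit for the virtual drone
    return {
        "forward": max_speed * np.clip(finger_pair_1, -1, 1),
        "up":      max_speed * np.clip(finger_pair_2, -1, 1),
        "left":    max_speed * np.clip(thumb_x, -1, 1),
        "yaw":     max_speed * np.clip(thumb_y, -1, 1),
    }

print(signals_to_drone_command(0.8, -0.2, 0.1, 0.5))
```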
“The goal of building a quadcopter was largely shared by our lab and the participant,” Willsey says. “For him, it was a kind of dream come true that he thought was lost after he got injured. He had a passion and a dream to fly. He felt so empowered and capable. He asked us to take a video and send it to a friend.”
Although the results are impressive, Willsey says there is still much work to be done before BCIs can be reliably used for complex tasks. First, an AI model is required to interpret the signals from the electrodes, and it must be trained individually for each user. Second, this training must be repeated over time as performance declines, which could be due to slight shifts in the electrodes' position in the brain or to changes in the brain itself.
As individuals age, their brains may experience difficulties in learning and decision-making due to a decrease in brain cells and cognitive function. However, neuroscientists have found that the brain can rewire connections to compensate for age-related cognitive decline through a process known as compensatory scaffolding. This involves forming new connections in the brain, strengthening existing ones, and even generating new brain cells. Yet, the specifics of how these new connections operate and interact, as well as their limitations, remain unclear.
Research conducted by Leonardo Bonetti and colleagues indicates that older individuals may exhibit more compensatory scaffolding and less unilateral brain activity compared to younger individuals when processing auditory information like music. Previous studies have shown that certain brain areas involved in memory and task processing decline faster in aging individuals, prompting Bonetti’s team to investigate how age impacts the brain’s response to compensatory scaffolding during music processing.
To test their hypothesis, Bonetti and his team studied brain activity in 37 young adults (aged 18-25) and 39 older adults (aged 60 and above) as they listened to music. Using magnetoencephalography and magnetic resonance imaging devices, the researchers mapped brain activity in specific regions responsible for sound processing and decision-making, such as the temporal lobe, frontal lobe, and hippocampus.
During the study, participants were asked to memorize a musical sequence and distinguish between the original version and modified versions with altered notes. Older participants showed less brain activity in most regions compared to younger participants, except for the left auditory cortex. This increased activity in the left auditory cortex suggested that the aging brain can reorganize and maintain function in certain areas, despite decreased activity in memory-related regions.
Notably, younger participants displayed more brain activity in memory and working memory areas, enabling them to detect modified musical sequences more effectively than older participants. The study also revealed that individuals with strong working memory were better at recognizing modified sequences, regardless of age group.
Overall, the research highlights that parts of the brain linked to memory and cognitive function may decline with age, but healthy aging can trigger brain reorganization to mitigate functional decline. Bonetti’s findings challenge previous notions that aging does not impact brain pathways associated with memory, decision-making, and other executive functions.
The International Classification of Diseases (ICD) is a text created by the World Health Organization that summarizes all medical problems recognized by the organization.
The latest version, ICD-11, added a category of addictive behaviours to the section on addictive disorders. It is now medically accepted that people can become addicted not only to substances but also to certain activities. The most significant of these behaviours is gambling.
Gambling addiction is real and a big problem, which is why the UK government has introduced measures that will hopefully curb, or at least reduce, the harm it causes.
But why do people become addicted to gambling? And why is it often so difficult to treat compared to more “typical” substance-based addictions?
The “how” is relatively simple: the main attraction of gambling is the chance to win large amounts of money with little effort.
When making decisions, the human brain is constantly weighing effort against potential reward. When something leans heavily toward the latter (for example, paying a small amount of money and potentially receiving a large amount in return), we tend to approve of it strongly.
Up to 4 percent of people in the United States may have a gambling problem – Photo credit: Getty
There's also the fact that the human brain is complex enough to treat money as important in a biological sense, even though it's technically an abstract concept. Our brains also prioritize novelty and unpredictability.
Taken together, all of this means that gambling can and does affect the brain's reward system in much the same way as certain drugs. Addiction develops, and all the subsequent effects are felt by the individual.
Of course, this does not happen to everyone who gambles. There are many people who don't gamble at all. Many people instinctively dislike risk and loss, but these are unavoidable aspects of gambling. However, some people are not as sensitive and are more willing to accept gambling as a form of entertainment.
But while the similarities between the brain's responses to gambling and to drugs may explain why gambling is so often addictive, it is the differences between them that can make gambling addiction particularly difficult to treat.
Gambling addiction lacks a biological substance, making it easier to overlook and hide. As a result, some evidence suggests that 90% of gambling problems go unreported and untreated.
Also, the absence of specific substances that support gambling addiction means that there is nothing to “take away”, so to speak. Even in cases of long-term chronic drug addiction, there is an option to remove the drug (going “cold turkey”) and allow people's brains and bodies to adapt to the absence of the drug. Indeed, this is often a very unpleasant and even dangerous option. But it's still an option.
This is not the case with gambling. It is a behaviour, not a substance. As long as someone has money and autonomy, it is very difficult to deny them access to gambling. And even if you could, it might not make much difference, because of the way gambling is experienced.
The nature of gambling means that it is not experienced as the direct “stimulus = reward” process that applies to drug-taking and that underlies the basic learning mechanisms of classical conditioning, a key part of how addiction becomes established in the first place.
Such a relatively simple process is also relatively easy to unlearn. When a stimulus stops producing a reward, the association “dies” in the brain. Do this to the source of an addiction, and the addiction loses its power over you. The human brain is complex enough that this is difficult to achieve, but at least it can be tried.
According to the Journal of Gambling Studies, men are twice as likely to be frequent gamblers as women – Photo courtesy of Getty
However, consider this: if an alcoholic found that only one random drink in 20 actually contained alcohol, and the others made him feel nauseous, it would probably be much easier for him to kick the habit. That logic doesn't work with gambling, because that is exactly how gambling is experienced.
Gambling rewards arrive on a variable schedule of reinforcement. You never know when you will win, and losing is inevitable. But as long as you win often enough, you keep going, and an addiction to the behaviour develops.
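That dynamic is easy to demonstrate with a toy model. The Python sketch below uses a simple Rescorla-Wagner-style learning rule, not a model from any study mentioned here, to show why an association trained on unpredictable wins resists extinction while a predictable one fades; all parameters are invented for illustration.

```python
import random

# Toy simulation of why intermittent, unpredictable rewards sustain a
# learned association. Association strength V is updated with a simple
# Rescorla-Wagner-style rule: V += alpha * (reward - V).

def run_schedule(reward_prob, trials=500, alpha=0.1, seed=1):
    random.seed(seed)
    v = 0.0
    for _ in range(trials):
        reward = 1.0 if random.random() < reward_prob else 0.0
        v += alpha * (reward - v)
    return v

# Predictable reward that then stops entirely (like removing a drug):
v = 1.0                                # fully learned association
for _ in range(50):                    # 50 unrewarded trials
    v += 0.1 * (0.0 - v)
print(f"after extinction: V = {v:.3f}")        # decays toward zero

# Variable-ratio, gambling-like schedule: wins keep arriving
# unpredictably, so the association never fully extinguishes.
print(f"variable ratio:   V = {run_schedule(0.1):.3f}")  # hovers near 0.1
```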
The negative consequences of gambling are already built into the process itself. Trying to deter a gambler with losses is like trying to treat an alcoholic by making him pay for his own drinks: he already does.
These are just some of the reasons why gambling addiction is a difficult problem to address medically. This means it is essential to work towards reducing exposure to gambling and the development of addiction in the first place. We have to even the odds somehow.
It’s been said that in times of intense stress or sudden anger, a primitive part of our brain takes control. This irrational aspect of ourselves doesn’t stem from our highly evolved human faculties, but rather from the remnants of our reptilian ancestors that have persisted in our brains despite the process of evolution. Some call it the “lizard brain.”
The lizard brain theory was formulated by neuroscientists in the 1960s, particularly Paul MacLean. As he studied the brains of humans and other animals to explore the origins of negative emotions, he found behaviours shared by reptiles and mammals related to survival instincts, such as establishing routines and defending territory, as well as behaviours unique to mammals.
Through his research, and advances in neuroscience that allowed brain structures to be compared, MacLean proposed that the human brain evolved from a reptilian brain, with ancient lizard-like characteristics still preserved. He identified three distinct brains within the human brain, which he called the “triune brain”: the oldest reptilian complex, the paleomammalian complex or limbic system, and the newer structures that emerged with higher primate evolution.
Paul MacLean's “triune brain” model is now widely discredited by scientists – Photo credit: Getty
Despite the popularity of the triune brain theory, more recent work has challenged the notion of the lizard brain. Neuroanatomists have pointed out that the brain is not structured like an onion, with successive layers resembling different species' brains, as the triune theory suggests.
For instance, the amygdala within the limbic system is more developed in primates than in rats, pointing to a more complex evolutionary trajectory. The notion of evolutionary “progress”, in which older animals are primitive and newer ones sophisticated, is also misleading: evolution is not just a matter of adding new features while leaving the old ones unchanged.
Good neighborhoods are defined by the people who reside there. The presence of a helpful individual can enhance the community, while a negative neighbor can detract from its overall quality. The same concept applies to the brain, as revealed in a recent study indicating that brain cells behave like communities. Some cells contribute to a nurturing environment, promoting health and resilience in adjacent cells, while others spread stress and damage like bad neighbors.
Throughout one’s life, the composition of this brain community influences the aging process. Negative relationships can accelerate aging and lead to issues such as memory loss, while a healthy brain community can work collectively to combat aging. Researchers at Stanford University believe that these findings could potentially inform the development of treatments to slow or reverse aging.
Published in the journal Nature, the study identified 17 cell types that influence aging positively or negatively. T cells and neural stem cells were highlighted for their significant impact as bad and good neighbors, respectively. T cells, typically involved in fighting infections, can contribute to inflammation in the brain and hasten aging, while neural stem cells play a vital role in rejuvenation and maintaining a youthful brain.
The researchers conducted gene activity mapping across 2.3 million cells in the mouse brain, constructing a “spatial aging clock” to predict the biological age of individual cells. This innovative approach could lead to new biological discoveries and interventions, such as inhibiting pro-aging factors released by T cells or enhancing the efficacy of neural stem cells.
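The idea of an aging clock can be sketched in a few lines of code. The Python example below trains a regression on synthetic gene-activity data to predict the age of the animal each cell came from; it is purely illustrative, with invented numbers, and omits the spatial-neighborhood information that the Stanford model also used.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Illustrative "aging clock": fit a regression from per-cell gene
# activity to donor age, then read out a predicted biological age for
# held-out cells. Synthetic data only; the real study's model differs.

rng = np.random.default_rng(0)
n_cells, n_genes = 2000, 100
ages = rng.uniform(3, 30, n_cells)                 # donor age in months
expression = rng.normal(0, 1, (n_cells, n_genes))
expression[:, 0] += 0.1 * ages                     # a few age-linked genes
expression[:, 1] -= 0.05 * ages

X_train, X_test, y_train, y_test = train_test_split(
    expression, ages, random_state=0)
clock = Ridge(alpha=1.0).fit(X_train, y_train)

predicted = clock.predict(X_test)                  # per-cell biological age
print(f"mean error: {np.mean(np.abs(predicted - y_test)):.2f} months")
```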
These findings have implications for understanding diseases like Alzheimer’s and potential strategies to strengthen the brain’s natural repair mechanisms and prevent cognitive decline. The research offers hope for uncovering ways to support brain health and combat aging-related challenges.
Researchers trained an artificial intelligence model to measure people's age from brain scans
Laboratory/Alamy
The abundance of 13 types of proteins in the blood appears to be a strong indicator of how quickly the brain is aging. This suggests that blood tests could one day help people track and even improve their brain health.
Most previous studies of protein markers of brain aging in the blood have looked at fewer than 1000 people, says Nicholas Seyfried at Emory University in Atlanta, Georgia, who was not involved in the new study.
To get a broader picture of the effects of these proteins, Liu Weishi and colleagues at Fudan University in China analyzed MRI brain scan data from around 11,000 adults, aged roughly 50 to 80 at the time of imaging, who took part in the UK Biobank project.
Liu's team trained an artificial intelligence model on data from 70 per cent of the participants, using features of the brain images, such as the size of different brain regions and how different parts are connected to each other, to predict each participant's age. When the model was applied to the remaining 30 per cent of participants, its predictions were accurate to within 2.7 years of their actual ages.
The researchers then used the model to predict the age of another group of about 4,700 people, with an average age of 63, who also underwent brain imaging for UK Biobank. The researchers calculated the difference between these participants' actual ages and their AI-predicted ages, called the brain age gap. “The higher the age predicted by the AI compared to the actual age, the faster the brain ages,” Liu says.
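For readers curious about the mechanics, the brain-age-gap workflow can be sketched as follows. This Python example uses synthetic stand-ins for the imaging features and an off-the-shelf regressor; the model and features Liu's team actually used are not detailed here, so treat every specific below as an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Minimal sketch of the brain-age workflow: train on 70% of
# participants, test on 30%, then compute the brain age gap
# (predicted minus actual age). Synthetic data only.

rng = np.random.default_rng(42)
n = 1000
age = rng.uniform(50, 80, n)
features = rng.normal(0, 1, (n, 20))   # stand-ins for imaging features
features[:, 0] -= 0.05 * age           # e.g. volumes shrinking with age

X_tr, X_te, y_tr, y_te = train_test_split(
    features, age, test_size=0.3, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)

predicted = model.predict(X_te)
brain_age_gap = predicted - y_te       # positive gap = faster-aging brain
print(f"mean absolute error: {np.mean(np.abs(predicted - y_te)):.1f} years")
print(f"example gaps: {np.round(brain_age_gap[:3], 1)}")
```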
The group also provided blood samples around the same time as the brain imaging. From this, the research team identified eight proteins that appear to increase in abundance as brain age increases, and five proteins that appear to decrease in abundance.
In an analysis of data from previous studies, researchers confirmed that these proteins are produced by brain cells and that their levels can influence the risk of dementia and stroke.
This suggests that blood tests for these proteins may reveal how quickly the brain ages. “These markers may be canaries in the coal mine that say, 'Hey, look, let's start doing interventions that slow brain aging while there's still plenty of time,'” Seyfried said.
But for this to be helpful, we need to know whether these proteins respond to lifestyle changes. “If I run this much, I'll lose this much weight; if I change my diet, [then] we can correct these levels and bring them back into the normal range,” Seyfried says.
Because the study was conducted primarily among wealthy white people, Seyfried said more research is needed to see if the results hold true for other populations with more diverse ethnicities and income levels.
The research team now hopes to conduct studies in animals to determine exactly how the 13 proteins affect the brain. For example, researchers might test whether disrupting levels of these proteins affects cognition or even the development of neurodegenerative conditions, Liu says. “In the coming decades, this could open up ways to target proteins to slow aging and disease.”
Tattoos printed on a person’s scalp can detect electrical activity in the brain and transmit the signals to a recording device.
Lu Nanshu
Printing temporary tattoos on people’s heads could make it easier to analyze their brain waves.
Electroencephalography (EEG) is a method of measuring electrical activity in the brain through electrodes placed on the scalp. It can be used to test patients for neurological conditions such as epilepsy, tumors, and damage from stroke or head trauma.
Because human skulls vary in size and shape, technicians must spend considerable time measuring and marking the scalp to obtain accurate readings. Conductive gel helps the electrodes detect brain signals, but it stops working as it dries. The cables that connect to the electrodes can also cause discomfort and interfere with the delicate electrical signals.
Lu Nanshu and a team of researchers at the University of Texas at Austin hope to get around these problems by printing temporary tattoos onto test subjects' scalps. The tattoo ink is made of two polymers, poly(3,4-ethylenedioxythiophene) (PEDOT) and polystyrene sulfonate (PSS), which together are highly conductive and durable and do not irritate the skin.
A computer program creates a personalized tattoo design based on a 3D scan of the person's scalp, and a printer controlled by a robotic arm applies the ink directly. The ink comes in two formulations: one for the electrodes that receive brain signals, and one for the connections that run to the back of the neck. From there, physical wires transmit the signal to small devices that record the data.
“Our technology embodies the first hair-compatible temporary electronic tattoo that enables high-quality brain monitoring,” says Lu.
The tattoos have been found to work well on bald heads and buzz-cut hairstyles. “This method has not yet been extensively tested on long, thick or curly hair, but it could become possible by changing the nozzle design or incorporating robotic fingers to part the hair,” says Lu. The researchers say the process could also be made completely wireless by embedding a data transmitter into the tattoo.
A recent study conducted by researchers at the University of Reading and the University of Durham has revealed that the increase in relative brain size, known as encephalization, during the seven million years of human evolution was a result of incremental changes within individual species.
Modern humans, Neanderthals, and other recent relatives on the human family tree evolved large brains much more rapidly than earlier species. Image credit: SINC / Jose Antonio Peñas.
“One of the most striking evolutionary changes in human evolution, closely linked to the unique cognitive and behavioral characteristics of humans, is the increase in brain size,” explained lead author Thomas Puschel and his colleagues.
“The question of encephalization in human evolution has long been debated, with various studies comparing the brain capacities of different hominin species and exploring the adaptive mechanisms that might have influenced differences in brain size among hominins.”
“Some argue for a gradual growth pattern over time, while others suggest a pattern of rapid increases followed by periods of stagnation.”
“Certain studies support a combination of both models, while others claim that they are indistinguishable.”
In their recent study, the authors compiled the largest dataset of ancient human fossils spanning seven million years and utilized advanced computational and statistical methods to identify gaps in the fossil record.
These innovative approaches have provided the most comprehensive understanding to date of the evolution of brain size over time.
“This study has completely altered our perception of how the human brain evolved,” noted study co-author Professor Chris Venditti.
“Previously, it was believed that brain size varied significantly between species, like upgrading to newer computer models.”
“However, our study reveals a pattern of steady, incremental ‘software updates’ occurring within each species over millions of years.”
This study challenges the traditional notion that certain species, such as Neanderthals, remained unchanged and unable to adapt, suggesting instead that brain size increased gradually and continuously within species, and underscoring the significance of incremental change as a driving force in evolution.
“Major evolutionary shifts do not always require dramatic events,” Puschel stated.
“They can result from making small incremental improvements over time, akin to the learning and adaptation processes observed today.”
The researchers also identified a notable pattern: larger-bodied species tend to have larger brains, but the variation observed within individual species does not consistently correlate with body size.
Hence, the evolution of brain size over timescales spanning millions of years has been influenced by factors distinct from those operating within individual species, underscoring the complexity of the evolutionary pressures on brain size.
“The reasons behind the evolution of large brains in humans are a key aspect of human evolution,” added study co-author Dr. Joanna Baker.
“Through analyzing the brain and body sizes of various species over millions of years, we have demonstrated that the characteristically large brains of humans primarily emerged through gradual change within individual species.” The study was published on November 26, 2024 in the Proceedings of the National Academy of Sciences.
_____
Thomas A. Puschel et al. 2024. Hominin brain size increase has emerged from within-species encephalization. PNAS 121 (49): e2409542121; doi: 10.1073/pnas.2409542121
We are able to enjoy music because of our ability to recognize musical boundaries.
NDAB Creativity/Shutterstock
We may finally understand how the brain processes beat drops: People use two distinct brain networks to predict and identify the transitions between musical segments.
Musical boundaries – the moments when one part of a composition ends and another begins – are important to enjoying music, especially in the Western musical tradition. Without them, your favorite hits could sound like a monotonous, random stream of notes, “like reading a text without punctuation,” says Ibarra Burnat Perez at the University of Jyväskylä, Finland.
To understand how the brain processes musical boundaries, she and her colleagues analyzed the brain activity of 36 adults as they listened to instrumental pieces from three genres: Adios Nonino by Astor Piazzolla, Stream of Consciousness by the American progressive metal band Dream Theater, and The Rite of Spring by Igor Stravinsky, a classic of Russian ballet. All of the listeners had attended school in Finland, and half of them considered themselves semi-professional or professional musicians.
The researchers found that just before musical boundaries, a brain network they call the early auditory network activates in anticipation of the end of a musical phrase. This network primarily involves auditory regions located in the posterior, or back, outer region of the brain called the cortex.
Another network becomes active during and after musical transitions. This network, called the border-transition network, is characterized by increased activity in auditory areas toward the middle and anterior, or front, parts of the cortex. Perez says that this change in brain activity between the two regions is similar to how the brain understands the difference between sentences in a language.
During and after the musical boundary, several brain regions, including the right ventrolateral prefrontal cortex, which is involved in complex cognitive tasks and decision-making, deactivate, suggesting that the brain redirects attention and resources to integrating new musical information as a new segment begins, Perez says.
Musicians and non-musicians also used these two brain networks differently. For example, musicians relied on brain regions important for higher-order auditory processing and integration, which may reflect a more specialized approach to understanding musical boundaries, Perez says. Non-musicians, on the other hand, showed greater connectivity across broader brain regions, indicating a more general approach.
In addition to shedding light on how the brain processes music, Perez says, these findings could also help develop music therapy for people who have difficulty comprehending language. For example, incorporating elements of musical boundaries into speech transitions (such as matching syllables to a melody) might make sentences easier to understand, she says.
Fragments of mitochondrial DNA can be added to the cell's main genome
wir0man/Getty Images
Mutations in which DNA from energy-producing mitochondria is mistakenly added to a cell's main genome were thought to be extremely rare. Now, studies of brain tissue show that such mutations occur in all of us, and their numbers may be a factor in ageing.
“Not only are they present, but they are abundant in the dorsolateral prefrontal cortex, an area of the brain associated with cognitive abilities,” says Ryan Mills at the University of Michigan.
In human cells, almost all of the DNA (about 6 billion letters) resides in the nucleus, but the energy-producing organelles called mitochondria have their own tiny genome of about 16,600 letters.
That's because mitochondria were once free-living bacteria with their own large genome. In the roughly 2 billion years since those bacteria formed a symbiotic relationship with our distant ancestors, most of the original bacterial genome has been lost or transferred to the main genome in the nucleus.
Because of this evidence of transfer, biologists have long known that fragments of mitochondrial DNA can somehow find their way into the nucleus and be added to the main genome. But this kind of mutation was thought to be very rare, Mills says. Over the past few years, work by his team and others has shown that it isn't as uncommon as assumed, at least in cancer cells.
Mills and his colleagues showed that these types of mutations also occur in non-cancerous cells by sequencing the DNA of brain tissue samples taken from 1,200 people during post-mortem examinations.
Although another team took the samples and sequenced them, Mills and his colleagues looked for mutations that add mitochondrial DNA to the nuclear genome. “We were just curious,” Mills says.
Not only did they find such mutations, but they also found that they were more prevalent in people who, on average, died younger.
It's not clear whether these mutations are just a symptom of aging or a cause of it, Mills says. “The jury is still out,” he says. “But if you take the entire mitochondrial sequence and put it somewhere in the genome, it's hard for me to believe that it wouldn't have an effect.”
Oran Knowlson, a British teenager with a severe form of epilepsy called Lennox-Gastaut syndrome, became the first person to try a new brain implant last October, with astonishing results: his daytime seizures reduced by 80 percent.
“The device has had a huge impact on my son's life as he no longer falls and injures himself like he used to,” said his mother, who added that there has been a huge improvement in her son's quality of life as well as his cognitive abilities: “He is more alert and outgoing.” The device was implanted by Martin Tisdall, a consultant paediatric neurosurgeon at Great Ormond Street Hospital (GOSH) in London.
Oran's neurostimulator is implanted under the skull and sends constant electrical signals deep into the brain with the aim of blocking the abnormal impulses that cause seizures. The implant, called Picostim, is about the size of a cell phone battery, is charged through headphones, and works differently during the day and at night.
“The device has the ability to record from the brain, to measure brain activity, and we can use that information to think about how to improve the effectiveness of the stimulation that children are receiving,” says Tisdall. “What we'd really like to do is to make this treatment available on the NHS.”
As part of the trial, three children with Lennox-Gastaut syndrome will be fitted with the implant in the coming weeks, with a full trial of 22 children planned for early next year. If the trial is successful, academic sponsors GOSH and University College London plan to apply for regulatory approval.
Tim Denison, a professor of engineering science at the University of Oxford and co-founder and chief engineer at Amber Therapeutics, a London-based company that developed the implant in collaboration with the university, hopes that the device will be available on the NHS and around the world within the next four to five years.
The technology is one of a number of neural implants being developed to treat a range of conditions, including brain tumors, chronic pain, rheumatoid arthritis, Parkinson's disease, incontinence and tinnitus. These devices are more sophisticated than traditional implants in that they not only decode the brain's electrical activity but also control it, and this is where Europe is racing against the US to develop life-changing technology.
The latest generation of brain implants can not only detect brain activity but also control it. Photo: UCL
Amber isn't the only company working on brain implants to treat epilepsy. California-based Neuropace has developed a device that responds to abnormal brain activity and has been cleared by US regulators for use by people aged 18 and over. But the battery is not rechargeable and must be surgically replaced after a few years. Other devices are implanted in the chest with wires running to the brain that must be reinserted as the child grows.
When most people think of brain chips, they think of Neuralink, Elon Musk's California-based startup, which recently implanted a brain chip in a second patient with a spinal cord injury. The device uses tiny wires thinner than a human hair to capture signals from the brain and translate them into actions.
The first recipient, Noland Arbaugh, who is paralyzed from the neck down, received his implant in January; some of the wires later shifted and the implant had to be adjusted. The implant allows Arbaugh to control a mouse cursor on a computer screen with his mind, as if he were a Jedi in Star Wars “using the Force.”
Other US companies, such as Synchron, backed by Bill Gates and Jeff Bezos, have also recently implanted brain-computer interfaces (BCIs) in people who cannot move or speak.
But scientists say these implants simply decode electrical signals. In contrast, a number of companies in the U.S., Britain and Europe, like Amber, are working on so-called “BCI therapy,” or modulating signals in deep brain stimulation to treat disease. Amber's implants are also being used in academic trials for Parkinson's disease, chronic pain and multiple system atrophy, a condition that gradually damages nerve cells in the brain. The company is also sponsoring an early trial in Belgium to treat incontinence, with promising results.
Professor Martin Tisdall led the team that gave Oran Knowlson, who suffers from severe epilepsy, the implant last October. Photo: UCL
A different kind of technology will be tested in humans in clinical trials starting in a few weeks, using the first brain implant made from graphene, a “miracle material” discovered 20 years ago at the University of Manchester.
Medical teams at Salford Royal Infirmary will implant a device with 64 graphene electrodes into the brains of patients with glioblastoma, a fast-growing form of brain cancer. The device will stimulate and read neural activity with high precision, to spare other parts of the brain while removing the cancer. The implant will be removed after surgery.
“We use this interface to map out where the glioblastoma is and then resect it [cut it out] without affecting areas of function such as language or cognition,” says Carolina Aguilar, co-founder and CEO of InBrain Neuroelectronics, the Barcelona-based company that developed the implant in collaboration with the Catalan Institute of Nanoscience and Nanotechnology and the University of Manchester.
Traditionally, platinum and iridium have been used in implants, but graphene, made from carbon, is ultra-thin, harmless to human tissue, and can record and stimulate neural activity very selectively.
InBrain plans to conduct clinical trials of similar artificial intelligence-powered implants in people with speech disorders caused by Parkinson's disease, epilepsy and stroke.
Professor Costas Kostarellos, head of nanomedicine at the University of Manchester, co-founder of InBrain and principal investigator on the glioblastoma trial, says the company's goal is to “develop more intelligent implantable systems”.
Equipped with AI, the device, with 1,024 electrical contacts, “will help provide optimal treatment for each patient without the neurologist having to program all those contacts individually, as they do today,” he says.
InBrain has partnered with German pharmaceutical company Merck to use its graphene device to stimulate the vagus nerve, which controls many bodily functions including digestion, heart rate and breathing, to treat severe chronic inflammatory, metabolic and endocrine diseases such as rheumatoid arthritis.
Galvani Bioelectronics, founded in 2016 by the UK's second-largest pharmaceutical company GSK and Alphabet's Verily Life Sciences, has developed a pioneering treatment for rheumatoid arthritis that works by stimulating the splenic nerve. Galvani has begun clinical trials with patients in the UK, US and the Netherlands, with first results expected within the next 6 to 12 months.
Bioelectronics, which combines biological sciences and electrical engineering, is a market worth $8.7 billion today and is predicted to reach more than $20 billion (£15 billion) by 2031, according to Verified Market Research. The field focuses on the peripheral nervous system, which carries signals between the brain and the organs. When brain-focused neuromodulation and BCIs are added, Aguilar believes the overall market could be worth more than $25 billion.
While US neuromodulation companies are making waves with devices targeting chronic pain and sleep apnea, a growing number of European startups are also working on the technology. MintNeuro, a spinout from Imperial College London, is developing next-generation chips that can be combined into smaller implants, and has partnered with Amber. With the support of an Innovate UK grant, its first project will be to develop an implant to treat mixed urinary incontinence.
Geneva-based Neurosoft has developed a device that uses a thin metal film attached to stretchy silicone – soft enough to put less pressure on the brain and blood vessels – to target severe tinnitus, which affects 120 million people worldwide.
“Tinnitus begins with ear damage, typically caused by loud noise, but it can also cause changes in the wiring of the brain, making it effectively a neurological disorder,” said Nicholas Batsikouras, the company's chief executive officer.
Founded in 2009 by 13 neurosurgeons, neurologists, engineers and other scientists from the Policlinico Research Center and the University of Milan, Neuronica has developed a rechargeable deep brain neurostimulator that can be used to treat Parkinson's disease. The device provides closed-loop stimulation and adapts moment-to-moment to the patient's condition, and is currently being tested on patients.
“Europe and the UK can compete head-to-head with the US when it comes to getting treatments onto the NHS and distributing them around the world,” Denison said. “It's a fair competition and we're going to give it our all.”
The researchers studied mouse neurons in a part of the brain that is common to all mammals.
Klein & Hubert/naturepl.com
Scientists have identified neurons that become active when mouse pups interact with their mothers, seemingly reducing stress — and the same neurons may also be involved in the mother-child bond in humans.
The zona incerta, located in the center of the mammalian brain, is thought to be involved in integrating sensory information, and its role changes over the course of development. In infants, neurons in the zona incerta send information to other brain regions, such as the cortex, promoting neural growth elsewhere.
Because of this shifting role, Lee Yue Heon at the Yale School of Medicine and colleagues suspected that this part of the brain may be involved in establishing the bond between mother and child.
To investigate, the researchers first assessed which neurons in the zona incerta become active when pups, which are still dependent on their mother's milk, interact with their mothers. This involved surgically implanting fiber-optic probes in the brains of some of the pups, allowing the researchers to detect light emitted when neurons became active.
The researchers found that activated neurons express a hormone called somatostatin, which regulates several bodily functions by inhibiting the release of other hormones, including the stress hormone corticosterone.
Free social interactions between mothers and infants activated these neurons, but contact with toys did not.
The researchers were also interested in how other social interactions might affect this brain region. They found that contact with lactating females that were not the pups' mothers, non-lactating females, siblings, or unrelated males also activated the neurons, but not as strongly as contact with the pups' mothers.
“Our findings suggest that social interaction with the mother elicits the greatest response compared to other social stimuli, with a difference of about 1.5-fold in mean response levels,” says team member Marcelo de Oliveira Dietrich, also at Yale University.
In another part of the experiment, the team monitored the pups' brain activity while they were socially isolated. During these periods, which lasted between 10 minutes and 12 hours, the neurons were inactive, but this changed once the pups were reunited with their mothers.
Reunion also reduced the pups' stress response, as measured by how much they cried and how much corticosterone they released.
Finally, the team wanted to see whether artificially activating the neurons while the pups were isolated could reduce their stress. Activating the neurons using chemicals suppressed the pups' crying and blunted their release of corticosterone.
The researchers believe that the zona incerta may be involved in early social relationships in mammals, as well as the development of other parts of the brain. “The distinct bond between infant and mother” is “a hallmark of mammals,” they write. The bond may be involved in the development of parts of the brain, with the zona incerta acting as “a nodal point that intertwines the elements that define mammalian biology.”
Robert Froemke at NYU Langone Health in New York says the study demonstrates that certain neurons “essentially serve to soothe infants,” but “it's still a bit unclear how infants sense their mothers, and which aspects of smell, touch, or temperature are important,” he says.
“Another open question is how much contact is needed to send a safety signal, and how long does that contact last? What promotes healthy development, as opposed to neglect?” In humans, visual and auditory input (the sight and sound of the caregiver) is likely to be as important as, or perhaps more important than, olfactory cues, he says.
Activity within brain networks appears to differ between boys and girls
People Images/Getty Images
Artificial intelligence can now distinguish the brain patterns of 9- to 10-year-old boys and girls according to their sex and even gender, but not everyone is convinced of the accuracy of the results.
The prevalence of pain, headaches, heart disease, and other illnesses varies by gender, but little is known about how the brain varies with gender or sex, particularly in children.
Elvisha Damala and her colleagues at the Feinstein Institutes for Medical Research in New York analyzed thousands of sets of magnetic resonance imaging (MRI) data from more than 4,700 children, roughly equal numbers of boys and girls, all aged 9 to 10, who were participating in the Adolescent Brain Cognitive Development study.
Sex was defined based on “anatomical, physiological, genetic and hormonal structures at birth,” while gender was determined based on “an individual's attitudinal, emotional and behavioral characteristics.”
Parents weren't asked directly about their child's gender; instead, it was assessed through a series of questions, such as how often their child imitates male or female characters on TV or in movies, whether they want to be a girl or a boy, and whether they say they dislike their genitals. All of these questions were weighted equally and combined into a score.
A separate score was created from questions that asked the children themselves, such as whether they felt like a boy or a girl.
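The scoring itself is straightforward. Here is a hypothetical Python sketch of how equally weighted questionnaire items could be combined into a single gender score; the item names and the 0-4 response scale are invented for illustration and do not come from the study.

```python
# Hypothetical sketch: combine equally weighted questionnaire items
# (each coded 0-4, e.g. never..always) into one gender score by
# simple averaging, as the article describes.

def gender_score(responses):
    """Average equally weighted items, each coded 0-4."""
    items = ["imitates_male_characters", "imitates_female_characters",
             "wants_to_be_other_sex", "dislikes_own_genitals"]
    return sum(responses[item] for item in items) / len(items)

parent_report = {"imitates_male_characters": 3,
                 "imitates_female_characters": 0,
                 "wants_to_be_other_sex": 1,
                 "dislikes_own_genitals": 0}
print(gender_score(parent_report))  # 1.0 on the invented 0-4 scale
```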
The researchers did not disclose which genders the children identified as, or how many of the children had a gender that differed from their sex. “We thought of gender as a continuum, not a binary,” Damala said. “We did not limit our analysis to gender categories, so we cannot comment on how many children had a gender that was different from their sex.”
The researchers first looked at the relationship between brain networks and sex, and then looked at the relationship between these networks and gender within each assigned sex. They found that sex and gender were associated with distinct patterns of functional connectivity, a measure of communication between distant brain regions.
Sex was associated with connectivity between the visual cortex, which processes what we see, and the limbic system, a group of deep brain structures involved in regulating emotion, behavior, motivation, and memory. These networks were “important in distinguishing participants based on their sex,” Damala said.
Gender-related networks were widespread throughout the cerebral cortex (the outer layer of the brain that is also associated with memory, movement, sensation and problem solving), both when using gender scores constructed from responses to parental questions and when using separate scores constructed by asking questions of the children themselves.
“In assigned females, gender mapped onto networks involved in attention, emotion processing, motor control, and higher-order thinking,” Damala says. “In assigned males, the same relationships existed, but there were additional networks involved in higher-order thinking and visual processing. Although there was some overlap between sex- and gender-related brain networks, they were very distinct from each other.”
Once the researchers trained an AI model on some of the MRI data, it was able to identify a child's sex from brain connectivity patterns in the remaining data. It could also predict gender, but much less accurately, and only from the gender reported by parents, not by the children themselves.
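A rough sketch of that classification step, on synthetic data, looks like this in Python. The features stand in for functional connectivity values (one number per pair of brain regions); the study's actual model and preprocessing are not described here, so every detail below is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative sketch: classify sex from functional connectivity,
# represented as the flattened upper triangle of each child's
# region-by-region correlation matrix. Synthetic data only.

rng = np.random.default_rng(1)
n_children, n_regions = 600, 20
sex = rng.integers(0, 2, n_children)               # 0 or 1
n_edges = n_regions * (n_regions - 1) // 2         # one value per region pair
connectivity = rng.normal(0, 1, (n_children, n_edges))
connectivity[:, :5] += 0.4 * sex[:, None]          # a few sex-linked edges

X_tr, X_te, y_tr, y_te = train_test_split(connectivity, sex, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```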
A better understanding of how brain activity patterns differ by sex could help scientists learn more about conditions that affect boys and girls at different rates, such as ADHD, Damala said.
The findings could also have implications for how human brain research is conducted, she says: “This shows that sex and gender need to be considered separately in biomedical research. This applies to how data is collected, how it is analyzed, and how results are interpreted and communicated,” Damala says.
But Ragini Verma at the University of Pennsylvania says the study tells us little about the neurological basis of gender. The team was likely only able to find signals of different brain activity patterns between the sexes because of the study's large sample size, and “any variability in gender predictions is based on low precision,” she says.
An astrophysicist and a surgeon walk into a bar. No, this is not the start of a bad joke. A few years ago, astrophysicist Franco Vazza met up with his childhood friend Alberto Ferretti, who had become a neurosurgeon. Vazza was modeling the structure of the universe, while Ferretti was delving into the brain. The two reminisced and talked about their work, and an idea occurred to them: what if they compared the two?
Vazza, based at the University of Bologna in Italy, has done just that. He used statistical techniques to compare neurons in a region of the brain called the cortex to the cosmic web, the pattern of matter distribution throughout the universe. Vazza looked at the number of nodes in each network and how densely connected each node is. The results surprised him. “It's a really interesting level of similarity,” he says. Ignoring the difference in the structures' sizes, which is about 27 orders of magnitude, “the two patterns kind of overlap,” Vazza says.
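The kind of comparison Vazza made can be sketched with standard network tools. The Python example below builds two random graphs as stand-ins for a cortical neuron network and the cosmic web, then compares the statistics he mentions (nodes and connections per node); the real analysis used measured data, not random graphs, so this is only an illustration of the method.

```python
import networkx as nx

# Toy version of the comparison: two random small-world graphs stand in
# for the neuron network and the cosmic web, and we compare their
# node counts, mean degree, and clustering.

neurons = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=1)
cosmic_web = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=2)

for name, g in [("neuron network", neurons), ("cosmic web", cosmic_web)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name}: {g.number_of_nodes()} nodes, "
          f"mean connections per node = {sum(degrees) / len(degrees):.1f}, "
          f"clustering = {nx.average_clustering(g):.3f}")
```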
Some physicists cannot ignore this similarity, suggesting that the universe may “think” – that is, be conscious in some sense – an idea that has roots in the philosophy of panpsychism.
Traditionally, researchers have explained consciousness in one of two ways. Materialists argue that there is only matter, and that consciousness somehow arises from it. Dualists argue that there are fundamentally two kinds of substance: matter and mind. Much has been written about the shortcomings of both views. For example, how can consciousness arise from pure matter?
Hydrogel-made brain sensor is small enough to be injected with a needle
Hanchuan Tang and Jianfeng Zang
Tiny sensors can be injected into the skull with a needle to monitor brain health until they dissolve within a few weeks. These sensors have been tested in animals, and in the future, they may enable minimally invasive, implantable sensors in the human body that can monitor traumatic brain injury and neurological disorders such as epilepsy.
“To my knowledge, this is the first wireless sensor that can monitor conditions inside the body without the need for surgery,” says Jules Magda at the University of Utah, who was not involved in the study.
The sensor is a soft hydrogel cube about 2 millimeters wide, roughly the width of a grain of rice. Jianfeng Zang and colleagues at China’s Huazhong University of Science and Technology created the structured “metagel” sensors by embedding precisely spaced air channels throughout a hydrogel. When an external ultrasound source is aimed at the sensor, the channels guide the ultrasound waves, and the sensor's shape changes subtly in response to changing conditions in the brain, such as pressure or temperature, which can be seen in the reflected ultrasound.
“No wiring or electronics are required,” Zang says. “It’s as if the metagel acts as a tiny acoustic mirror that changes its reflection depending on the environment.”
Zang and his colleagues showed that when metagel sensors were injected into the brains of rats and pigs, they could measure pressure, temperature, pH levels, and flow rates in nearby blood vessels, giving results comparable to the wired probes traditionally used to monitor brain health. Their experiments also found that the metagel broke down into relatively harmless components, such as water and carbon dioxide, within four to five weeks.
Injecting the sensor into the brain requires a thick needle, which could still cause pain or discomfort, Magda said, and he noted that researchers also need to make sure the dissolved metagel is non-toxic.
Zang says that the rats in the study showed little swelling in brain tissue or buildup of immune cells after the sensors were implanted and degraded, but longer-term testing in larger animals is still needed to show that the metagel works reliably and safely before clinical trials in humans can begin.
Studies indicate that adolescents with internet addiction exhibit alterations in brain chemistry that can contribute to further addictive behaviors.
In a study published in PLOS Mental Health, researchers analyzed fMRI studies to explore how brain regions interact in individuals with internet addiction.
The findings revealed changes in neural network activity in the brains of young individuals, with increased activity during rest and reduced connectivity in areas involved in cognitive functions like memory and decision-making.
These alterations were linked to addictive behaviors, mental health issues, cognitive abilities, and physical coordination in adolescents.
The study reviewed 12 prior studies involving 237 young individuals diagnosed with internet addiction from 2013 to 2023.
Recent surveys show that nearly half of British teens feel addicted to social media platforms.
Lead researcher Max Zhang from the University of London emphasized the vulnerability of adolescents to internet addiction due to developmental changes during this crucial stage.
The study suggests that early intervention for internet addiction is essential to mitigate negative impacts on adolescent behavior and development.
Experts recommend targeted treatments focused on specific brain regions or therapies to combat internet addiction symptoms.
Parental education plays a crucial role in preventing internet addiction, enabling better management of screen time and impulsive online behaviors.
Senior author Eileen Li from the UCL Great Ormond Street Institute of Child Health (GOS ICH) emphasizes the importance of setting boundaries on internet usage and being mindful of its effects on mental and social well-being.
The COVID-19 pandemic has had long-lasting impacts on society and on the health of millions of Americans who are still experiencing symptoms. Long-haul COVID-19 can cause chronic symptoms lasting for months, including weakness, palpitations, fatigue, headaches, and cognitive impairment. Scientists are still uncertain how COVID affects brain function to produce the forgetfulness and sluggish thinking colloquially known as “brain fog.”
So, what causes brain fog in long COVID-19 patients? Researchers propose that the dysfunction of specialized cells lining the brain’s blood vessels plays a crucial role. Known as the Blood-Brain Barrier (BBB), this filter prevents toxins, pathogens, and large molecules from entering the brain. It is theorized that a leaky BBB could allow harmful substances to enter, disrupting normal processes and causing brain fog.
To investigate the link between a leaky BBB and COVID-related brain fog, researchers led by Matthew Campbell, PhD, and Colin P. Dougherty, PhD, examined the brains of patients previously infected with COVID. They studied a group of men and women over 18 years old: 10 people who had recovered from COVID and 22 long-haul COVID patients (with symptoms lasting more than 12 weeks), 11 of whom experienced brain fog and 11 of whom did not.
Using high-resolution MRI, the team measured BBB permeability by injecting a contrast agent into the patients’ blood to track blood flow through the BBB and into the brain. Patients with brain fog showed higher leakage rates compared to those without brain fog, suggesting a link between BBB dysfunction and persistent brain fog.
Further analysis revealed that patients with long COVID and brain fog had elevated levels of inflammatory markers in their blood, indicating brain inflammation potentially caused by a leaky BBB. The team also observed higher levels of a cell-signaling protein associated with chronic fatigue syndrome in patients with brain fog.
Investigating the immune system’s role in brain inflammation during long COVID, researchers examined gene activity in white blood cells. White blood cells from long COVID patients with brain fog showed significantly more active genes related to sustaining the immune response, suggesting ongoing inflammation causing BBB dysfunction and brain fog.
Lab experiments with brain cells exposed to patient blood samples further supported the link between inflammation, BBB dysfunction, and brain fog. Additionally, direct exposure of brain cells to COVID virus proteins resulted in increased inflammatory gene activity.
In conclusion, researchers found that BBB dysfunction during long COVID leads to chronic inflammation, contributing to brain fog. This insight may aid in understanding other long-term COVID effects and could guide future research on restoring BBB function to treat long COVID patients.
Continued use of drugs such as cocaine and morphine is thought to affect the way the brain prioritizes the body’s basic needs, but we are only now understanding how this happens.
When people repeatedly misuse drugs, they can experience long-term behavioral changes, where they choose to take drugs instead of doing what they need to do, such as eating or drinking.
A brain pathway called the mesolimbic reward system is thought to be involved in this process, but few studies have directly compared the system’s response to drug intake and its response when its innate needs are not met.
Now, Bowen Tan at Rockefeller University in New York and his colleagues have shown that the same neurons are activated in these two situations. They revealed this using sophisticated microscopy equipment that can track the activity of individual neurons in the brains of mice in a state of withdrawal after repeated exposure to these drugs.
“There has long been a debate in this field about whether there are specialized cell types that encode only drug value and specialized cell types that encode only natural reward value,” Tan says. “What we saw is that these drugs of abuse typically activate the same set of neurons as natural rewards.”
The researchers also observed that after giving mice cocaine or morphine, their food and water intake decreased, while the neural responses needed to satisfy basic needs were disrupted.
“What’s really remarkable about this finding is that the strong neural responses to food and water are almost replaced by responses to drugs,” says Jeremy Day at the University of Alabama at Birmingham. “[This suggests] drug rewards can override the way the brain converts desire states into behaviors that satisfy those desires.”
Tan and his team also identified a gene, Rev, which appears to be necessary for the drugs to have this effect. Because Rev is part of a cell-signaling pathway that is also found in humans, future research could explore whether inhibiting this pathway could be used as a treatment for substance misuse, he says.
Hala Point neuromorphic computer is powered by Intel’s Loihi 2 chip
Intel Corporation
Intel has developed the world’s largest neuromorphic computer, a device that aims to mimic the behavior of the human brain. The company hopes it will be able to run more advanced AI models than traditional computers can, but experts say there are engineering hurdles to overcome before the device can compete with, let alone surpass, cutting-edge hardware.
Expectations for neuromorphic computers are high because they are inherently different from traditional machines. While regular computers use a processor to perform operations and store data in separate memory, neuromorphic devices use artificial neurons for both storage and calculation, much as our brains do. This eliminates the need to shuttle data between components, which can be a bottleneck in today’s computers.
This architecture has the potential to deliver much greater energy efficiency: Intel claims that its new Hala Point neuromorphic computer uses 100 times less energy than traditional machines when solving optimization problems, which involve finding the best solution to a problem given certain constraints. The architecture could also enable new ways of training and running AI models that use chains of neurons, similar to how a real brain processes information, rather than mechanically passing input through each layer of artificial neurons as current models do.
Hala Point contains 1.15 billion artificial neurons across 1152 Loihi 2 chips and is capable of 380 trillion synaptic operations per second. Despite this power, it occupies just six rack units in a standard server case, about as much space as a microwave oven, says Mike Davies at Intel. Larger machines will also be possible, Davies says. “We built a system of this scale because, honestly, one billion neurons was a good number. There were no special technical engineering challenges that caused us to stop at this level.”
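Taken at face value, those figures work out to roughly a million neurons per Loihi 2 chip and a few hundred thousand synaptic operations per neuron per second, as a quick back-of-envelope check shows:

```python
# Back-of-envelope arithmetic using only the figures quoted above.
neurons = 1.15e9   # artificial neurons in Hala Point
chips = 1152       # Loihi 2 chips
syn_ops = 380e12   # synaptic operations per second

print(f"neurons per chip:   {neurons / chips:,.0f}")      # ~998,264
print(f"syn ops per neuron: {syn_ops / neurons:,.0f}/s")  # ~330,435
```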
No other existing machine can match Hala Point’s scale, though DeepSouth, a neuromorphic computer due for completion later this year, is slated to be capable of 228 trillion synaptic operations per second.
The Loihi 2 chip is still a prototype that Intel has produced in small numbers, but Davies said the real bottleneck lies in the software layer: the processing required to take a real-world problem, translate it into a format that can run on a neuromorphic computer, and execute it. That process, like neuromorphic computing in general, is still in its infancy. “Software is a big limiting factor,” he says, which means there is still little point in building an even larger machine.
Intel has suggested that machines like Hala Point could eventually run AI models that learn continuously, rather than having to be trained from scratch for each new task as current models are. But James Knight at the University of Sussex, UK, dismissed this as “hype”.
Knight points out that current models like ChatGPT are trained using graphics cards running in parallel, meaning many chips can work on the same model at once. Because neuromorphic computers process a single stream of input and cannot be trained in parallel, it could take decades just to train something like ChatGPT on such hardware, he says, let alone devise a way for it to keep learning continuously once up and running.
Davies concedes that current neuromorphic hardware is not suited to training large AI models from scratch, but hopes that one day pre-trained models could be made to learn new tasks over time. “Although this is still in the research phase, this is the kind of continual learning problem that large-scale neuromorphic systems like Hala Point could solve in a very efficient way in the future,” he says.
Knight is optimistic that, as the tools developers need to write software for neuromorphic hardware mature, these machines could tackle many other computer-science problems while improving efficiency.
Neuromorphic computers may also offer a better path toward human-level intelligence, also known as artificial general intelligence (AGI), which many AI experts believe is out of reach for the large language models behind tools like ChatGPT. “I think that’s becoming a less and less controversial opinion,” Knight says. “The dream is that one day neuromorphic computing will allow us to create brain-like models.”
O.J. Simpson, who died of cancer last week at the age of 76, will be cremated in the coming days, and there are no plans to donate his brain for research, his attorney Malcolm LaVergne announced on Sunday.
LaVergne mentioned that there had been inquiries about studying Simpson’s brain for chronic traumatic encephalopathy (CTE), a degenerative brain disease linked to repeated head trauma in football players, but stated firmly that the entire body, including the brain, will be cremated.
Further details about the cremation and decision regarding brain research were first reported in The New York Post.
As the executor of Simpson’s estate, LaVergne mentioned plans for a small “celebration of life” gathering restricted to close friends and family. Simpson had three children from his previous marriages and was famously acquitted in the murders of his ex-wife Nicole Brown Simpson and Ronald Goldman in 1995.
Regarding financial matters, LaVergne expressed that he does not want the Goldman family, victims’ relatives, to receive any funds from Simpson’s estate. He acknowledged the need to handle the situation calmly and impartially.
Mr. Simpson’s debts, including those to the IRS, will be addressed as his estate is evaluated, and assets are inventoried to settle claims. Creditors will be prioritized for payment, with the Goldman family amongst them.
Despite potential legal battles over the estate’s assets, David Cook, an attorney for the Goldman family, emphasized that the main goal is post-acquittal justice and accountability for the deaths of Brown Simpson and Goldman.
Looking ahead, LaVergne seeks funding for a suitable memorial at Simpson’s burial site as specified in his will, emphasizing the importance of carrying out his wishes without contention.
If you’ve ever felt that your cognitive abilities aren’t as sharp as they used to be, perhaps struggling to recall the names of actors or politicians in the news, or finding mental arithmetic harder than it once was, you may have wondered about the state of your brain and whether it’s on a downward trajectory.
It’s important to consider these aspects early on because brain development typically peaks in your 20s, and then cognitive functions gradually decline with age. Additionally, there is a growing risk of dementia, particularly associated with diseases like Alzheimer’s, in aging populations. However, both cognitive decline and dementia risk can be influenced by what experts call “modifiable risk factors,” offering a beacon of hope that there are lifestyle changes you can make to maintain mental acuity and lower the risk of dementia.
Be mentally active and boost your cognitive reserve
Psychologists and gerontologists often talk about cognitive reserve, which refers to the brain’s ability to adapt to aging and disease challenges. People with high cognitive reserve can perform well on cognitive tests despite exhibiting biological markers of Alzheimer’s disease, like protein build-up that impairs brain function. Engaging in activities such as reading, learning a new language, solving puzzles, and playing musical instruments can help boost cognitive reserve and maintain mental agility.
Interact with others
While brain-training games may not have broad benefits beyond the specific tasks they target, socializing with peers has been found to be a potent brain-training activity. Social isolation is considered a major risk factor for dementia, emphasizing the importance of engaging in lively conversations, joining clubs, or volunteering to keep your brain active and healthy.
Stay physically active
Physical activity not only benefits cardiovascular health but also contributes to better brain function and reduced cognitive decline. Incorporating exercises like running, swimming, or even gardening into your routine can help maintain cognitive abilities and lower the risk of dementia.
Eat a healthy diet
Avoiding excessive saturated fat and eating plenty of fruits and vegetables can support brain health by helping the brain clear harmful byproducts and by providing essential nutrients. The Mediterranean diet, rich in fruits, vegetables, legumes, nuts, and olive oil, has been recommended for its brain-protective properties.
Stay curious
Personality traits like openness to experience are linked to better brain health and lower dementia risk. Activities that spark curiosity and awe can enhance cognitive abilities and mental flexibility. Incorporating habits like exploring new environments, trying new experiences, and enjoying cultural activities can promote brain health.
Think positively
Your mindset about aging can significantly impact your brain health. Maintaining a positive outlook, along with engaging in mentally stimulating activities and healthy habits, can contribute to long-lasting mental sharpness. Seeking out positive role models and adopting a proactive approach to brain health can help unlock your brain’s full potential.
Memory is a mysterious phenomenon. Some life events remain sharp in our memories no matter how long ago they occurred, while details from the previous day can quickly fade away.
A recent study published in the journal Science has uncovered the mechanism behind this phenomenon. Researchers have identified a system in the brains of humans and other mammals that determines which experiences are significant enough to be stored in long-term memory and which are forgotten.
Experiments conducted on mice demonstrated that during wakefulness, specific patterns of brain activity called “sharp wave ripples” occur in the hippocampus, the area responsible for memory formation. These patterns act as tags for important experiences, which are then transferred to long-term memory during sleep.
Although the study was carried out on mice, the lead author, Dr. Johnson, believes that the findings are applicable to humans as well, given the similarities in certain brain processes across mammalian species.
György Buzsáki, the Biggs Professor of Neuroscience at New York University Langone Health, emphasized the unconscious nature of this memory-consolidation process.
In the study, mice were rewarded with a treat after successfully navigating a maze, while their brain activity was monitored using implanted electrodes. The researchers observed that specific brain activity patterns observed during wakefulness were replayed during sleep, facilitating the conversion of important experiences into long-term memories.
This process highlights the crucial role that sleep plays in memory formation, as experiences deemed important during waking hours are transformed into lasting memories during rest.
According to the researchers, experiences that do not trigger sharp wave ripples are less likely to be stored in long-term memory.
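As a purely conceptual illustration (a toy model, not the study’s analysis code), the tag-and-replay idea can be captured in a few lines: experiences that happen to coincide with a sharp wave ripple are tagged while the animal is awake, and only tagged experiences are replayed and consolidated during sleep:

```python
import random

random.seed(1)

# Wakefulness: each of ten experiences has some chance of coinciding
# with a sharp wave ripple, which tags it as important.
experiences = [f"event_{i}" for i in range(10)]
tagged = [e for e in experiences if random.random() < 0.3]

# Sleep: only tagged experiences are replayed; repeated replay
# strengthens the trace until it enters long-term memory.
strength = {e: 0 for e in tagged}
for _ in range(5):            # five replay cycles during sleep
    for e in tagged:
        strength[e] += 1
long_term_memory = sorted(e for e, s in strength.items() if s >= 3)

print("tagged while awake:", tagged)
print("stored long-term:  ", long_term_memory)
```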
To improve the odds that an experience is retained, Dr. Buzsáki suggests taking breaks after significant experiences to give the brain time to consolidate them.
Long-term memory requires relaxation
Research indicates that intentional pauses after an experience can aid the formation of long-term memories. Dr. Buzsáki recommends engaging in relaxing activities afterward to give the brain a chance to generate the sharp wave ripples crucial for memory storage.
For example, going for a leisurely walk after watching a movie can improve your chances of remembering the film, because the idle time gives the brain an opportunity to encode the memory.
Dr. Daniela Schiller, a professor of neuroscience and psychiatry at the Icahn School of Medicine at Mount Sinai, highlighted the study’s intriguing finding that brain-activity patterns during rest resemble those recorded during the original experience.
Dr. Daphna Shohamy, director of the Zuckerman Institute at Columbia University, emphasized the importance of pauses and bursts of brain activity in memory formation, noting that these elements enhance the likelihood of experiences being stored in long-term memory.
In conclusion, the study provides valuable insights into the unconscious mechanisms behind memory formation and underscores the significance of rest and relaxation in preserving lasting memories.