Attention Deficit Hyperactivity Disorder (ADHD) is officially classified as a human condition. However, many dog owners have observed similar traits in their canine companions, such as hyperactivity, impulsivity, and a tendency to become easily distracted.
Research indicates that approximately 20 percent of dogs display ADHD-like behaviors. These are often the dogs that struggle in training classes, captivated instead by the instructor’s shoelaces or busy with rambunctious parkour.
While these lovable rogues are delightful, their behavior can make training challenging. Common signs of ADHD-like behavior in dogs include excessive barking, biting, chasing, and stealing.
If these symptoms hinder your dog’s daily activities—such as learning new commands or interacting positively with you—it may indicate an ADHD-like disorder.
A recent study reveals that different dogs experience ADHD-like traits variably. Higher instances of hyperactivity and inattention are particularly common in young male dogs that spend extended periods alone at home.
It’s essential to remember that different dog breeds exhibit varying behavioral traits. For instance, breeds like Cairn Terriers, Jack Russells, and German Shepherds display more impulsive behaviors, whereas Chihuahuas, Rough Collies, and Chinese Crested Dogs are less likely to show these traits.
If you find yourself dealing with a challenging dog, understand that their behavior is not intentional. Much like individuals with ADHD, dogs process their environment differently due to various genetic and environmental factors.
Fortunately, there are effective strategies to help manage these behaviors. Professional behavior therapy can be beneficial, but often, increased exercise and engagement can lead to significant improvements.
Short, frequent training sessions that use positive reinforcement (e.g., treats) to reward good behavior can yield excellent results. Additionally, calming enrichment activities like lick mats or puzzle toys can provide much-needed mental stimulation.
This article addresses the query from Rhys Brooks via email: “Does my dog have ADHD?”
For further questions, please reach out to us at questions@sciencefocus.com or connect with us on Facebook, Twitter, or Instagram (please include your name and location).
New research reveals that a remarkable collection of over 700 fossils from the late Ediacaran period indicates that significant animal groups, including the early ancestors of vertebrates, began diversifying millions of years earlier than previously believed.
Restoration of the Egawa biota. Image credit: Xiaodong Wang.
The Ediacaran-Cambrian transition marked one of the most crucial turning points in Earth’s biological history.
However, the fossil evidence presents a fragmented view of this significant change, as Ediacaran biological communities are quite different from those of the Cambrian, leaving key moments of evolution elusive.
Dr. Gaorong Li from the University of Oxford states, “Our findings bridge a critical gap in the narrative of early animal diversification.”
“For the first time, we show that complex organisms typically associated with the Cambrian existed during the Ediacaran, indicating they evolved much earlier than fossil records previously suggested.”
In their study, Li and colleagues analyzed over 700 specimens from recently identified fossils in Yunnan province, China.
This fossil group, dating back 554 to 539 million years, is part of the intriguing Egawa biota.
Unlike many Ediacaran fossil sites that predominantly showcase traces of life on sandstone, these fossils are preserved as carbonaceous membranes, mirroring preservation styles found in renowned Cambrian sites like Canada’s Burgess Shale.
Dr. Luke Parry from the University of Oxford commented, “This groundbreaking discovery offers insight into a transitional phase in biological communities. The unique characteristics of Ediacaran life paved the way for the recognizable groups we categorize today.”
“Upon first examining these specimens, we recognized their uniqueness and the unexpected nature of our findings.”
The fossil group includes some of the earliest known relatives of deuterostomes, a category which now encompasses humans and vertebrates such as fish.
Among the specimens are ancestors of modern starfish alongside their close relative, the acorn worm (Ambulacraria), characterized by a U-shaped body attached to the seafloor with a stalk and tentacles for food capture.
Dr. Frankie Dunn from the University of Oxford noted, “It’s captivating that such exotic organisms thrived during the Ediacaran period.”
“We’ve discovered fossils that are distant relatives of starfish and sea cucumbers, and the search for more continues.”
The bicephalic fossils from the Egawa biota suggest that chordates (the group that includes animals with backbones) also existed during this period.
Other noteworthy discoveries among the fossils include worm-like bilateral animals featuring complex feeding adaptations, as well as rare specimens believed to be early comb jellies.
Many specimens display unique anatomical features that do not correspond to any known Ediacaran or Cambrian species.
Dr. Ross Anderson from the University of Oxford stated, “Our findings suggest that the apparent scarcity of these complex faunas in other Ediacaran sites may highlight preservation discrepancies rather than an actual lack of diversity.”
“Carbonaceous compactions like those found in Egawa are uncommon in rocks of this age, indicating that similar communities may remain unpreserved elsewhere.”
For more on this pivotal discovery, refer to the research paper published in Science.
_____
Gaorong Li et al. 2026. Dawn of the Phanerozoic: The late Ediacaran transitional fauna of southwestern China. Science 392 (6793): 63-68; doi: 10.1126/science.adu2291
Michael Pollan: “Psychedelics have a way of staining the windshield of experience”
Casey Clifford/Guardian/eyevine
Author Michael Pollan, renowned for exploring plants, food, and psychedelics in bestselling works such as The Omnivore’s Dilemma and How to Change Your Mind, now shifts his focus to the complex topic of consciousness in his latest book, The World Appears: A Journey into Consciousness. Pollan delves into scientific and philosophical insights, weaving literary perspectives throughout. In a recent interview with New Scientist, he reflects on writing a book that often leaves him with more questions than answers.
Olivia Goldhill: Let’s begin with a challenging question: How do you define consciousness?
Michael Pollan: Consciousness is most simply defined as subjective experience, which is what distinguishes beings with awareness from inanimate objects. Having an experience means being aware of it, which leads us to consider what “subjectivity” really involves.
Another intriguing definition comes from the philosopher Thomas Nagel, who asked, “What is it like to be a bat?” Bats differ vastly from us, yet we can still imagine that they have experiences of some kind. If there is something it is like to be an organism, it possesses consciousness.
Traditionally, consciousness was thought to reside in the cortex, the brain’s latest evolutionary development. However, I’ve come to understand that consciousness often begins with emotional experiences—not merely cognitive thought. Researchers like Antonio Damasio, Mark Solms, and Anil Seth highlight that consciousness starts with basic emotions such as hunger or itchiness, emerging from the brainstem. This realization underscores that consciousness is an embodied phenomenon; we need vulnerable bodies and profound emotions to truly experience it.
You discuss the limited understanding of consciousness and the scientific challenges involved. Do we require a new scientific approach?
Current physical sciences maintain an objectivity that excludes the qualitative, first-person experience of consciousness. This bifurcation, dating back to Galileo, has confined subjective qualitative matters to theology. While subjective experiences are indeed vital, the adequacy of existing scientific tools to address them is debatable.
We must also analyze consciousness from within. The Blind Spot, a book that profoundly influenced my understanding, argues that science itself is a product of human consciousness: the problems we choose and the ways we measure them stem from our own awareness.
Thus, a novel scientific paradigm may be essential, one that incorporates first-person perspectives. One such effort is integrated information theory, which starts from five axioms about subjective experience and then looks for physical structures that could support it. The attempt is intriguing but has yet to be convincing.
You propose that plants possess memory and intelligence, even hinting at plant consciousness.
I differentiate between sensation and consciousness. Sensation entails awareness of the environment and the ability to assess whether changes are beneficial or detrimental, resulting in a basic form of awareness without self-awareness. I believe plants exhibit this capability.
My exploration into what some refer to as “plant neurobiology” yielded fascinating discoveries. Plants possess around 20 senses compared to our six; they navigate mazes, and when they detect the sound of caterpillars munching, they respond by injecting toxins into their leaves. They send signals to nearby plants alerting them to predators and selectively share resources with kin.
Interestingly, plants respond to the same anesthetics as humans. For instance, when Venus flytraps are exposed to anesthetics, they fail to react to nearby flies. This raises intriguing questions: what do plants lose in consciousness under anesthesia? This provokes thought regarding their cognitive capacities.
It may comfort some to hear your perspective that artificial intelligence lacks consciousness.
Specifically, I am discussing the imminent development of artificial intelligence models. While computers can mimic thoughts, they can’t replicate real emotions, which possess inherent qualitative aspects tied to our physical being.
In my book, I introduce Kingson Mann, who endeavors to create an AI with a “vulnerable body” designed to feel. When I inquired about the authenticity of such feelings, he expressed uncertainty.
How have your past investigations into plants and psychedelics informed your current research on consciousness?
My fascination with plants has roots in my earlier works, and they matter deeply to me. My psychedelic experiences also shaped this exploration. One profound moment occurred in my Connecticut garden, where I sensed a consciousness among the poppies, which seemed to gaze back at me with kindness.
My challenge remains: how to interpret these psychedelic insights. William James suggested we treat such experiences as hypotheses and seek further validation or contradiction. This perspective guided my journey.
Christof Koch recounts his radical psychedelic experience in my book, leading him to rethink established notions of consciousness tied strictly to the brain, illuminating the extraordinary potential of psychedelics in understanding consciousness.
Psychedelics influence how we perceive the world and can “stain the windshield of experience,” which makes it impossible to disregard consciousness. Once you grasp that concept, it can become an obsession.
I appreciate your thoughts on psychologist Russell Hurlburt’s experiment tracking thoughts, though you seem to dispute his finding that you had few of them.
While I may struggle to articulate my thoughts, I believe they exist and merit expression. James described this as a “hunch”—a threshold of understanding that may take time to articulate.
However, Hurlburt inferred that my inability to instantly articulate thoughts indicated a cognitive void I was filling in afterwards with situational detail. I found our discussions both intriguing and illuminating.
“Consciousness is a private space where we think whatever we want, and we offer it to businesses”
For over fifty years, Hurlburt has observed real variations in thought processes among individuals. We often assume the term “thinking” is universal, yet distinct forms exist—some think in words, others in images, and some experience what he terms “unsymbolized thinking.” Notably, verbal thinkers are fewer than often presumed.
Does contemplating consciousness enhance or diminish our consciousness?
Alison Gopnik distinguishes “spotlight consciousness” (focused attention) from “lantern consciousness” (open, exploratory awareness). I initially sought immediate answers to the consciousness dilemma. Yet, through discussions with my artist wife and the Zen teacher Joan Halifax, I learned the value of embracing uncertainty. Understanding consciousness is complex yet essential, and protecting our consciousness is paramount.
If comprehending consciousness proves potentially impossible, what motivates this pursuit?
Ultimately, the journey of discovery matters more than definitive answers. James’s insights into the intricacies of our minds captivated me, leading to a greater appreciation for previously overlooked aspects of consciousness. My hope is that readers finish the book more aware of their own consciousness than when they started it.
Cells transport substances by encasing them in membrane bubbles called vesicles that navigate to various locations within the cell. These vesicles merge with other vesicles to release their contents, a complex process requiring the seamless connection of two membranes without rupturing or leaking. Scientists have long theorized that during this fusion, the cell membrane enters a transient intermediate state, but direct visualization of this process within intact cells has remained elusive until now.
Researchers from the NIH and the University of Virginia embarked on a study to determine if the membranes of living cells create stable, observable structures that signify this intermediate state. They cultured multiple mammalian cell types, including those from humans, monkeys, mice, and rats, in nutrient-rich solutions within laboratory flasks kept in a 37°C (98.6°F) incubator to sustain their growth.
The research team placed between 80,000 and 100,000 cells on a specialized gold-coated platform optimized for high-resolution imaging. To preserve the natural state of the cells, they flash-froze them to immobilize the membranes. They then employed a technique known as cryogenic electron tomography to generate detailed cross-sectional images called tomograms.
Using these cross-sectional images, they reconstructed a 3D model of the cells at the nanometer scale, allowing visibility into the delicate structures of internal vesicles and the plasma membrane. Approximately 300 3D reconstructions showcased areas where membrane bubbles interacted and moved, particularly focusing on membrane contact sites where two vesicles or one vesicle and the cell’s plasma membrane are closely aligned.
Typically, a cell membrane comprises two layers of fat-like molecules that create a flexible barrier. However, the researchers uncovered a previously uncharacterized membrane structure that forms when the outer layers of two membranes merge into a continuous sheet while the inner layers remain separate. They identified a flat, circular area where the outer layers meet, forming a thin membrane bridge between vesicles, analogous to soap bubbles merging. This structure is referred to as a hemifusome.
The research team noted that hemifusomes are considerably larger and more stable than the ephemeral intermediate states posited by earlier studies. They interpreted this stability to suggest that hemifusomes are more than mere temporary fusion events; they may endure long enough to take part in vital cellular functions.
Additionally, they detected that some hemifusomes contained single lens-shaped droplets within the membrane at the point where the two vesicles meet. About half of the 308 cross-sectional images they analyzed revealed these droplets, averaging 40 nanometers in diameter, approximately 100 times smaller than the adjacent vesicles, and positioned close to the oily membrane interior.
These droplets, distinct from the surrounding membrane lipids, are believed to consist of a blend of lipids and proteins, referred to as proteolipid nanodroplets. The researchers posited that the consistent association between hemifusomes and these proteolipid nanodroplets might help stabilize hemifusomes or influence how the cell membrane is organized.
To investigate whether hemifusomes facilitate material movement within cells, the team introduced 5- or 15-nanometer gold particles into the cells. These particles were small enough to traverse the cell’s internal transport systems, which usually distribute nutrients and other molecules. Using a powerful microscope, they tracked the movement of the gold particles through the cell’s compartments; none entered hemifusomes, suggesting the structures are not involved in this kind of cellular transport.
In conclusion, the researchers posited that hemifusomes emerge when cell membranes merge or reshape, acting like temporary staging sites for membrane construction, repair, or rearrangement. Unlike existing models of membrane fusion and vesicle formation, these findings indicate that vital intermediate states can develop into stable and functional cellular structures.
The researchers propose that future studies should delve into the molecular composition of proteolipid nanodroplets and clarify how cells regulate the shift from hemifusomes to fully fused membranes. They also recommend exploring the roles of hemifusomes in vesicle formation, membrane recycling, and stress responses across various cell types.
The intense spring heat dome that has gripped the West for over a week is finally starting to shift. This extreme weather event has set over 1,500 temperature records across 11 states, according to Climate Central, a leading research organization in climate analysis.
The ongoing heatwave is causing climate scientists, irrigation managers, and local authorities to weigh the potential for a significant water crisis and to assess the unprecedented nature of this weather phenomenon. Even before the surge in temperatures, Western states were noting record low snowfall—a situation that has persisted, leaving many areas nearly devoid of snow.
Researchers have long established that climate change is likely to exacerbate heat waves. However, some scientists are exploring whether lesser-known climate factors could account for the exceptional longevity, intensity, and scope of this month’s heat events.
Fans of the Los Angeles Dodgers protect themselves from the sun during a spring training game in Phoenix on March 21. Ross D. Franklin/Associated Press
Jennifer Brady, a senior data analyst at Climate Central, noted that the heatwave’s extensive effects and duration qualify as an anomaly, “even given the climate change we’re currently experiencing, which many refer to as the new normal.”
Climate Central has developed a climate change index that assesses the influence of climate change on daily temperature averages, rating them from 1 to 5.
Around 29% of the country recorded maximum temperatures classified as a “5” by Climate Central—indicating that these temperatures are at least five times more likely to occur due to climate change. Historical data since 1970 shows that the region is experiencing unprecedented temperature anomalies.
“This is unprecedented and potentially very dangerous,” Brady stated.
Crowds flock to Baker Beach near the Golden Gate Bridge in San Francisco on March 16 during the ongoing heatwave. Tayfun Coskun / Anadolu via Getty Images
The World Weather Attribution group, a team of scientists who publish rapid statistical analyses of climate impacts, confirmed that climate change played a significant role in the early March heatwave. They stated that these temperatures would have been virtually impossible without climate change, with some areas experiencing temperatures 20 to 30 degrees Fahrenheit above average.
In a report released on March 20, the group asserted that climate change is raising the intensity of heatwaves in the West by more than 7 degrees Fahrenheit, making them 800 times more likely to happen compared to a world without global warming.
Climate change is shifting temperature distributions globally. According to Karen McKinnon, an associate professor at UCLA’s Department of Atmospheric and Oceanic Sciences, land is heating up more rapidly than oceans, with the western U.S. warming faster than other regions.
While the global average temperature last year exceeded pre-industrial levels by 1.47 degrees Celsius (2.65 degrees Fahrenheit), McKinnon noted that “depending on your location, we may have already encountered warming of 4 to 5 degrees Fahrenheit.”
Families leave Aliso Beach at sunset amidst a record heatwave on March 20 in Laguna Beach, California. Kevin Carter/Getty Images
Researchers are increasingly curious if factors beyond base-level warming are enhancing the severity of heatwaves like this one. Some are investigating whether climate changes are affecting atmospheric dynamics.
This month’s heatwave resulted from a phenomenon known as a heat dome, which occurs when high pressure and clear skies stagnate over a region, trapping heat like a lid on a pot, an effect intensified by global warming.
Scientists propose that climate change is shifting large-scale atmospheric circulation patterns, contributing to the prevalence of heat domes and influencing the jet stream’s behavior. The polar jet stream generally separates cold Arctic air from warmer southern air, and changes in its pattern may lead to extreme weather events.
Researchers speculate that climate change has enlarged jet stream waves, leading to more significant shifts in temperature across the continental U.S.
McKinnon stated that while scientists are probing these trends, conclusive answers remain elusive. Competing theories are surfacing, and it may take years to establish a consensus on these critical climate questions.
“This poses a million-dollar question,” McKinnon said. “Are these atmospheric changes primarily driven by climate change?”
The phrase “Money can’t buy happiness” is a popular notion, but is it true? Scientifically, the relationship between wealth and happiness is complex.
A study from the University of Bath explores “The relationship between income and happiness.”
Up to a certain threshold, money can contribute to happiness. However, this correlation becomes less pronounced beyond a particular point.
What Truly Makes Us Happy?
At a fundamental level, happiness stems from fulfilling our basic biological needs.
Humans require essentials like food, water, air, sleep, and safety for survival. Our brains reward us when we obtain these necessities, recognizing their biological importance.
Our brains also understand that money facilitates access to these essentials.
A 2007 Wellcome Trust study reveals that money can boost our motivation and sense of well-being—two crucial components of happiness.
However, more money does not equate to more happiness. While it may seem vital, its rewarding capacity has limits.
Photo credit: Getty
For instance, eating provides pleasure until we feel full; overindulgence leads to discomfort. Similarly, excessive comfort can lead to isolation.
Moreover, our brains adapt to routine stimuli, as shown in a 2011 study by Dr. Ruth Krebs, demonstrating that surprising experiences boost happiness.
Unexpected financial windfalls tend to bring greater joy than regular income.
For those in financial distress, acquiring money can be incredibly rewarding. However, once financial stability is achieved, the joy from money diminishes, as pointed out in a study from San Francisco State University, which shows how rewards lessen with increased wealth.
Experiences—like travel, forging new relationships, and helping others—tend to produce more happiness.
While money often finances these experiences, it serves more as a means to happiness rather than a direct source.
Is There a Specific Income Level for Happiness?
The notion of a “happiness threshold” suggests that beyond a certain income, additional money won’t enhance happiness. This becomes increasingly relevant today.
As wages stagnate and costs rise, the question of how much income is essential for happiness is critical.
However, the ideal income varies widely among individuals, making it challenging to pinpoint a universal amount.
Photo credit: Getty
Some might find fulfillment in modest means, while others feel they’ll never reach “enough.”
The University of Bath study indicates that cultural comparisons can show how learned behaviors affect the relationship between wealth and happiness.
Interestingly, individuals with substantial wealth can sometimes experience less happiness than those with fewer financial resources, often due to anxiety.
Can Excess Wealth Lead to Unhappiness?
Interestingly, too much money might actually lead to unhappiness. Research indicates that being compensated for doing what you love can sometimes diminish overall happiness. This accounts for why some avoid turning a beloved hobby into a profession.
In today’s world, money is dynamic and rarely stagnant. Wealth translates to various assets, from investments to savings, which are often volatile.
This volatility is influenced by political and economic factors, leaving individuals with limited control over their financial situation. Such uncertainty can lead to increased stress, impacting happiness.
Instead of saying, “Money can’t buy happiness,” it might be more accurate to assert, “Money can buy safety and security,” which pave the way for happiness.
Ultimately, the connection between money and happiness is subjective, relying heavily on personal experiences and upbringing.
Feedback is New Scientist’s popular sideways look at the latest science and technology news. Share your thoughts with us at feedback@newscientist.com.
Exploring Unique Units of Measurement
In our recent exploration of the world’s most unusual units of measurement, Feedback presented an interesting case involving polar bears as units of snowpack. Reader Steve Tees inquired about the meaning of “shed load” in the context of traffic delays.
Since then, we’ve received an influx of emails suggesting alternative phrases to express large quantities.
Two readers, Bryn Glover and John Newton, both linked the term to highway incidents, commenting, “The truck was certainly dropping a load.”
F. Ian Lamb proposed viewing “shed loads” as examples of “endogenous relative scaling (ERS) units,” indicating that individual perceptions of size can vary widely based on personal experiences. For instance, £1,000 could seem insignificant to a millionaire but immense to someone living in poverty. Ian invites readers to share more examples of ERS units.
William Croydon provides another perspective, noting that “shed” is a term utilized in nuclear physics. In particle physics, measuring small particles colliding requires a unit with a tiny cross-sectional area.
According to William, the “barn” unit is 100 square femtometers (10⁻²⁸ square meters), which is approximately the cross-section of a uranium atom’s nucleus. Essentially, this small measurement corresponds to the ease with which a nuclear reaction may be initiated.
William also mentioned that smaller units, or “sheds,” have been discussed, albeit with uncertainty regarding their dimensions. Online research led to two variants: the “outhouse,” which is one millionth of a barn, and the “yoctobarn,” defined as 10⁻²⁴ barn and humorously dubbed a shed in a barn.
In any case, as William points out, even numerous sheds would be “too small to cause problems on the highway.”
Tony Lewis humorously suggests that while Steve Tees wants to know the size of the “xxxx warehouse” blocking traffic, it must indeed measure the equivalent of “xxxx warehouses.”
Pencils and Shakespeare
Feedback has been reading a book by puzzle adviser Rob Eastaway, which highlights how William Shakespeare may have been influenced by the mathematics of his era.
Feeling a kinship with Shakespeare, Feedback notes the recent surge of interest in various adaptations of Hamlet, including Riz Ahmed’s modern take and the gender-swapped Scarlet, all of which delve into themes of moral corruption.
Interestingly, Rob’s book mentions that graphite was in use during Shakespeare’s lifetime for writing instruments, suggesting the Bard may have opted for a pencil over a quill for some of his witty compositions.
This was reported in Stationery News, headlined “2B or not 2B?” The article suggests any pencil Shakespeare used would likely have been of pure graphite, implying it would have been 9B rather than 2B.
The Enigma of Hexagonal Water
Reader Joseph Orechino shared an email promoting the supposed health benefits of “hexagonal water,” claiming it is “10 times healthier than lemon water.”
This type of water allegedly undergoes a treatment that arranges its molecules in a hexagonal formation, though many experts agree that such structures are unstable and short-lived.
Despite the scientific skepticism, the allure of hexagonal water persists, with our archives revealing past attempts to create wine from it and other quirky concepts like “vibrating interactive water.”
Feedback poses an intriguing question: why hexagons? To maximize water’s potential, a pentagram might be a more magical arrangement, although it might lead to accidental symbolism when a bottle is turned upside down.
Have a story for Feedback?
Share your insights with us at feedback@newscientist.com. Don’t forget to include your address. Explore this week’s and previous feedback on our website.
U.S. and German researchers have discovered a unique fungal protein capable of freezing water at relatively warmer subzero temperatures. This breakthrough opens up exciting possibilities for safer cloud seeding, enhanced climate models, and innovative advancements in food preservation and medicine.
Mortierellomycetes and Umbelopsidomycetes fungi from freshwater ecosystems in Korea. Image credit: Goh et al., doi: 10.4489/kjm.20230018.
In cloud seeding, particles known as ice nucleators are introduced into clouds to promote the transformation of cloud water into ice crystals.
As more water molecules adhere to these crystals, they grow in size.
This process creates a snowball effect, where ice crystals become heavier, descend to the ground, and melt into rain as they traverse the atmosphere.
Typically, conventional ice nucleators like silver iodide are used, which are highly toxic.
Professor Boris Vinatzer and his team at Virginia Tech suggest that these fungal protein molecules could present a safer alternative.
“If we can efficiently produce these fungal proteins in large quantities, we could enhance cloud seeding safety,” Professor Vinatzer stated.
The researchers also uncovered that the fungal genes responsible for ice nucleation proteins likely originated from bacterial species through horizontal gene transfer, a process that occurred hundreds of thousands of years ago.
“While we know fungi can acquire bacterial genes, this isn’t commonplace,” explains Professor Vinatzer.
Since the early 1990s, researchers have been aware of fungi’s ability to form ice nuclei. Recent advancements in DNA sequencing and computational biology have enabled the sequencing of genomes from a specific fungal family, Mortierellaceae, revealing the genes coding for ice nucleation proteins.
The function of the acquired genes for fungi is still unclear, but it is evident they have enhanced their capabilities over time.
This genetic modification offers significant human benefits.
The ice-nucleating proteins produced by fungi are distinct from those produced by bacteria in that they are cell-free and water-soluble.
These characteristics make fungal molecules highly attractive for bioinspired refrigeration technologies and artificial weather manipulation.
For instance, in frozen food production, fungal molecules present a safer option compared to bacterial ones since fungi only secrete ice-nucleating proteins, eliminating the need for entire bacterial cells.
“This is a major advantage in food production, allowing use of a single well-defined protein while omitting unnecessary components,” Professor Vinatzer added.
“We have the potential to create safe and effective additives for frozen food preparation.”
Additionally, fungal ice nucleation may prove beneficial in the cryopreservation of cells and tissues, such as sperm, eggs, and embryos.
“Utilizing fungal ice nucleators—relatively small molecules—enables faster freezing of water around cells, safeguarding delicate cellular structures,” stated Professor Vinatzer.
“This approach is not feasible with bacteria since the entire bacterial cell must be added.”
Ice nucleation plays a crucial role in climate models, impacting predictions of how much radiation is reflected back into space by clouds versus what reaches Earth. Ice presence in clouds allows more radiation to reach our planet.
With the identification of these fungal molecules, determining their quantity in clouds becomes more manageable.
In the long term, this pioneering research could significantly enhance climate modeling accuracy.
For further details, refer to the study findings published in the journal Science Advances.
_____
Rosemary J. Eufemio et al. 2026. A previously unrecognized class of fungal ice nucleoproteins with bacterial ancestry. Science Advances 12 (11); doi: 10.1126/sciadv.aed9652
Ultra-processed foods are often high in fat and sugar.
Anastasia Krivenok/Getty Images
In recent years, health experts, scientists, and media outlets have increasingly highlighted the dangers of ultra-processed foods (UPFs). These foods are often linked to a surge in chronic diseases in today’s society. But what exactly are UPFs? Why should you be concerned about them? Let’s delve deeper.
Defining UPFs can be surprisingly challenging. Historically, humans have modified foods such as grains through processes like milling, salting, and fermenting for better taste and preservation. The concept of ultra-processed foods was coined in the late 2000s by Carlos Monteiro at the University of São Paulo, Brazil. UPFs are those derived from breaking down whole foods into parts like sugar, fat, and fiber, which are then chemically modified and often contain various additives. Common examples include breakfast cereals, biscuits, fish fingers, ice cream, mass-produced breads, and sugary drinks.
Until recently, dietary advice focused primarily on nutritional content. We’ve been instructed to limit foods high in salt, sugar, and saturated fats while opting for fiber-rich, vitamin-packed alternatives. The UPF concept has shifted this conversation, suggesting that the level of processing matters more than just nutrient content. Countries like Brazil, Belgium, and New Zealand have revised their dietary guidelines to discourage the consumption of UPFs.
Is there substantial evidence that UPFs harm health? Research indicates that diets rich in UPFs correlate with severe health risks, including cancer, diabetes, dementia, heart disease, and obesity. However, many of these studies only show correlation, not causation. Assessing the specific impacts of diet against other lifestyle and environmental factors—like poverty and pollution—can be complex. Furthermore, many studies rely on surveys, which can lead to inaccuracies in dietary reporting.
One of the most credible pieces of evidence comes from a 2019 randomized trial. In this short-term study, 20 participants ate a diet high in UPFs or one of unprocessed foods for two weeks, then switched. The two diets were matched for calories and nutrient content, and participants were provided with meals and snacks and allowed to eat as much or as little as they liked.
The results were striking: those on UPF diets consumed around 500 additional calories daily, gaining nearly 1 kilogram over two weeks, whereas those on unprocessed diets lost just under a kilogram. This suggests that the appeal of UPFs often leads to excessive caloric intake due to enhanced flavor and palatability.
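As a rough back-of-the-envelope check on those figures: an extra 500 kilocalories a day over 14 days amounts to about 7,000 kilocalories, close to the roughly 7,700 kilocalories conventionally ascribed to a kilogram of body fat, so a weight swing of about a kilogram is the size of effect you would expect.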
Some experts suggest that UPFs could pose other health risks, such as contamination from factory processes. Furthermore, many contain additives like emulsifiers, which may potentially be harmful. Studies indicate that UPFs can disrupt the microbiome and promote inflammation. Advocates argue for stricter regulations on UPFs, akin to those for tobacco products, including clear warnings on packaging and advertising limitations.
However, critics claim the evidence isn’t robust enough to justify such measures. They argue that the UPF classification is too broad, potentially labeling some healthy foods, like yogurt and whole-grain breads, as unhealthy. Nutrition experts often struggle to categorize foods by processing levels, leading to confusion among the public. Additionally, not everyone can consistently prepare healthy meals, and harsh criticism of UPFs might eliminate accessible nutrition options.
So, how concerned should we be about UPFs? While they do encompass many unhealthy foods and tend to encourage overeating, most individuals could benefit from minimizing UPF intake while increasing whole food consumption. However, complete avoidance is likely impractical and unnecessary. Aim to reduce intake, diversify your diet, and prepare your meals when possible—yet enjoy the convenience of ready-made options occasionally without guilt.
Heat Wave of 2023: A Catalyst for Devastating Wildfires in Greece
Image Credit: Sakis Mitrolidis/AFP via Getty Images
In recent years, global temperatures have soared beyond predictions, igniting intense discussions among climate scientists. There is widespread agreement that global warming is continuing. However, opinions diverge: some experts argue warming is accelerating more than current climate models forecast, while others posit the recent surge is just a natural variation that will soon subside.
The implications of this debate are critical: if the acceleration is robust, the timeline to mitigate or adapt to catastrophic climate impacts may be shorter than expected.
“Ultimately, this is a question of how severe climate change will become,” states Zeke Hausfather, a researcher from Berkeley Earth, a nonprofit organization in California.
The Earth used to warm at a stable rate of approximately 0.18°C per decade until the 2010s, but recent data indicates a slight uptick in this rate.
2023 was the hottest year on record, exceeding expectations by 0.17°C, and was accompanied by alarming climate events: catastrophic floods in Libya, record-breaking cyclones in Mozambique and Mexico, and unprecedented wildfires in Canada, Chile, Greece, and Hawaii.
Notably, in 1988, James Hansen, now at Columbia University, presented groundbreaking testimony to Congress highlighting that human activity, rather than natural fluctuations, was the primary driver of climate change. He and his colleagues now claim that since 2010, the warming rate has escalated to about 0.32 degrees Celsius per decade.
This acceleration, they argue, is largely due to a “Faustian bargain” between humans and aerosol pollution. While sulfur aerosols counteract warming by reflecting sunlight, this temporary reprieve masks the true impact of carbon dioxide emissions.
As global sulfur emissions are being curbed, this hidden warming is emerging, intensifying climate change implications. China, for example, initiated a “war on pollution” around the 2008 Beijing Olympics, leading to a significant reduction in sulfur aerosol emissions by at least 75%.
Simultaneously, the International Maritime Organization has imposed strict regulations on sulfur emissions from shipping. With reduced aerosols at sea resulting in fewer reflective clouds, the trend is further contributing to warming.
Consequently, global sulfur dioxide emissions have declined by 40% since the mid-2000s. “With cleaner air, more solar radiation is penetrating our atmosphere,” explains Samantha Burgess at the European Union’s Copernicus Climate Change Service.
This trend escalated in 2024, a year that was even hotter than 2023, surpassing the alarming threshold of 1.5°C above pre-industrial levels. Strikingly, such temperatures threaten the global goals outlined in the Paris Agreement.
Interestingly, despite most scientists agreeing on the acceleration of global warming due to reduced aerosol emissions, perspectives diverge on the extent. Hansen and his team estimate a rate of 0.32°C per decade—a figure that exceeds the United Nations Intergovernmental Panel on Climate Change’s estimate of 0.24°C and the latest climate models’ average of 0.29°C.
Natural fluctuations also significantly influence Earth’s temperature. For instance, the sun has recently been near the maximum of its 11-year activity cycle, resulting in slightly more sunlight reaching Earth.
In 2022, a massive undersea volcano erupted near Tonga, releasing 146 million tons of water vapor—a greenhouse gas—into the stratosphere while simultaneously emitting sulfur aerosols that temporarily cooled the atmosphere.
Subsequently, a strong El Niño developed in 2023 and 2024. El Niño is a natural climate phenomenon characterized by weakened trade winds, leading to warmer waters in the Pacific Ocean and heightening global temperatures.
To accurately assess the acceleration of global warming, scientists must disentangle natural variability from long-term trends in observed temperatures, building models that reflect emerging patterns. The lesser the impact of natural variability, the more pronounced the acceleration becomes.
Recently, a statistical analysis conducted by Stefan Rahmstorf from the University of Potsdam in Germany and statistician Grant Foster found that global warming has accelerated to approximately 0.36°C per decade since 2014.
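To make the idea of a changing decadal trend concrete, here is a minimal sketch that fits separate least-squares slopes before and after a breakpoint year. The temperature series is synthetic, generated to have roughly the pre- and post-2014 rates quoted above; it stands in for, and should not be confused with, the real observational record.

```python
# Minimal sketch: fit separate linear warming trends before and after a
# breakpoint year, using made-up annual temperature anomalies (not real data).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2025)

# Synthetic series: ~0.18 °C/decade before 2014, ~0.36 °C/decade after, plus noise.
anoms = np.where(years < 2014,
                 0.018 * (years - 1980),
                 0.018 * (2014 - 1980) + 0.036 * (years - 2014))
anoms = anoms + rng.normal(scale=0.08, size=years.size)

def decadal_trend(yrs, vals):
    """Least-squares slope, converted from per-year to per-decade."""
    return 10 * np.polyfit(yrs, vals, 1)[0]

print(f"pre-2014 trend:  {decadal_trend(years[years < 2014], anoms[years < 2014]):.2f} °C/decade")
print(f"post-2014 trend: {decadal_trend(years[years >= 2014], anoms[years >= 2014]):.2f} °C/decade")
```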
However, Michael Mann from the University of Pennsylvania argues that Rahmstorf and colleagues might overstate aerosol impacts and underestimate natural variability, asserting that minimal acceleration has occurred since the 1990s.
“The recent warmth aligns with standard climate model simulations shaped by the 2023-2024 El Niño event, without necessitating extraordinary explanations,” Mann stated.
Unexpected climate feedback loops may also be factoring into recent temperature rises. One of the most significant uncertainties lies in the behavior of clouds, which can’t be accurately captured in climate models due to their small scale and scattered nature.
A study by Helge Goessling at the Alfred Wegener Institute indicates that approximately 0.2°C of the 1.5°C warming in 2023 can be attributed to a reduction in low-level clouds. Some of this cloud reduction stems from decreased sulfur pollution, while other factors may involve “new low cloud feedback,” according to researchers.
Typically, a temperature inversion creates a situation where cold, moist air resides over subtropical oceans, separated from warm, dry air above. However, as climate change elevates the temperature of this cold air, the inversion layer may collapse, potentially reducing cloud cover, Goessling explains.
If the acceleration of warming primarily arises from sulfur reduction, climate change might taper off in future decades once sulfur pollution reaches negligible levels. Conversely, unleashed climate feedback loops could propel temperatures even higher.
This suggests potential underestimations regarding climate sensitivity—the degree of warming linked to increases in atmospheric CO2.
“The worst-case scenario involves unexpected cloud feedback mechanisms not envisioned by models, indicating that our climate may be more sensitive than previously predicted,” warns Brian Soden from the University of Miami, Florida.
Current climate policies suggest the world may experience a rise of 2.7°C this century. However, there is potential variability in these predictions, with a possible increase of up to 3.7°C. Without significant reductions in carbon emissions, catastrophic impacts could become more frequent.
“A rise of 3.7 degrees Celsius could render certain areas uninhabitable,” said Hausfather. “While 2.7°C presents its own challenges, some regions may still adapt to this change.”
Ultimately, fossil fuel emissions are on the rise, and reversing this trend is essential for mitigating adverse effects, Burgess emphasizes.
“Global warming is progressing faster, and we’re losing time to implement ambitious measures aimed at decarbonizing society,” she concluded.
In an episode of Friends, Phoebe (left) and Joey engage in a profound philosophical discussion
Photo 12 / Alamy
If you’re a fan of Friends, you may recall a specific episode where aspiring actor Joey Tribbiani, portrayed by Matt LeBlanc, hosts a charity telethon on PBS. “A bit of good for PBS and some TV exposure—it’s Joey’s favorite calculation!” he humorously states.
Meanwhile, Phoebe Buffay, played by Lisa Kudrow, challenges him: “This isn’t a good deed. I want to be on TV—it’s totally self-serving.” Their debate sharpens as Joey argues that all acts of kindness stem from selfish motives, while Phoebe searches for examples of genuine altruism.
This dynamic resonates with insights from recent studies on “contempt for good deeds,” which highlight our innate skepticism toward the selflessness of others. Like Phoebe, we often suspect do-gooders of ulterior motives and may end up criticizing them more harshly than people acting solely out of self-interest.
Take, for instance, the well-known public goods game. In this experiment, participants are given small amounts of money, with an option to contribute to a communal pot. As interest accrues, the overall value increases, benefiting everyone involved.
While contributing maximizes everyone’s gain, there’s a risk: selfish individuals can exploit the pot while contributing little. Surprisingly, generous contributors often face backlash from peers, who feel that such selfless actions cast them in a bad light. “When asked about their resentment, many said: ‘Nobody else is doing that’—and it’s true. Their generosity makes the rest of us look inadequate,” notes Nicola Raihani, a psychologist at University College London, in her book The Social Instinct.
In some scenarios, players can even pay to punish those displaying altruistic behavior, demonstrating our competitive nature and suspicion of those attempting to elevate their status through philanthropy.
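To see why free-riding pays in this setup, here is a minimal sketch of a single round of the game with invented numbers; the endowment, multiplier, and group size are illustrative placeholders rather than parameters from any particular study.

```python
# Minimal sketch of one round of a public goods game with illustrative numbers.
def payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Each player keeps whatever they did not contribute, plus an equal
    share of the multiplied communal pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Three generous players and one free-rider:
print(payoffs([10, 10, 10, 0]))  # [12.0, 12.0, 12.0, 22.0]: the free-rider ends up best off
```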
Interestingly, our judgments often become harsher in altruistic settings. For instance, consider a friend, Andy, who volunteers at a homeless shelter. Although he appears genuinely concerned, he actually has a crush on the manager, Kim; by disguising his intentions, he ultimately succeeds in dating her.
Surprisingly, research indicates that we judge such motives more harshly in altruistic contexts than in less charitable ones. One study suggests that we view Andy more negatively than a barista who similarly seeks to build rapport with their supervisor. This skewed perception exemplifies what’s known as the “dirty altruism effect.”
This idea is examined in depth in a paper by Sebastian Hafenbreidl at the University of Navarra, Spain. His research points to unconscious evaluations in which the social rewards for goodwill are weighed against the cost of those actions. He found that what tarnishes altruistic actors isn’t merely self-interest but the perception that they seek undeserved social rewards, undermining their image as genuine contributors.
In one of his experiments, participants rated Andy, who volunteered at a homeless shelter or a coffee shop. Results showed that Andy’s volunteering was perceived as less moral when he was suspected of ulterior motives compared to his work as a barista. Interestingly, confiding his true intentions led participants to judge him more favorably.
Further validating his findings, Hafenbreidl explored a scenario involving Tom, a Maldives resort owner who spends $100,000 on beach clean-up efforts. Participants rated Tom as less moral when he publicized the clean-up for business gain than when he kept it to himself.
Beach clean-ups may be perceived as selfish if personal gain is involved
Fitria Nuraini/Shutterstock
Some individuals may volunteer simply to feel good, which although still selfish, is often judged less harshly than those who seek social accolades from their altruism. Interestingly, Hafenbreidl’s study found that individuals who donate for self-fulfillment are viewed as more moral than those attempting to bolster their reputation, though not as favorably as those who claim no ulterior motives.
This notion might resonate with Phoebe. By the end of the Friends episode, she decides to donate to Joey’s telethon, despite her aversion to PBS, demonstrating that her actions still brought joy to Joey, thus proving her point.
Perhaps Joey was onto something: true altruism might not exist. Personally, I welcome the idea of forgiving those whose self-serving intentions lead to more kindness in the world—after all, there are certainly worse motivations than that.
David Robson’s latest book is The Law of Connection: 13 Social Strategies That Will Change Your Life. If you have a question for David, reach out at: www.davidrobson.me/contact
Simple measurements don’t always tell the whole story
Lee Charlie/Shutterstock
I consider myself healthy—enjoying a balanced diet rich in fruits and vegetables, passionate about fiber, and dedicated to rock climbing twice weekly. However, when I calculated my body mass index (BMI)—weight divided by height squared—I was shocked to discover I am classified as overweight.
For many, this revelation can be alarming, especially for those who have had a past obsession with weight. But how concerned should you truly be about your BMI?
It’s essential to understand that BMI is not a true measure of health. Developed by the 19th-century mathematician Adolphe Quetelet for tracking population metrics, it does not take individual health into account. While it gained traction in the 1970s as an easy method for assessing body fat levels, it falls short of providing a comprehensive health picture.
Since the World Health Organization endorsed BMI in 1997 as a health assessment tool, it has become ingrained in medical practice. Classifications based on BMI include underweight (below 18.5), healthy weight (18.5 to 24.9), overweight (25 to 29.9), and obesity (30 and above). While this categorization aids in determining treatment eligibility, it has significant flaws.
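As a concrete illustration of how the formula and the cut-offs above fit together, here is a minimal sketch; the thresholds are the WHO bands quoted in this article, and the example person is invented.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Map a BMI value onto the WHO bands mentioned above."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy weight"
    if value < 30:
        return "overweight"
    return "obese"

# Example: a 1.70 m person weighing 75 kg
score = bmi(75, 1.70)
print(round(score, 1), bmi_category(score))  # 26.0 overweight
```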
The primary issue is that BMI fails to differentiate between bone, muscle, and fat. A muscular individual might rank as overweight despite being fit and healthy. For instance, my own journey of gaining muscle strength through rock climbing contributed to my BMI categorization.
Conversely, individuals maintaining a ‘healthy’ BMI can still experience health issues. Conditions such as amenorrhea can stem from insufficient body fat, leading to serious health consequences like brittle bones and cardiovascular diseases.
Additionally, BMI does not consider fat distribution, ignoring the risks associated with visceral fat, which primarily surrounds internal organs. Studies indicate that this type of fat is linked to higher risks of conditions such as heart disease, hypertension, and type 2 diabetes.
Though BMI isn’t entirely without merit, alternative methods provide more accurate health assessments. Research highlights the waist-to-hip ratio as a superior indicator, predicting heart attack risk more effectively. Studies also support its role as a better predictor of mortality.
The weight-adjusted waist index offers another promising metric, highlighting visceral fat while improving on BMI. The Body Roundness Index (BRI) uses height and waist circumference to assess body shape, and it also yields superior predictions of total and visceral fat.
If weight is a concern, considering these alternatives is more beneficial than solely relying on BMI. However, I advocate for prioritizing healthy lifestyle habits—such as consuming a diverse range of fruits and vegetables, nurturing social connections, ensuring ample sleep, and engaging in regular physical activity—over fixating on numerical values. That’s the approach I strive to maintain!
A 74-million-year-old leg bone unearthed from a fossil bed in New Mexico offers groundbreaking insights into the origins of Tyrannosaurus rex, according to a recent study published in Scientific Reports.
This discovery supports the theory that Tyrannosaurus did not migrate from Asia, but instead originated in what is now the American Southwest. This shift in understanding implies that the group evolved into giants much earlier than previously believed.
The shin bone, found in the Kirtland Formation of New Mexico and dating to the late Campanian period, measures 96 centimeters (3.1 feet) long—approximately 84 percent the size of the largest known Tyrannosaurus specimen’s tibia.
Based on its measurements, researchers estimate that the animal weighed around 4,700 kg (10,400 lb), making it the largest known Tyrannosaurus of its time—roughly 50 percent heavier than its contemporary rivals.
The researchers propose three possible origins for the bone: it may belong to a particularly large theropod dinosaur, identified as Vista hebersol; it could represent a newly recognized lineage of giant tyrannosaurs; or it might be an early member of the Tyrannosaurini, related to Tyrannosaurus and its closest relatives.
Of these theories, the authors believe the last is the most plausible. Lead researcher Dr. Nicholas Longrich from the University of Bath noted that the bones closely resemble those of Tyrannosaurus.
“This sounds like Tyrannosaurus,” he remarked in an interview with BBC Science Focus. “If these bones were found in the same beds we know Tyrannosaurus were found, no one would doubt it.”
This bone belonged to an animal that predates Tyrannosaurus by 8 to 9 million years – Photo credit: Nick Longrich
This suggests that the Tyrannosaurus lineage may have originated in southern North America, with connections to the giant tyrannosaur Tyrannosaurus mcraeensis, identified from the slightly younger Hall Lake Formation in New Mexico. Longrich discovered this latest bone while photographing specimens on a museum shelf.
The clustering of Tyrannosaurus-like remains in the American Southwest indicates that this lineage likely evolved in that area millions of years before dispersing further north across the continent.
Further excavations of the Kirtland Formation may help clarify the ownership of this bone. Longrich expressed that “the potential for new materials to be discovered is very high,” noting that teeth might be a promising avenue for discovery due to their superior preservation compared to bones.
A more complete skeleton would allow researchers to formally name the species and determine if it represents a direct ancestor of Tyrannosaurus or an early relative.
Cells utilize their internal DNA to produce essential products, such as proteins, through a process termed gene expression. However, scientists and health organizations have identified that gene expression datasets often suffer from inadequate patient samples and excess genes per sample, creating significant challenges in the global fight against cancer. This discrepancy hinders the ability to identify and prioritize critical changes in gene expression that differentiate cancer cells from healthy ones, a phenomenon referred to as the curse of dimensionality.
While machine learning techniques can analyze existing patterns within these expansive datasets to classify samples as cancerous or non-cancerous, this presents additional hurdles. Clinicians are often skeptical of machine learning conclusions due to a lack of understanding regarding model decision-making processes, leading to what is known as the black box problem. Consequently, researchers are striving to develop methodologies that clarify how these models derive their predictions.
A collaborative research team across multiple institutions in Africa concentrated on explicating breast cancer model predictions. They accessed publicly available gene expression data from a global database known as The Cancer Genome Atlas, which compiles data on approximately 20,000 genes from 1,208 breast cancer samples. Their primary objective was to isolate a select few genes from those 20,000 that could reliably predict cancer presence in tissue samples.
Initially, the researchers refined their dataset to 3,602 genes that exhibited differential expression between breast cancer and healthy cells. They then implemented an algorithm to experiment with various gene combinations, aiming to identify the smallest set of genes that consistently yielded promising results. This process is analogous to conducting thousands of mini-races with different runners to determine which runner consistently finishes first, despite all ultimately reaching the finish line.
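The study’s exact search procedure is not spelled out here, but the “mini-races” analogy maps naturally onto a repeated random-subset search. The sketch below is only an illustration of that idea: the data are random placeholders standing in for the 3,602-gene expression table, and the subset size, iteration count, and classifier are arbitrary choices rather than details from the paper.

```python
# Illustrative sketch of a random-subset gene search (not the study's actual algorithm).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3602))   # placeholder expression matrix (samples x genes)
y = rng.integers(0, 2, size=200)   # placeholder labels: 1 = tumour, 0 = normal

def subset_score(gene_idx):
    """Cross-validated accuracy of a simple classifier restricted to one gene subset."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, gene_idx], y, cv=5).mean()

credit = np.zeros(X.shape[1])
for _ in range(200):                                  # 200 "mini-races"
    subset = rng.choice(X.shape[1], size=10, replace=False)
    credit[subset] += subset_score(subset)            # genes earn credit when their subset scores well

top_genes = np.argsort(credit)[-7:]                   # the genes that keep finishing first
print(sorted(top_genes.tolist()))
```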
Subsequently, they utilized diverse machine learning techniques to train and optimize several models based on the expression data of the genes chosen by the algorithm. Remarkably, all models demonstrated high accuracy, predicting cancer status with at least 98% reliability. The next questions arose: “Which genes contribute to model efficacy?” and “How do these genes influence predictions?”
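To make the pipeline more concrete, here is a minimal sketch in Python of the kind of gene selection and model training described above. The file name, the use of scikit-learn's sequential feature selection, and the random forest classifier are illustrative assumptions for the sake of the example, not the team's actual code.

```python
# Hypothetical sketch of the gene-selection and model-training steps described
# above. File names, column names and algorithm choices are illustrative
# assumptions, not the study's actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score, train_test_split

# Expression matrix: rows = samples, columns = genes; 'label' = 1 for tumour.
data = pd.read_csv("expression_matrix.csv")          # hypothetical file
X, y = data.drop(columns="label"), data["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0)

# Wrapper-style search for a small gene panel (the "mini-races" described
# above): greedily add the gene that most improves cross-validated accuracy.
# Selecting seven genes echoes the panel size reported in the study; the
# number is purely illustrative here.
selector = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=200, random_state=0),
    n_features_to_select=7, direction="forward", cv=5)
selector.fit(X_train, y_train)
panel = X_train.columns[selector.get_support()]
print("Selected gene panel:", list(panel))

# Train a classifier on the selected genes and check cross-validated accuracy.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X_train[panel], y_train, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")
```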
The team employed four distinct statistical interpretation methods known as feature importance techniques to pinpoint the genes most critical to model performance. The first method illustrated how each model’s predictions shifted based on gene expression levels. The second showcased the interplay between multiple genes informing model decisions. The third quantified the overall impact of each gene on the model’s judgement, facilitating a ranked analysis, while the final method evaluated how accurately a single gene could predict breast cancer independently.
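As a rough illustration of two of those four approaches, the sketch below computes a permutation-based ranking of each gene's overall impact and a single-gene predictive check, continuing from the hypothetical pipeline above. The specific functions chosen here stand in for the study's methods and may differ from what the authors actually used.

```python
# Illustrative feature-importance checks, continuing from the previous sketch
# (assumes model, panel, X_train, X_test, y_train, y_test already exist).
# These stand in for two of the four techniques described above.
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

model.fit(X_train[panel], y_train)

# Overall impact of each gene on the model's predictions, ranked.
result = permutation_importance(model, X_test[panel], y_test,
                                n_repeats=30, random_state=0)
for gene, score in sorted(zip(panel, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{gene}: permutation importance = {score:.4f}")

# How well a single gene predicts cancer status on its own.
for gene in panel:
    single = LogisticRegression(max_iter=1000).fit(X_train[[gene]], y_train)
    auc = roc_auc_score(y_test, single.predict_proba(X_test[[gene]])[:, 1])
    print(f"{gene}: single-gene AUC = {auc:.3f}")
```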
Through their analysis, the researchers identified seven genes consistently represented across all trained models and feature importance evaluations. They verified that these genes are associated with biological functions influencing cancer progression, such as tissue repair, regulation of cellular substance transport, and immune response management.
While different models generally agreed on key genes, variations in their exact rankings and influence scores were noted. The researchers explained that biological data is often complex, leading models to interpret various aspects of the same data, suggesting that integrating insights from multiple machine learning models yields superior outcomes compared to depending on a singular model.
The team acknowledged several challenges. The gene selection algorithm required nearly six hours on a high-performance laptop, which may not be practical for larger datasets. They also recognized the potential omission of crucial genes during the selection process. Additionally, despite the extensive dataset, it may not encapsulate the full diversity of breast cancer globally, potentially limiting the model’s applicability across different populations. The researchers concluded that merging machine learning approaches with clear and interpretable methods marks the future of cancer prediction, fostering clinical trust in machine learning-driven insights.
US-Israeli attack ignites oil facility in Tehran, resulting in substantial fires and black smoke on March 8.
Fatemeh Bahrami/Anadolu via Getty Images
On March 8, black smoke enveloped northern Iran as U.S. and Israeli airstrikes continued, leading to alarming health concerns for civilians in Tehran.
What Happened?
In the early hours of March 8, U.S. and Israeli forces launched strikes targeting Iranian oil facilities for the first time since the conflict erupted, igniting massive fires in four oil storage centers and an oil transfer hub in Tehran and Alborz province.
As flames illuminated the night sky, thick black smoke descended over the city, with ash and soot blanketing surfaces. Alarmingly, residents reported dark rain falling, raising concerns after a prolonged drought. Authorities alerted locals about potential acid rain, as many experienced sore throats and burning eyes.
The black rain likely originated from the smoke released by these fires. When rain falls through such heavily polluted air, it can carry harmful particulates down to the ground.
This scenario poses significant environmental and health risks, as scientists remain uncertain about the smoke’s chemical makeup, according to Anna Hansell from the University of Leicester.
Composition of the Black Rain
In contrast to regular gasoline, the oil involved was likely less refined and created a more complex mixture of harmful particles when burned. This smoke could contain toxic substances, according to Hansell.
Key components potentially include burnt carbon, polycyclic aromatic hydrocarbons, sulfur, and nitrogen compounds. The combustion process releases sulfur and nitrogen oxides that, when combined with moisture, can produce acid rain.
This environmental disaster could generate smog levels far more severe than those experienced in mid-20th century London. “The scale of this event is concerning,” Hansell remarked.
Other pollutants thrown up by the strike—such as fragments of concrete and plastic—could add to the overall toxicity of the air.
Health Risks
If this black rain contaminates water supplies, it could lead to gastrointestinal issues like abdominal pain and diarrhea. Furthermore, the acid rain’s effects on skin and eyes are alarming, as already reported by some locals.
However, respiratory health may be the greatest concern. Inhaling fine particulate matter poses serious health risks, and at high enough levels the exact composition matters less than the sheer quantity breathed in.
“Skin contact with rain can be washed off, but inhaling smoke can be far more dangerous,” Hansell cautioned. “Fine particles can permeate deep into the lungs and bloodstream, increasing risks for chronic diseases.”
Accumulation of toxins in the environment may also contaminate local food sources, leading to long-term health threats.
Regional Impact
While larger particles may settle quickly, smaller harmful particles can travel vast distances via wind currents, potentially affecting air quality as far away as Washington, D.C. As winds shift, smoke from the fire could drift into neighboring countries as well.
It is advised that residents of Iran remain indoors to minimize exposure. If outdoors, wearing masks and goggles is recommended to prevent acid rain exposure.
Individuals should be vigilant about drinking water quality, seeking alternatives if they notice unusual tastes or dark particles.
Other countries should be alert to potential fallout, and health officials will likely issue warnings regarding air quality if necessary.
“The magnitude of environmental devastation doesn’t acknowledge borders,” Hansell warned. “What contaminates one area could migrate, affecting many.”
NASA, ESA, CFHT, CXO, MJ Jee (University of California, Davis), A. Mahdavi (San Francisco State University)
Recently, there has been a significant shift in the realm of cosmology, reminiscent of the changing trends in fashion. Gone are the days of skinny jeans; in come the baggy styles. Likewise, the foundations of our cosmic understanding are being challenged.
For years, physicists relied on the Standard Model of cosmology, a robust framework that adeptly illustrated the universe’s inception and evolution. Central to this model is dark energy, an enigmatic force driving the universe’s expansion.
Last year, groundbreaking results from extensive telescopic surveys suggested an astonishing possibility: dark energy may be weakening over time. Should this prove true, the Standard Model of cosmology may necessitate a profound rewrite.
A collection of three enlightening features seeks to unravel the intricacies of the Standard Model, examining its current precarious status and what might come next.
“It does not help if attachment to old models is fueled by fear or nostalgia”
Despite these revelations, many physicists remain hesitant to abandon their trusted models. This skepticism is understandable, as many findings in modern physics may require reevaluation over time. However, clinging to outdated concepts out of fear of the unknown won’t advance our understanding.
In scientific discourse, paradigm shifts signify transformative moments when our comprehension fundamentally shifts. While challenging, history shows that such shifts enhance our ability to perceive reality. Whether the issues surrounding dark energy will spark a paradigm shift akin to the quantum or Copernican revolutions remains uncertain. If it does, we may reflect on this era of cosmology as an exhilarating chapter in our quest for knowledge.
The human brain plays a crucial role in interpreting our surroundings, primarily through our five senses: sight, hearing, touch, smell, and taste. However, these senses often provide incomplete information. For instance, many objects we perceive are only partially visible. Our brains utilize prior knowledge and expectations to bridge these gaps in perception, a process known as sensory reasoning.
We engage in sensory reasoning so frequently that it often goes unnoticed. Consider a coffee table: without sensory reasoning, recognizing it when you place your drink down would be challenging. Despite its commonplace nature, the mechanisms behind sensory reasoning remain unclear. Recently, a team from the University of California, Berkeley, embarked on a quest to uncover the brain processes that underpin sensory reasoning in mice.
Earlier studies have shown that mice, much like humans, experience phenomena such as the Kanizsa illusion. This optical illusion highlights sensory reasoning, displaying a white triangle that appears to be present, even though only three incomplete circles and angles are visible. Researchers have identified similar responses to such illusions in mice. The Berkeley team aimed to further this research by observing mouse brains to draw parallels with human sensory reasoning.
“Kanizsa Triangle” by Fibonacci is licensed under CC BY-SA 3.0. Most observers perceive a white triangle in the center rather than three incomplete circles.
To investigate sensory reasoning, the researchers used two primary methods to monitor brain activity in mice. First, a device called a Neuropixels probe was surgically implanted into the heads of 14 mice, allowing numerous neurons to be observed simultaneously. The second method, two-photon imaging, used a specialized microscope to examine the activity of individual neurons in four other mice.
These techniques offer complementary advantages and limitations. While Neuropixels provide a comprehensive overview of brain activity, two-photon imaging focuses on single neurons or small groups. The research team conducted experiments on two distinct groups of mice: one utilizing Neuropixels and the other employing two-photon imaging.
To decode sensory reasoning mechanisms, the researchers pinpointed neurons in mice that responded to the perceived white triangle in the Kanizsa illusion. They monitored brain activity while presenting two types of visuals: illusions and real shapes. They discovered that area V1, located at the back of the brain, exhibited similar activity patterns in response to both the illusion and actual shapes.
The study identified two distinct neuron types in area V1 contributing to sensory reasoning. The first type, known as optical illusion shape encoders, only activated upon viewing illusions—essentially shapes that don’t exist. The second neuron type, called segment responders, displayed consistent activity regardless of illusions, responding to specific shapes within the images.
Employing machine learning algorithms, the research team compared both neuron types. They found that optical illusion shape encoders, believed to facilitate the perception of illusions, have stronger connections to regions responsible for higher-level visual processing beyond V1. This insight implies that similar neurons may assist the brain in leveraging expectations to compensate for missing information, though the exact mechanisms remain unclear.
The researchers postulated that partial visual inputs could activate the optical illusion shape encoder, which, in turn, stimulates other neurons in V1, creating the sensation that an illusory shape genuinely exists. To validate this, they used a laser to stimulate the optical illusion shape encoders in resting mice, prompting activation across V1 and inducing the experience of viewing a tangible shape.
Their findings revealed that three interconnected circuits facilitate the experience of sensory reasoning in mice. Initially, segment responders detect shapes and alert higher processing regions of the brain regarding missing information. These advanced regions subsequently activate the optical illusion shape encoder, which completes the pattern and triggers the overall V1 activation, giving the impression of observing a real shape.
Although the study concentrated on illusions, the researchers posited that their discoveries are relevant to sensory reasoning more broadly. As our scientific grasp of brain functions like sensory reasoning evolves, future research may extend these findings to encompass additional cognitive processes, such as memory and language.
Let’s begin with an important fact: No matter what you’ve heard, you are not eating the equivalent of a credit card’s worth of microplastics every week.
You can read more about the confusion around this assertion in the article here.
However, the claim has sparked concerns, particularly after multiple studies reported microplastics accumulating in various environments—ranging from the highest mountains to the deepest ocean trenches, and even in isolated polar regions. Microplastics have also been detected in human tissues, including the heart, liver, kidneys, breast milk, and bloodstream.
Given their prevalence and potential health implications, it’s understandable to be worried, but is it truly warranted?
The ubiquity of microplastics can be traced back to the remarkable properties of plastics. The invention of Bakelite in the early 20th century marked a shift in how materials were produced—created from synthetic compounds rather than sourced from nature.
As plastic became more affordable and widespread, its applications flourished, impacting food packaging, electronics, medical devices, and more. Unfortunately, this durability also leads to a significant environmental issue; microplastics have been released into ecosystems for over a century, persisting for long periods. Consequently, these particles have made their way into the tissues and bloodstreams of various species, including us.
These microplastics are often present in everyday items we consume, such as salt, beer, and drinking water, as detailed here.
Yes, microplastics could likely be within you, but there’s no need to panic just yet. Assessing the health implications of pollutants involves several factors.
Firstly, consider the size of the microplastics, which varies significantly. Secondly, what concentration is required to elicit effects? Lastly, we must examine whether the effects are indeed harmful. Much of the current research is animal-based, which raises questions about its applicability to humans.
Microplastics and Credit Cards
In recent years, alarming headlines have often cited vague information about microplastic sizes or relied on inflated studies that use unrealistically high doses, not reflective of typical human consumption.
For example, widely circulated claims suggested that the average person ingests around 5 grams of microplastics a week—the amount in a credit card. This assertion stems from a 2019 study that employed questionable methodologies and can easily be debunked.
According to a more accurate assessment, most individuals ingest only around 0.0041 milligrams of microplastic per week—less than a grain of salt. At that rate, it would take over 1.2 million weeks, or roughly 23,000 years, to consume the equivalent of one credit card’s worth of plastic.
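The arithmetic is easy to verify. Here is a quick back-of-the-envelope check using the figures quoted above and assuming a credit card contains roughly 5 grams of plastic.

```python
# Quick check of the figures quoted above (credit card ~ 5 g of plastic).
credit_card_mg = 5_000        # ~5 g of plastic in a credit card
intake_mg_per_week = 0.0041   # estimated weekly ingestion

weeks = credit_card_mg / intake_mg_per_week
print(f"{weeks:,.0f} weeks, or {weeks / 52:,.0f} years")
# roughly 1.2 million weeks, or about 23,000 years
```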
If you were immortal, perhaps you could worry about it.
Research indicates that the average person accumulates about 12.2 milligrams of microplastics in their lifetime, but only around 41 nanograms might actually be absorbed by the body based on a study by the same researcher.
New concerns have also emerged about the methodologies used to detect microplastics in bodily tissues. Some studies heat the samples and analyze the resulting vapors for plastic signatures, an approach that can produce false positives because fats in the tissue release chemically similar compounds.
Effects of Microplastics on Human Health
While we know that microplastics are present in our bodies, their effects remain unclear. Some studies indicate that microplastics may lead to behavioral changes and inflammation in animal models; however, these studies often utilize unrealistically high doses—1 gram per day for rodents, for example.
Other studies in pigs showed that a weekly dose of 1 gram affected gene expression and induced oxidative stress in the pancreas, yet this dosage vastly exceeds typical human exposure.
Reports from the World Health Organization have cautioned that most animal studies utilize concentrations of microplastics well above what humans typically encounter. Moreover, microplastics are processed differently in human bodies compared to rodents, complicating data interpretation.
Preliminary human studies have detected microplastics accumulating in plaques and have correlated the presence of these plastics with higher instances of heart attacks and strokes. However, correlation does not entail causation—it’s critical to avoid jumping to conclusions.
Investigating the impact of microplastics on human health is multifaceted. While these small particles carry chemicals capable of disrupting bodily processes, it is essential to recognize that not all these chemicals are absorbed immediately. Studies have demonstrated that the amount of chemicals leaching from microplastics is minimal under average conditions, as addressed in this report. Additionally, the body can excrete certain chemicals, negating long-term accumulation risks.
Concerns also revolve around the potential introduction of other hazardous substances linked to microplastics. Moreover, they may disrupt immune functions or even cause cell damage and inflammation. However, comparative assessments regarding the risks of microplastics versus other pollutants—such as air quality or dietary excesses—remain uncertain.
While it’s natural to fear the health risks posed by microplastics, we need definitive evidence to gauge their danger accurately. This discussion taps into our anxiety surrounding pollution. Just because we don’t consume a credit card’s worth of plastic each week doesn’t mean that the issue isn’t serious. However, the field of microplastic research is still nascent, and comprehensive data on their effects in humans is lacking.
Until further research emerges, I’ll focus my concerns elsewhere.
This week, AI chatbot Claude experienced an outage. Users reported being unable to access services via the Anthropic website, with issues persisting for approximately a week. Similar outages have impacted various technology giants, government websites, and even hospitals. What is driving this surge in service disruptions?
The primary vulnerability of today’s internet lies in its heavy reliance on cloud computing. This shift has resulted in numerous services depending on just a few key providers like Amazon and Microsoft. During the early days of the internet, businesses operated on their own infrastructure—akin to a self-sufficient local store. When an issue arose in one area, others remained unaffected, but now, if a cloud provider faces difficulties, the repercussions resonate across multiple platforms.
Frequently, user-access issues stem from simple human errors. One notable incident underscoring these risks was the 2024 outage caused by cybersecurity firm CrowdStrike, which inadvertently released software configuration files that rendered millions of Windows computers inoperative—affecting airlines, banks, and emergency service centers globally.
Joseph Jarnecki from the Royal United Services Institute says that large-scale outages are typically not premeditated. Cybercriminals tend to focus on smaller targets rather than provoking major tech companies, preferring to extract ransom payments by preying on vital services.
Tim Stevens from King’s College London highlights that ransomware attacks are increasingly directed at local authorities and crucial infrastructure. Hackers tend to infiltrate essential services such as water supplies and municipal governments, where they can hold operations hostage for payment.
The UK has witnessed such incidents, including ransomware attacks on Hackney Council, Gloucester City Council, and Leicester City Council, along with similar challenges faced by the NHS and local water suppliers. Stevens notes an ongoing cat-and-mouse game between hackers and cybersecurity experts. Unfortunately, it appears hackers currently hold the upper hand. “In recent discussions, it’s been indicated that we’re losing ground. We’re not just behind; we’re actually losing,” Stevens confessed.
State-sponsored hackers from countries like Russia and China typically do not aim to disrupt cloud providers on a large scale. “While they do target these entities, their intentions are highly focused rather than destructive,” emphasizes Jarnecki.
According to Sarah Kreps from Cornell University, cyberattacks are increasingly used by nations operating within a “gray zone”—a fluctuating state of unease that is neither full-scale peace nor active warfare. This tension often manifests as calculated disruptions aimed at weakening adversaries.
Kreps explains, “This approach acts similarly to economic sanctions; much of our GDP and overall economic stability hinges on the Internet. Disabling it critically impairs adversaries’ abilities to generate wealth, subsequently hindering their resource capabilities for warfare.”
Importantly, Kreps notes that Russia and China aren’t the sole practitioners of such tactics. Western nations, too, engage in cyber operations. Notably, intelligence agencies such as GCHQ and MI6 have previously compromised al-Qaeda computers, causing significant operational disruption, though these covert operations remain classified and occur behind the scenes.
Stevens mentioned, “It’s clear that Western intelligence and security agencies are conducting cyber operations against Russian assets. However, the legal frameworks often restrict the scope and intensity of these operations, which can be a source of frustration within the community.”
Claude has since resumed functioning, but Anthropic has yet to address inquiries from New Scientist regarding the recent outage effects.
Understanding the Link Between Air Pollution and Dementia
Air pollution is commonly linked to respiratory illnesses, but recent studies suggest a troubling connection to another serious health concern: dementia.
A recent study published in JAMA Neurology indicates that increased exposure to fine particulate matter may exacerbate neurological changes associated with conditions like Alzheimer’s disease.
The researchers stress that further investigation is essential, yet evidence of this correlation is compelling.
A meta-analysis published in July 2025 by The Lancet Planetary Health reviewed data from over 29 million individuals across multiple countries from the late 1980s and early 1990s. The findings highlighted the detrimental effects of PM2.5 (particulate matter), nitrogen dioxide (NO2), and soot on cognitive health.
The study concluded that “the diagnosis of dementia is significantly linked to long-term exposure to fine particulate matter pollution.”
This ongoing research has identified a growing body of evidence, building on earlier publications. For instance, a 2017 study in The Lancet established a connection between living near major roads and elevated dementia rates, as discussed in this landmark research.
But what specific problems does air pollution cause, and how can we address them?
Most air pollution originates from burning fossil fuels, alongside natural sources like sandstorms. – Photo credit: Getty Images
The Role of Particulate Matter in Health
Air pollution manifests in various forms, with particulate matter (PM) being a prominent type. This term encompasses microscopic particles suspended in the air, including dust, smoke, and liquid droplets that are often invisible to the naked eye.
Particulate matter is categorized by size, ranging from ultrafine particles (PM0.1) to coarse particles (PM10).
Notably, PM2.5 is exceptionally small, measuring less than 1/30th the width of a human hair. Its minute size allows it to remain airborne for extended periods, making it easily inhalable.
According to Dr. Holly Elser, an epidemiologist and co-author of the recent JAMA Neurology study, “[PM2.5 pollution] is linked to numerous health outcomes.” These outcomes range from asthma and lung cancer to heart disease and, increasingly, dementia.
The complexities surrounding PM2.5 arise from its myriad sources. “While traffic is a significant contributor, it is not the sole source,” says Dr. Haneen Khreis from the University of Cambridge, who studies the health impacts of urban mobility.
Additional sources of PM2.5 include power plants, factories, construction sites, wildfires, and biomass burning, as well as natural occurrences like sandstorms.
The toxicity of PM2.5 particles varies depending on their origin. Understanding their chemical composition is vital for addressing their health impacts.
Researchers have identified two principal pathways for PM2.5 to infiltrate the central nervous system: “through the olfactory nerve (via the nose) or through the bloodstream by crossing the blood-brain barrier.”
How PM2.5 Affects Brain Health
Due to PM2.5’s diminutive size, it can penetrate deep into the lungs, facilitating its entry into the bloodstream and ultimately reaching the brain. There, it can induce inflammation and oxidative stress, resulting in neuronal and vascular damage over time, according to Dr. Khreis.
Other hypotheses exist regarding pollution’s influence on cognition. For instance, pollutants may travel through the olfactory pathway to the hippocampus, the brain’s memory center, leading to the accumulation of harmful amyloid and tau proteins associated with Alzheimer’s disease.
Research has also indicated that PM2.5 can restrict cerebral blood flow, cause microvascular damage, and heighten the risk of vascular dementia.
Color MRI scan of the brain of a 68-year-old Alzheimer’s patient – Photo credit: Science Photo Library
Air pollution levels are notably higher near busy roads, but research shows that its concentration diminishes significantly with distance from traffic.
A 2017 study published in The Lancet analyzed data from over 6 million residents in Ontario, revealing that individuals living within 50 meters (165 feet) of a major road face a 7 to 12% increased risk of dementia compared to those residing over 200 meters (approximately 650 feet) away.
Moreover, the overall burden of PM2.5 is directly associated with dementia risk. Dr. Khreis notes that each 10 micrograms per cubic meter (μg/m3) increase in PM2.5 correlates with a 17% increase in dementia risk.
For perspective, the average PM2.5 level around central London’s roads in 2023 was 10μg/m3.
For nitrogen dioxide (NO2), another pollutant primarily released from fossil fuel combustion, every 10μg/m3 increases the relative risk of dementia by 3%. In 2023, the average roadside NO2 level in central London was 33 μg/m3.
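Taking those figures at face value, a rough calculation shows what the increments imply for someone exposed to the 2023 central London roadside averages. The sketch below assumes, purely for illustration, that the quoted increases compound multiplicatively per 10μg/m3 relative to a hypothetical zero-exposure baseline, which is a simplification rather than how the underlying epidemiological models are specified.

```python
# Illustrative only: assumes the quoted per-10-ug/m3 risk increases compound
# multiplicatively with exposure, relative to a hypothetical zero-exposure
# baseline. A simplification, not the underlying epidemiological model.
def relative_risk(increase_per_10, exposure_ug_m3):
    return (1 + increase_per_10) ** (exposure_ug_m3 / 10)

pm25 = relative_risk(0.17, 10)   # PM2.5: 17% per 10 ug/m3, ~10 ug/m3 roadside
no2 = relative_risk(0.03, 33)    # NO2: 3% per 10 ug/m3, ~33 ug/m3 roadside
print(f"PM2.5 relative risk: {pm25:.2f}")   # ~1.17
print(f"NO2 relative risk:   {no2:.2f}")    # ~1.10
```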
Ultimately, fossil fuel combustion represents the largest contributor to air pollution, particularly PM2.5.
Mitigating Exposure to Air Pollution
If you live or work near a busy road, it may be difficult to significantly lower your exposure to air pollution. Yet, given that so many people live in metropolitan areas, addressing this issue must be a priority. Dr. Khreis advocates for “targeted policy measures and a shift from fossil fuels to clean energy” as essential solutions.
Nevertheless, it’s beneficial to be informed about air quality variations (which often worsen on warm afternoons but improve following rain).
On days when the air quality index exceeds 100, the threshold at which air is classed as unhealthy for sensitive groups, minimizing outdoor activities is advisable. If going outside is unavoidable, wearing a fit-tested N95 or KN95 mask can help protect against PM2.5 exposure.
For those indoors on poor air quality days, utilizing an air purifier or fan can enhance indoor conditions. Good-quality models can be obtained for around £100, making them a cost-effective solution.
Additionally, when navigating urban environments, consider opting for less trafficked routes with more greenery, as Dr. Khreis does when cycling. Fewer vehicle emissions mean lower pollution levels, and vegetation can absorb air pollutants; research suggests that substantial plant coverage can reduce pollution concentrations by as much as 50%.
PM2.5 concentrations are notably elevated on the London and New York subway systems. Some research indicates that levels in certain London Underground stations can be up to 18 times greater than street level, prompting medical professionals to recommend masks in these environments.
During traffic jams, close your car windows and turn off your engine to minimize exposure. At home, ensure proper ventilation while cooking.
Awareness is a crucial first step. As Dr. Elser emphasizes, it’s important to acknowledge that while air pollution is a risk factor for dementia, it is just one of many.
When we think about infamous fictional psychopaths, like the chillingly calculating Patrick Bateman from American Psycho, they often embody the image of a scammer. But what about real-life psychopaths?
Research indicates that psychopaths are more inclined to lie to achieve their goals, exhibiting remarkable fearlessness, almost as if they have ice in their veins.
You might assume that their cold demeanor makes it hard to detect their deceit. Surprisingly, studies suggest that psychopaths are not significantly better at lying than others.
For instance, a study from the 1980s found that convicted psychopaths were caught out by lie detectors just as easily as non-psychopaths. It is worth noting, though, that while lie detector tests are commonly employed, they are notoriously unreliable.
In a more recent 2016 study, researchers found that criminal offenders tend to lie frequently, and that psychopaths in particular lied more often during psychological tasks. Yet they still incurred the cognitive costs of lying, such as making more errors and responding more slowly.
Though psychopaths lack the moral and emotional barriers that typically hinder lying for most people, they cannot escape the psychological challenges associated with creating believable lies.
Interestingly, while psychopaths may not have a natural talent for lying, there is emerging evidence that they can learn to become more effective liars.
A 2017 study found that students with high psychopathic traits improved markedly on tasks that required them to lie convincingly. With practice, they could lie faster than others, and the neural activity associated with deception decreased, suggesting the mental strain of lying eased as they trained.
In summary, psychopaths may not excel at lying initially, but they have a propensity to lie more frequently and improve at it more swiftly than others.
This article addresses the question posed by Lyle Morse via email: “Are psychopaths really good at lying?”
To submit your own questions, please email questions@sciencefocus.com or reach out via social media: Facebook, Twitter, or Instagram. (Don’t forget to include your name and location.)
For more fascinating scientific insights, visit our Ultimate Fun Facts page.
While scrolling through TikTok, I stumbled upon a video featuring Donald Trump berating CNN journalist Kaitlan Collins for “not laughing” after she questioned him about the convicted sex offender Jeffrey Epstein.
Without a pause, I continued scrolling. I wasn’t angry, nor did I contemplate the implications of a president making such derogatory remarks. Yet, as I reflected on those comments while writing this piece, I realized how abhorrent, unprofessional, and sexist they truly were.
My brain didn’t fail to react out of indifference; it had succumbed to a neurological phenomenon known as habituation. That realization led me to explore how habituation shapes our lives and how we can navigate it more effectively.
Habituation is our brain’s method of normalizing experiences, allowing us to engage with life without becoming overwhelmed. It acts as a neural shortcut that enables us to filter out irrelevant information, preventing sensory overload.
At the café where I work, trance music plays, my ski jacket feels weighty, and bright lights flicker nearby. However, until I consciously recognized these stimuli, my brain had adapted to ignore them, allowing me to focus more readily.
Habituation frees up neural resources, enabling us to promptly detect new stimuli vital for survival. “This mechanism is essential for survival across all species,” states Tali Sharot from University College London.
This capacity for habituation helps us manage grief and chronic pain, normalizing suffering and making life more navigable. A striking example comes from studies of individuals with locked-in syndrome; despite being entirely conscious yet unable to communicate verbally or move, most report being satisfied. Notably, those who have lived with the condition longer are more inclined to express contentment with their quality of life.
Habituation also fuels progress. As the initial excitement of a new job diminishes, satisfaction levels stabilize due to habituation. Sharot notes that this waning enthusiasm propels the desire for advancement. “Our responses to pleasure decrease over time, motivating exploration and progress.”
However, habituation isn’t always beneficial. Becoming accustomed to chronic pain may delay medical intervention, while normalizing detrimental behaviors at home or work can lead to accepting intolerable situations.
Habituation can also play a part in mental health problems. “Most mental health disorders involve some form of habituation disorder,” notes Sharot. Research indicates that those with depression are slower to recover from negative events, highlighting the struggle to adapt to distressing news.
Sharot’s recent, unpublished findings reveal another concerning aspect: frequent financial risk-takers become desensitized to risk over time. “I can see this pattern in stockbrokers,” Sharot remarks.
On a lighter note, habituation explains why our homes feel smaller over time and why new clothes quickly lose their appeal, often prompting excessive consumption.
Take a Step Back and Slow Down
Short Breaks Enhance Focus
Michael Wheatley/Alamy
How can we break the cycle of habituation? How do we train our brains to regain awareness?
One effective method is mindfulness, which encourages heightened awareness of the present. Research shows that awareness can influence eating habits. Consider how easily we overindulge when we’re not truly savoring our food.
Another strategy is to take breaks, which may seem counterintuitive. Researchers, including Leif Nelson from UC Berkeley and Tom Meyvis from NYU, found that interrupting pleasurable activities, like music or holidays, can enhance enjoyment. Breaks disrupt routines and restore a sense of novelty, while stepping away from unpleasant experiences can hinder habituation to them and increase irritation.
Injecting novelty into your routine is also beneficial. Repeating the same route can dull excitement; try varying your jogging path or rearranging your furniture. “These small changes can reveal unexpected joys, presenting fresh information to the brain,” Sharot advises.
Particularly concerning, however, is our increasing habituation to social media. “In recent years, society has become habituated to rude online behavior,” Sharot explains. Constant exposure to negative events dulls our reactions and alters our response to significant global issues, especially for children, who become desensitized to violence through media exposure. Studies correlate exposure to media violence with increased risks of violence later in life.
The simplest solution? Take a break. “We need to engage with the world anew,” Sharot concludes. “Small shifts can lead to impactful changes.”
I embraced this advice, deleting social media apps from my phone, planning several short vacations instead of one lengthy break, and even switching gyms for a change of scenery. I aspire that upon my return to social media, I will not just feel greater joy, but also experience a heightened emotional response, allowing my brain to discern what truly deserves my attention.
Pollution has made many urban areas uninhabitable, tearing families and communities apart. The victims in this case are ants, which rely on specific hydrocarbons on their exoskeletons to recognize one another. A recent study indicates that ozone exposure alters these hydrocarbons, leaving ants unable to recognize their nestmates. Aggression within colonies has been observed, with some ants even attacking their own relatives.
With approximately 20 quintillion ants on Earth, human-induced pollution could unleash destruction on an unprecedented scale.
This framing is an example of anthropomorphism, the attribution of human traits to non-human entities, such as comparing ant colonies to human families. While some scientists criticize anthropomorphism as misleading, others advocate drawing parallels between ant behavior and human social dynamics to shed light on evolutionary concepts.
Notably, entomologist E.O. Wilson used ants to support his “sociobiological” theory, proposing that animal behavior stems from evolutionary necessity. Wilson asserted that insights into ant behavior could illuminate biology’s impact on human development and accomplishment.
However, evolutionary biologist Stephen Jay Gould emerged as a notable critic, denouncing Wilson’s ideas as “biological determinism,” cautioning against potential eugenics-inspired policies. The debate surrounding biology’s role in human society persists in academia, with sociobiology now often referred to as evolutionary psychology.
A significant shift has since occurred in the study of ant behavior. Deborah Gordon, a Stanford University biologist, showed that ant behavior runs on algorithms. After years of studying various ant species, she collaborated with computer scientists to demonstrate how ants use efficient, distributed signaling networks. For instance, when a worker ant finds a large food source, she lays a pheromone trail for others to follow. On encountering other ants, she assesses the available resources and recruits additional foragers accordingly.
“Algorithmic determinism has replaced biological determinism, but the end result for ants is still the same”
There is no central authority instructing ants; they succeed through decentralized communication, much like how distributed computer networks manage data flow. Gordon likens this process to internet activity, underscoring how networks efficiently allocate resources.
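To make the analogy concrete, here is a toy simulation of the kind of decentralized feedback described above, in which the rate of returning, food-laden foragers regulates how many new foragers leave the nest. It is a cartoon of the general idea, not Gordon's actual model, and every number in it is an arbitrary assumption.

```python
# Toy simulation of decentralized forager regulation: the number of outgoing
# foragers tracks the rate of successful returns, so the colony throttles its
# effort to match food availability with no central controller. A cartoon of
# the idea described above, not Gordon's actual model.
import random

food_availability = 0.8   # probability a forager returns with food
outgoing = 10             # foragers currently out searching

for step in range(20):
    returns = sum(random.random() < food_availability for _ in range(outgoing))
    # Each successful return prompts roughly one new departure.
    outgoing = max(1, returns + random.choice([-1, 0, 1]))
    if step == 10:
        food_availability = 0.2   # the food source dwindles mid-run
    print(f"step {step:2d}: {returns} returns, {outgoing} now foraging")
```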
Gordon’s findings contrast sharply with Wilson’s theories, as she draws comparisons between ants and computers rather than humans. Nevertheless, as AI companies invest heavily in replicating human cognition with algorithms, the parallels between ant behavior and artificial intelligence become more pronounced. Algorithmic determinism may overshadow biological determinism, but the implications for ants remain significant. While humans often reference ants to explain behaviors observed in other species, we frequently overlook the intricate nature of the ants themselves.
Returning to the research on human pollution and its impact on ants: Gordon’s “Anternet” relies heavily on colony members collaborating and exchanging critical information. However, when ozone interferes with the ants’ hydrocarbons, they lose their ability to recognize one another, disrupting crucial coordination. This could lead to colony collapse.
For humans, such recognition isn’t vital; we don’t rely on scent to coordinate food gathering or childcare. Nevertheless, we share the planet with extraordinary wildlife, and if we fail to mitigate ozone pollution, we risk obliterating their social structures. It is time to shift our perspective from viewing ants simply as metaphors for humanity and machines to appreciating their intrinsic value.
What I’m reading: H.G. Wells’ The War of the Worlds. The Martians are cyber-vampires.
What I’m watching: My Life Is Murder, a delightfully corny detective series starring Lucy Lawless.
What I’m working on: Finding a place to live in a new city.
Annalee Newitz is a science journalist and author. Their latest book is Automatic Noodles, and they co-host the Hugo Award-winning podcast Our Opinion Is Correct. Follow @annaleen or visit their website: techsploitation.com.
Feedback is New Scientist’s regular column taking a sideways look at the latest science and technology news. To share stories you believe our readers would find captivating, please email feedback@newscientist.com.
Athlete Nomar
Feedback has been astonished—shocked—to discover that a grove of trees in northern Italy was believed to predict a solar eclipse.
You might wonder, “Are you suggesting that some thought trees could genuinely forecast solar eclipses?” Surprisingly, the answer is yes.
The partial solar eclipse in question occurred on October 25, 2022. Botanists led by Alessandro Chiolerio had previously inserted electrodes into Norway spruce trees to monitor their bioelectrical activity. In a report published in April 2025, they claimed that “Trees anticipated the eclipse and synchronized their bioelectrical behavior hours in advance, with older trees showing greater anticipatory behavior due to initial time asymmetry and increased entropy.”
Ultimately, the problems with this claim became apparent. They were laid out in a paper published in Trends in Plant Science on February 6, which was brought to our attention by journalist Matthew Sparkes, who deserves a hat tip for his contribution.
Authors Ariel Novoplansky and Hegyi Isak noted that the drop in sunlight during the partial eclipse was minimal, so the trees still had plenty of light. Moreover, such solar eclipses occur only every 18 years or so. The oldest trees in the study, at around 70 years old, may not have lived long enough to learn any pattern, since solar eclipses trace different paths across the Earth’s surface.
Feedback has examined the original study, but it seems unnecessary to delve deeply to debunk it. The team only wired three trees and five stumps. While sample size isn’t everything, it does matter.
The paper also includes a lengthy section on “Theoretical Analysis of Quantum Field Theory.” Yes, the Q word! “A tree is open, thus dissipative. The system continuously exchanges (releases and receives) matter and energy with the environment in various forms.” Aging of the system and the evolution of time (arrow of time) are discussed, although after the first paragraph’s analysis, it felt like we entered a quantum state where we lost interest.
Interestingly, the electrical activity of the trees was synchronized in the 14 hours leading up to the eclipse. How can we explain this? Novoplansky and Isak suggested, “A total of 664 lightning strikes occurred from October 22 to 25, 2022,” including three strikes within 10 kilometers of the site during the 14 hours prior to the eclipse. Perhaps that’s a factor.
Please Don’t Spill It
Continuing our theme of “People inadvertently sending out amusing press releases,” Feedback received great news about tea.
“Recent scientific research indicates that consuming a daily cup of tea can offer heart-healthy benefits, with growing evidence supporting its effects on cholesterol levels, blood pressure, inflammation, and blood clotting.” As regular tea drinkers, Feedback finds this news uplifting—especially for Mrs. Feedback, whose bloodstream is approximately 70% tea.
Who delivered this news? The Tea Advisory Panel, of course. Feedback had not previously heard of the group, although its website says it is supported by an unrestricted educational grant from the UK Tea & Infusions Association, the trade association for the UK tea industry. Its purpose is to “provide the media with unbiased information about the health benefits of black tea.”
The final statement of the press release reads: “Previous research has indicated that the ideal amount is four cups of tea daily, yet only a third (35%) of Brits report drinking three to four cups a day. Our challenge, as tea experts and nutritional scientists, is to ensure the public understands the heart health benefits of tea.” Feedback takes their point, but we adore espresso.
Universal and Free
In our ongoing quest to identify exemplary and flawed technical abbreviations, Feedback uncovered a fantastic initiative undertaken by researchers at Carnegie Mellon University.
The concept is straightforward. From Lego to Stickle Bricks, a myriad of construction toys exists. However, they often lack interoperability; with few exceptions, you can’t connect parts from different systems.
Golan Levin and Shawn Sims took it upon themselves to create an open-source 3D printable adapter that allows components from various construction systems to be combined. If you own a 3D printer, you can download the design for free and fabricate your own hybrid toy.
It’s quite impressive. The designers explain their goal to enable “radically hybrid constructive play, creating designs previously deemed impossible, ultimately providing more creative opportunities for children” and to deliver “a public service that corporate interests cannot or will not fulfill.”
Feedback believes this kit deserves wide usage. However, we suspect that the acronym formed by “Free Universal Construction Kit” might limit its appeal to parents somewhat.
Have a story for Feedback?
You can send stories to Feedback by email at feedback@newscientist.com. Please include your home address. This week’s Feedback and past editions can be found on our website.
Individuals with prolonged grief disorder display increased brain activity in response to death-related images, indicating heightened emotional and memory processing.
Paul Mansfield/Getty Images
While grief is a natural response to loss, for approximately 5% of bereaved individuals, this grief becomes prolonged, evolving into prolonged grief disorder (PGD). Recent research has provided insights into the development of this challenging condition, potentially aiding healthcare professionals in identifying those who may require additional support following a loss.
The inclusion of PGD in the American Psychiatric Association’s diagnostic manual in 2022 sparked significant debate about its implications for our understanding of normal grief responses and the limits of defining an acceptable timeline for grieving. Current studies analyzing brain activity suggest that PGD is indeed a distinct mental health condition.
Richard Bryant and researchers from the University of New South Wales in Sydney have compared brain activity patterns in individuals with PGD to those experiencing other grief-related conditions, including post-traumatic stress disorder (PTSD), depression, and anxiety. Their findings indicate that while some overlap exists, PGD patients consistently demonstrate more significant alterations in brain circuits related to reward processing.
For instance, studies indicate that PGD patients may experience greater activation in the nucleus accumbens, the brain region responsible for processing rewards and motivations, in response to grief-related stimuli compared to those not suffering from PGD. The strength of this activation correlates strongly with the intensity of longing for the deceased.
Individuals with PGD also exhibit distinct responses to reminders of the deceased, setting them apart from individuals with PTSD and anxiety, who generally demonstrate avoidance behaviors.
Moreover, research indicates that PGD patients experience heightened amygdala and right hippocampus activation when confronted with death-related imagery, along with greater deactivation in response to positive images than typical grievers, highlighting a disruption in emotional regulation and a diminished capacity for positive emotional experiences.
Bryant elucidates that in PGD, the brain’s reward system becomes inextricably linked to the deceased, leading to an overwhelming yearning for the lost loved one. “The principal distinction between PGD and normal grief lies in the duration, indicating that individuals become ‘stuck’ in their grief, unable to heal like the majority,” he explains.
While this review provides valuable insights, the complexity of PGD makes standardized diagnostic approaches difficult to implement, as noted by Katherine Shear at Columbia University. Brain scans are rarely available for grieving individuals, and the intricate nature of grief complicates one-off assessments.
Shear also suggests that “two-person neuroscience” can enhance our understanding of grief by monitoring brain activity during interpersonal interactions, further unraveling how grief is influenced by social contexts, cultural norms, and individual support levels.
This comprehensive review aids in predicting individuals at risk for PGD post-bereavement. In a significant study, bereaved adults underwent brain scans shortly after their loss and periodically over the next six months. Stronger connections between the amygdala and regions involved in behavior regulation and information filtration observed during initial scans may forecast worsening grief symptoms, implying that such patterns can indicate a higher likelihood of developing PGD in the future.
Despite the identification of psychosocial factors that may predispose certain individuals to PGD, conclusive predictions remain challenging, according to Joseph Goveas from the Medical College of Wisconsin. Early identification may facilitate intervention, ranging from support groups to specialized treatments.
Advancements in understanding specific neurobiological mechanisms reinforce the need to acknowledge PGD as distinctly separable from general grief, guiding tailored treatment strategies for affected individuals.
“Recognizing both the shared and unique neurobiological underpinnings may prevent misdiagnosis and inadequate care,” Goveas states. “For instance, PGD responds less well to antidepressants, whereas focused grief therapy proves effective. Conversely, in cases where PGD coincides with major depression, a combination of antidepressants and grief-targeted therapies may yield optimal results.”
Artist’s depiction of QT45 superimposed on a microscopy image of a frozen environment conducive to RNA replication (based on AlphaFold3 predictions)
Microscope images by Elfie Chan and James Atwater
According to the RNA World Hypothesis, life initiated with RNA molecules that evolved to replicate themselves. Recent discoveries reveal an RNA molecule capable of this self-replication, executing essential processes, though not simultaneously.
“It’s been a long quest to reach a point where we can confidently state that RNA can replicate itself under the right conditions, showcasing its potential,” says Philip Holliger at the MRC Laboratory of Molecular Biology in Cambridge, UK.
In living organisms, proteins are pivotal, catalyzing chemical reactions while their synthesis instructions are encoded in double-stranded DNA. RNA, existing typically as a single strand, serves as a chemical analog of DNA.
While RNA is not as reliable for information storage due to its instability, it exhibits a unique capability: folding into protein-like enzymes that catalyze chemical reactions. This dual function of RNA as both storage and catalyst led to the hypothesis in the 1960s that the genesis of life may have hinged on self-catalyzing RNA molecules.
However, identifying such self-replicating molecules has proved exceptionally challenging. It was previously assumed that a self-replicating RNA would need to be relatively large and complex, yet large RNAs are unwieldy to copy and propagate.
Furthermore, while shorter RNA molecules have been known to form spontaneously under suitable conditions, the likelihood of larger molecules doing the same remains low.
“This insight led us to reconsider; perhaps something simpler and smaller could efficiently complete this process,” Holliger explains. “That search yielded QT45.”
RNA comprises nucleotide building blocks. The research team initiated the process by generating 1 trillion random sequences, each 20, 30, or 40 nucleotides long. They selected three capable of binding nucleotides and combined them for several rounds of evolution, introducing random mutations to enhance performance.
The resultant molecule, QT45, is composed of just 45 nucleotides. In alkaline, near-freezing water, it can use a single strand of RNA as a template, joining short pieces of two or three nucleotides to build complementary strands, including complements of itself. “Although the process is currently slow with low yields, this is expected,” notes Holliger.
QT45 can also replicate itself using its complementary strands. “This is the first instance of RNA that can generate itself and its coding strand, representing the two core reactions of self-replication,” states Holliger. However, the team has yet to achieve both reactions occurring within the same container. Future efforts will focus on further evolving the molecule and experimenting with conditions like freeze-thaw cycles to see if simultaneous reactions are possible.
“The most fascinating aspect is that once the system begins self-replication, it also starts self-optimization,” Holliger adds, as the error-prone process generates various variants, some potentially more effective at replication.
“The findings from the Holliger lab represent a vital step toward fully self-replicating RNA,” says Sabine Muller from the University of Greifswald, Germany.
“A key takeaway from this discovery is the identification of intermediate-sized RNA oligomers capable of self-synthesizing,” remarks Zachary Adam at the University of Wisconsin-Madison.
The vast number of possible 45-nucleotide-long RNA sequences is “inconceivably large,” Adam notes, making the team’s discovery of QT45 from an initial batch of 1 trillion sequences mind-boggling.
In early Earth’s environment, a molecule akin to QT45 might have successfully replicated itself amidst conditions similar to those in modern-day Iceland, combining ice with hydrothermal activity that creates freeze-thaw cycles and pH gradients. Holliger believes compartmentalization is essential to segregate key components, with numerous possibilities for this occurrence, from pockets of meltwater in ice to cellular vesicles spontaneously formed from fatty acids.
Carina Nebula Observed by the Hubble Space Telescope
NASA/ESA/M. Livio, Hubble Heritage Team & Hubble 20th Anniversary Team (STScI)
Embark on a journey through the cosmos, exploring our solar system, traversing the Milky Way, and venturing into the vast cosmic wilderness, rich with black holes and galaxies. The question remains: Is the universe truly infinite?
Can exploration go on forever, or is there a boundary at some point? This significant inquiry in cosmology seeks to determine the universe’s size and shape. Although we have some clues, they lead to more questions than answers, leaving much in mystery.
When discussing space with peers, we often emphasize its vastness and potential infinity—a concept that challenges our understanding. Cosmologists have grappled with such ideas for centuries. The key to grasping the universe’s size lies in understanding its shape, which has been subject to diverse theories.
The simplest model is a flat universe, reminiscent of a sheet. While reality is far more complex, this metaphor aids comprehension. A flat universe would mean conventional rules of geometry apply—triangles maintain a sum of 180 degrees and lines remain straight. However, in a curved universe, geometry becomes non-Euclidean, leading to unexpected results.
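One standard way to make the flat-versus-curved distinction precise is through the angle sum of a triangle. On a surface of constant curvature K, a triangle of area A whose sides are the straightest possible lines obeys the textbook relation below, included here purely as an illustration.

```latex
% Angle sum of a geodesic triangle on a surface of constant curvature K
\alpha + \beta + \gamma = \pi + K A
% K = 0 (flat): angles sum to exactly 180 degrees
% K > 0 (sphere-like): angles sum to more than 180 degrees
% K < 0 (saddle- or Pringle-like): angles sum to less than 180 degrees
```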
The universe’s structure is shaped by gravity and dark energy: gravity pulls matter together, while dark energy drives the cosmos to expand. If these forces balance perfectly, the universe is flat. If dark energy dominates, the universe curves like a saddle (or a Pringle), and different configurations can yield either a finite or an infinite cosmos.
Should gravity prevail, the universe would be spherical and finite—a straightforward conclusion. However, large-scale cosmological observations suggest that the universe is most likely flat. Recent findings indicate dark energy might be weakening over time, underscoring how limited our understanding of the universe as a whole remains. And although we have created detailed maps of dark matter, it remains enigmatic, complicating our grasp of gravity and its implications. So describing the universe as “probably flat” requires cautious interpretation.
As a storyteller, I must confess a bias against infinity. While intriguing, the concept’s application in the physical realm presents difficulty. My inclination is that every reality necessitates some limitation, however expansive. Infinity can feel unquantifiable, and if equations falter, can we genuinely assume an eternal existence?
This perspective is not unique; many theories subscribe to the idea of a finite universe. Even with a flat structure, the connections between different spacetime regions remain puzzling. Should the universe be finite and flat, we encounter an intriguing dilemma: what lies beyond its boundaries? Is it another universe, or simply nothingness? The prospects are disconcerting, complicating the mathematics that describe our reality.
In a curved spacetime, options expand. Spherical structures lack edges; travel far enough in one direction and you may find yourself back where you began. Other possibilities include shapes resembling donuts, Klein bottles, or intricate topologies with wormholes. While some theories posit shapes like peanuts, cones, or apples, adding extra dimensions complicates matters further.
Introducing infinity creates a more chaotic scenario—an eternal universe filled with limitless galaxies and star systems. The focus shifts from the universe’s edges to the entirety contained within it.
This concept can be exhilarating: the spectrum of possibilities appears endless, and it’s statistically likely that other life forms exist. However, contemplating an infinite universe can be overwhelming. While it’s thrilling to imagine the vastness of life out there, the thought that “the universe is eternal, so anything can happen” can seem a bit meaningless.
Yet, these feelings are subjective. Ultimately, physics relies on observation and mathematics. This aspect is what I appreciate about physics—its precision; but infinity lacks that precision. When you set off through space, you desire a destination, whether it’s an edge or home.
Unravel the Mysteries of the Universe: Cheshire, England
Join leading scientists for a weekend exploring the universe’s mysteries, featuring a tour of the iconic Lovell Telescope.
Are Statins Really Causing Side Effects? Major Study Finds Clarity
Benjamin John/Alamy
Recent investigations reveal that the numerous side effects attributed to statin medications have been significantly overstated. This emerging evidence prompts calls for modifications on drug packaging to mitigate unwarranted concerns that deter patients from essential lifesaving treatments.
“Our findings indicate that the majority of issues listed as potential statin side effects are unlikely caused by the medication,” stated Christine Reese during a press event at Oxford University on February 3rd.
Statins, known for their cholesterol-lowering capabilities, are affordable medications that robustly reduce heart attack and stroke risks. However, fears about side effects, notably muscle pain, have long plagued their use. A 2022 study confirmed that muscle pain is rarely, if ever, induced by statin use.
“Regrettably, both patients and many healthcare providers are confused about statin side effects, contributing to hesitance in initiating or continuing their use,” commented Reese.
In this study, Reese and her team scrutinized common side effects listed on statin labels, such as dizziness, fatigue, and memory loss. These listings stem largely from case reports and observational studies rather than controlled trial data. The investigation did not cover muscle pain, weakness, or diabetes risk, which have been analyzed in earlier studies.
Researchers evaluated 19 randomized controlled trials involving 120,000 participants over an average follow-up of 4.5 years, comparing the effects of five widely prescribed statins against a placebo.
Out of 66 side effects examined, most showed no link to statin use: they occurred just as often among placebo participants, pointing to a nocebo effect, in which fear or expectation of side effects produces real symptoms. “We have seen that the risk of some side effects like elevated protein levels in urine, swelling in extremities, and liver function changes is legitimate,” mentioned Jeffrey Berger from New York University Langone Health. “However, these do not pose significant harm, allowing us to assert confidently that the benefits of statins overshadow their risks,” Reese concluded.
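To make the placebo comparison concrete, here is a minimal sketch of how the rate of one reported side effect might be compared between trial arms; the function and the numbers are illustrative assumptions, not the study’s actual method or data.

```python
from math import sqrt

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Z statistic comparing the rate of one reported side effect in two trial arms."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)             # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    return (p_a - p_b) / se                                  # |z| below ~1.96: no significant difference

# Invented example: 300 dizziness reports among 60,000 statin users
# versus 290 among 60,000 placebo users.
print(two_proportion_z(300, 60_000, 290, 60_000))  # ~0.4, i.e. indistinguishable rates
```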
Drug regulators should update statin labels, suggests Karol Watson at UCLA, to clearly distinguish genuine side effects from those that occur just as often in people taking a placebo.
Updating these labels can be a lengthy endeavor. Remarkably, the UK’s Medicines and Healthcare products Regulatory Agency only recommended in January 2026 the inclusion of muscle weakness and pain as possible side effects on statin labels.
In the interim, clinicians can utilize this research to reassure current and prospective statin users. “It’s essential to educate patients to adjust their expectations rather than dismissing their concerns,” emphasized Berger.
Watson hopes the findings will definitively settle the debates surrounding statins. “Future studies should pivot from whether statins typically induce these symptoms—we already know they do not. Instead, research should focus on identifying individuals who are genuinely more prone to certain statin-related side effects,” she remarked.
Farmers Spraying Pesticides on Cotton Fields
Tao Weimin/VCG via Getty Images
Over 60 years have passed since Rachel Carson’s influential book, Silent Spring, highlighted the dangers of pesticides. Yet the toxic burden pesticides place on wildlife has continued to grow, and may now be greater than ever.
“Across nearly every nation, there is a trend of increased pesticide toxicity,” explains Ralph Schulz from RPTU University Kaiserslautern-Landau, Germany.
The risks associated with pesticides depend on both the volume used and their toxicity levels, which can vary significantly among species. To quantify the overall pesticide burden, Schulz and his team formulated a metric called “applied toxicity.”
The team investigated the use of 625 pesticides across 201 countries from 2013 to 2019, incorporating both organic and conventional pesticide data.
They averaged toxicity data from regulatory bodies in various nations, assessing the toxicity levels to eight major organism groups: aquatic plants, aquatic invertebrates, fish, terrestrial arthropods, pollinators, soil organisms, terrestrial vertebrates, and terrestrial plants. This enabled them to calculate the total toxicity per country or organism group.
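As a rough illustration of how an “applied toxicity” figure could be assembled from such data, here is a minimal sketch; the threshold-based definition, data structures, and numbers are my assumptions, not the team’s published method.

```python
# Sketch: applied toxicity as applied mass divided by a per-group toxicity
# threshold, summed over all pesticides for one organism group.
def applied_toxicity(applications, thresholds, organism_group):
    """applications: list of (pesticide, kg applied);
    thresholds: dict mapping (pesticide, organism group) -> toxicity endpoint in kg."""
    total = 0.0
    for pesticide, kg_applied in applications:
        threshold = thresholds.get((pesticide, organism_group))
        if threshold:  # skip pesticides with no toxicity data for this group
            total += kg_applied / threshold  # more applied, or more toxic, means a higher score
    return total

# Invented example for fish:
apps = [("pyrethroid_x", 120.0), ("herbicide_y", 5000.0)]
thr = {("pyrethroid_x", "fish"): 0.5, ("herbicide_y", "fish"): 400.0}
print(applied_toxicity(apps, thr, "fish"))  # 120/0.5 + 5000/400 = 252.5
```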
Globally, applied toxicity rose from 2013 to 2019 in six out of eight organism groups. Notably, pollinators saw a 13% increase, fish a 27% rise, and terrestrial arthropods—including insects, crustaceans, and spiders—experienced a 43% increase.
“This increase does not automatically translate to direct toxic effects on these organisms,” Schulz clarifies. “However, it serves as an important indicator of the toxicity levels of the pesticides currently in use.”
Numerous studies indicate that pesticide concentrations in various ecosystems, such as rivers, often exceed regulators’ assessments during approval processes.
“While this particular index does not account for it, significant evidence exists,” Schulz remarks, emphasizing that risk evaluations tend to underestimate real-world exposures.
The rise in the combined applied toxicity stems from two key factors: the increased use of pesticides and the replacement of older varieties with more toxic alternatives, spurred primarily by the emergence of pest resistance. Schulz notes, “In my view, resistance will only exacerbate with more chemical pesticide use.”
Pesticides like pyrethroids pose notable risks to fish and aquatic invertebrates, even when applied in minimal amounts. Neonicotinoids also significantly threaten pollinators.
Calls to eliminate glyphosate, the herbicide sold as Roundup, are growing. Although glyphosate’s toxicity is relatively low, its sheer volume of use contributes substantially to the cumulative applied toxicity, according to Schulz. A ban could backfire, however, if farmers switch to more toxic herbicides.
Reducing pesticide usage could lead to unintended consequences; declining farm productivity may necessitate more land clearance, resulting in biodiversity loss.
During the 2022 UN Biodiversity Summit, nations pledged to reduce the overall risk posed by pesticides. That “overall risk from pesticides” has yet to be precisely defined, Schulz says, but he believes the sum of applied toxicities could serve as the metric.
While this method has its limitations, he insists that no perfect measure of overall pesticide use exists. Roel Vermeulen of Utrecht University in the Netherlands adds, “Despite the uncertainties, the alarming trends it reveals are undeniable.” He warns, “The world is drifting away from UN objectives, which spells bad news for ecosystems and ultimately for human health.”
“Crucially, this study illustrates that a small number of highly toxic pesticides are responsible for the majority of overall risk, highlighting clear and actionable targets for significant benefits,” Vermeulen asserts.
Transforming agricultural practices will require broader societal shifts. “Consumers must adopt dietary modifications, minimize food waste, and pay fair prices that truly reflect the environmental costs of production,” he concludes.
Examining Resilience to Alzheimer’s Disease: Why Some Individuals Remain Symptom-Free
Associated Press/Alamy
Recent studies reveal that some individuals exhibit brain changes tied to Alzheimer’s disease yet show no symptoms like memory loss. Though the reasons remain unclear, innovative research is uncovering protective factors that may prevent cognitive decline.
Alzheimer’s disease is marked by amyloid plaques and tau tangles accumulating in the brain, widely believed to contribute to cognitive decline. However, some individuals, known for their resilience, defy this notion. In 2022, Henne Holstege and her team at the University Medical Center in Amsterdam discovered that certain centenarians retain good cognitive function despite these pathological changes.
Expanding on this research, the team conducted a new study of 190 deceased individuals. Of these, 88 had been diagnosed with Alzheimer’s and 53 showed no signs of the disease at death; these individuals were aged 50 to 99. The remaining 49 were centenarians without dementia, although 18 of them had previously shown cognitive impairment.
The focus was on the middle temporal gyrus—an early site of amyloid plaques and tau tangles in Alzheimer’s. Interestingly, centenarians with elevated amyloid levels had tau levels akin to those without Alzheimer’s, suggesting that limiting tau accumulation is critical for resilience, according to Holstege.
While amyloid plaques are linked to cognitive decline, Holstege posits that tau accumulation may activate a cascade of symptoms. Notably, amyloid plaques alone may not cause significant tau tangling. “Without amyloid, tau can’t spread,” she explains.
Further analysis of approximately 3,500 brain proteins revealed that only five were significantly associated with high amyloid plaques, while nearly 670 correlated with tau tangles. Many of these proteins are involved in crucial metabolic processes like cell growth and waste clearance. “With amyloid, little changes; with tau, everything changes,” Holstege emphasizes.
In the cohort of 18 centenarians with high amyloid levels, 13 showed significant tau spread throughout the middle temporal gyrus, a pattern similar to Alzheimer’s, but the overall tau presence remained low.
This distinction is vital, as diagnosis hinges on tau spread, indicating that accumulation, not just proliferation, triggers cognitive decline. “We must understand that proliferation doesn’t mean abundance,” Holstege clarifies.
In a second study, Katherine Prater and her team at the University of Washington examined 33 deceased individuals—10 diagnosed with Alzheimer’s, 10 showing no signs, and 13 deemed resilient. Most subjects were over 80 and underwent cognitive assessments within a year before death.
In line with previous findings, the research indicated that tau was present but not accumulated in resilient brains. Though the mechanisms remain elusive, Prater theorizes that microglia—immune cells regulating brain inflammation—might play a crucial role in maintaining cognitive function in resilience.
The team also conducted genetic studies on microglia from the dorsolateral prefrontal cortex, essential for managing complex tasks. They discovered that resilient individuals’ microglia exhibited heightened activity in messenger RNA transport genes compared to those with Alzheimer’s. This suggests effective gene transport, vital for protein synthesis, is preserved in resilient brains.
“Disruptions in this process can severely impact cell function,” Prater said at the Society for Neuroscience annual meeting in San Diego. However, its direct relationship to Alzheimer’s resilience remains to be elucidated.
Moreover, resilient microglia demonstrated reduced activity in metabolic energy genes compared to those in Alzheimer’s patients, mirroring patterns in healthy individuals. This suggests heightened energy expenditure in Alzheimer’s due to inflammatory states that disrupt neuronal connections and lead to cell death.
“Both studies indicate that the human brain possesses mechanisms to mitigate tau burdens,” Prater concludes. Insights gained from this research could pave the way for new interventions to delay or even prevent Alzheimer’s disease. “While we aren’t close to a cure, the biology offers hope,” she stated.
Money has always influenced healthcare, from pharmaceutical advertising to research agendas. However, the pace and scale of this influence have intensified. A new wave of players is reshaping our health choices, filling the gaps left by overstretched healthcare systems, and commodifying our well-being.
Traditionally, doctors held a monopoly on medical expertise, but this is rapidly changing. A parallel healthcare system is emerging, led by consumer health companies. These entities—including health tech startups, apps, diagnostic services, and influencers—are vying for authority and monetizing their influence.
Currently, there seems to be a product for every discomfort. Fitness trackers monitor our activity, while meditation apps come with subscription fees. Our biology is increasingly quantifiable, yet these marketable indicators do not necessarily translate into better health; whether shifting a biomarker actually improves outcomes often remains to be seen. While genetic testing and personalized nutrition promise a “better you,” the supporting evidence often falls short.
In this landscape, our symptoms, treatments, and even the distinctions between genuine illness and everyday discomfort are commodified. This trend is evident in podcasts promoting treatments without disclosing conflicts of interest, influencers profiting from diagnoses, and clinicians presenting themselves as heroes while selling various solutions.
Much of this transformation occurs online, where health complaints and advertising lack proper regulation. Social media platforms like TikTok, YouTube, and Instagram are becoming key sources of health advice, blending entertainment with information.
The conglomerate of pharmaceutical, technology, diagnostic, and supplement brands is referred to as the Wellness Industrial Complex, fueling the rise of the “commodified self.”
This issue is not just about personal choice. Social platforms shape our discussions about disease, influencing clinical expectations and redefining what healthcare should provide. We’re essentially participating in a global public health experiment.
However, this phenomenon also reflects real-world deficits. Alternative health options thrive because people seek acknowledgment, control, and connection, especially when public health support feels insufficient. Critiquing misinformation alone won’t halt its spread and could exacerbate marginalization.
When timely testing is inaccessible, private diagnostics can offer clarity and control. Optimization culture flourishes when traditional medicine is perceived as overly cautious or reactive.
The critical question for health systems is not whether to adapt but how. They must remain evidence-based, safe, and equitable while also being attuned to real-world experiences. Failure to do so risks losing market share and moral authority—the ability to define the essence of care.
To navigate health today, one must understand the commercial mechanisms influencing it. The content we consume is curated by an industry with unprecedented access to our bodies, data, and resources, amplifying its potential to impact our self-perception.
In 2009, swimming’s world governing body banned certain high-tech swimsuits from international competition, citing the unfair advantage they conferred. NASA testing facilities were instrumental in designing these swimsuits, which featured ultrasonically welded seams instead of traditional stitching.
Swimmers donning these suits shattered 23 of the 25 world records during the 2008 Beijing Olympics. What made this swimwear so revolutionary? The answer lies in its remarkable ability to minimize friction between the swimmer and the water, enhancing speed and performance.
This instance illustrates the critical influence of friction in our world, a theme thoroughly investigated by Jennifer R. Vail in her book, Friction: Biography.
Vail is a tribologist, focusing on friction, wear, and lubrication as materials interact. She emphasizes, “The forces that resist movement drive us forward.” This concept forms the foundation of her work, which, while technical, delves into friction’s impact on science, technology, and civilization, a perspective we need as we confront future technological hurdles.
“We study friction because it is omnipresent,” Vail remarks. How did ancient Egyptians transport heavy materials for monumental projects? How do anoles and geckos scale vertical surfaces? Why was Teflon included in the Manhattan Project? What aerodynamic principles govern airplane wings? These queries all converge on friction.
From desert sands navigated by hair-like structures on animal legs to synthetic materials that optimize fluid interactions, friction plays a pivotal role, shaping everything from quantum activity to cosmic phenomena. Vail provides a detailed, passionate narrative of friction’s ubiquitous presence, showcasing its significance.
“Friction has been central to civilization ever since humans began rubbing objects together to create fire”
While discussing friction, Vail emphasizes the potential risks associated with harnessing this force. Our ability to manipulate friction has been integral to civilization, from the earliest fire-starting methods to modern innovations in engines, turbines, and contact lenses.
However, it is Vail’s outlook on the future that captivates readers. Alarmingly, friction consumes approximately 40% of the energy used in manufacturing, a cost felt both in making products and in efforts to mitigate friction itself. One study found that more than a third of an average car’s fuel is burnt solely to counteract friction. In a world increasingly challenged by energy conservation, optimizing friction is vital for sustainable practices.
Vail notes that innovations in tribology could potentially save energy equivalent to 34 million barrels of gasoline annually, 180 times the daily gasoline consumption in the U.S. Her urgent call for more tribologists in energy certification, and for greater emphasis on the field in educational curricula, is vital for our energy future.
This book is essential reading. Yet, despite Vail’s engaging tone and clear enthusiasm, the complexity may overwhelm some casual readers. Nevertheless, the effort is rewarding; gaining insight into friction enriches our understanding of the world, highlighting how countless interactions shape our experiences.
Epstein-Barr Virus: A Common Infection with Serious Implications
Science History Images/Alamy
Approximately 10% of individuals carry genetic mutations that heighten their susceptibility to the Epstein-Barr virus (EBV), a common pathogen linked to diseases like multiple sclerosis and lupus. Insights from a study involving over 700,000 participants may clarify why EBV results in severe illness for some, yet remains relatively harmless for the majority.
“Nearly everyone has encountered EBV,” explains Chris Wincup from King’s College London, who was not involved in the research. “How is it that, despite widespread exposure, only a fraction of the population develops autoimmune conditions?” This research offers plausible answers.
The Epstein-Barr virus was initially identified in 1964 when scientists detected its particles in Burkitt’s lymphoma, a type of cancer. Today, over 90% of the population has been infected with EBV, evidenced by the presence of antibodies against the virus.
Initially, EBV causes infectious mononucleosis, often referred to as mono or glandular fever, which typically resolves within a few weeks. However, it is also linked to chronic autoimmune disorders, as evidenced by a 2022 study demonstrating its role in the onset of multiple sclerosis, a disease that leads to nerve damage.
“Why do individuals exhibit such varied responses to the same viral infection?” questions Caleb Lareau at Memorial Sloan Kettering Cancer Center.
To investigate, Lareau and his research team analyzed health data from over 735,000 individuals participating in the UK Biobank study and a U.S. cohort called All of Us, whose genomes had been sequenced from blood samples. “When EBV infects certain cells, it leaves behind copies in the blood,” says Lareau, meaning the sequencing data from these samples also captures copies of the EBV genome.
The research highlights substantial variability in EBV DNA levels among subjects. Of the participants, 47,452 (9.7%) exhibited over 1.2 complete EBV genomes per 10,000 cells, indicating that while many cleared the virus post-infection, this subset did not.
To understand the heightened vulnerability of these individuals, the research team looked for genomic differences that correlated with high EBV levels. As noted by Ryan Dhindsa from Baylor College of Medicine, they identified 22 genomic regions linked to elevated EBV levels, many of which have previously been associated with immune-mediated diseases.
The strongest correlation involved genes of the major histocompatibility complex, which encodes immune proteins essential for distinguishing the body’s own cells from foreign ones. “Certain individuals possess mutations in their major histocompatibility complex,” Dhindsa explains. Further studies indicated that these variants may impede the immune system’s capacity to detect EBV infections.
“This virus profoundly impacts our immune system, having lasting effects on certain individuals,” comments Ruth Dobson at Queen Mary University of London. Persistent EBV DNA can subtly stimulate the immune system, potentially leading to autoimmune attacks on the body.
Moreover, the genetic variants linked to high EBV levels were associated with various traits and symptoms, notably an elevated risk for autoimmune diseases such as rheumatoid arthritis and lupus, reinforcing the hypothesis of the virus’s involvement in these conditions.
The research team also identified a connection between these mutations and chronic fatigue, intriguing given that some studies have posited EBV as a contributing factor to myalgic encephalomyelitis, commonly known as chronic fatigue syndrome (ME/CFS). Due to the large sample size, “we can assert that this signal exists,” Dhindsa remarked, although the precise relationship remains unclear.
For Wincup, the primary takeaway is the identification of immune system components damaged by continuous EBV presence. Targeting these components could lead to more effective treatments for EBV-related conditions.
Additionally, vaccination against EBV is a potential avenue. Currently, only experimental vaccines exist. Wincup emphasizes that developing a vaccine would be a significant advancement, arguing that despite its common perception as benign, EBV causes considerable suffering for many. “How benign is it really?”
Following a heart attack, the brain processes signals directly from sensory neurons in the heart, indicating a crucial feedback loop that involves not only the brain but also the immune system—both vital for effective recovery.
According to Vineet Augustine from the University of California, San Diego, “The body and brain are interconnected; there is significant communication among organ systems, the nervous system, and the immune system.”
Building on previous research demonstrating that the heart and brain communicate through blood pressure and cardiac sensory neurons, Augustine and his team sought to explore the role of nerves in the heart attack response. They utilized a groundbreaking technique to make mouse hearts transparent, enabling them to observe nerve activity during induced heart attacks by cutting off blood flow.
The study revealed novel clusters of sensory neurons that extend from the vagus nerve and tightly encompass the ventricles, particularly in areas damaged by lack of blood flow. Interestingly, while few nerve fibers existed prior to the heart attack, their numbers surged significantly post-incident, suggesting that the heart stimulates the growth of these neurons during recovery.
In a key experiment, Augustine’s team selectively turned off these nerves, which halted signaling to the brain, resulting in significantly smaller damaged areas in the heart. “The recovery is truly remarkable,” Augustine noted.
Patients recovering from a heart attack often require surgical interventions to restore vital blood flow and minimize further tissue damage. However, the discovery of these new neurons could pave the way for future medications, particularly in scenarios where immediate surgery is impractical.
Furthermore, the signals from these neurons activated brain regions associated with the stress response, triggering the immune system to direct its cells to the heart. While these immune cells help form scar tissue necessary for repairing damaged muscle, excessive scarring can compromise heart function and lead to heart failure. Augustine and colleagues identified alternative methods to facilitate healing in mice post-heart attack by effectively blocking this immune response early on.
Research over recent decades has shown that the heart, brain, and immune system communicate during a heart attack. The difference now is that researchers possess tools capable of analyzing these changes at the level of individual neurons. Matthew Kay from George Washington University noted, “This presents an intriguing opportunity for developing new treatments for heart attack patients, potentially including gene therapy.”
Current medical practices frequently include beta-blockers to assist in the healing process following heart attack-induced tissue damage. These findings clarify the mechanism by which beta-blockers influence the feedback loops within nervous and immune systems activated during heart attacks.
“We might already be intervening in the newly discovered routes,” remarked Robin Choudhury from the University of Oxford. Nevertheless, he cautioned that this pathway likely interacts with various other immune signals and cells that remain poorly understood.
Moreover, factors like genetics, gender differences, and conditions such as diabetes or hypertension could affect the evolution of this newly identified response. Hence, determining when and if a pathway is active in a wider population remains essential before crafting targeted drugs, Choudhury added.
We Can Usually Agree on Objects’ Appearance, But Why?
Martin Bond / Alamy
Although our world seems inherently ambiguous at the quantum level, this is not the experience we face in daily life. Researchers have now established a methodology to measure the speed at which objective reality emerges from this quantum ambiguity, lending credibility to the notion that an evolutionary framework can elucidate this emergence.
In the quantum domain, each entity, such as a single atom, exists within a spectrum of potential states and only assumes a definitive, “classical” state upon measurement or observation. Yet, we perceive strictly classical objects devoid of existential ambiguities, and the processes enabling this have challenged physicists for years.
Prominent physicist Wojciech Zurek of Los Alamos National Laboratory in New Mexico introduced the concept of “quantum Darwinism”, which suggests that a process akin to natural selection picks out the “fittest” state from the many possibilities: the state that most successfully imprints copies of itself on the surrounding environment is the one that reaches observers. When observers with access to only portions of that environment converge on the same observation, it is because each is reading one of these identical copies.
Researchers at University College Dublin, led by Steve Campbell, have shown that differing observers can still arrive at a consensus on objective reality, even if their observational methods lack sophistication or precision.
“Observers can capture a fragment and make any measurements they desire. If I capture a different fragment, I too can make arbitrary measurements. The question becomes: how does classical objectivity arise?” he explains.
The research team has recast the emergence of objectivity as a quantum sensing problem. Suppose the objective fact in question is the frequency of light emitted by an object: an observer must extract accurate information about that frequency from their fragment of the environment, much as a computer does with a light sensor. In the ideal case, this yields ultra-precise measurements and quickly leads to a definitive conclusion about the light’s frequency. That ideal is quantified by the Quantum Fisher Information (QFI), a mathematical benchmark against which less accurate observational techniques can be compared, as Gabriel Landi at the University of Rochester and his colleagues do in their recent study.
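For reference, the textbook definition used in such quantum sensing arguments is the QFI of a pure state family |ψ(θ)⟩, together with the bound it places on how well any measurement can estimate the parameter θ (standard results, quoted here for orientation rather than taken from the paper itself):

    F_Q(θ) = 4 ( ⟨∂_θψ|∂_θψ⟩ − |⟨ψ|∂_θψ⟩|^2 ),    Var(θ̂) ≥ 1 / (ν F_Q(θ))

for ν repeated measurements (the quantum Cramér-Rao bound). The larger the QFI of an environmental fragment, the more sharply an observer can pin down the objective quantity, here the light’s frequency, from that fragment alone.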
Remarkably, their calculations indicate that for significantly large fragments of reality, even observers employing imperfect measurements can ultimately gather enough data to reach the same conclusions about objectivity as those derived from the ideal QFI standard.
“Surprisingly, simplistic measurements can be just as effective as more advanced ones,” Landi states. “This illustrates how classicality emerges: as fragments grow larger, observers tend to agree even on basic measurements.” Thus, this research adds to our understanding of why, when observing the macroscopic world, we agree about its physical attributes, such as the color of a coffee cup.
“This study underscores that we do not require flawless, ideal measurements,” adds Diego Wisniacki from the University of Buenos Aires, Argentina. He notes that while QFI is foundational in quantum information theory, its application to quantum Darwinism has been sparse, presenting pathways to bridge theoretical frameworks with established experimental methodologies, like quantum devices utilizing light-based or superconducting qubits.
“This research serves as a foundational ‘brick’ in our comprehension of quantum Darwinism,” states G. Massimo Palma from the University of Palermo, Italy. “It more closely aligns with the experimental descriptions of laboratory observations.”
Palma elaborates that the simplicity of the model used in this study could facilitate new experimental pursuits; however, complex system calculations will be essential to solidify quantum Darwinism’s foundation. “Advancing beyond rudimentary models would mark a significant progression,” Palma asserts.
Landi said the researchers are eager to turn these theoretical findings into experimental tests. For instance, qubits made from trapped ions could be used to examine how the timescale on which objectivity emerges compares with how long the qubits retain their quantum character.
Historically, science operated under the notion of a “normal brain”, one that fits standard societal expectations. Those who diverge from this model have often been labeled with a disorder or mental health condition and treated as if they were somehow flawed. In recent decades, however, researchers have increasingly argued that neurodevelopmental conditions, including autism, ADHD, dyslexia, and movement disorders, should be recognized as distinctive variations representing different neurocognitive frameworks.
In the late 1990s, a paradigm shift occurred. What if these “disorders” were simply natural variations in brain wiring? What if human traits existed on a spectrum rather than a stark boundary between normal and abnormal? Those at either end of the spectrum may face challenges, yet their exceptional brains also offer valuable strengths. Viewed through this lens, diverse brains represent assets, contributing positively to society when properly supported.
The concept of neurodiversity gained momentum, sparking lively debates in online autism advocacy groups. By 2013, the Diagnostic and Statistical Manual of Mental Disorders recognized autism as a spectrum condition, abolishing the Asperger’s syndrome diagnosis and classifying it on a scale from Level 1 to Level 3 based on support needs. This shift solidified the understanding of neurodivergent states within medical literature.
Since the early 2000s, research has shown that individuals with autism often excel in mathematical reasoning and attention to detail. Those with ADHD frequently outperform others in creativity, while individuals with dyslexia are adept at pattern recognition and big-picture thinking. Even those with movement disorders have been noted to develop innovative coping strategies.
These discoveries have led many scientists to argue that neurodivergent states are not mere evolutionary happenstance. Instead, our ancestors likely thrived thanks to pioneers, creative thinkers, and detail-oriented individuals in their midst. A group possessing diverse cognitive strengths could more effectively explore, adapt, and survive. Some researchers now propose that the autism spectrum comprises distinct subtypes with varying clusters of abilities and challenges.
While many researchers advocate for framing neurodivergent characteristics as “superpowers,” some caution against overly positive portrayals. “Excessive optimism, especially without supporting evidence, can undermine the seriousness of these conditions,” says Dr. Jessica Eccles, a psychiatrist and neurodiversity researcher at Brighton and Sussex Medical School. Nevertheless, she emphasizes that “with this vocabulary, we can better understand both the strengths and challenges of neurodiversity, enabling individuals to navigate the world more effectively.”
The world is entering an alarming “era of water bankruptcy” fueled by overconsumption and climate change. Approximately 75% of the global population lives in regions confronting severe water scarcity, pollution, and drought.
This is the finding of a United Nations report, which concludes that many regions are extracting excessive amounts from their annual rainwater and snowmelt, leading to the rapid depletion of groundwater reserves that may take thousands of years to replenish. Notably, 70% of major aquifers are now classified as depleted, and many changes are irreversible.
Key contributors to this crisis include the expansion of agriculture and urbanization into arid areas, which are becoming increasingly dry due to climate change. For instance, around 700 sinkholes have formed in Türkiye as a consequence of groundwater extraction. In addition, devastating sandstorms induced by desertification have resulted in numerous casualties in Beijing.
“Our surface water account is now empty,” asserts Kaveh Madani from the United Nations University Institute for Water, Environment and Health. “The inherited savings from our ancestors—groundwater and glaciers—are now exhausted. We are witnessing global signs of water bankruptcy,” he explained.
Approximately 4 billion people face water scarcity for at least one month each year, a problem compounded by migration, conflict, and insecurity. Madani noted that while a currency collapse triggered recent protests in Iran, underlying water shortages were also significant contributors.
Iran has experienced its driest autumn in 50 years. The situation is further aggravated by the rapid proliferation of agricultural dams and wells, which have contributed to the near-complete desiccation of Lake Urmia, once the largest lake in the Middle East. The Iranian government is now considering relocating the capital away from Tehran and is exploring cloud-seeding methods to induce rain.
In the United States, the Colorado River, which is crucial for the water supply in much of the western region, has experienced an estimated flow reduction of 20% in the past 20 years. This decline is mainly attributed to decreased rainfall and increased evaporation, alongside excessive water repurposing for beef and dairy production. Cities like Los Angeles rely heavily on this water for drinking, despite the diminishing flow reaching the ocean.
The river’s primary reservoirs are currently at about 30% capacity, and projections indicate they could reach “dead pool” status (10-15% capacity) by 2027, according to research conducted by Bradley Udall from Colorado State University. Negotiations over water allocation among states stalled last year.
Experts emphasize that increasing agricultural water efficiency can, counterintuitively, lead to greater water consumption. With drip and sprinkler irrigation, crops absorb more of the water applied, so less excess drains back into rivers and aquifers than it would from flooded fields. It is therefore essential to reduce overall water consumption alongside improving efficiency, Udall asserts.
“Agriculture consumes 70% of our water resources, hence effective solutions must originate from the agricultural sector,” he adds. “A reduction in agricultural use is crucial, and this issue is prevalent worldwide.”
Approximately half of the global food production occurs in areas where water storage is diminishing. Addressing agricultural water use will also necessitate economic diversification to support the livelihoods of over 1 billion individuals, predominantly in low-income nations, which often export food to high-income countries.
“Water is integral to the economy, as it significantly impacts public health,” states Madani. “If jobs are lost, it can lead to social unrest similar to what we are witnessing in Iran.”
Even regions with sufficient rainfall are experiencing increased water extraction by data centers or contamination from industries, sewage, and agricultural runoff. Wetlands equivalent to the area of the European Union are being lost primarily due to agricultural conversion, incurring an estimated global cost of $5.1 trillion in ecosystem services, such as flood mitigation, food production, and carbon storage.
In Bangladesh, approximately half of the nation experiences well water contamination due to arsenic, exacerbated by rising sea levels and saltwater intrusion. In Dhaka, tap water and the ominously dubbed “river of death” are polluted by chemicals linked to fast-fashion product manufacturing intended for export to Europe and North America.
“It is widely known that the river is tainted by the garment industry,” notes Sonia Hawke from Oxford University. “However, strict regulations could deter buyers, creating a conflict of interest.”
In many instances, vital water bodies—including rivers, lakes, wetlands, and aquifers—struggle to return to their previous conditions. Additionally, significant glacial melting has diminished water supplies for hundreds of millions.
Madani emphasizes the necessity for humanity to adapt to reduced water availability through improved water management strategies. However, this starts with accurately assessing water resources and consumption, including household meters, well usage, and waterway health.
“Efforts like [cloud-seeding] may be futile if we don’t understand our water system’s metrics. Effective management begins with measurement,” Madani concludes.
Arc-shaped volcanoes like Japan’s Sakurajima release carbon dioxide from the Earth’s interior
Asahi Shimbun via Getty Images
New research suggests that the impact of volcanoes on Earth’s climate may not be as ancient as previously believed.
The Earth’s climate has experienced shifts between “icehouse” and “greenhouse” conditions, largely dictated by greenhouse gas levels like carbon dioxide.
Volcanic arcs, such as those running along mountain ranges like Japan’s, release CO2 from deep within the Earth. Recent findings indicate that these arcs became a substantial source of carbon emissions only around 100 million years ago, towards the end of the dinosaurs’ reign, according to Ben Mather and his team at the University of Sydney.
This correlates with the emergence of phytoplankton featuring calcium carbonate scales in the oceans approximately 150 million years ago. When these organisms perish, they deposit large amounts of calcium carbonate on the ocean floor.
As tectonic plates shift, these significant reservoirs of carbon are dragged down and recycled into the Earth’s mantle via a process known as subduction.
“Most of the carbon derived from plankton on the subducting oceanic plate mixes into the mantle, but a portion is released through volcanic arcs,” explains Mather.
Before the emergence of scaly plankton, volcanic arc emissions contained relatively lower levels of CO2, according to Mather.
Through modeling, Mather and colleagues examined tectonics’ long-term impact on the carbon cycle over the past 500 million years. They discovered that much of the carbon stored within Earth throughout its history was released through crustal fractures in a process termed rifting, not primarily through volcanic arcs.
Rifting, a geological process where continents separate, can occur on land (as in the East African Rift) or along mid-ocean ridges.
“As tectonic plates separate, they effectively take the roof off parts of the molten Earth below,” Mather states. “This process generates new crust at mid-ocean ridges, releasing carbon.” The amount of carbon entering the atmosphere from continental rifts and mid-ocean ridges depends on the total length of the cracks and the rate at which they separate, both of which have remained relatively stable. Emissions from volcanic arcs, by contrast, have surged in the last 100 million years thanks to the new carbon reservoirs laid down by plankton.
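As an illustrative simplification (my own shorthand for the dependence described above, not a formula from the paper), the rift-related flux can be thought of as scaling like

    F_rift ∝ L × v × c

where L is the total length of active rifts and ridges, v is the average rate at which the plates separate, and c is the carbon released per unit area of new crust. Because L and v have stayed roughly steady over geological time, this rift term has remained nearly constant.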
Currently, Earth is in a temporary warm phase called an interglacial period, nested within a larger ice age that began 34 million years ago. One reason for the persistent cold phases is that phytoplankton sequester substantial amounts of carbon from the ocean, depositing it on the sea floor. Although volcanic emissions are rising, they still pale in comparison to the carbon stored by phytoplankton and that sequestered through tectonic movements.
According to Alan Collins and his team from the University of Adelaide, modeling studies like this are crucial for comprehending how volcanic and tectonic activities have influenced climate patterns over geological timescales.
“The composition of marine sediments has shifted as new organisms evolved, utilizing diverse elements, including the rise of calcium carbonate-based zooplankton,” Collins emphasizes.
Journal reference: Communications Earth & Environment, DOI TK
Explore the Land of Fire and Ice: Iceland
Embark on an unforgettable journey through Iceland’s breathtaking landscapes. Experience volcanic and geological marvels by day, and chase the mesmerizing Northern Lights by night (October).
IBM’s Quantum System Two Unveiled at a Data Center in Germany
Quantum computing has been making headlines lately. You might have noticed quantum chips and their intriguing cooling systems dominating your news feed. From politicians to business leaders, the term “quantum” is everywhere. If you find yourself perplexed, consider setting a New Year’s resolution to grasp the fundamentals of quantum computing this year.
This goal may seem daunting, but the timing is perfect. The quantum computing sector has achieved significant breakthroughs lately, making it a hotbed of innovation and investment, with the market expected to exceed $1 billion, likely doubling in the coming years. Yet, high interest often leads to disproportionate hype.
There remain numerous questions about when quantum computers might outpace classical ones. While mathematicians and theorists ponder these queries, the practical route may be to improve quantum computers through experimentation. However, consensus on the best methodologies for building these systems is still elusive.
Compounding the complexity, quantum mechanics itself is notoriously challenging to comprehend. Physicists debate interpretations of bizarre phenomena like superposition and entanglement, which are pivotal for quantum computing’s potential.
Feeling overwhelmed? You’re not alone. But don’t be discouraged; these challenges can be overcome with curiosity.
As a former high school teacher, I often encountered curious students who would linger after class, eager to discuss intricate aspects of quantum computing. Many were novice learners in math or physics, yet they posed thought-provoking questions. One summer, a group who took an online quantum programming course approached me, surpassing my own coding knowledge in quantum applications. The following year, we delved into advanced topics typically reserved for college-level classes.
Recently, I discovered a young talent in quantum inquiry: 9-year-old YouTuber Kai, who co-hosts a podcast called The Quantum Kid, interviewing leading quantum computing experts for more than 88,000 subscribers.
Kai’s co-host, Katia Moskvitch, is not only his mother but also a physicist with extensive experience in science writing. She works at Quantum Machines, a firm developing classical devices that enhance the functionality of quantum computers. Kai brings an infectious enthusiasm to the podcast, engaging with pivotal figures who have shaped modern quantum theory.
In a recent episode, renowned quantum algorithm creator Peter Shor discussed the intersection of quantum computing, sustainability, and climate action. Nobel laureate Steven Chu and distinguished computer scientist Scott Aaronson also joined, exploring concepts like time travel and its theoretical connections to quantum mechanics. Additionally, physicist John Preskill collaborated with roboticist Ken Goldberg to examine the interplay of quantum computing and robotics.
Kai and co-host (and mother) Katia Moskvitch
While The Quantum Kid may not delve deep into rigorous math, it offers a fun entry point and insight from leading experts in quantum technology. Most episodes introduce fundamental concepts like superposition and Heisenberg’s uncertainty principle, which you can explore further in reputable publications such as New Scientist.
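If you want to see what superposition looks like in the most concrete terms, here is a minimal sketch (my own illustration, not material from the podcast) of a single qubit state and its measurement probabilities:

```python
import numpy as np

# A qubit in equal superposition of |0> and |1>: (|0> + |1>) / sqrt(2)
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -> a 50/50 chance of reading out 0 or 1
```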
The true strength of The Quantum Kid lies in Kai’s ability to ask the very questions that an inquisitive mind might have regarding quantum computers—those which seek to unpack the complex yet fascinating nature of this technology. If you’ve been curious about quantum computing but have felt overwhelmed, Kai encourages you to remain inquisitive and seek clarity. (We’re here to guide you on your quantum journey.)
Could quantum computers revolutionize space exploration or even facilitate time travel? Might they help develop advanced robotics or combat climate issues? The answers are not straightforward and are laden with nuance. Kai’s engaging dialogues make complex theories accessible, ensuring clarity resonates with both young listeners and adults. Hearing Peter Shor reiterate that current quantum systems lack the clout to change the world doesn’t dampen Kai’s enthusiasm but rather fuels it.
In the pilot episode, physicist Lennart Renner expresses optimism, stating, “We’re evolving alongside new machines that can potentially revolutionize tasks, hence we must deliberate on their applications,” setting a forward-thinking tone that reverberates throughout the series.
Adopting a blend of Kai’s wonder and imagination, coupled with the seasoned expertise of guests, will enhance any quantum learning project you embark on this year. Quantum computing, while intricate and multifaceted, remains incredibly compelling. If your child is captivated, why not explore it together?
Over 50 years ago, Jane Goodall amazed the scientific community by discovering that chimpanzees in Tanzania use tools to extract insects from termite mounds—an act previously thought to be exclusive to humans. Her mentor, Louis Leakey, famously remarked, “Now we either need to redefine ‘tool,’ redefine ‘human,’ or accept chimpanzees as humans.”
Today, research supports the notion that a variety of species engage in learning and exhibit cultural behaviors. A recent study published in the Philosophical Transactions of the Royal Society B, co-led by Philippa Brakes, showcases evidence of cultural learning across species, from whales to wallabies.
For many species, sharing culturally transmitted behaviors is crucial for survival, aiding skill development and adaptability in shifting environments. In the realm of conservation, these insights are beginning to transform practices, from species reintroduction to mitigating human-wildlife conflicts over habitat use.
Moreover, the concept of “longevity conservation” is gaining popularity. Research shows that some of the longest-lived animals have developed remarkable genetic adaptations to cope with extended lifespans while serving as custodians of shared ecological knowledge. Older individuals often possess critical information that aids adaptation to environmental changes. For instance, species like Greenland sharks and giant tortoises reveal biochemical strategies for resisting cancer and cellular repair over centuries.
As our understanding expands, we are compelled to rethink what qualifies a site as a ‘World Heritage Site.’ If whales and birds possess cultural traditions, shouldn’t we regard the loss of their songs and foraging methods with as much seriousness as the loss of human monuments? Although this perspective may seem radical, it is indeed worth considering.
Many indigenous communities have long recognized the knowledge-sharing among species. Collaborative relationships, such as those between killer whales and indigenous hunters in Australia, as well as bottlenose dolphins aiding fishermen in Brazil, illustrate the importance of listening to nature.
Understanding the knowledge shared by other animals should also prompt us to rethink controversial technologies like “de-extinction.” Without elder guides to teach young hybrids migration paths and social norms, revived individuals may struggle to survive in current habitats.
Perhaps the most significant challenge a cross-species view of culture poses is to the assumption of human exceptionalism. The more we learn about the cultures of other species, the more we recognize that we coexist with a diverse array of beings, each with their own values and emotions.
It took over 50 years for the importance of non-human cultures highlighted in Goodall’s findings to gain traction among conservation groups. As time progresses, we continue to dismantle the myth of human exceptionalism. We do not need to explore distant galaxies to find intelligent, civilized beings; numerous other cultural life forms already share our planet. Embracing this knowledge can drive the transformative changes necessary to fulfill our commitments as guardians of this rich biocultural diversity.
Philippa Brakes is a behavioral ecologist at Massey University in New Zealand. Marc Bekoff is Professor Emeritus at the University of Colorado Boulder.