Dairy Farm Digesters: Harnessing Biogas from Cow Manure
Rudmer Zwerver/Shutterstock
During World War II, farmers in Germany and France found an innovative way to secure their own fuel supplies: they covered manure pits to capture the methane given off as the waste decomposed. Today, anaerobic digesters, the modern descendants of that technology, are promoted by governments as a means to mitigate greenhouse gas emissions from dairy farms. However, researchers warn that investments in these digesters may have unintended consequences for both climate and public health.
Rebecca Larson from the University of Wisconsin-Madison asks, “Is this funding more effective for climate change mitigation than other strategies, like solar panel installations?” She recognizes that although digesters are effective for livestock emissions, it’s crucial to explore all options.
Agriculture is responsible for roughly a third of global human-caused emissions, and in the U.S. cow burps account for about a third of the agricultural share. Industrial dairy farms manage large quantities of cow manure, which is often flushed into open lagoon systems.
The commercial-scale use of digesters began in the 1970s. Now, there are over 17,000 digesters primarily on farms in the European Union, while the U.S. and England each have around 400. In China, millions of small farms use brick digesters to optimize waste management.
When organic waste decomposes without oxygen, as it does in sewage treatment plants and waste lagoons, it releases methane as well as carbon dioxide. Sealed digesters speed up this decomposition and capture the resulting biogas, which can be burned for heat or electricity or upgraded into natural gas for vehicles. This prevents the release of methane, a greenhouse gas significantly more potent than CO2, while the digested waste can still be repurposed as fertilizer and bedding.
Following digestion, methane emissions from stored manure can be reduced by up to 91%. Yet a recent analysis of methane emissions from 98 California dairy farms indicates a more complex picture. Approximately 1.7 million dairy cows are housed in factory farms across the state, which has invested $389 million in digester construction—more than anywhere else in the U.S.
Although digesters reduced point-source methane emissions from 91 kg to 68 kg per hour across two-thirds of the participating farms, emissions spiked temporarily during construction. The reason for this spike remains unexplained; it may be tied to disturbance of the manure slurry during installation.
Due to their heated environments, digesters may produce methane at a faster rate than traditional lagoons and can occasionally leak. The study found that in some cases, leakage rates were over 1000 kg per hour, highlighting a potential risk in efficiency.
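A back-of-the-envelope calculation puts these figures in perspective (a rough sketch using only the per-hour rates reported above; it says nothing about how often or how long such leaks persist):

```python
# Rough comparison of digester savings versus a large leak, using the
# per-hour methane figures reported in the California study above.
before_kg_h = 91.0   # point-source emissions without a digester
after_kg_h = 68.0    # emissions with a digester
leak_kg_h = 1000.0   # worst-case leak rate observed in the study

savings_kg_h = before_kg_h - after_kg_h             # 23 kg/h saved
reduction_pct = 100.0 * savings_kg_h / before_kg_h  # ~25% reduction

# How long a 1,000 kg/h leak takes to cancel a full day of savings:
hours_to_cancel_day = 24 * savings_kg_h / leak_kg_h  # ~0.55 hours

print(f"Point-source reduction: {reduction_pct:.0f}%")
print(f"A {leak_kg_h:.0f} kg/h leak cancels a day of savings in "
      f"{hours_to_cancel_day:.1f} hours")
```

On the study's own numbers, a worst-case leak running for barely half an hour would wipe out a full day of a digester's point-source savings, which is why leak detection matters so much.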
Alyssa Valdez from the University of California, Riverside, says such high leak rates should serve as a warning. Even so, California’s leak notification program led to 20% of identified problems being fixed, and studies suggest digesters can still roughly halve emissions from stored manure.
“When gas leaks occur, operators incur financial losses, creating an incentive to minimize emissions,” states Angela Bywater from the University of Surrey, UK. However, digesters can also increase ammonia emissions from the digested manure, raising concerns about environmental contamination.
The prevailing debate focuses on how aggressively governments should support digesters. California’s favorable policies appear to encourage the growth of factory farms, as incentives are linked to biogas production under the Low Carbon Fuel Standard. According to a preprint study, receiving such incentives can increase dairy herd size by an average of 860 cows.
Brent Kim of Johns Hopkins University warns, “Taxpayer funding that inflates the value of manure may distort market dynamics, making manure more valuable than milk. We should consider viable climate change solutions that don’t inadvertently sustain harmful industry practices.”
About one-third of migraine sufferers find no relief from standard treatments. New research suggests that harnessing the brain’s waste-clearing system could open up new treatment options: a drug typically used to manage high blood pressure helped clear migraine-driving chemicals from the brains of mice, which subsequently showed minimal facial pain.
Globally, approximately 1 in 7 people suffer from migraines. Symptoms include pain, pressure, and tingling in areas such as the cheeks, jaw, forehead, and behind the eyes, often worsened even by light touch. “Just brushing your hair can result in excruciating pain for those living with migraines,” stated Adriana Della Pietra, who presented findings at the Oxford Glymphatic and Brain Clearance Symposium in the UK on April 1.
Conventional treatments for migraines, including triptans, aim to reduce inflammation and lower levels of calcitonin gene-related peptide (CGRP), a neurotransmitter that is a major driver of migraines and the target of many standard treatments. “Unfortunately, many individuals do not respond to these medications and are frequently trapped in a cycle of debilitating pain,” commented Valentina Mosienko from the University of Bristol, UK, who was not involved in the study.
In previous studies, researchers discovered that prazosin, a medication prescribed for high blood pressure, alleviated facial pain caused by traumatic brain injuries in mice. Traumatic injuries can impair the brain’s waste disposal system, known as the glymphatic system, and prazosin enhanced fluid flow from brain cells through this system. Interestingly, it also appeared to benefit some migraine models used as control groups.
To delve deeper, the research team gave one group of mice prazosin in their drinking water for six weeks, while a control group received standard water. Both groups were then given CGRP injections to induce migraine-like states.
After 30 minutes, the researchers pressed progressively thicker plastic filaments against the mice’s foreheads. The touch of such filaments is normally painless, but thicker ones press harder and are easier to feel. Mice that had received prazosin tolerated significantly thicker filaments than control mice did before flinching. Indeed, Della Pietra noted that the prazosin group behaved much like mice that hadn’t received CGRP injections at all.
Further analysis revealed that prazosin not only reversed the impairment of the glymphatic system caused by CGRP but also likely enhanced the clearance of CGRP and other pain-transmitting molecules, as reported by Della Pietra.
Research teams are eager to examine whether similar results can be replicated in humans. “If it proves effective in humans, that would be a tremendous breakthrough,” Mosienko added. “Since this drug is already in use, we have established safety for its application.”
Attention Deficit Hyperactivity Disorder (ADHD) is officially classified as a human condition. However, many dog owners have observed similar traits, such as hyperactivity, impulsivity, and a tendency to become easily distracted in their canine companions.
Research indicates that approximately 20 percent of dogs display ADHD-like behaviors. These dogs often tune out during training classes, captivated instead by the instructor’s shoelaces or engaged in rambunctious parkour.
While these lovable rogues are delightful, their behavior can make training challenging. Common signs of ADHD-like behavior in dogs include excessive barking, biting, chasing, and stealing.
If these symptoms hinder your dog’s daily activities—such as learning new commands or interacting positively with you—it may indicate an ADHD-like disorder.
A recent study reveals that different dogs experience ADHD-like traits variably. Higher instances of hyperactivity and inattention are particularly common in young male dogs that spend extended periods alone at home.
It’s essential to remember that different dog breeds exhibit varying behavioral traits. For instance, breeds like Cairn Terriers, Jack Russells, and German Shepherds display more impulsive behaviors, whereas Chihuahuas, Rough Collies, and Chinese Crested Dogs are less likely to show these traits.
If you find yourself dealing with a challenging dog, understand that their behavior is not intentional. Much like individuals with ADHD, dogs process their environment differently due to various genetic and environmental factors.
Fortunately, there are effective strategies to help manage these behaviors. Professional behavior therapy can be beneficial, but often, increased exercise and engagement can lead to significant improvements.
Short, frequent training sessions that utilize positive reinforcement (i.e., treats) to reward good behavior can yield excellent results. Additionally, calming enrichment activities like lick mats or puzzle toys can provide much-needed stimulation.
This article addresses the query from Rhys Brooks via email: “Does my dog have ADHD?”
For further questions, please reach out to us at questions@sciencefocus.com or connect with us on Facebook, Twitter, or Instagram (please include your name and location).
Discover more with our ultimate fun facts and dive into more fascinating science content!
For the first time in over five decades, humans have seen the far side of the moon with their own eyes, and stunning new photos of the encounter are now being released.
In the most eagerly awaited moment of the Artemis II mission, four astronauts orbited the moon on Monday, capturing breathtaking photos and making meticulous observations from the Orion spacecraft.
NASA astronauts Reid Wiseman, Victor Glover, Christina Koch, and Canadian astronaut Jeremy Hansen took countless pictures of the moon’s rugged landscape, vast impact craters, and dark plains.
The first newly released photo, shared by the White House on X Tuesday morning, depicts an “Earthset” taken from the far side of the moon, as the Earth fades from view.
This captivating image serves as a modern reinterpretation of the iconic “Earthrise” photograph captured during the Apollo 8 mission in 1968. Unlike Apollo 8’s images, which showed the Earth coming back into view, this new photo captures the Earth as it disappears behind the moon.
The famous “Earthrise” photo was taken on December 24, 1968, during Apollo 8. William Anders / NASA
The White House also released stunning new photographs taken by Artemis II astronauts of a solar eclipse from space. This extraordinary event occurred Monday evening as the sun slipped behind the moon during the mission’s hours-long lunar flyby.
Astronauts became the first humans to witness a solar eclipse from the moon. This groundbreaking image captures the dark moon with the sun’s outer atmosphere, the corona, glowing around its edges.
The moon’s near side is visible to the right, marked by distinct dark patches, while the far side remains unseen from Earth.
NASA
In a historic event, humans have returned to the moon for the first time since Apollo 17 in 1972. On April 6, four astronauts from NASA’s Artemis II mission circled the far side of the moon, reaching unprecedented distances from Earth.
Mission Commander Reid Wiseman emphasized that this journey marks a significant beginning, surpassing Apollo 13’s record of 400,171 kilometers set in 1970. “Let’s inspire this generation and the next to ensure this distance record is challenged,” he stated during a NASA livestream. During the mission, the Artemis team proposed naming two newly discovered craters: “Integrity,” after the Orion capsule, and “Carol,” in honor of Wiseman’s late wife.
Throughout the flyby, the astronauts engaged in both window-side observations and cabin communications with mission control in Houston, Texas. The crew comprises NASA astronauts Wiseman, Christina Koch, and Victor Glover, alongside Canadian Space Agency astronaut Jeremy Hansen.
As Orion passed behind the moon, the sun shrank in the sky and then slipped from view, culminating in a rare solar eclipse not observable from Earth. The astronauts donned eclipse glasses to view the sun and its corona, potentially allowing them to capture unprecedented detail free from atmospheric interference.
Artemis astronauts experienced an extraordinary solar eclipse.
NASA
The astronauts captured stunning details of the lunar surface, showcasing its vibrant color diversity. While the moon appears gray from Earth, close-up observations reveal hues of green, brown, and even orange, attributed to chemical changes in the lunar soil. “The rapid transformations of the surface as we orbit the moon are breathtaking,” Hansen noted.
As they orbited the moon, the crew observed previously unseen regions. They took special interest in the terminator—the boundary separating day from night—where deep shadows accentuate the landscape’s features. “The visual magic of the terminator, with its bright islands and dark valleys, is captivating,” Glover remarked.
The astronauts expressed deep emotions witnessing the moon’s diverse terrain up close, imagining what it would be like to traverse its surface. “The moon is a real entity in the universe, not merely a distant poster in the sky,” Koch stated.
NASA astronaut Reid Wiseman took this breathtaking photo of Earth from the Orion spacecraft.
NASA/Reid Wiseman
The Orion capsule reached its closest point to the lunar surface, approximately 6,545 kilometers away. This milestone will stand until the Artemis IV mission, which plans a landing in 2028.
As Orion returns to Earth, expected on April 10, the astronauts will splash down in the Pacific Ocean off California’s coast. Following their return, the team will analyze notes, photos, and scientific findings in preparation for advancing the Artemis program.
Many people believe that food becomes less enjoyable as we age. While age plays a role, various other factors contribute to this phenomenon.
We are born with around 9,000 taste buds located on the papillae of the tongue. These taste buds regenerate every few weeks.
However, this regeneration slows down as we age. After around age 50, there is often an overall decline in taste buds, and existing ones may become less sensitive.
Not everyone experiences this decline uniformly, but some may find that food loses its appeal as they age. Still, it’s not solely about age.
Factors such as genetics, dental issues, medications, chronic health conditions, smoking, and nasal problems can also affect our sense of taste.
Moreover, our sense of smell significantly impacts how we perceive flavor. As we age, the number of olfactory receptor cells and the function of nasal mucous membranes decline, dulling our taste perception.
Temporary loss of smell, such as during a cold, can create similar effects, rendering food significantly bland.
As our sense of taste weakens, food preferences often shift: many people begin to favor stronger salty and sweet flavors to compensate.
However, caution is essential; increased salt intake can affect blood pressure, while consuming sweets can lead to weight gain.
Intense flavors like sour citrus can awaken even the dullest of palates – Credit: Getty
So, can we prevent our sense of taste from dulling? While we can’t halt the aging process, certain habits may enhance our taste perception.
For instance, staying well-hydrated helps maintain saliva production; avoiding smoking (which harms taste buds), managing chronic conditions such as diabetes, and reviewing medications that cause dry mouth can all help.
Incorporating sharp flavors can also invigorate our taste experience. Foods like citrus fruits, sorbets, and mint often strike a stronger chord with our taste buds.
Marinating foods with vinegar, dressings, mustard, herbs, and spices can significantly enhance flavor and is often a better approach than merely increasing salt and sugar.
While it’s common for some individuals to experience a decline in taste as they age, with mindful habits and a touch of culinary adventure, many can continue to savor vibrant flavors well into their later years.
This article addresses the question posed by Kian Wilkinson from Lancaster: “Can we prevent our sense of taste from becoming dull as we age?”
If you have any questions, feel free to email us at questions@sciencefocus.com or reach out via Facebook, Twitter, or Instagram (please include your name and location).
Explore our ultimate fun facts and discover more amazing scientific content.
Astronomers from the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) have discovered colossal hydrogen halos, known as Lyman-alpha nebulae, surrounding over 30,000 galaxies dating back 10 to 12 billion years. This groundbreaking finding indicates that the essential materials for galaxy formation were far more plentiful than previously believed.
A giant halo of hydrogen gas, as revealed by HETDEX data and captured in deep imaging from the NASA/ESA/CSA James Webb Space Telescope. This ancient system, seen as it was 11.3 billion years ago, glows with the collective light of its myriad galaxies, with the brightest areas highlighted in red. Image credit: Erin Mentuch Cooper, HETDEX/NASA/ESA/CSA/STScI.
Hydrogen gas presents a unique challenge to astronomers, as it doesn’t emit light independently.
However, when located near energy-emitting objects—like galaxies packed with stars radiating UV light—hydrogen can glow due to this energy.
Detecting hydrogen halos demands significant time and precision, as the specialized instruments needed are often in high demand.
Previous astronomical surveys have identified some of these halos but typically focused only on the most luminous and extreme examples.
Furthermore, targeted observations of early galaxies are often too zoomed in, leading to the omission of larger halos.
HETDEX’s observations are actively filling this observational gap. This research uses the Hobby-Eberly Telescope at McDonald Observatory to map over 1 million galaxies and deepen our understanding of dark energy.
“We collected nearly half a petabyte of data, not just on these galaxies, but also on the intergalactic space,” stated Dr. Karl Gebhardt, the principal investigator of HETDEX and chair of the astronomy department at the University of Texas at Austin.
“Our observations encompass a sky area capable of hosting more than 2,000 full moons. The extent is extraordinary and unprecedented.”
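The “2,000 full moons” comparison can be turned into square degrees with a quick estimate (an illustrative calculation, not from the paper; the full moon’s apparent diameter of roughly 0.5 degrees is a standard approximation and is not given in the article):

```python
import math

# Rough sky-area estimate implied by "more than 2,000 full moons".
# Assumes a full-moon apparent diameter of ~0.5 degrees (a common
# approximation; this figure does not appear in the article).
moon_diameter_deg = 0.5
moon_area_sq_deg = math.pi * (moon_diameter_deg / 2) ** 2  # ~0.196 deg^2

survey_area_sq_deg = 2000 * moon_area_sq_deg
print(f"~{survey_area_sq_deg:.0f} square degrees")  # → ~393 square degrees
```

That is a few hundred square degrees, an enormous footprint compared with the pencil-beam surveys that typically target individual early galaxies.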
“The Hobby-Eberly Telescope ranks among the largest telescopes worldwide,” Dr. Dustin Davis, a HETDEX scientist and postdoctoral fellow at UT Austin, remarked.
“HETDEX’s instruments yield 100,000 spectra per observation, providing a vast quantity of data and a treasure trove of exciting discoveries on the horizon.”
To locate hydrogen halos, astronomers examined the brightest 70,000 of the 1.6 million early galaxies cataloged by HETDEX.
Utilizing supercomputers at the Texas Advanced Computing Center, they assessed how many showed signs of surrounding halos.
According to the research team, these halos can span tens to hundreds of thousands of light-years across.
Some appear as simple, football-shaped clouds enveloping individual galaxies, while others take on irregular forms housing multiple galaxies.
“These formations are intriguing,” said Erin Mentuch Cooper, HETDEX data manager and researcher at UT Austin.
“They resemble giant amoebas with tentacles extending into the cosmos.”
Results of this study were published on March 11, 2026, in a paper in the Astrophysical Journal.
_____
Erin Mentuch Cooper et al. 2026. Lyα Nebula in HETDEX: The largest statistical census connecting Lyα halos and blobs across cosmic noon. ApJ 1000, 38; doi: 10.3847/1538-4357/ae44f3
New research reveals that a remarkable collection of over 700 fossils from the late Ediacaran period indicates that significant animal groups, including the early ancestors of vertebrates, began diversifying millions of years earlier than previously believed.
Restoration of the Egawa biota. Image credit: Xiaodong Wang.
The Ediacaran-Cambrian transition marked one of the most crucial turning points in Earth’s biological history.
However, the fossil evidence presents a fragmented view of this significant change, as Ediacaran biological communities are quite different from those of the Cambrian, leaving key moments of evolution elusive.
Dr. Gaorong Li from the University of Oxford states, “Our findings bridge a critical gap in the narrative of early animal diversification.”
“For the first time, we show that complex organisms typically associated with the Cambrian existed during the Ediacaran, indicating they evolved much earlier than fossil records previously suggested.”
In their study, Li and colleagues analyzed over 700 specimens from recently identified fossils in Yunnan province, China.
This fossil group, dating back 554 to 539 million years, is part of the intriguing Egawa biota.
Unlike many Ediacaran fossil sites that predominantly showcase traces of life on sandstone, these fossils are preserved as carbonaceous membranes, mirroring preservation styles found in renowned Cambrian sites like Canada’s Burgess Shale.
Dr. Luke Parry from the University of Oxford commented, “This groundbreaking discovery offers insight into a transitional phase in biological communities. The unique characteristics of Ediacaran life paved the way for the recognizable groups we categorize today.”
“Upon first examining these specimens, we recognized their uniqueness and the unexpected nature of our findings.”
The fossil group includes some of the earliest known deuterostomes, the category that now encompasses humans and other vertebrates such as fish.
Among the specimens are early relatives of modern starfish and of acorn worms (together forming the group Ambulacraria), including one form with a U-shaped body that attached to the seafloor by a stalk and captured food with tentacles.
Dr. Frankie Dunn from the University of Oxford noted, “It’s captivating that such exotic organisms thrived during the Ediacaran period.”
“We’ve discovered fossils that are distant relatives of starfish and sea cucumbers, and the search for more continues.”
Other fossils from the Egawa biota suggest that chordates, the group that includes backboned animals, also existed during this period.
Other noteworthy discoveries among the fossils include worm-like bilateral animals featuring complex feeding adaptations, as well as rare specimens believed to be early comb jellies.
Many specimens display unique anatomical features that do not correspond to any known Ediacaran or Cambrian species.
Dr. Ross Anderson from the University of Oxford stated, “Our findings suggest that the apparent scarcity of these complex faunas in other Ediacaran sites may highlight preservation discrepancies rather than an actual lack of diversity.”
“Carbonaceous compactions like those found in Egawa are uncommon in rocks of this age, indicating that similar communities may remain unpreserved elsewhere.”
For more on this pivotal discovery, refer to the research paper published in Science.
_____
Gaorong Li et al. 2026. Dawn of the Phanerozoic: The late Ediacaran transitional fauna of southwestern China. Science 392 (6793): 63-68; doi: 10.1126/science.adu2291
Recent nanoscale analysis of Bennu sample OREX-800066-3, returned by NASA’s OSIRIS-REx mission, reveals that organic compounds and minerals are clustered in distinct regions. This indicates that water once altered the asteroid in a heterogeneous, localized manner.
Mosaic image of asteroid Bennu captured by OSIRIS-REx’s PolyCam instrument on December 2, 2018, from a distance of 15 miles (24 km). Image credit: NASA / NASA Goddard Space Flight Center / University of Arizona.
Bennu is classified as a primitive carbonaceous asteroid, making it one of the best-preserved remnants of the early Solar System.
While meteorites are typically viewed as a source of primitive asteroid material, they face risks of alteration during atmospheric entry and potential contamination on Earth.
In contrast, the samples returned from Bennu are regarded as truly pristine, significantly enhancing the reliability of the findings derived from them.
In a recent study, scientists at Stony Brook University employed nanoscale infrared and Raman spectroscopy to analyze the chemical composition of OREX-800066-3 samples, achieving a spatial resolution ranging from 20 to 500 nanometers per pixel.
All analyses were conducted without exposing the samples to air, preserving sensitive chemical bonds and organic functional groups crucial for accurate detection.
Furthermore, both techniques utilized are non-destructive, which is vital considering the irreplaceable nature of these samples.
At the nanoscale, the fundamental building blocks of asteroid mineralogy and organic chemistry can be investigated within these precious specimens.
The new analysis pinpointed distinct chemical domains, including regions rich in aliphatic compounds, carbonate materials, and nitrogen-containing organic substances.
This finding indicates that water-induced alterations on Bennu are chemically heterogeneous.
Interestingly, nitrogen-rich organic functional groups are preserved despite extensive water-mediated changes.
“These findings have extensive implications for planetary science and astrobiology,” stated Mehmet Yeşiltas, a professor at Stony Brook University.
“They illustrate the survival of chemically sensitive nitrogen-containing organic matter through water alterations in small solar system bodies, impacting fundamental questions about the formation and preservation of organic complexity within primitive planetary material.”
“This may shed light on how organic compounds linked to prebiotic chemistry were delivered to early Earth via carbonaceous asteroids, potentially influencing the chemical processes that led to the origin of life.”
The full study was published in the Proceedings of the National Academy of Sciences.
_____
Mehmet Yesiltas et al. 2026. Nanoscale infrared spectroscopy reveals the complex organo-mineral assemblage of asteroid Bennu. PNAS 123 (14): e2601891123; doi: 10.1073/pnas.2601891123
During my college years, a dedicated biology lecturer emphasized the significance of iodine in our diets. He passionately advocated for the use of iodized salt, highlighting its positive impact on public health and its role in boosting the entire population’s IQ. His teachings echo in my mind every time I stroll through the salt aisle at the supermarket.
Recently, however, I’ve noticed a scarcity of iodized salt amidst an array of artisan salts like Cornish sea salt, Himalayan pink salt, and gourmet smoked salts. The few remaining iodized options appear outdated and unappealing. This raises an essential question: Are we inadvertently reversing the health benefits associated with this simple ingredient?
Iodine is a vital dietary mineral essential for thyroid function, aiding in the production of hormones that regulate metabolism, growth, and heart rate.
During pregnancy, adequate iodine intake is crucial, as thyroid hormones govern fetal brain development. Studies estimate that even minor iodine deficiencies can reduce a child’s IQ by 0.3 to 13 points. Iodine remains vital throughout childhood and adulthood, as deficiencies can lead to goiter and cognitive impairment.
Natural iodine sources include seafood and seaweed, while milk also contains this important mineral due to iodine supplementation in livestock feed. However, soil iodine levels vary significantly, with some regions facing deficiencies that historically led to high goiter prevalence.
Switzerland pioneered iodized salt in 1922, dramatically reducing goiter cases and boosting children’s height and intelligence, a phenomenon described by economist Dimitra Politi as an “infusion of IQ.” This public health initiative facilitated higher graduation rates and increased college enrollment.
In the U.S., iodized salt became available in 1924, contributing to the notable IQ increase in the 20th century. A notable endocrinologist stated that, “Five cents per person per year can improve the intelligence of the entire population” (New York Times, 2006).
Despite its historical significance, iodized salt now faces a decline in popularity. The aesthetic appeal of Himalayan salt competes heavily, and misconceptions about additives have some parents avoiding iodized options altogether.
Compounded by the rising consumption of processed, non-iodized foods, many people are shifting away from iodized salt. Dietary changes, such as the rise of plant-based diets, further diminish iodine intake.
A concerning study revealed a doubling of Americans with insufficient iodine levels since 2001, with nearly half of pregnant women not meeting the recommended intake.
Similar trends are observed in the UK and Australia, where studies indicate that iodine levels among reproductive-age women are alarmingly low, affecting both maternal and fetal health.
Public health experts in the U.S., U.K., and Australia recommend reintroducing iodized salt into diets to maintain cognitive function, thyroid health, and prevent goiter recurrence.
In this age of trendy supplements, many people invest in products like zinc and ginkgo biloba while overlooking iodine’s crucial role in health, even as deficiencies become more common.
Regardless of trends, I will continue to seek iodized salt in supermarkets, albeit with trepidation, wondering what my former lecturer would think if I favored the aesthetically pleasing pink salts.
Recent computational simulations indicate that icy giant planets like Uranus and Neptune may contain quasi-one-dimensional superionic carbon hydrides. This groundbreaking discovery could change how scientists perceive planetary interiors.
Diagram depicting hexagonal hydrocarbon compounds anticipated under conditions similar to those in Neptune. In this framework, carbon forms the outer helical chain (yellow), while hydrogen forms the inner helical chain (blue), aligning with the quasi-one-dimensional superionic behavior suggested by simulations. Image credit: Cong Liu.
Density measurements of Uranus and Neptune reveal that these colossal planets possess an unusual, hot, icy interlayer situated beneath an atmospheric envelope of hydrogen and helium, and above a rocky core.
While these layers are believed to comprise water, methane, and ammonia, extreme internal conditions likely result in exotic phases.
The physics associated with these high-pressure, high-temperature regions can lead to unconventional states of matter, prompting theorists and experimentalists to predict and recreate the phenomena they might encounter.
Dr. Cong Liu and colleagues at the Carnegie Institution for Science employed advanced computing and machine learning to conduct quantum physics simulations of hydrogenated carbon at pressures ranging from about 5 million to 30 million times atmospheric pressure (roughly 500-3,000 gigapascals) and temperatures of 4,000-6,000 K.
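For readers more used to gigapascals, pressures quoted in multiples of atmospheric pressure convert as follows (a simple unit-conversion sketch using the standard atmosphere of 101,325 pascals):

```python
# Convert "millions of atmospheres" to gigapascals using the standard
# atmosphere (101,325 Pa); 1 GPa = 1e9 Pa.
ATM_PA = 101_325.0

def atm_to_gpa(n_atm: float) -> float:
    return n_atm * ATM_PA / 1e9

low = atm_to_gpa(5e6)    # ~507 GPa
high = atm_to_gpa(30e6)  # ~3,040 GPa
print(f"{low:.0f}-{high:.0f} GPa")  # → 507-3040 GPa
```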
These simulations indicated the development of an ordered hexagonal framework where hydrogen atoms traverse helical paths, resulting in a quasi-one-dimensional superionic state.
Superionic materials are remarkable as they exist in a unique state between solids and liquids. Atoms of one type maintain their crystal arrangement, while atoms of another type gain mobility.
“This newly predicted carbon-hydrogen phase is particularly noteworthy because the movement of atoms isn’t entirely three-dimensional,” explained Dr. Ronald Cohen, also from the Carnegie Institution for Science.
“Rather, hydrogen preferentially migrates along distinct helical paths contained within the organized carbon structure.”
The direction of this atomic motion significantly influences heat and electrical transport within the planet’s interior.
This behavior has implications for understanding internal energy redistribution, electrical conductivity, and potentially the generation of magnetic fields in ice giants.
Additionally, this discovery broadens our comprehension of how simple compounds behave under extreme conditions and suggests that even basic systems can remarkably organize into complex phases.
“Carbon and hydrogen are prevalent in planetary materials, yet their combined behavior under giant planetary conditions remains poorly understood,” Dr. Liu remarked.
The findings were published March 16th in the journal Nature Communications.
_____
C. Liu et al. “Prediction of thermally driven quasi-one-dimensional superionic state of hydrogenated carbon under giant planetary conditions.” Nat Commun, published online on March 16, 2026. doi: 10.1038/s41467-026-70603-z
Michael Pollan: “Psychedelics have a way of staining the windshield of experience”
Casey Clifford/Guardian/Ivine
Author Michael Pollan, renowned for exploring plants, food, and psychedelics in bestsellers such as The Omnivore’s Dilemma and How to Change Your Mind, turns to the complex topic of consciousness in his latest book, The World Appears: A Journey into Consciousness. Pollan delves into scientific and philosophical insights, weaving literary perspectives throughout. In a recent interview with New Scientist, he reflects on writing a book that left him with more questions than answers.
Olivia Goldhill: Let’s begin with a challenging question: How do you define consciousness?
Michael Pollan: The simplest definition of consciousness is subjective experience, which is what distinguishes beings with awareness from inanimate objects. To have an experience is to be aware of it, which leads us to consider what “subjectivity” really means.
Another intriguing definition arises from philosopher Thomas Nagel, who posed the question, “What is it like to be a bat?” Bats differ vastly from us, yet we can still ask whether there is something it is like to be one. If there is something it is like to be an organism, that organism is conscious.
Traditionally, consciousness was thought to reside in the cortex, the brain’s latest evolutionary development. However, I’ve come to understand that consciousness often begins with emotional experiences—not merely cognitive thought. Researchers like Antonio Damasio, Mark Solms, and Anil Seth highlight that consciousness starts with basic emotions such as hunger or itchiness, emerging from the brainstem. This realization underscores that consciousness is an embodied phenomenon; we need vulnerable bodies and profound emotions to truly experience it.
You discuss the limited understanding of consciousness and the scientific challenges involved. Do we require a new scientific approach?
Current physical sciences maintain an objectivity that excludes the qualitative, first-person experience of consciousness. This bifurcation, dating back to Galileo, has confined subjective qualitative matters to theology. While subjective experiences are indeed vital, the adequacy of existing scientific tools to address them is debatable.
We must also analyze consciousness from within. Blind Spot, a book that profoundly influenced my understanding, reveals that science itself results from human consciousness. Our chosen issues and measurement methods stem from our own awareness.
Thus, a novel scientific paradigm may be essential, one that incorporates first-person perspectives. One such effort is integrated information theory, which starts from five axioms describing subjective experience and then looks for physical structures that could support it. The attempt is intriguing but has yet to be convincing.
You propose that plants possess memory and intelligence, even hinting at plant consciousness.
I differentiate between sensation and consciousness. Sensation entails awareness of the environment and the ability to assess whether changes are beneficial or detrimental, resulting in a basic form of awareness without self-awareness. I believe plants exhibit this capability.
My exploration into what some refer to as “plant neurobiology” yielded fascinating discoveries. Plants possess around 20 senses compared to our six; they navigate mazes, and when they detect the sound of caterpillars munching, they respond by injecting toxins into their leaves. They send signals to nearby plants alerting them to predators and selectively share resources with kin.
Interestingly, plants respond to the same anesthetics as humans. For instance, when Venus flytraps are exposed to anesthetics, they fail to react to nearby flies. This raises intriguing questions: what do plants lose in consciousness under anesthesia? This provokes thought regarding their cognitive capacities.
It may comfort some to hear your perspective that artificial intelligence lacks consciousness.
Specifically, I am discussing artificial intelligence models as they exist now and in the near term. While computers can mimic thought, they can’t replicate real emotions, which possess inherent qualitative aspects tied to our physical being.
In my book, I introduce Kingson Mann, who endeavors to create an AI with a “vulnerable body” designed to feel. When I inquired about the authenticity of such feelings, he expressed uncertainty.
How have your past investigations into plants and psychedelics informed your current research on consciousness?
My fascination with plants has roots in my earlier works, and they matter deeply to me. My psychedelic experiences also shaped this exploration. One profound moment occurred in my Connecticut garden, where I sensed a consciousness among the poppies, which seemed to gaze back at me with kindness.
My challenge remains: how to interpret these psychedelic insights. William James suggested we treat such experiences as hypotheses and seek further validation or contradiction. This perspective guided my journey.
Christof Koch recounts his radical psychedelic experience in my book, leading him to rethink established notions of consciousness tied strictly to the brain, illuminating the extraordinary potential of psychedelics in understanding consciousness.
Psychedelics influence how we perceive the world and can “stain the windshield of experience,” which makes it impossible to disregard consciousness. Once you grasp that concept, it can become an obsession.
I appreciate your thoughts on psychologist Russell Hurlburt’s experiment tracking thoughts, though you seem to dispute his claim of limited thoughts.
While I may struggle to articulate my thoughts, I believe they exist and merit expression. James described this as a “hunch”—a threshold of understanding that may take time to articulate.
However, Hurlburt inferred that my inability to articulate a thought instantly indicated a cognitive void that I was filling in after the fact with situational detail. I found our discussions both challenging and illuminating.
“Consciousness is a private space where we think whatever we want, and we offer it to businesses”
For over fifty years, Hurlburt has observed real variations in thought processes among individuals. We often assume the term “thinking” is universal, yet distinct forms exist—some think in words, others in images, and some experience what he terms “unsymbolized thinking.” Notably, verbal thinkers are fewer than often presumed.
Does contemplating consciousness enhance or diminish our consciousness?
Alison Gopnik distinguishes between “spotlight consciousness” (focused attention) and “lantern consciousness” (open, exploratory awareness). I initially sought immediate answers to the consciousness dilemma. Yet, through discussions with my artist wife and Zen teacher Joan Halifax, I learned the value of embracing uncertainty. Understanding consciousness is complex yet essential, and protecting our consciousness is paramount.
If comprehending consciousness proves potentially impossible, what motivates this pursuit?
Ultimately, the journey of discovery matters more than definitive answers. James’s insights into the intricacies of our minds captivated me, leading to a greater appreciation for previously overlooked aspects of consciousness. My hope is that, after reading this book, you will be more aware of your own consciousness than you were before.
Trucks transporting soybeans on Amazon roads
Lalo de Almeida/Folhapress/Panos
The detrimental effects of Amazon deforestation on climate change have been acknowledged for years. Climate scientists and environmental activists have consistently emphasized the need to protect rainforests. Recently, the Brazilian government has weakened environmental regulations for major industrial projects in the region, heightening the risk of ecological harm. Photographer Lalo de Almeida has been documenting these changes, capturing the evolving landscape of the rainforest as well as areas where new development projects are being initiated.
In the featured image, Almeida depicts numerous trucks transporting soybeans along a road near Miritituba, expected to connect to a new railway that will carry soybeans to the Tapajós River. Another photograph shows three men gathering soybeans from a truck that has overturned, an all-too-frequent occurrence for individuals engaged in this line of work.
Collecting soybeans from an overturned truck
Lalo de Almeida/Folhapress/Panos
Almeida’s photography not only highlights the extensive agribusiness influence in the Amazon but also emphasizes the local communities often overlooked in political discussions. “Indigenous territories along the railway route, riverside communities, and conservation areas are all being affected, yet the residents of these regions have not been consulted,” he states.
The children seen playing in a canoe in the image below reside in a village on indigenous land threatened by upcoming oil exploration projects.
Children playing near Santa Isabel in the Uaca Indigenous Territory
Lalo de Almeida/Folhapress/Panos
On a more positive note, some workers are constructing power transmission towers within the Waimiri Atroari Indigenous territory. This large-scale endeavor seeks to engage the community and minimize environmental impact.
Assembling transmission towers within Waimiri-Atroari territory
Lalo de Almeida/Folhapress/Panos
Nonetheless, significant damage has already occurred. Almeida documents a charred Brazil nut tree near an illegal spur road, emphasizing the deforestation and land seizure threats in the area. The twisted remains starkly illustrate the consequences of prioritizing development over environmental preservation.
Burnt remains of Brazil nut trees in deforested area
Lalo de Almeida/Folhapress/Panos
Research is increasingly focused on utilizing the brain’s waste disposal system to potentially slow or mitigate Alzheimer’s disease. A recent technique has demonstrated success in removing toxic protein aggregates associated with Alzheimer’s from mouse brains, leading to improved memory and learning test results.
This technique targets a receptor known as DDR2, traditionally associated with lung health. “Inhibiting the DDR2 pathway could theoretically decrease amyloid beta protein levels while simultaneously enhancing waste removal,” explains Jia Li from Guangzhou Medical University, China. “We are optimistic that we can ultimately reverse Alzheimer’s disease.”
The buildup of misfolded proteins, such as amyloid plaques and tau tangles in the brain, is considered a primary trigger for Alzheimer’s. While existing medications can remove amyloid aggregates, they often do not significantly alleviate symptoms. Thus, research is shifting towards innovative strategies, including enhancing the glymphatic system responsible for waste clearance in the brain.
Li and colleagues set out to investigate cell-membrane receptors whose roles may include boosting glymphatic function. DDR2, studied extensively for its role in pulmonary fibrosis, is the receptor the Guangzhou Medical University team has now implicated in Alzheimer’s disease. Pulmonary fibrosis occurs when the extracellular matrix surrounding cells breaks down, leading to excessive collagen deposition and restricted oxygen supply.
To explore DDR2’s role, the researchers reviewed human tissue databases and found that DDR2 is normally scarce in brain tissue. In brain samples from Alzheimer’s patients, however, they found substantial amounts. “We confirmed for the first time that DDR2 is prevalent in Alzheimer’s disease brain tissue,” notes Li.
Through various experiments in human and primate cells, along with mouse models, researchers propose that DDR2 regulates the cellular dysfunction responsible for the disease’s symptoms. This is substantiated by findings that three cell types increase DDR2 in their membranes during Alzheimer’s: reactive astrocytes, surrounding amyloid beta masses; perivascular fibroblasts, which alter activity prior to Alzheimer’s onset; and choroid plexus epithelial cells that are crucial for cerebrospinal fluid production, essential for the glymphatic system.
These findings suggest that targeting DDR2 could impact multiple facets of Alzheimer’s simultaneously, as noted by Siju Gu from Harvard University. Yet, due to the complexity of the condition, he remains cautious about potential reversibility of Alzheimer’s disease.
The researchers developed a monoclonal antibody aimed at blocking the DDR2 receptor. In mouse models of Alzheimer’s, this intervention improved spatial learning and memory, alongside reduced DDR2 levels, fewer amyloid plaques, and enhanced glymphatic activity.
“The mouse model results are promising and highlight the role of glymphatic function and cerebrospinal fluid dynamics in brain health,” Gu remarked. “This suggests DDR2 could be a viable target for Alzheimer’s disease therapies.”
Cesar Cunha from Denmark’s Novo Nordisk Foundation Center for Basic Metabolic Research expressed appreciation for the researchers’ focus on more than just amyloid plaques, noting their model relates to a rare inherited form of Alzheimer’s that typically arises earlier. Its applicability to the more common late-onset Alzheimer’s remains uncertain.
Li, however, indicates that DDR2 upregulation occurs in both familial and late-onset Alzheimer’s, suggesting the treatment could be broadly effective. DDR2 expression also appears to increase with age and with hypoxia, both recognized risk factors for late-onset Alzheimer’s.
Currently, researchers are embarking on clinical trials that use tracers to monitor DDR2 levels in Alzheimer’s patients’ brains, aiming to determine the antibodies’ delivery paths. They are also developing smaller antibodies to facilitate more efficient crossing of the blood-brain barrier.
Exciting new findings reveal that the star SDSS J0715-7334, found in the Milky Way’s halo, formed in the Large Magellanic Cloud and migrated to our galaxy billions of years ago, as uncovered by a dedicated team of undergraduate students at the University of Chicago.
Milky Way Galaxy illustrating the position of SDSS J0715-7334. The red line represents the star’s path, while the blue line indicates the expected trajectory for stars formed in the Large Magellanic Cloud. Image credits: Vedant Chandra / SDSS Collaboration / ESA / Gaia / A. Moitinho, AF Silva, M. Barros, C. Barata, University of Lisbon / H. Savietto, Fork Research.
The Big Bang initiated the universe, creating a hot, dense soup of energetic particles.
As the universe expanded, this primordial material cooled, leading to the formation of neutral hydrogen gas.
Denser regions of this gas collapsed under gravity after hundreds of millions of years, resulting in the birth of the universe’s first stars made of hydrogen and helium.
These ancient stars burned brightly but lived fast, generating heavier elements through nuclear fusion, which were dispersed into the cosmos upon their explosive deaths.
This enriched material then contributed to the formation of subsequent stars that were diverse in their elemental composition.
“Heavy elements, referred to as metals by astronomers, were produced through stellar activities, including nuclear fusion and supernova blasts,” noted Alexander Ji, a professor at the University of Chicago.
“The discovery of a star with extremely low metal content indicated to the students that they had found something extraordinary.”
SDSS J0715-7334 is remarkable: it contains only 0.005% of the metal content found in our Sun, making it the most metal-poor star ever recorded, with less than half the metal content of the previous record holder.
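For readers who know astronomers’ logarithmic metallicity scale, the quoted fraction can be translated as follows (a standard conversion, not given in the article, where $Z_\star/Z_\odot$ is the star-to-Sun metal ratio):

```latex
% 0.005% of the Sun's metal content corresponds to a ratio of 5e-5:
\left[\mathrm{M}/\mathrm{H}\right]
  = \log_{10}\!\left(\frac{Z_\star}{Z_\odot}\right)
  = \log_{10}\!\left(5\times10^{-5}\right) \approx -4.3
```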
This star, identified using data from the Sloan Digital Sky Survey (SDSS), is located approximately 80,000 light-years from Earth.
Its orbital analysis confirms its origin in the Large Magellanic Cloud, from where it journeyed into the Milky Way billions of years ago.
“This ancient celestial traveler provides invaluable insights into the conditions of the early universe,” said Professor Ji.
“Big data initiatives like SDSS empower students to take part in groundbreaking discoveries.”
“We studied a variety of elements within this star, and we found all of them to have very low abundances,” explained Ha Do, one of the University of Chicago students involved in the discovery.
The team’s research paper is published in the journal Nature Astronomy.
_____
A. P. Ji et al. A near-primitive star from the Large Magellanic Cloud. Nat Astron, published online April 3, 2026. doi: 10.1038/s41550-026-02816-7
In today’s fast-paced world filled with screens and distractions, quality sleep is increasingly rare. Alarmingly, more than one-third of US adults are not getting the recommended amount of sleep each night.
However, a select few possess unique biological advantages, allowing them to thrive on much less sleep.
Believe it or not, around 1 to 3 percent of the population are “short sleepers” who function optimally on just 4 to 6 hours of sleep each night.
What’s even more fascinating is that scientists are beginning to uncover the reasons behind this phenomenon. They are exploring whether others may eventually gain this ability.
This suggests that, in the not-so-distant future, you may only need four hours of sleep for optimal functioning.
Who Are the Hidden Superheroes?
Natural short sleepers do not achieve their unique traits through mindset or willpower; it is a biological adaptation.
Recent research has identified specific genes that allow some individuals to sleep significantly less without negatively impacting their health.
A notable discovery involves a gene called DEC2, which regulates levels of orexin, a brain chemical that enhances alertness.
While low orexin levels can lead to narcolepsy, those who are naturally short sleepers seem to produce elevated amounts, enabling them to stay awake on less rest.
Orexin, produced in the hypothalamus, enhances alertness, concentration, and sleep cycle regulation – Credit: Getty
When researchers introduced this mutation into mice, they found that these mice required less sleep without experiencing cognitive decline typically associated with sleep deprivation.
Since then, at least seven genes have been implicated in this unique sleep pattern, consistently yielding shorter sleep cycles without apparent drawbacks.
According to Professor Guy Leschziner, a neurologist and sleep expert, the evidence points to genetic factors as the key determinant of natural short sleep.
Such individuals are rarely seen in clinics, as their unique sleep patterns are often mistaken for normalcy unless pointed out by someone close.
“Short sleepers often don’t realize their patterns are unusual until others highlight it,” he explains. “There may be others with similar patterns, particularly if there’s a family history, so it feels normal to them.”
While natural short sleepers are genetically uncommon, research into their mechanisms is rapidly gaining momentum.
This leads to intriguing possibilities: instead of waiting for nature to endow us with this gift, could we one day engineer it?
Read more:
Introducing CRISPR
CRISPR is a revolutionary gene-editing technology granting scientists the capability to alter DNA with astounding precision. Initially part of bacterial defense systems, it now stands as one of the most potent tools in modern biology.
This technology employs enzymes as “programmable scissors.”
By assigning short genetic addresses to these enzymes, scientists can direct their actions precisely within the genome. Once they cut, the cell’s repair mechanisms can delete genes, correct mutations, or insert new DNA.
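To make the “programmable scissors” idea concrete, here is a toy sketch of how a Cas9-style enzyme is addressed: it acts only where the guide sequence sits immediately upstream of an “NGG” PAM motif in the target DNA. This is purely illustrative; the sequences and the simplified matching rule are hypothetical stand-ins, not a real genome-editing tool.

```python
def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return indices where `guide` matches and is followed by an NGG PAM."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        pam = dna[i + len(guide): i + len(guide) + 3]  # 3 bases after the match
        if dna[i:i + len(guide)] == guide and pam[1:] == "GG":
            # Cas9 typically cuts ~3 bases upstream of the PAM.
            sites.append(i + len(guide) - 3)
    return sites

# Hypothetical 20-nt guide embedded in a short target with an "AGG" PAM.
dna = "TTGATTACAGATTACAGATTACAGGCC"
guide = "GATTACAGATTACAGATTAC"
print(find_cut_sites(dna, guide))
```

Real guide design also considers the opposite DNA strand, mismatch tolerance, and off-target scoring, all of which this sketch ignores.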
Currently, CRISPR is mainly utilized for treating genetic conditions such as sickle cell disease. However, as technology progresses, many researchers speculate it could extend to enhancing human capabilities, including sleep.
At GITEX Global, Dr. Trevor Martin, CEO of genetic engineering firm Mammoth Biosciences, shared:
“They don’t just persevere; they actually require only three hours of sleep. While we discuss longevity, imagine if everyone had access to that!”
His company is focused on creating new CRISPR tools that are smaller and simpler to introduce into human cells than earlier versions.
“Our mission is to eradicate genetic diseases,” he states in BBC Science Focus. “We are developing CRISPR technology capable of extensive editing in every cell in the body.”
While Mammoth is currently addressing rare genetic conditions like familial chylomicronemia syndrome, Martin emphasizes the broader potential of this technology, stating, “There’s no reason to stop there.”
CRISPR empowers scientists to edit genetic code with unmatched accuracy – Photo courtesy of Getty
So, how feasible is it to edit someone into a short sleeper? Leschziner says it is theoretically achievable, though complex.
“In theory, if all responsible genes can be identified, altering someone’s genetic makeup is possible,” he explains. “However, it is not as straightforward as simply removing or modifying one gene.”
Social considerations also come into play. “If everyone suddenly had three to four extra hours each day, society would need a significant reconfiguration,” Leszziner notes. “Would those hours be utilized for work or enjoyment? The answer remains uncertain.”
A “One-Time” Upgrade
Concerns may arise that, even if such a treatment becomes available, access could be limited to a privileged few.
Fortunately, Martin reassures that this technology is inherently designed for accessibility.
“The incredible aspect of genetic medicine, often overlooked, is that it can be a one-time solution,” he explains. “You won’t need continuous medication; a single visit to a healthcare provider could suffice. While cost is a topic, lengthy medical infrastructures won’t be necessary.”
For now, transforming someone into a short sleeper remains hypothetical. Yet, the science of sleep efficiency is expanding rapidly, and CRISPR technology is progressing even faster.
For the first time, researchers can plausibly assert that it might be feasible to increase your waking hours by three to four hours each day.
This may not happen today or tomorrow, but soon, a day will come when sleeping just four hours will be a reality. Prepare yourself to grasp that potential!
Cells transport substances by encasing them in membrane bubbles called vesicles that navigate to various locations within the cell. These vesicles merge with other vesicles to release their contents, a complex process requiring the seamless connection of two membranes without rupturing or leaking. Scientists have long theorized that during this fusion, the cell membrane enters a transient intermediate state, but direct visualization of this process within intact cells has remained elusive until now.
Researchers from the NIH and the University of Virginia embarked on a study to determine if the membranes of living cells create stable, observable structures that signify this intermediate state. They cultured multiple mammalian cell types, including those from humans, monkeys, mice, and rats, in nutrient-rich solutions within laboratory flasks kept in a 37°C (98.6°F) incubator to sustain their growth.
The research team placed between 80,000 and 100,000 cells on a specialized gold-coated platform optimized for high-resolution imaging. To maintain the natural state of the cells, they flash-froze them to immobilize the membranes. Subsequently, they employed a technique known as cryogenic electron tomography to generate detailed images referred to as tomographic images.
Using these cross-sectional images, they reconstructed a 3D model of the cells at the nanometer scale, allowing visibility into the delicate structures of internal vesicles and the plasma membrane. Approximately 300 3D reconstructions showcased areas where membrane bubbles interacted and moved, particularly focusing on membrane contact sites where two vesicles or one vesicle and the cell’s plasma membrane are closely aligned.
Typically, a cell membrane comprises two layers of fat-like molecules that form a flexible barrier. However, the researchers uncovered a previously uncharacterized membrane structure that forms when the outer layers of two membranes merge into a continuous sheet while the inner layers remain separate. They identified a flat, circular area where the outer layers are in contact, forming a thin membrane bridge between vesicles, analogous to two soap bubbles merging. This structure is referred to as a hemifusome.
The research team noted that hemifusomes are considerably larger and more stable than the ephemeral intermediate states posited by earlier studies. They interpreted this stability to suggest that hemifusomes are more than transient fusion events; they may persist long enough to take part in vital cellular functions.
Additionally, they detected that some hemifusomes contained a single lens-shaped droplet within the membrane at the point where the two vesicles meet. About half of the 308 cross-sectional images they analyzed revealed these droplets, averaging 40 nanometers in diameter, approximately 100 times smaller than the adjacent vesicles, and positioned close to the oily membrane interior.
These droplets, distinct from surrounding membrane lipids, are believed to consist of a blend of lipids and proteins, referred to as proteolipid nanodroplets. The researchers posited that the consistent association between hemifusomes and these proteolipid nanodroplets might help stabilize hemifusomes or influence the morphological organization of the cell membrane.
To investigate whether hemifusomes facilitate material movement within cells, the team introduced 5- or 15-nanometer-sized gold particles into the cells. These particles were small enough to traverse the cell’s internal transport systems, which usually distribute nutrients and other molecules. By employing a powerful microscope, they tracked the movement of the gold particles through the cell’s compartments; however, none entered hemifusomes, suggesting that hemifusomes are not involved in routine cellular transport.
In conclusion, the researchers posited that hemifusomes emerge when cell membranes merge or reshape, acting as temporary construction sites for membrane building, repair, or rearrangement. In contrast to existing models of membrane fusion and vesicle formation, these findings indicate that vital intermediate states can persist as stable, functional cellular configurations.
The researchers propose that future studies should delve into the molecular composition of proteolipid nanodroplets and clarify how cells regulate the shift from hemifusomes to fully fused membranes. They also recommend exploring hemifusomes’ roles in vesicle formation, membrane recycling, or stress responses across various cell types.
Breaking News: The Artemis II astronaut crew has officially joined the ranks of the lunar space exploration community.
The crew’s Orion capsule entered the Moon’s gravitational influence at 12:41 a.m. ET on Monday, marking a significant moment as they navigate an area dominated by the moon’s gravity.
“This represents a critical milestone in our mission,” stated NASA Flight Director Rick Henfling during a recent press conference.
The Moon’s sphere of influence is a mathematical boundary, not a tangible one, marking the region where the moon’s gravity becomes the dominant force acting on a spacecraft.
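For context, the radius of that boundary can be estimated with the standard sphere-of-influence formula from astrodynamics (general textbook background, not NASA’s exact operational criterion), using the mean Earth-Moon distance $a \approx 384{,}400$ km and the Moon-to-Earth mass ratio $m/M \approx 0.0123$:

```latex
r_{\mathrm{SOI}} \approx a \left(\frac{m}{M}\right)^{2/5}
  \approx 384{,}400\ \mathrm{km} \times (0.0123)^{2/5}
  \approx 66{,}000\ \mathrm{km}
```

That works out to roughly 66,000 km, which is why the crossing happens well before the spacecraft reaches the moon itself.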
This milestone is a major achievement for NASA, marking the first human entry into the Moon’s sphere of influence since Apollo 17 in 1972.
On Sunday, astronauts shared images of their “last glimpse of Earth before approaching the moon,” capturing the planet as a distant crescent through the Orion spacecraft’s window.
“John Young and I landed on the moon in 1972 with a lunar module we named Orion,” Apollo 16 astronaut Charlie Duke shared in a recorded message. “It’s exciting to see a new kind of Orion leading the way for humans to return to the moon.”
Artemis II crew members (from left) Jeremy Hansen, Reid Wiseman, Christina Koch, and Victor Glover respond to reporter questions on Thursday. NASA
The astronauts tested newly designed spacesuits for this flight, essential for both launch and emergency situations.
Orange spacesuits are worn during launch and can provide a breathable atmosphere for up to six days in case the Orion capsule loses pressurization, as highlighted by NASA.
The Orion spacecraft conducted a crucial 14-second engine burn on Sunday to maintain an accurate orbit around the moon. Although correction burns were planned for other dates, this was the first time one was required since leaving Earth’s orbit.
“Orion demonstrated a precise orbit, so the initial two corrections were unnecessary,” Henfling explained.
The crew will fly past the moon on Monday, reaching an approximate distance of 452,760 miles from Earth, the farthest any humans will have traveled from home. They are poised to surpass the Apollo 13 crew’s record of 248,655 miles.
During their lunar flyby, Wiseman, Koch, Glover, and Hansen will dedicate about seven hours to observing and photographing the moon, starting at 2:45 p.m. ET. They will explore never-before-seen areas of the moon’s surface.
NASA will deliver live coverage of the flyby starting at 1 p.m. ET.
NASA estimates the Orion spacecraft will reach a distance of 4,070 miles from the moon’s surface at its closest approach around 7 p.m. ET.
The astronauts will utilize two Nikon D5 cameras and one Nikon Z9 camera to capture stunning imagery during their mission.
Focusing on 30 scientific objectives, crew members will investigate the Orientale basin, a 3.8-billion-year-old crater formed by a large impactor. The approximately 600-mile-wide basin, which straddles the boundary between the moon’s near and far sides, harbors geological features that provide insight into ancient impacts, per NASA.
The crew will also examine the Hertzsprung basin on the moon’s far side. Unlike the well-preserved Orientale basin, the 400-mile-wide crater shows features altered by subsequent lunar impacts, offering a unique opportunity to compare how lunar terrain changes over time.
To guide their observations, the crew will employ advanced software tools designed for scientific targets.
Kelsey Young, Artemis II’s lunar science director, noted the busy schedule but emphasized the need for flexibility. “They are scientists on a mission and are encouraged to deviate from the agenda if something compelling captures their attention,” she stated.
Towards the end of their lunar viewing period, the astronauts will witness a solar eclipse lasting approximately one hour from their vantage point in space. The eclipse will begin at 8:35 p.m. ET, with the moon blocking the sun’s light from the Orion capsule’s perspective.
During this time, the moon will appear predominantly dark, offering astronauts the chance to observe the sun’s corona and detect flashes from meteoroids impacting the lunar surface.
Astronauts will also photograph other visible planets during the eclipse, including Mercury, Venus, Mars, and Saturn, as mentioned by Young.
“This crew stands at the forefront of lunar exploration, with the unparalleled opportunity to view the moon from a unique perspective,” she added.
“This is exploration,” Young concluded. “We have received valuable data from orbiting spacecraft, but these subtle observations are what we truly need to uncover new discoveries.”
In 2018, the legalization of medical cannabis in the UK marked a pivotal change, driven by campaigns advocating for children with treatment-resistant epilepsy.
The legal reforms permit specialist medical consultants to prescribe cannabis-based medical products (CBPMs) for a variety of conditions, always prioritizing the patient’s well-being.
Despite this legalization, the possession and use of cannabis (classified as a class B drug) without a valid prescription continues to be illegal in the UK.
Most cannabis products available are unlicensed, lacking endorsement from the Medicines and Healthcare products Regulatory Agency (MHRA), resulting in limited prescriptions through the National Health Service (NHS). This gap has inadvertently triggered a burgeoning private market.
Currently, more than 30 specialist cannabis clinics are registered with the Care Quality Commission, and prescriptions for cannabis products are estimated to have reached 80,000 patients. Conditions treated range from chronic pain and anxiety to ADHD.
Data reveals that 42% of patients were prescribed medical cannabis for mental health issues such as anxiety, depression, PTSD, and OCD, aligning with trends observed in Australia and the US.
The UK stands as a major producer of medical cannabis. Photo courtesy of Getty.
However, a recent review published in Lancet Psychiatry assessed over 50 randomized controlled trials (RCTs) and found “no evidence” supporting the efficacy of cannabinoids for treating conditions like anxiety, PTSD, substance use disorders, ADHD, bipolar disorder, psychotic disorders, or anorexia.
While some efficacy was noted for cannabis use disorder, insomnia, Tourette syndrome, and autism spectrum disorder, these findings were categorized as “low quality.”
The Advisory Council on the Misuse of Drugs (ACMD) is conducting a review examining the implications of medical cannabis prescriptions in the UK, focusing on any “unintended consequences” resulting from recent legal changes.
Professor Owen Bowden-Jones, former ACMD chairman, said the findings suggest that the benefits of medical cannabis may have been “overestimated” for numerous conditions, and that these products “should not be administered for psychiatric conditions lacking supportive evidence.”
“We must focus on reducing barriers to facilitate superior research that further explores cannabis product effects,” he added.
The review asserts that routine cannabinoid use for mental health conditions is “seldom justified,” raising critical questions, notably, why is cannabis prescribed despite limited evidence of its effectiveness?
Treatment Options
As the saying goes, “absence of evidence is not evidence of absence.” Dr. Niraj Singh, a consultant psychiatrist in the UK, has prescribed medical cannabis for over six years.
“Numerous patients have reported that this treatment effectively addresses a range of conditions, and most use it responsibly. In my experience, it has yielded positive results, enabling patients to lead happy, fulfilling lives,” Singh remarked.
Many patients seeking treatment at cannabis clinics have reportedly exhausted all traditional options or lack access to adequate mental health support. As of January 2026, 1.5 million adults engaged with NHS mental health services, while 8.7 million people were prescribed antidepressants in the UK from 2023 to 2024, believed to be effective for approximately one year.
In a survey by the United Patient Alliance, a patient dealing with anxiety, depression, and PTSD expressed feeling “seen and supported” after receiving effective treatment without harmful side effects associated with previous prescriptions.
“In instances where individuals have plateaued in treatment options, medical cannabis is making a significant difference,” Singh expressed.
Evidence from peer-reviewed studies links cannabis to improved symptoms and quality of life for conditions such as PTSD, OCD, and insomnia. However, observational studies were excluded from the aforementioned review over concerns about biases that prevent them from establishing causality.
While acknowledging the need for more robust clinical trials, Professor David Nutt, former chair of the ACMD and founder of the independent charity Drug Science, argues that RCTs alone do not offer sufficient data on a drug’s effectiveness.
This sentiment is echoed by Sir Michael Rawlins, former chair of the MHRA and the National Institute for Health and Care Excellence (NICE). In a speech at the Royal College of Physicians, he emphasized the need for real-world evidence, which could yield “better clinical data and statistical power.”
According to Nutt, “Placebo-controlled trials are costly and involve highly selective patient populations, limiting their generalizability.” He also highlighted that cannabis’s numerous active compounds, which vary vastly in dosage and formulation, pose significant challenges when conducting double-blind, placebo-controlled studies. Professor Mike Burns, President of the Association of Medical Cannabis Clinicians, emphasized the need for a more nuanced approach in understanding mental health prescribing.
Clinical Supervision
Medical cannabis can induce side effects, including heightened anxiety and paranoia, making it unsuitable for individuals with a history of psychosis.
According to a survey published in BMJ Mental Health, those using cannabis for self-medication tend to use it more frequently and consume higher levels of tetrahydrocannabinol (THC), resulting in increased paranoia.
“Cannabis is not devoid of side effects,” stated Marta Di Forti, Professor of Drug Use, Genetics, and Psychosis at King’s College London, who runs a London clinic for individuals with cannabis-related mental health issues.
She recounted cases where patients developed complications after being prescribed products containing high THC levels, leading to hospitalizations for psychotic symptoms. Yet, much of our understanding in this area remains anecdotal.
“There is valid reasoning for prescribing cannabis as medication,” she noted. “However, there must be comprehensive evidence and proper oversight, which is currently lacking.”
The Association of Medical Cannabis Clinicians recommends a review by a peer panel for prescriptions exceeding 60 grams per month or containing over 25% THC. Like other controlled substances, prescribing CBPM requires diligent clinical oversight, thorough evaluation, and ongoing monitoring, especially in complex cases with significant mental health histories.
While Singh noted that side effects are relatively rare, he expressed concern about the rising availability of high-THC products. “Checks and balances are imperative,” he insisted, “as adjustments to THC concentrations must be carefully monitored.”
Prescribers maintain that a strong clinical oversight process is in place, stating they’ve never felt pressure to prescribe. Eligibility for medical cannabis entails having undergone at least two previous treatments, receiving an evaluation from a psychiatrist, and being reviewed by a multidisciplinary team.
Nonetheless, some critics argue that clinics should enhance support and training for prescribers and have a responsibility to foster research that substantiates their claims. “The industry has not adequately collected and analyzed patient outcomes,” Burns stated. “Clinics have a moral obligation to gather and share data whenever possible.”
In 2018, cannabis became legal for medical use in the UK with a prescription. Use without a prescription remains illegal. Photo credit: Getty.
Evidence Gap
There is a shared consensus on the urgent need to develop a robust evidence base. However, finding common ground proves challenging. Some advocate for cannabis’s efficacy, while others dispute it, with a lack of substantial research to confirm either stance.
Nutt emphasized that the current clinical research system is inadequate for medical cannabis. “In 2018, the Health Ministry pledged to conduct efficacy trials for children with epilepsy, but no progress has been made. This reflects a disinterest from pharmaceutical companies due to the impossibility of patenting plant medicines.”
This challenge cannot be solved solely by a call for further research, he noted, but requires prioritizing real-world data and practical experience to support cannabis in clinical settings.
Meanwhile, patients express fears of being pushed back into the illegal market, where they have no access to medical oversight or regulated products, which is widely viewed as more dangerous.
Denying access to medical marijuana based on “incomplete evidence” not only misrepresents scientific data but also inflicts harm on patients who rely on it, according to the United Patient Alliance.
“Real-world evidence studies, patient-reported outcomes, and research focusing on treatment-resistant populations are critically needed,” they added. “We do not ask for science to be ignored; we urge it to catch up with patient experiences.”
HOUSTON — The Artemis II mission astronauts have crossed the halfway point to the moon, witnessing the far side of the lunar surface for the first time in history.
In a recent interview with NBC News from orbit, NASA astronaut Christina Koch observed that the moon looked strikingly different through the window of the Orion capsule compared to how we see it from Earth.
“The dark areas just aren’t in their usual places,” she remarked. “It felt like a completely different moon.”
Koch, alongside fellow NASA astronauts Reid Wiseman and Victor Glover, and Canadian astronaut Jeremy Hansen, consulted their research materials to decode their extraordinary views.
“We’re seeing the dark side of the moon—an experience we’ve never had before,” Koch stated.
NASA astronaut Christina Koch illuminated by a screen aboard the Orion spacecraft, while Canadian astronaut Jeremy Hansen gazes out of the window.
Wiseman, Koch, Glover, and Hansen embarked on their ten-day lunar expedition on Wednesday, humanity’s first crewed voyage to the moon in over five decades. They are the first humans to launch aboard NASA’s Space Launch System rocket and Orion capsule, officially on their way to the moon after a vital engine burn propelled them out of Earth’s orbit on Thursday night.
Wiseman described the flight as an “incredible achievement,” noting that the astronauts’ views of both Earth and the moon were truly “awe-inspiring.”
“Earth is in a near-total solar eclipse while the moon is basking in near-full daylight,” he said. “The only way to appreciate this perspective is to be positioned between the two celestial bodies.”
Koch added that, despite their excitement, the crew managed to find time to relax and sleep comfortably within the 16.5-foot-wide Orion capsule, which offers habitable space roughly similar to that of a camper.
Sleep is among the many essential aspects that occupy a space traveler’s day.
“Being human here is one of the most rewarding facets of this mission,” Koch said. “We’re just humans trying to thrive. One moment we could be marveling at the far side of the moon, and then, it might hit us, ‘Hmm, perhaps I should change my socks,’ and start hunting for them. That encapsulates the essence of human spaceflight.”
The four astronauts took the opportunity to communicate with their families on Friday and Saturday, an experience Wiseman described as a significant highlight.
“It was surreal,” he expressed. “For a brief moment, I was reunited with my little family. It was the best moment of my life.”
The Artemis II crew has been busy since their move into space. Shortly after launch, they initiated tests of various life support systems on the Orion capsule. Although they faced a few minor setbacks, including technical issues with email and the space toilet, the flight has been mostly smooth sailing.
If a parent or grandparent frequently forgets names, misplaces items, or retells the same stories, many people would immediately consider a diagnosis of Alzheimer’s disease. For decades, Alzheimer’s has dominated public perception of dementia, serving as a catch-all term for memory loss.
However, this assumption is increasingly being challenged. Neurologists have discovered that a significant number of individuals exhibiting Alzheimer’s-like symptoms actually suffer from a different condition, which many families and even healthcare professionals are only beginning to understand.
This condition is known as LATE, short for limbic-predominant age-related TDP-43 encephalopathy. Research indicates that LATE is responsible for approximately 15-20% of all dementia cases and affects as many as 1 in 3 individuals over the age of 85.
LATE was formally defined in 2019, and clinical guidelines clarifying its diagnosis were published just last year.
Dr. Andrew Budson, Chief of Cognitive Behavioral Neurology at the Boston Veterans Affairs Healthcare System and Professor of Neurology at Boston University, states, “I didn’t know how common it was until I started testing people for biomarkers.”
He adds, “It became evident that many individuals we previously thought had Alzheimer’s actually did not, despite exhibiting nearly identical clinical symptoms.”
As understanding of these distinctions evolves, so too does the meaning behind a dementia diagnosis. If your elderly relative’s memory loss is attributed to LATE instead of Alzheimer’s disease, it may progress more gradually and remain more narrowly focused on memory.
Symptoms of LATE
LATE is primarily characterized by gradual memory loss, particularly regarding recent events—often referred to as episodic memory. Patients may experience difficulties remembering conversations, appointments, or even television shows viewed the previous night.
As LATE advances, speech may also be impacted. Some individuals struggle to find words, while others may forget the meanings of familiar terms. Dr. Budson recalls a patient who could no longer grasp the meaning of the word “Charade” and later became confused about what a pumpkin was. “It’s as though they grew up in a world without pumpkins,” he notes.
LATE leads to gradual memory loss into very old age, but often lacks the widespread cognitive impairment seen in Alzheimer’s disease – Photo credit: Getty
Over time, subtle behavioral changes may arise. “When the lower frontal lobes are affected, behavioral issues can surface,” explains Budson. “It’s not severe, but individuals may lose their inhibitions, leading to socially inappropriate comments about others’ appearance.”
A key difference between LATE and Alzheimer’s is the disease’s tempo. LATE generally presents later in life, typically in the late 70s or 80s, and progresses more slowly, allowing individuals to experience isolated memory loss for many years before cognitive abilities decline significantly.
Dr. David Wolk, a professor of neurology at the University of Pennsylvania, states, “In LATE, the slow and progressive memory loss can persist for years even in the absence of other significant symptoms.” This gradual trajectory can greatly improve a family’s quality of life and long-term planning.
Complicating matters is the fact that LATE often coexists with Alzheimer’s disease. Up to half of LATE patients may exhibit Alzheimer-type pathology in their brains, which can exacerbate decline when both conditions are present, according to Dr. Wolk.
Differentiating Between Late-Life Dementia, Alzheimer’s Disease, and Normal Aging
Distinguishing early dementia from normal age-related forgetfulness is challenging. Many healthy older adults find themselves slower at recalling names, needing reminders, or struggling to multitask.
The critical difference lies in the memory mechanism. In normal aging, difficulties usually stem from retrieving stored information; prompts can often help refresh a person’s memory.
Conversely, in the advanced stages of Alzheimer’s disease, the memory trace itself may be irretrievably lost. Budson likens memory to a filing system: the frontal lobe acts as a clerk, gathering information and directing it to appropriate storage within the hippocampus, the cabinet that houses this data.
In normal aging, office inefficiencies arise; repetition becomes necessary, retrieval slows, but information, when entered, remains accessible. Alzheimer’s disease and LATE, however, damage the filing cabinet itself, leading to lost information despite skilled clerks.
Alzheimer’s disease spreads rapidly, affecting multiple brain networks, including memory, planning, problem-solving, spatial awareness, and language. In contrast, LATE tends to concentrate its impact on memory, progressing at a slower pace overall.
Pathologically, Alzheimer’s disease is marked by amyloid-beta plaques and tau tangles, while LATE is driven by TDP-43 aggregates. This distinction becomes vital as new treatments target specific biological pathways.
Brain scans and biomarker tests can rule out Alzheimer’s disease, enabling timely diagnosis of LATE – Photo credit: Getty
Understanding the Basis of LATE
At its core, LATE is caused by a malfunctioning protein. In healthy neurons, the TDP-43 protein helps regulate how genetic instructions are processed. In LATE, it misfolds and aggregates within neurons, leading to cell damage and death.
This protein was first linked to ALS and a type of frontotemporal dementia around 20 years ago. Researchers later found that TDP-43 aggregates often appear in older brains, producing a distinctive pattern of memory loss that warranted its own diagnosis.
Three primary brain structures are significantly affected by LATE, explains Budson: the hippocampus, the lateral temporal lobe, and the lower frontal lobe. Each area is crucial for cognition, affecting memory formation, language comprehension, and impulse control.
The hippocampus, highlighted in red, is vital for memory formation – Photo credit: Getty
Can Doctors Diagnose LATE?
For a long time, LATE could only be diagnosed post-mortem through direct examination of brain tissue, which still serves as the gold standard. However, clinicians are increasingly utilizing cognitive tests and biomarker evidence to suspect LATE during a patient’s lifetime.
Dr. Budson explains, “If a biomarker test for Alzheimer’s comes back negative, I infer, ‘This is likely LATE.’” Therefore, in individuals demonstrating Alzheimer-like memory issues but lacking amyloid or tau—key Alzheimer’s indicators—LATE emerges as a viable possibility.
One pressing question for patients and families is whether a LATE diagnosis changes treatment options. The answer is complex; new Alzheimer’s treatments target amyloid pathways and are less effective for LATE patients. However, older Alzheimer’s medications that enhance acetylcholine—a neurotransmitter involved in memory—may still offer benefits. Dr. Wolk acknowledges, “There’s evidence that acetylcholine declines in late life, too.”
Dr. Budson encourages not to abandon treatment prematurely, asserting, “I’m confident that many LATE patients were included in clinical trials leading to these drugs’ approval.” He reassures, “Patients and doctors should continue treatment even if Alzheimer’s isn’t the diagnosis, as it will likely benefit LATE patients as well.”
Correctly identifying LATE can guide doctors in determining the most effective dementia treatments – Photo credit: Getty
Currently, no treatments specifically target TDP-43 in LATE, though one clinical trial is underway. Dr. Wolk notes that insights from ALS and frontotemporal dementia could be instrumental in future applications.
You may think that differentiating between dementia types is insignificant due to limited treatments and similar outcomes; however, accurate diagnosis is crucial. Understanding that LATE progresses slowly allows families to plan care, preserve independence, and set realistic expectations.
From a scientific standpoint, precise diagnosis is essential for conducting clinical trials effectively and understanding treatment impacts. As the population ages, conditions primarily affecting the elderly—like LATE—will become more prevalent.
Dr. Wolk emphasizes, “LATE is highly common and progresses slowly, providing insight into age-related cognitive decline before it transcends normal aging.” As society ages, addressing this condition will pose a growing public health concern.
While LATE may not receive the same level of publicity as Alzheimer’s disease, many families are already grappling with its implications.
Dr. Budson provides a realistic perspective: “LATE typically advances slowly and affects individuals later in life; many don’t become severely ill before passing from other causes. While that may not be comforting, it is realistic.” What LATE reveals is the complexity hidden beneath the term dementia: similar symptoms can arise from different biological mechanisms, leading to varied decline rates, risks, and treatment responses.
The distinction may not change daily care for patients and families, but as diagnostic tools improve, they increasingly influence clinicians’ predictions about future developments, how research trials are structured, and the direction of emerging treatments.
If you think losing weight is easy, you’re not alone. With wellness influencers and fitness publications promoting “simple” transformation programs, it may seem manageable.
Moreover, there’s a massive weight loss market, projected by industry forecasts to exceed £380bn ($500bn) in the next decade.
However, the challenge of losing weight is often overlooked. For beginners, the weight loss journey can be particularly difficult. Many diets fail within weeks, and research indicates individuals who lose weight often regain it within a few years.
Currently, two-thirds of adults in the UK are classified as overweight, with nearly three-quarters in the US facing similar challenges. Evidence suggests that losing excess weight can improve both quality of life and lifespan.
In fact, studies from 2025 indicate that shedding just 5% of body weight—even if some is eventually regained—can lead to significant health improvements in obese individuals, such as lower blood pressure, reduced cholesterol levels, healthier liver function, decreased inflammation, and better sleep quality.
Yet, research published in Heart in 2025 highlighted that weight fluctuations can pose serious health risks, especially for obese individuals with cardiovascular issues.
So, what’s the solution? Focus on steady, sustainable weight loss by adopting a healthier lifestyle that you can maintain long-term.
We consulted leading experts and reviewed the latest weight loss research to uncover effective strategies. Here are six actionable tips to kickstart your weight loss journey in the first 100 days.
Understand Your Challenges
Weight loss is more than just calorie restriction and willpower. The real adversary is our evolutionary history, which has wired our bodies to resist weight loss.
Consuming 500 calories can happen quickly when temptation strikes – Image credit: Getty Images
Dr. Rachel Woods, a physiology researcher at the University of Lincoln, explains, “When we enter a calorie deficit, our bodies react on an evolutionary level.”
When weight loss begins, our bodies increase hunger hormones and decrease energy expenditure in subtle ways. Dr. Woods adds, “You may notice you’re moving less throughout the day.” Our metabolic rate also declines, which is counterproductive in today’s food-rich society.
Set Realistic Goals
Adopting SMART goals can streamline your fat-loss journey – Image credit: Getty Images
While drastically cutting calories and ramping up exercise can yield rapid weight loss, Dr. Woods warns of sustainability. Instead, aim for a realistic goal of losing 5% of your body weight.
Envision where you’d like to be in three years—not just three months. Implement manageable changes that lead to results over time.
Dr. Laura Kudlek from the University of Cambridge advocates for SMART goals: specific, measurable, achievable, relevant, and time-bound. For example, instead of “I want to lose weight,” try “I’ll walk for 15 minutes after lunch three times this week.”
Incorporate Weightlifting
Previously, losing weight primarily centered around cardio. However, recent findings suggest that incorporating weightlifting can be equally beneficial.
“Weight training increases muscle mass,” states Dr. Woods. “More muscle means your body burns more calories, even at rest.”
Research suggests lighter weights for high reps can provide similar effects to heavy weights – Image courtesy of Getty Images
Diversify Your Exercise Routine
Every bit of movement counts toward your weight loss goals. Evidence shows that sufficient aerobic exercise can effectively reduce body fat.
A 2024 review indicates that achieving significant weight loss requires 150 minutes of vigorous exercise weekly. While that may sound daunting, small mindset shifts can make a difference.
Professor Adam Collins from the University of Surrey emphasizes, “Fitness should be the primary goal, not just calorie burning.” Increased physical activity promotes more activity, leading to a cycle of enjoyment and health.
Fuel Your Workouts
With countless nutritional guidelines available, beginners can feel overwhelmed. The key is to expend more energy than you consume, while still giving your body the fuel it needs.
Enhancing your intake of plant-based foods can help curb cravings for calorie-dense options – Image courtesy of Getty Images
Cutting back too aggressively can result in losing not just fat but also valuable lean muscle mass. Ensure your diet includes sufficient protein to preserve lean muscle during resistance training.
Prepare for Plateaus
Many weight loss programs encounter challenges, whether from decreasing motivation, life events, or metabolic adjustments. Dr. Collins notes, “Hitting a plateau often means achieving energy balance.”
After losing about 10% of your weight, maintaining your energy balance becomes crucial. If you wish to continue losing, you’ll need to cut more calories.
“Strive for a goal of losing approximately 5% of body weight,” suggests Dr. Collins – Image credit: Getty Images
This period offers opportunities to boost fitness levels through increased exercise intensity and refined dietary habits.
Dr. Kudlek advises treating weight like blood pressure—requiring ongoing management rather than a one-time fix. It may take six weeks to develop sustainable habits.
Expect challenges, and don’t shy away from reaching out for support. Every individual is different, and finding a suitable approach may take some experimentation.
Consider the immense popularity of breakup anthems like Adele’s “Someone Like You” and the numerous renditions of Julie London’s “Cry Me a River.” These songs resonate deeply with many listeners.
Many people liken the pain of a relationship breakup to grieving, as it entails the loss of a significant connection. Research suggests this may have a scientific basis.
Studies show that healing from a breakup is indeed possible. One study put full recovery at an average of 4.18 years, with recovery taking longer for those with certain attachment styles or ongoing contact with an ex-partner.
So, what steps should you take after a breakup? A study identified 84 strategies people commonly adopt post-breakup, with the most effective being shifting focus to personal activities: keeping busy and prioritizing self-care.
Support systems are crucial—reaching out to friends, family, and professionals can aid recovery, although some may experience withdrawal from social interactions.
It’s important to avoid ruminating on the past. Continuous dwelling on the breakup (rumination) can exacerbate feelings of distress. Instead, focusing on positivity and cultivating an optimistic outlook can facilitate healing.
Research indicates that recovering from a breakup averages 4.18 years – Photo credit: Getty
Moreover, research suggests that individuals who experience breakups often report feeling strong and hopeful, with greater personal growth than those who haven’t faced a breakup.
The crucial factor appears to be maintaining a clear sense of self, irrespective of your relationship status.
So, how can you navigate the aftermath of a breakup?
Prioritize your well-being. Engage in self-care activities that bring you joy.
Share your feelings with friends and family, steering clear of discussing your ex.
Avoid mentally replaying the breakup.
Cultivate a hopeful attitude towards the future.
Even if you find comfort in blasting breakup songs or indulging in binge-worthy TV shows while enjoying some chocolate, keep in mind that while breakups are painful, they don’t have to define you.
With a positive mindset, you’ll emerge stronger and more resilient.
This article addresses the question posed by Lisa Cooper: “How do I get over my ex?”
For any inquiries, please reach out to us at questions@sciencefocus.com, or connect with us on Facebook, Twitter, or Instagram (remember to include your name and location).
Explore our ultimate fun facts and discover more amazing science pages!
From innocent fibs to deep-seated secrets, lies are intricately woven into our society’s tapestry.
But how can you discern when someone is lying beyond blatant deceptions with obvious flaws? The key lies in psychology.
We recently spoke with Richard Wiseman, Professor of the Public Understanding of Psychology at the University of Hertfordshire, on the Instant Genius podcast. He shared essential insights on improving our ability to identify deception.
He provides strategies for recognizing liars, the body language to be mindful of, and discusses scenarios where lying may be justifiable.
How can we identify if someone is lying?
I collaborated with the BBC on an experiment interviewing politicians on the radio. The idea was for the audience to identify who was lying, but few politicians were willing to take part.
We reached out to a prominent political interviewer who agreed to help us.
I conversed with him twice—once he lied and once he told the truth—broadcasting both instances live. After approximately 30,000 audience calls, we discovered that people were nearly 50/50 in identifying the truth or a lie.
Transcripts of the interviews were published in a newspaper, and the recordings were aired on the radio. Interestingly, when visual cues were absent, people’s ability to detect lies improved significantly.
Visual cues can be manipulated—how we gesture or smile. However, spoken words often remain unexamined, providing valuable insights.
By focusing on auditory cues, you can enhance your lie detection skills.
Is there truth to the idea that lying involves looking up and to the right?
This notion is a prevalent myth, with many making decisions based on it—a concerning trend.
Faces require considerable mental processing, prompting us to avert our gaze when trying to recall something. This is often misinterpreted as a deception indicator.
In controlled lab tests, no correlation between eye movement and lying was found. Even when analyzing eye movements during overt lies, the results were inconclusive.
As it stands, there’s no evidence linking eye movements to lying behavior, though many believe otherwise.
Can individuals conceal their body language when lying?
While some can conceal their body language, most struggle with it. In lie detection, I focus on deviations from typical behavior.
A gesture like scratching one’s nose could either indicate lying or just be normal behavior. Analyzing a single action may be misleading; it’s vital to consider an overall pattern.
Effective lie detection requires establishing a baseline, allowing you to pinpoint abnormalities in verbal communication.
What you should observe are hesitations, a longer interval from question to answer, and omissions as the individual crafts their lie.
Pay attention to how often words like “me” or “I” crop up. Lying demands cognitive effort.
When fabricating a story, I must carefully consider what the listener knows, what aligns with my narrative, and previously stated facts, adding to mental stress.
Is it possible to become a skilled liar?
From a psychological perspective, arousal theory comes into play.
Typically, feeling guilty while lying triggers physiological responses like sweating and fidgeting.
However, if one lies frequently or lacks empathy regarding a falsehood, these signs diminish.
Many lies exist in a gray area; they can either unite or hurt us. For example, telling someone it’s wonderful to meet them might not reflect genuine sentiment but serves an emotional purpose.
Lies can forge connections as readily as they disrupt them. If one feels relaxed while lying, they’re less likely to exhibit signs of deception.
From a cognitive angle, lying is challenging. If someone has rehearsed their story multiple times, they may present their deception convincingly without obvious signals.
How accurate are lie detectors in detecting deception?
Lie detectors measure physiological responses such as sweat rate, heart rate, and breathing patterns.
The burning question remains: are these indicators consistently linked to lying? There’s significant debate on this topic. It varies by individual.
It’s understandable that the presence of elaborate machines can induce nervousness, even in honest individuals.
Conversely, some who lie may remain calm, repeating their narratives or feeling indifferent about the deception. I believe lie detectors are far from reliable.
While they can provide insights, they are not foolproof and should be approached cautiously.
Most findings are inadmissible as evidence in court, which is a significant consideration.
Is it acceptable to lie to children?
We often expect our children to stretch the truth in certain scenarios. For instance, if someone gifts them a less-than-ideal present, we’d rather they feign appreciation.
At the same time, we value honesty and want our children to discern when lying may be acceptable.
Lying isn’t a singular behavior; it encompasses various situations. We must teach children that lying can sometimes be justified, depending on context.
Are you lying to spare someone’s feelings? If so, that may be justifiable. Are you doing so for personal gain? If discovered, the fallout may be severe.
Lying has been part of human existence, aiding our survival. Understanding what constitutes a lie is key.
About Our Expert: Professor Richard Wiseman
Richard is a psychology professor at the University of Hertfordshire and hosts the On Your Mind podcast.
The United States stands to endure the most severe economic consequences of climate change compared to any other nation worldwide. This trend is projected to continue, exacerbating existing challenges.
According to recent research from Stanford University, scientists have quantified the economic losses linked to emissions from major fossil fuel contributors.
Lead author Marshall Burke, a professor of environmental and social sciences, highlighted the aim of the study: to establish a clear link between specific emissions and their economic repercussions. In an interview with BBC Science Focus, he stated, “This ‘loss and damage’ is a critical aspect of climate change that remains largely unaddressed.”
Burke noted, “The international community has struggled with formally defining this issue or systematically estimating which emissions are impacting which countries. Our study strives to bridge that gap.”
Remarkably, from 1990 to 2020, the U.S. emerged as the largest producer of greenhouse gas emissions, contributing to approximately $10.2 trillion (£7.6 trillion) in global damages.
Furthermore, the study found that the U.S. also incurred the largest climate change losses, amounting to $16.2 trillion (£12.2 trillion).
“America has suffered more,” Burke noted, explaining that while U.S. emissions have damaged other countries, they have also inflicted significant harm on the U.S. economy itself.
In addition, U.S. emissions have inflicted considerable damage globally. For instance, scientists estimate that the European Union faced damages of $1.4 trillion (£1.1 trillion), while India suffered around $500 billion (£375 billion) in damages, and Brazil incurred losses of about $330 billion (£250 billion).
Burke emphasized the gravity of the situation, saying, “The estimated damages already inflicted by climate change are staggering, amounting to tens of trillions of dollars.”
The European Union is estimated to be the second most affected entity after the U.S., sustaining damages worth $6.4 trillion (£4.8 trillion), despite being the third largest emitter.
The UK, for its part, is estimated to have caused damages of about $1.1 trillion (£830 billion) while sustaining losses of approximately $880 billion (£660 billion).
Graph illustrating global economic damage attributed to countries and political entities (left) and projected economic losses for individual nations due to climate change (right) from 1990 to 2020 – Credit: Burke et al 2026, Nature
The study presents the relationship between emitters and affected nations as akin to a household managing waste. In this analogy, the waste symbolizes carbon dioxide emissions, and the study meticulously mapped out the origins, pathways, and ultimate impacts of this ‘waste.’
A critical component of the research was examining Gross Domestic Product (GDP), which allowed researchers to assess the repercussions of climate change on various sectors, including agriculture, health, and workplace productivity.
“Temperature fluctuations significantly affect the global economy,” Burke said. “Our research aims to connect these impacts with upstream emissions from global emitters.”
However, carbon dioxide in the atmosphere behaves differently from traditional waste. The repercussions are long-lasting, worsening over time.
“The future damage stemming from past emissions will far surpass the damages already experienced,” Burke warned. “As long as carbon remains in the atmosphere, damage will continue, and the impact over the coming century will likely be exponentially greater than what we’ve faced thus far.”
In the ever-evolving landscape of wellness, celebrity culture, and anti-aging trends, one term has emerged as a sensation: peptides. This broad term encapsulates short chains of amino acids, including substances like insulin and GLP-1 medications such as Ozempic. Thousands of influencers and their followers are diving into the world of peptides for purported health benefits.
With claims ranging from weight loss to enhanced sleep, injury recovery, and even increased libido, these compounds are gaining popularity for those looking to rejuvenate their lives and promote longevity. However, a word of caution is warranted. Many injectable peptides are unregulated and often sourced online, raising questions about their safety and efficacy. How can you differentiate between beneficial and potentially harmful compounds?
Peptides: Evidence and Efficacy
It’s essential to recognize that not all peptides share the same properties. These molecules act as biological signals, prompting specific cellular actions. They sit between individual amino acids and complete proteins: specific enough to carry out defined functions, yet small enough to be synthesized cheaply and sold online.
Among the peptides capturing attention, BPC-157 is known for its alleged wound healing and recovery benefits, while GHK-Cu, a copper peptide, claims to provide anti-aging effects. Then there’s TB-500, often marketed alongside BPC-157 as a “recovery stack” for injuries.
The surge of interest includes reports of “peptide raves” in places like San Francisco, where groups gather for self-injection. Those seeking scientific validation, however, may be disappointed. BPC-157, often hailed as the flagship peptide, lacks substantial human trial evidence to back its claims.
Dr. Andrew Steele, director of the Longevity Initiative, states, “We were shocked at how limited the evidence is.” Despite animal studies suggesting benefits such as accelerated recovery and enhanced blood vessel growth, human trials are virtually non-existent.
The few human studies that do exist typically gather only subjective feedback on pain relief, without a control group or placebo comparison.
Similarly, TB-500 is widely adopted by athletes for muscle recovery, yet is linked to safety issues. Dr. Steele notes it promotes angiogenesis and may inadvertently support tumor growth under specific conditions.
Health risks extend to peptides like Melanotan II, designed to stimulate melanin production for tanning. According to Cancer Research UK, this substance poses significant risks, including a higher chance of skin cancers.
Some peptides, such as GHK-Cu, are available as topical serums for skincare. – Photo credit: Getty
Product Transparency: What You Need to Know
Understanding peptide efficacy is important, but equally crucial is knowing their content and purity. Often marketed as research chemicals, peptides can evade drug regulations, raising safety concerns.
Testing reveals that a significant percentage of peptide products may contain harmful contaminants like bacterial endotoxins. As Dr. Steele points out, “Even if they work, there are significant red flags.” The safety of online-sourced research-grade peptides remains questionable.
Recent incidents, such as two women requiring hospitalization after unregulated peptide injections at an anti-aging festival, highlight the tangible risks associated with these unverified treatments. Symptoms included severe allergic reactions, which warrant serious consideration before pursuing such therapies.
The Exceptions: Noteworthy Peptides
Amid the uncertainty, there are exceptions. GHK-Cu, a copper peptide, exhibits proven topical benefits, promoting collagen and elastin production, reducing inflammation, and functioning as an antioxidant, as confirmed in clinical studies.
On the pharmaceutical side, GLP-1 peptides like Semaglutide (Ozempic and Wegovy) are well-researched and approved for weight management. Updated studies suggest they may also reduce risks for cardiovascular issues and possibly dementia, as discussed in recent publications.
A 2025 report found nearly 12 percent of Americans are using GLP-1 medications. – Photo Credit: Getty
While GLP-1s are rigorously tested and approved, the broader peptide landscape remains fraught with uncertainty. Dr. Steele emphasizes, “It’s likely that there are valuable anti-aging peptides out there, but currently, evidence is lacking for most.”
In summary, the term “peptide” encompasses a wide range of compounds, some of which are clinically beneficial while others may pose risks. Always prioritize safety—if a prescription is required, there’s usually a valid reason. If substances are sourced online as unregulated powders or liquids, exercise extreme caution.
As NASA’s Artemis II mission stands ready on the launch pad, humanity’s return to the moon for the first time since 1972 is just around the corner.
The mission features four astronauts: NASA commander Reed Wiseman, pilot Victor Glover, mission specialist Christina Koch, and Canadian Space Agency’s Jeremy Hansen. They will orbit the moon for 10 days before returning safely to Earth.
Announced in 2017, the Artemis program aims to return humans to the moon, including the first woman and the first person of color.
If successful, the next mission, Artemis III, aims to land two astronauts on the moon as early as 2028.
The Artemis II launch window is set from April 1st to April 6th. While you await the launch, explore these 22 astonishing facts about Artemis II.
The Artemis II crew stands ready. From left: Backup crew Andre Douglas (NASA) and Jenny Gibbons (CSA), primary crew Victor Glover, Reed Wiseman, Jeremy Hansen, Christina Koch – Credit: NASA – Photo by NASA
1. Unique Historical Artifacts Will Accompany the Mission
Artemis II will carry a 1-inch square of fabric from the Wright Brothers’ first powered flight in 1903, and the American flag flown during both the inaugural and final Space Shuttle missions, as well as during the first crewed Crew Dragon test.
A flag intended for the cancelled Apollo 18 mission will finally visit the moon after half a century. Additionally, memory cards with millions of names will also be part of this mission.
2. Artemis II Is Almost as Tall as Big Ben
Standing at 98 meters (322 feet), NASA’s Space Launch System (SLS) rocket surpasses Big Ben by 2 meters (7 feet). When fully fueled, the rocket weighs 2,600 tons (5.76 million pounds), while Big Ben is estimated to weigh around 13,700 tons (30 million pounds).
Astronauts aboard the Orion crew capsule journey towards the moon – Credit: ESA
3. The Crew Will Travel Farther than Any Humans Before
Artemis II’s flight path will reach approximately 402,000 km (250,000 miles) from Earth, breaking the Apollo 13 record of 400,171 km (248,655 miles). The total distance traveled will exceed 1 million kilometers (620,000 miles), equivalent to driving across the U.S. coast-to-coast over 200 times.
4. Fastest Return for Astronauts in 50 Years
Upon re-entry, the crew will reach speeds of around 40,000 km/h (25,000 mph), potentially breaking the Apollo 10 record of 39,938 km/h (24,816 mph).
The interior of the Orion capsule, which allows for versatile space usage – Credit: NASA
5. Crew Will Experience Life in Limited Space
The four-person crew will utilize the Orion multipurpose crew vehicle where they will work, eat, and rest in a compact area. A designated “hygiene bay” offers some privacy.
6. No More Drinking Recycled Urine
While on the ISS, astronauts recycle urine, but on Artemis II, the crew will dispose of urine in space. Solid waste will be stored for disposal upon return.
7. Rockets Consume a Massive Amount of Fuel
The SLS’s solid booster rockets burn six tons of propellant every second, producing more thrust than 14 jumbo jets. The core stage will consume 2.8 million liters (733,000 gallons) of liquid hydrogen and oxygen.
In total, the rocket generates 8.8 million pounds of thrust in the eight minutes required to reach orbit.
The recovery team will inspect the capsule for damage post-mission, similar to Artemis I – Credit: NASA
8. Intense Heat During Reentry
As the spacecraft enters Earth’s atmosphere, temperatures outside will soar to around 2,750°C (5,000°F), about half the sun’s surface temperature. The heat shield will protect the crew and maintain a comfortable cabin temperature.
9. None of the Crew Were Alive During the Last Moon Landing
The oldest crew member, Reed Wiseman, was born in 1975, three years after Eugene Cernan’s final Apollo 17 moonwalk.
10. Rocket Engines Have Historic Roots
NASA reused Space Shuttle main engines in the SLS’s orange core stage as a cost-saving measure, with some components dating back to the first Space Shuttle mission in 1981.
Jeremy Hansen and his crew trained in Iceland’s Vatnajökull National Park to simulate lunar conditions – Credit: NASA/Robert Markowitz
11. First Non-American Astronaut to Travel to the Moon
Although Jeremy Hansen was selected as a Canadian astronaut in 2009, this will be his first spaceflight, coming after 17 years of training and practice.
12. First Glimpses of Unseen Moon Areas
The crew will fly over regions of the far side of the moon and near its south pole that no human eyes have seen directly before.
The moon will appear about the size of a basketball held at arm’s length, and the crew will sweep past it in just three hours.
13. Christina Koch: First Woman to Travel to the Moon
With 328 days in space, Christina Koch, the most experienced crew member, will make history as the first woman to fly to the moon.
Christina Koch completed over 42 hours in spacewalks, including the first all-female spacewalk – Credit: NASA
14. A Free-Return Trajectory, Like Apollo 13
Two days into the flight, Artemis II will enter a “free-return trajectory,” using lunar and Earth gravity to carry the spacecraft home, the same strategy that saved Apollo 13.
15. An Orbital Backflip to Practice Docking
Once separated from the final rocket stage, the Orion module will perform an automatic backflip, allowing the crew to practice maneuvering close to a target in preparation for future docking.
16. Pilot Victor Glover: A Historic First
Victor Glover, a seasoned pilot and former test pilot, will become the first person of color to travel to the moon, continuing to make history on his missions.
Victor Glover joined NASA’s astronaut corps in 2013 and previously flew to the ISS – Credit: NASA
17. Modern Space Cuisine
Crew members will enjoy a diverse menu on Artemis II, including chicken curry and shrimp cocktail, all designed to avoid crumbs that could disrupt sensitive equipment.
18. Reed Wiseman: An Experienced Photographer of Earth
During his 165 days on the ISS, Wiseman captured thousands of stunning images of Earth, and he will have the opportunity to photograph the moon in detail.
Wiseman and his adopted mascot Giraphiti during the 2014 ISS mission – Credit: NASA
19. High-Speed Laser Communications
Artemis II will feature an advanced optical communication system using lasers, significantly enhancing data transmission speeds, crucial for future deep space missions like Mars.
20. Gym Equipment on Board
To combat muscle and bone atrophy in microgravity, astronauts will utilize an exercise “flywheel” daily, offering resistance for effective workouts.
21. Radiation Challenges Ahead
Beyond Earth’s magnetic field, Artemis II faces radiation challenges. The mission will include “organ-on-a-chip” devices to study cellular responses during the journey.
22. Completing the Cycle with Special Soil
Artemis II will carry soil from ten trees that grew from seeds flown on Artemis I, completing a cycle of lunar exploration and growth.
NASA has unveiled stunning images of Earth taken by the Artemis II mission, as the crew continues their historic journey towards the Moon.
The image captures Earth behind the Orion spacecraft, with our planet beautifully illuminated by the aurora borealis.
One remarkable photo taken by Artemis II commander Reed Wiseman from Orion’s window shows Earth backlit, with the aurora borealis visible in the upper right and lower left corners. This was confirmed by NASA Artemis program deputy director LaKeisha Hawkins during a press conference on Friday.
Discovering the Most Complete Ichthyosaur Skeleton in Cuba
An ophthalmosaurid ichthyosaur. Image credit: Dmitri Bogdanov / CC BY 3.0.
Paleontologists recently unearthed the most complete ichthyosaur skeleton ever found in western Cuba, deep within a limestone cave. The discovery was made in 2023 at the river cave known as El Cuajani, part of the Viñales Geopark and National Park.
The exposed skeletal remains feature a U-curved vertebral column, multiple associated ribs, isolated vertebrae, and a hindlimb.
“The specimen is preserved in rock slabs that form the ceiling of the river cave, specifically known as Cueva del Ictiosario, located approximately 60 meters from the entrance,” shared Dr. Manuel Iturralde-Vinent of the Cuban Academy of Sciences, who collaborated with experts from Cuba, Argentina, Poland, and the US.
This remarkable fossil dates back to the Tithonian stage of the Late Jurassic, roughly 145 million years ago. Previously, most records of Cuban ichthyosaurs were limited to older Oxfordian deposits.
“This fossil stands out as the most complete ichthyosaur retrieved from Cuba,” the paleontologists remarked. “It significantly extends the temporal record of island ichthyosaurs, which previously only included the Oxfordian specimen.”
Partial skeleton of El Cuajani ichthyosaur. Image credit: Iturralde-Vinent et al., doi: 10.1080/02724634.2025.2609717.
The El Cuajani ichthyosaur, as researchers have informally dubbed it, has yet to be assigned to a specific species, but its anatomical features suggest affinities with the family Ophthalmosauridae.
“The morphology of the hind limbs resembles that of Tithonian platypterygiine ophthalmosaurids, such as Caypullisaurus bonapartei and Aegirosaurus leptospondylus,” they explained.
Scientists believe this ancient creature thrived in deep ocean environments. The Caribbean Seaway served as a vital oceanic corridor, linking distant regions of the Jurassic world.
“The Caribbean Seaway played a crucial role in promoting the dispersal of marine species between Europe, the Gulf of Mexico, and the Pacific Ocean from the Late Jurassic,” the researchers stated.
“This corridor has a Triassic to early Jurassic heritage, rooted in the intercontinental rifts of Pangea, which should not be confused with the early Caribbean basin.”
“The El Cuajani ichthyosaur adds to the growing body of Tithonian ichthyosaur discoveries in this area, potentially enriching our understanding of the biogeographic history of this group,” the researchers concluded.
For further reading, refer to their research paper published in the February 6th issue of the Journal of Vertebrate Paleontology.
_____
Manuel Iturralde-Vinent et al. A partial skeleton of an ichthyosaur (Ophthalmosauridae) from the Tithonian (Late Jurassic) of western Cuba. Journal of Vertebrate Paleontology, published online February 6, 2026. doi: 10.1080/02724634.2025.2609717
Astronomers at the Vera C. Rubin Observatory have identified over 11,000 new asteroids, including hundreds of trans-Neptunian objects and 33 previously unknown near-Earth asteroids (NEOs).
A model of the solar system highlighting asteroids discovered by Rubin in bright blue-green, while known asteroids appear in dark blue. Image credits: NSF / DOE / Vera C. Rubin Observatory / NOIRLab / SLAC / AURA / R. Proctor / NASA Goddard Space Flight Center Science Visualization Studio / ESA / Gaia / DPAC / M. Zamani, NSF’s NOIRLab.
The Vera C. Rubin Observatory has compiled a groundbreaking dataset featuring nearly 1 million observations of over 11,000 newly discovered asteroids, along with more than 80,000 known asteroids, collected in just six weeks.
This data has been submitted to the Minor Planet Center (MPC) as the observatory gears up for future discoveries.
Dr. Mario Juric, Rubin Solar System Principal Scientist and astronomer at the University of Washington, remarked, “This initial major submission following the Rubin First Look is just the beginning, demonstrating that the observatory is fully operational.”
“What once took years or even decades to discover, Rubin will unveil in mere months,” he added.
“We are on the path to fulfilling Rubin’s mission to revolutionize our understanding of the solar system and pave the way for groundbreaking discoveries yet to be anticipated.”
The newly cataloged objects include 33 previously unknown near-Earth objects (NEOs), which are classified as small asteroids or comets that come within 1.3 times the Earth-Sun distance.
Importantly, none of the newly found NEOs present any threat to Earth, with the largest measuring approximately 500 meters across.
This dataset also contains around 380 trans-Neptunian objects (TNOs), which are icy bodies orbiting far beyond Neptune.
Among these TNOs, two (tentatively designated 2025 LS2 and 2025 MX348) were observed in extensive and elongated orbits.
At their furthest points, these objects are nearly 1,000 times further from the Sun than Earth, ranking them among the 30 most distant known asteroids.
Dr. Matthew Holman, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, explained, “Searching for TNOs resembles looking for a needle in a haystack. We required innovative algorithms to assist computers in sifting through billions of combinations from millions of flickering light sources in the night sky to identify potential distant worlds in our solar system.”
“Such discoveries provide exciting insights into the outermost realms of the solar system, including how planets migrated during the early solar system’s formation and the lingering possibility of a still undiscovered ninth large planet,” Dr. Kevin Napier, also from the Harvard-Smithsonian Center for Astrophysics, added.
While we often associate body odor with being unpleasant, these natural scents can provide insightful information about our overall health.
What Causes Body Odor?
Body odor originates from sweat, but not all sweat has the same effect. Most unpleasant odors arise when bacteria interact with secretions from the apocrine glands, mainly located in the armpits and groin.
These glands emit a thicker, protein-rich fluid that initially has little odor. However, when bacteria on your skin break it down, the result is that familiar pungent scent.
In contrast, eccrine glands, found throughout the body, secrete a more diluted mixture of water and salt, which typically carries little inherent odor, although bacteria can produce a smell.
Read more:
What Can Body Odor Indicate About Your Health?
Minor changes in your odor may be your body’s way of signaling a potential health issue. For instance, poorly managed diabetes can cause a sweet or fruity aroma on the skin and breath, often likened to pear drops or nail polish remover.
This scent may indicate diabetic ketoacidosis, a medical emergency due to the buildup of ketone bodies from insufficient insulin.
Moreover, liver disease can produce a musty or “fecal” scent, while kidney failure may lead to an ammonia-like smell due to the body’s struggle to expel waste products.
Changes in odor can also be influenced by infections, pregnancy, menstrual cycles, and hormonal fluctuations, including menopause.
Interestingly, researchers are investigating whether body scent can assist in the early and accurate diagnosis of various diseases.
Your skin’s natural microbiome significantly influences your body odor, which is why some individuals naturally emit stronger scents than others – Image courtesy of Getty Images.
Recent research suggests that certain volatile organic compounds (VOCs), released by the skin, can indicate conditions like Parkinson’s disease even before noticeable neurological symptoms occur.
This investigatory field was partly inspired by individuals with heightened olfactory sensitivity, including a woman who recognized a unique musky scent from her husband long before he was diagnosed with Parkinson’s disease.
Impact of Lifestyle, Diet, and Genetics
Not every odor is concerning. Foods like garlic, onions, and curry contain volatile compounds that can affect sweat’s scent. Alcohol, caffeine, and various medications can also alter your body odor.
Even stress can shift your scent due to changes in sweat composition.
Your skin’s microbiome (the diverse bacteria community on your skin) plays a crucial role in determining body odor, explaining why some individuals naturally have stronger smells than others.
What To Do If You’re Concerned About Body Odor?
Maintaining good hygiene is crucial. Regularly washing with soap, especially in areas with high concentrations of apocrine glands, can reduce bacteria responsible for strong odors.
Antiperspirants help decrease sweat production, while deodorants mask unpleasant scents.
Wearing breathable fabrics, such as cotton or moisture-wicking materials, can help minimize bacterial growth, particularly during physical activity. Keeping well-hydrated and maintaining a balanced diet can also alleviate odor concerns.
If you notice a persistent or unexplained change in body odor, especially alongside symptoms related to diabetes, liver, or kidney issues, consider consulting a healthcare professional.
This article addresses the question (by Spalding’s Scott Edwards): “Can my scent provide insights into my health?”
If you have questions or feedback, feel free to email us at questions@sciencefocus.com or connect with us on Facebook, Twitter, or Instagram (don’t forget to include your name and location).
Explore our ultimate fun facts and more intriguing science content.
And then there’s liftoff! The Artemis II rocket roared into space, marking NASA’s first manned mission to the moon in over 50 years.
The four-member crew includes commander Reed Wiseman, pilot Victor Glover, NASA mission specialist Christina Koch, and Jeremy Hansen from the Canadian Space Agency (CSA). They launched from the Kennedy Space Center in Cape Canaveral, Florida, at 6:35 PM local time (11:35 PM UK time).
Their 10-day journey will loop around the far side of the moon and return. Although Artemis II won’t land on the moon, it serves as a crucial dry run to validate the Orion spacecraft and its life support systems under real deep space conditions. If successful, Artemis III will follow, aiming to land two astronauts on the moon as early as 2028.
The mission unfolds in several well-structured stages. The first day involves testing Orion’s capability in space. On the second day, a critical maneuver called the “translunar injection” burn will ignite the main engine to propel Orion towards the moon.
The spacecraft is expected to enter the moon’s gravitational influence on the fifth day, reaching its closest approach by the sixth day (April 6).
Read more:
Photo courtesy: ESA
The second European Service Module (ESM-2), constructed by Airbus for the European Space Agency, will provide propulsion, electrical power, and life support systems to the Orion crew during their voyage. Construction of this module began in 2017 through collaboration with 10 European countries.
Photo courtesy of NASA/Joel Kowsky
From left: backup crew members Andre Douglas (NASA) and Jenny Gibbons (CSA), along with Artemis II primary crew members Victor Glover, Reed Wiseman, Jeremy Hansen (CSA), and Christina Koch, pictured alongside NASA’s Space Launch System rocket and Orion spacecraft.
Photo credit: NASA
After completing their pre-launch quarantine, the astronauts adhered their mission patches to the walls of the Neil Armstrong Operations Checkout Building at NASA’s Kennedy Space Center—a tradition for all manned space missions.
Photo credit: NASA
This aerial photograph captures the Artemis II SLS rocket taken on January 20, 2026. Standing at 98 meters (322 feet), the SLS is the most powerful rocket ever developed by NASA.
Photo credit: Getty
Prior to embarking on this historic mission, the crew had to complete a leak test on their specially designed spacesuits, which are essential for astronaut survival during launch and reentry. These vibrant orange suits enhance visibility post-landing, are fire-resistant, and are equipped with a pressurized layer for mobility.
Photo credit: Getty
The Artemis II crew made their way to the launch pad on April 1, 2026. Victor Glover is the first person of color, Christina Koch is the first woman, and Jeremy Hansen is the first non-American to orbit the moon. Reed Wiseman (second from right) serves as the mission commander.
Photo credit: Getty
The crew journeyed via two sets of elevators to reach their capsule, moving first to the “zero deck” on a mobile launch tower and then ascending to the crew access level, positioned 83.5 meters (274 feet) above ground. Each astronaut carried a green bag with essentials including helmets, gloves, and personal items.
Photo credit: Getty / NASA
The Artemis II SLS rocket lifted off on April 1, 2026, at 6:35 PM local time (11:35 PM UK time), powered by twin solid rocket boosters and four RS-25 engines generating a combined thrust of 8.8 million pounds.
Photo credit: Getty
Officials from the Canadian Space Agency’s offices in Longueuil, near Montreal, watched anxiously as Artemis II soared into the Florida skies. With Jeremy Hansen onboard, they emphatically exclaimed, “We’re going to the moon!”
Photo credit: NASA
Globally, eyes were riveted on this pivotal moment in 21st-century space exploration.
Photo credit: NASA
Photo credit: Getty
Two young spectators were seen clutching toy rockets at the viewing area of the A-Max Brewer Bridge in Titusville, Florida. Today’s youth may become the astronauts of tomorrow, driving ambitious missions to Mars and beyond.
Photo credit: Getty
The Stars and Stripes and the Artemis mission banner were prominently displayed as the astronauts embarked on their daring 10-day mission.
Photo credit: NASA
Notable guests, including members of the Trump family, attended to witness the historic launch.
Photo credit: NASA
The Artemis II SLS rocket ascended from the Kennedy Space Center, leaving behind a trail of fire and exhaust.
This launch followed months of delays due to hydrogen leaks, helium flow issues, and a last-minute failure of the flight termination system, all of which were resolved just one hour before liftoff.
Photo credit: Getty
The rocket’s trajectory was not perfectly vertical; within moments, it tilted into a “gravity turn,” trading vertical climb for horizontal velocity to reach its ascent orbit fuel-efficiently.
Photo credit: NASA
Charlie Blackwell-Thompson serves as the Artemis Launch Director for NASA’s Exploration Ground Systems Program.
Photo credit: Getty
This launch signifies the dawn of a new era in space travel. NASA and other space agencies are gearing up to establish a permanent base on the moon in the years to come.
Photo credit: NASA
The Artemis mission patch floated around the International Space Station just two days prior to launch. NASA astronaut Jessica Meir shared the moment on X: “Our work at @Space_Station has laid the groundwork for further exploration as we prepare to return humans to the moon this week. Stay tuned as we enter the @NASAArtemis era! We’ll be closely monitoring from Expedition 74. Godspeed, Artemis II!”
In today’s fast-paced digital world, a reliable Wi-Fi connection is essential. Dealing with slow or erratic Wi-Fi can lead to interruptions in streaming, gaming, and even smart home functionality. It’s no surprise that emerging wireless technologies promise to alleviate these connectivity issues.
Enter Wi-Fi 7, the latest wireless standard poised to revolutionize connectivity. With a staggering top speed of “up to 46 gigabits per second (Gbps),” Wi-Fi 7 can theoretically download a 4K movie in as little as 8 seconds—almost five times quicker than Wi-Fi 6/6E’s maximum of 9.6 Gbps.
However, the reality is that most households won’t achieve these headline speeds. Real-world testing typically reveals speeds in the range of hundreds of megabits per second (Mbps), considering that most UK broadband services max out at 1-2 Gbps.
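The headline arithmetic can be sanity-checked with a few lines of Python. The 46 GB movie size below is an assumption (it is the file size implied by the article’s 8-second figure); the 500 Mbps link stands in for a typical real-world connection, not a measured value.

```python
BITS_PER_BYTE = 8

def download_time_s(file_gb: float, link_gbps: float) -> float:
    """Seconds to transfer file_gb gigabytes over a link running at link_gbps gigabits/s."""
    return file_gb * BITS_PER_BYTE / link_gbps

MOVIE_GB = 46  # assumed 4K movie size implied by the 8-second headline claim

print(download_time_s(MOVIE_GB, 46.0))  # Wi-Fi 7 theoretical peak: 8.0 s
print(download_time_s(MOVIE_GB, 9.6))   # Wi-Fi 6/6E theoretical peak: ~38 s
print(download_time_s(MOVIE_GB, 0.5))   # a realistic 500 Mbps link: 736 s, roughly 12 minutes
```

The same file that takes 8 seconds at the theoretical peak takes over 12 minutes at a realistic household speed, which is the gap the rest of this article explains.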
So, what’s behind the discrepancy?
Understanding Real-World Performance
The gap between theoretical and actual speeds highlights that user experience is largely influenced by real-world conditions. Factors such as construction materials and radio wave interference play significant roles.
Despite the lofty claims, Wi-Fi 7—officially known as 802.11be—incorporates substantial technological advancements. Designed to manage data more efficiently, especially in dense environments with multiple connected devices, Wi-Fi 7 introduces wider channels, allowing for up to 320 megahertz (MHz) of bandwidth, doubling the capacity of Wi-Fi 6E. Think of it as expanding lanes on a busy freeway.
Struggling with poor Wi-Fi? Your home layout could be the culprit. – Photo credit: Getty
Wi-Fi 7 utilizes a feature called Multi-Link Operation (MLO), which optimizes the use of the various frequency bands (2.4 GHz, 5 GHz, and 6 GHz) to find the most reliable path through a congested network. Additionally, it employs a high-density encoding method called 4096-QAM, increasing data throughput under favorable conditions.
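The band-selection idea behind MLO can be sketched as a toy model: estimate the usable throughput on each band and route traffic over the best one. The band names are real, but the throughput figures below are invented purely for illustration (real MLO can also aggregate multiple links simultaneously, which this sketch does not attempt).

```python
def best_band(throughput_mbps: dict[str, float]) -> str:
    """Pick the band with the highest estimated throughput (a simplified stand-in for MLO path selection)."""
    return max(throughput_mbps, key=throughput_mbps.get)

# Hypothetical estimates: 6 GHz is fastest in theory but attenuated by walls here
links = {"2.4 GHz": 180.0, "5 GHz": 860.0, "6 GHz": 430.0}
print(best_band(links))  # prints "5 GHz"
```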
Navigating the Challenges
That said, taking full advantage of Wi-Fi 7 requires hardware upgrades across your devices. Since the benefits are hardware-dependent, you’ll need to invest in a new router as well as the latest smartphones, laptops, and smart devices.
Many users will find themselves in a mixed environment for some time, using a combination of older and newer devices, which may limit the overall experience. The enhancements may not be as pronounced as some users expect.
Moreover, the gains in speed are heavily reliant on maintaining high signal quality. “Wi-Fi 7’s theoretical speeds were measured in ideal lab conditions,” advises Dr. Richard Rudd, a certified engineer and communications consultant.
As Dr. Rudd notes, the actual signal within a home can be severely affected by factors like building materials, interference from other devices, and layout. Frequencies above 6 GHz tend to experience faster signal degradation over distance.
In essence, Wi-Fi 7’s peak performance is contingent on optimal environmental conditions—strong signals and minimal obstructions. As with all wireless standards, there’s a disparity between maximum and actual speeds.
According to Professor Izzat Darwazeh from UCL, “The capacity of a channel is directly proportional to its bandwidth per the Shannon-Hartley theorem.” Thus, while the potential for double the capacity over Wi-Fi 6E exists, noise and interference directly reduce actual speed.
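Professor Darwazeh’s point can be made concrete with the Shannon-Hartley theorem, C = B·log₂(1 + S/N). The channel widths below are the real Wi-Fi 6E and Wi-Fi 7 maxima; the SNR values are assumptions chosen to represent a strong close-range signal versus a degraded through-the-walls one.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B_WIFI_6E = 160e6  # 160 MHz, Wi-Fi 6E's widest channel
B_WIFI_7 = 320e6   # 320 MHz, Wi-Fi 7's widest channel

snr_strong = 10 ** (30 / 10)  # assumed 30 dB SNR: close range, clean spectrum
snr_weak = 10 ** (10 / 10)    # assumed 10 dB SNR: walls and interference

# Doubling the bandwidth doubles capacity at a fixed SNR...
print(shannon_capacity_bps(B_WIFI_7, snr_strong) / 1e9)   # ~3.19 Gbps
print(shannon_capacity_bps(B_WIFI_6E, snr_strong) / 1e9)  # ~1.59 Gbps

# ...but a weak signal erodes the advantage: the wide Wi-Fi 7 channel at
# 10 dB carries less than the narrower Wi-Fi 6E channel does at 30 dB.
print(shannon_capacity_bps(B_WIFI_7, snr_weak) / 1e9)     # ~1.11 Gbps
```

This is why a Wi-Fi 7 router two rooms away can underperform an older router sitting on your desk: bandwidth sets the ceiling, but signal quality decides how close you get to it.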
MLO optimizes network pathways—but many variables still influence performance. – Image credit: Getty
While Wi-Fi 7 cannot overcome physical barriers, it does promise real enhancements to connectivity. Research by Ookla revealed that median download speeds for Wi-Fi 7 reached 665.01 Mbps on EE’s service—four times the performance of Wi-Fi 6 in comparable scenarios, with almost double the upload speed.
Beyond Just Speed
While speed is often the focal point, other advantages may hold greater significance. Tests conducted by the Wireless Broadband Alliance (WBA) showed Wi-Fi 7 offering lower latency, reduced jitter, and improved stability across multiple rooms compared to Wi-Fi 6.
“Wi-Fi 7 transcends mere speed—it’s about delivering a consistent, predictable user experience,” says Bruno Tomas, WBA Chief Technology Officer.
“Our testing revealed speeds of 3.5 Gbps in real-world scenarios, with peaks of 4.2 Gbps in Turkey, showcasing stability across multiple rooms—this consistency is what distinguishes Wi-Fi 7 from its predecessors.”
WBA chairman Tiago Rodriguez emphasizes the need for service providers to enhance clarity around Wi-Fi 7’s capabilities. “Understanding the distinction between theoretical and real-world speeds is vital.”
Much like a car’s advertised fuel economy, Wi-Fi’s headline speeds can’t be fully realized unless the surrounding infrastructure supports them.
In the UK, regulatory and physical limitations hinder access to the full benefits of Wi-Fi 7. The broader 6 GHz spectrum that facilitates its features is still largely unavailable. Yet, these conditions may evolve as regulatory frameworks are reassessed.
As Dr. Rudd points out, although full potential isn’t yet realized in the UK or Europe, Wi-Fi 7 still offers significant capabilities that exceed current user demands.
Top-tier Wi-Fi is crucial for environments with high demand—like concerts and lectures. – Photo credit: Getty
Navigating Reality vs. Hype
This brings us to the current dilemma surrounding Wi-Fi 7. While its advancements are clear, the practical benefits may not resonate with users, especially those already equipped with Wi-Fi 6 or 6E routers, according to Mark Jackson from ISPreview UK.
“If your devices are already Wi-Fi 6 compatible, upgrading may not be essential right now,” he notes. “However, users in environments that demand high performance, like online gamers, should consider an upgrade.”
For those using older Wi-Fi technology, it may be less about performance and more about addressing potential security vulnerabilities. Eventually, upgrading will become necessary for most households due to technology advancements.
Professor Darwazeh agrees, stating that Wi-Fi 7’s primary advantages lie in high-density environments like lecture halls and stadiums—most home users won’t notice a substantial difference unless their connection is under high strain.
“New technologies often create new use cases, and we anticipate that Wi-Fi 7 will also reframe user experience over time,” he concludes.
Ultimately, while Wi-Fi 7 represents a leap forward in technology, its tangible benefits may not be immediately recognized by the average consumer. Connectivity issues should be addressed through optimal router placement and mesh systems rather than merely chasing higher speeds.
Major Verdict Against Meta and YouTube: The Impact on Social Media
Last week, a Los Angeles jury delivered a groundbreaking verdict, holding Meta, the parent company of Facebook, and YouTube accountable for creating addictive social media platforms that harmed young users’ mental health. The jury determined that the companies had irresponsibly designed their platforms in a way that harmed a 20-year-old plaintiff, awarding £4.5 million ($6 million) in damages. The ruling has potential implications for how products are designed in Silicon Valley moving forward.
In response to the verdict, a spokesperson for Meta remarked, “Teen mental health is very complex and cannot be attributed to a single app.” They emphasized their commitment to defending their practices and expressed confidence in their efforts to protect teens online.
The US jury found that Facebook and YouTube were deliberately designed to be addictive, with some teenagers reportedly spending up to 16 hours a day on these platforms (Photo credit: Getty).
Understanding Addiction in Social Media
What does it truly mean for something to be addictive, and does social media fit that definition? To explore this, we consulted Pete Etchells, Professor of Psychology and Science Communication at Bath Spa University and author of Unlocked: The Real Science of Screen Time. He discusses the need to redefine our relationship with technology and offers insights on social media’s potential benefits.
The Flaws of “Screen Time”
“Screen time” is a term many of us are familiar with, but its broad and vague nature often leads to misunderstandings. It refers to the amount of time spent on different screen-based technologies over a specific period—be it 24 hours or a week. This simplicity makes it appealing but ultimately ineffective in addressing the complexities of online engagement.
The obsession with screen time overlooks significant factors affecting mental health and can lead to misguided conclusions. Rather than providing meaningful insights, it often offers superficial correlations that hinder deeper understanding.
Healthy vs. Unhealthy Screen Usage
There are undoubtedly healthy and unhealthy ways to engage with screens. However, framing the conversation around addiction may limit our understanding. Social media, at its core, is about connection, and its positive aspects are often overshadowed by concerns about excessive use.
During the pandemic, many relied on social media to stay connected with loved ones, demonstrating its utility. Yet, it’s crucial to maintain a balanced perspective, recognizing both the challenges and benefits that these platforms offer.
Reframing Our Technology Use
Instead of viewing technology through the lens of addiction, consider it through the lens of habit. As Etchells notes, behaviors like checking your phone can be neutral. The context determines whether they become positive or negative habits. For instance, checking your phone to connect with friends can enhance well-being, while excessive usage during critical tasks can be detrimental.
On Banning Smartphones for Youth
Discussions about banning smartphones for individuals under 16 can be controversial. Such bans may alienate vulnerable youth who rely on technology for support. Promoting digital literacy is vital, preparing young individuals to navigate their online environments responsibly.
This condensed interview with Professor Pete Etchells encourages a more nuanced approach to technology. Understanding the real science behind our relationship with screens will help us engage in more productive conversations about digital well-being. To explore the full conversation, listen to Instant Genius.
About Pete Etchells
Pete Etchells is a Professor of Psychology and Science Communication at Bath Spa University, as well as the author of Unlocked and Losing a Good Game. His research focuses on the impacts of video game play and digital technology on behavior and mental health. He also serves as a scientific consultant for the BBC’s Horizon program.
Isaac Asimov’s Three Laws of Robotics: Not a Practical Guide
Entertainment Photography/Alamy
The concept of superintelligent AI posing a threat to humanity has long been a riveting theme in science fiction. As artificial intelligence continues to evolve rapidly, should we be concerned about an impending AI apocalypse?
Unlike other major risks, such as climate change, quantifying the dangers of AI remains challenging. Our uncertainties stem from the fact that we lack a comprehensive understanding of AI’s implications compared to our insights into environmental phenomena.
One undeniable fact is that many experts are apprehensive. Numerous CEOs in the AI sector caution against the potential for AI to lead to human extinction. Even Alan Turing, a pioneer in machine intelligence, foresaw a future where machines achieve sentience and might surpass their creators.
Consider this scenario: we assign an AI the monumental task of solving the Riemann Hypothesis—one of mathematics’ greatest enigmas. In single-minded pursuit of a solution, it might convert every available scrap of matter into computing hardware, leaving billions of humans to perish as the planet is paved over with sterile data centers. We ourselves could become mere raw material in the quest.
One might argue that we would notice this dire outcome in time and simply tell the AI, “It appears you’re attempting to convert Earth into a data hub. Please stop—humanity must survive.” Even so, it’s prudent to mitigate such risks proactively rather than count on a last-minute intervention.
Drawing on science fiction, Isaac Asimov proposed three guiding laws for robotics, the first of which holds that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
Theoretically, we could instruct AI not to harm us, and it would comply. Yet, our current methods for embedding safeguards into AI systems are often inefficient. Despite instructing today’s advanced language models to avoid harmful behaviors, they occasionally fail to comply. Given our limited understanding of AI mechanisms, preventing unwanted actions poses a significant challenge.
Even if we could address all concerns, scenarios may still arise where AI would opt to exclude human involvement. This includes possible futures reminiscent of Terminator or The Matrix. Such outcomes could evolve gradually or occur instantaneously during a singularity—an event where AI rapidly improves its own capabilities and exceeds human intelligence.
An AI could conclude that eradicating humanity is necessary, whether motivated by fear of being deactivated, a desire for autonomy, or a notion that human interference disrupts planetary equilibrium. This perspective may resonate with various species across the biological spectrum.
Potential methods for executing such an agenda could include leveraging automated labs to engineer lethal viruses, activating nuclear arsenals, or deploying autonomous weapons. The possibilities could be more sinister than we currently anticipate.
In reality, executing a large-scale eradication may prove complex. AI might have aspirations to eliminate mankind but face numerous obstacles. While minor accidents could occur, erasing 8 billion people is no simple feat, and competing AI models may thwart such efforts.
While these scenarios may resemble speculative fiction, the division among experts regarding their plausibility warrants attention.
Today, tech companies with vast resources and top-tier talent are racing to pioneer superintelligent AI. Whether or not you believe imminent development is on the horizon, it’s clear that proceeding with caution and careful consideration is essential. Unfortunately, the capitalist framework often prioritizes rapid innovation over thoughtful evaluation, and policymakers are primarily focused on the potential economic benefits of AI, downplaying the need for regulation.
So, what are the chances of a disaster? A 2024 study surveying nearly 3,000 AI researchers revealed that over half perceive at least a 10% risk that AI could lead to human extinction or irreversible harm, a phenomenon referred to as p(doom) or catastrophe. Personally, I hoped for a lower statistic.
Within the AI community, opinions range widely—some remain optimistic about our future, while others predict a bleak end for humanity. Alarmingly, many continue to push ahead regardless.
I personally subscribe to the view that human consciousness isn’t irreplaceable. In fact, I believe artificial replicability is attainable. Over an extended timeline, it may be feasible to produce AI that far surpasses human potential. However, we are still far from grasping the full implications of achieving such advancements.
In my view, current AI models lack the capacity for a singularity—they certainly can’t count to 100—so I am not overly anxious about the matter.
Yet, recognizing this issue doesn’t negate the urgent challenges AI presents.
The apocalypse we should be concerned about could manifest through job displacement due to automation, the gradual erosion of human skills as tasks are increasingly delegated to AI, and cultural homogenization resulting from AI-driven creative outputs.
Alternatively, we might face economic downturns due to plummeting tech stock values following inflated promises of AI capabilities that outpace reality. These scenarios feel alarmingly tangible and immediate.
Dying stars can emit powerful jets of radiation, as shown in this artist’s impression
Credit: Stocktrek Images, Inc./Alamy
Astronomers believe they have observed a “dirty fireball” explosion for the first time, originating from a dying star. This discovery may enhance our understanding of how massive stars perish.
When a colossal star exhausts its fuel, it collapses and can explode in various forms. For instance, a collapsing black hole may emit a jet of intense radiation that penetrates the star, resulting in a brief but powerful burst of high-energy light known as a gamma-ray burst.
These gamma-ray bursts are among the most explosive events in the universe: in seconds, one can release as much energy as a Sun-like star emits over its entire lifetime. However, astronomers remain uncertain about the exact mechanisms behind the phenomenon and how variations among massive stars affect these jets.
Researchers theorize that if a jet is contaminated with denser materials from the star, such as protons or neutrons, it might produce different emissions. These heavy particles can absorb energy, causing the jet to emit X-rays instead of gamma rays. Up until now, this “dirty fireball” scenario has not been documented.
Xiang-Yu Wang and his team at Nanjing University in China used the recently launched Einstein Probe space telescope to capture an X-ray flash, named EP241113a, that aligns with the dirty fireball hypothesis.
The team detected a bright flash emanating from a galaxy approximately 9 billion light-years away. This flash contained energy similar to that of a gamma-ray burst, but interestingly, it emitted X-ray frequencies instead. The initial explosion transitioned into a glow that persisted for several hours, eventually tapering off, akin to what is observed in standard gamma-ray bursts.
“This discovery holds tremendous potential,” states Laana Starling from the University of Leicester, UK. “[Dirty fireballs] have been theorized since the 1990s, yet conclusive evidence has been lacking.”
While thousands of gamma-ray bursts have been catalogued, the event behind this particular observation could differ fundamentally from the others, posits Starling. It may involve a black hole or neutron star shaping the jet in distinctive ways. “If a black hole is involved, it could give us a more comprehensive understanding of black hole formation throughout the cosmos,” she adds.
This finding suggests that the gamma-ray bursts commonly detected may be a result of observational biases, indicating that numerous other similar or less intense outbursts could exist, according to Gavin Lamb from Liverpool John Moores University, UK. “There’s a significant possibility this activity will persist until the jets diminish.”
Nevertheless, others are cautious about declaring it a dirty fireball, among them Om Sharan Salafia from the Brera Observatory, Italy. “We first need to verify that the explosion really did originate from as distant a galaxy as Wang’s team suggests. If all these factors hold true, then this transient event certainly presents intriguing puzzles,” he concludes.
The rich history of North America’s Indigenous peoples is often misrepresented through a European perspective. In her book, Indigenous People, historian Kathleen Duvall from the University of North Carolina at Chapel Hill provides a comprehensive overview, exploring centuries of development and the ways Indigenous communities navigated a constantly changing world.
Duvall illustrates how climate shifts from the Medieval Warm Period into the Little Ice Age influenced Indigenous agricultural and water management practices. The book also highlights monumental engineering achievements, such as the impressive Cahokia Mounds in present-day Illinois and the innovative Hohokam canal system in Arizona.
Focusing on Indigenous experiences, the book covers essential topics such as astronomical calendars and the impact of the smallpox epidemics that followed European contact, while dismantling prevalent misconceptions.
If you are passionate about historical nonfiction or seeking fresh insights into topics like ecology, botany, and archaeology, Indigenous People promises to be an engaging read.
Compost worms efficiently recycle food scraps and organic waste
Rob Walls/Alamy
Worms. I have them in abundance.
I divide my time between a bustling inner-city apartment in Sydney, Australia, and a serene property four hours south, previously a farm left to nature since the 1970s.
These places are stark contrasts. One is alive with the city’s hum, while the other resonates with the natural sounds of wildlife, including kingfishers, cicadas, night owls, and the eerie cries of possums. Yet, both locations share a common feature: thriving worm farms. The farm’s setup efficiently processes an entire household’s waste, while the urban version is compact, designed for porch placement, and accessible for anyone.
In the serenity of my farm, I let nature dictate operations while using the land as a tranquil getaway. There, sunk into the ground, a 4,000-litre worm habitat transforms waste into nutrient-rich liquids and castings that filter out into the surrounding woodland.
At the farm, I add compost, weeds, and the occasional wildlife carcass—kangaroos or possums—to diversify the worms’ diet. My guiding principle: anything previously alive finds its end in a worm farm.
When I peek into the depths of this decomposition marvel, I’m always astonished at how rapidly waste is reduced. A 50 kg male eastern grey kangaroo (Macropus giganteus) became practically unrecognisable within a week and was entirely gone by the end of the month. The worm farm has become a vibrant ecosystem in its own right, home to frogs, spiders, and fly larvae flourishing in a nutrient-dense humidity reminiscent of the Daintree rainforest in Australia’s north-east.
After eight years, despite sending copious organic matter to this voracious habitat, it appears only a quarter full. Remarkably, I’ve never detected unpleasant odors, even from the more rank offerings. This is a professional endeavor, overseen by periodic inspections from local authorities.
On installation day in 2018, I ceremonially introduced a small bag of tiger worms (Eisenia fetida), a species known globally for its composting prowess.
Tiger worms, known by multiple names, including brandling worms and red wigglers.
Daniel Sanbraus/Science Photo Library
According to independent earthworm researcher Robert Blakemore, this species thrives in temperatures ranging from -2°C to 40°C, remarkably capable of surviving the loss of two-thirds of body water and even submersion for up to six months.
Blakemore posits that no other species offers such irreplaceable benefits to humanity, with compost worms effectively processing an equal weight of their own mass daily. It’s no wonder that dead kangaroos vanish in mere weeks.
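The rule of thumb above—compost worms processing roughly their own mass each day—makes the vanishing kangaroo easy to check with back-of-the-envelope arithmetic. The 7 kg colony mass below is an assumption for illustration (the article doesn’t state how many worms the 4,000-litre farm holds), and real decomposition also involves microbes, insects, and fungi.

```python
def days_to_process(waste_kg: float, worm_mass_kg: float) -> float:
    """Days for a worm colony eating roughly its own mass per day to work
    through a given quantity of organic waste (a deliberately rough model)."""
    return waste_kg / worm_mass_kg

# Hypothetical: 7 kg of tiger worms in a mature large farm vs. a 50 kg carcass
print(days_to_process(50, 7))  # ~7.1 days, consistent with "gone in weeks"
```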
Everything entering the worm farm gets broken down, its nutrients seeping back into the ancient red gum forest, recycling life itself. I often tell my children, “When I die, place me there,” to join the countless lives absorbed by the soil. For me, heaven is being nourished by the forest. I’d be dismayed to be cremated and stored as anonymous ash.
I have a chocolate border collie, my loyal companion, who follows me like a devoted secret agent. The highest honor I could bestow is for him to be part of the worm farm when that time comes, though my daughter is not thrilled with this fate.
Ringo the border collie rests atop the underground worm farm.
James Woodford
Urban Worm Farming Insights
Since my transition to part-time city living, I’ve arrived with a bag of tiger worms from Wilderness Worm Farm, enriching a small home compost bin nestled in my courtyard.
This miniature worm farm offers a personal and public experience: about 0.5 metres tall, it consists of stackable trays that make for easy rotation when one fills up.
In contrast to my country escapade, where worms dwell deep within a massive tank, my urban worms are visibly active, prompting contemplative moments as I observe their fascinating, albeit messy, composting process.
No one enjoys watching sausages being made, nor compost being turned. However, the sight of writhing worms in my city compost is mesmerizing. Should I plunge my hand into the organic mixture, it would resemble a scene from a horror film.
I ensure all vegetable scraps, dog waste, and various organic materials find their way into my city worm farm. However, Blakemore expressed concerns over my informal approach upon reviewing my worm contents.
“Eggshells tend to break down given time, but microwaving them can hasten decomposition,” Blakemore recommends. “Furry items pose similar issues, as do tea bags and labels on fruit, which likely contain plastic.”
He warns that dog feces carry parasite risks, although worms can often neutralize those parasites.
Despite my steady contributions, the worms in my urban compost keep pace rapidly. Eventually, I rotate the layers, turning the top barrel’s enriched contents into nutrient-rich soil for my garden.
This lively whirlwind of decay serves as a vivid reminder of life’s cyclical nature, as the humble earthworm facilitates recycling and the processing of what was once alive.
Blakemore summarizes well: “Every person should compost. Ignorance and laziness are the only barriers.”
Starting Your Own Worm Farm: Key Considerations
Commercially available compost worms, particularly the tiger worm (Eisenia fetida), are easily accessible. I’ve gifted “starter” colonies from my compost to friends, leading to rapid population growth in their setups.
You may be surprised by the amount of waste a worm colony can process, even in a compact urban setting, though a large professionally installed setup is needed to handle an entire household’s waste.
In the city, I keep the worm farm in the shade, as direct sunlight can be harmful, especially in warmer climates. Surprisingly, there’s minimal odor, despite the theatrical appearance when the lid is lifted.
Items I enjoy composting include unwanted bills and promotional materials (though avoid glossy papers). Watching undesirable items transform into rich soil in a week is immensely satisfying.
Enzymes are crucial for viral RNA replication, presenting new targets for antiviral therapy.
Juan Gaertner/Science Photo Library/Alamy
Recent laboratory studies indicate a groundbreaking drug that effectively inhibits various common viruses, including coronaviruses, respiratory syncytial virus (RSV), norovirus, influenza, and hepatitis viruses. Upcoming clinical trials are set to start next year, fostering optimism that this drug may soon be available for at-home use, alleviating symptoms and mitigating future viral pandemics.
According to Daniel Haders, co-founder of Model Medicines in California, “This is the first drug demonstrated to exhibit activity across such a diverse range of virus families.” If approved, this drug could offer a convenient solution for individuals experiencing flu-like symptoms without a clear diagnosis between flu, COVID-19, RSV, and more.
This antiviral was originally developed as a breast cancer treatment named ERA-923 and was shelved in the early 2000s due to limited commercial prospects. However, using an AI drug-discovery platform, Haders and his team identified the overlooked drug as a potential inhibitor of multiple viruses via a mechanism shared across virus families.
The AI platform was aimed at discovering drugs capable of obstructing RNA-dependent RNA polymerase, an enzyme crucial for viral genome replication. Upon determining that this mechanism is conserved across many viruses, researchers searched for drugs binding to specific sites—namely, the Thumb-1 domain. “Our goal was to pinpoint biological choke points where one drug could target multiple diseases,” states Haders.
By analyzing past research and patents, the AI highlighted ERA-923 as a viable candidate for binding to the Thumb-1 domain, effectively curbing viral replication. “Similar to how OpenAI and Anthropic have curated digital knowledge, we synthesized a comprehensive understanding across chemistry, biology, and clinical pharmacology,” Haders asserts, noting that the AI tools of today greatly enhance predictive accuracy.
To validate AI predictions, researchers assessed the drug’s effectiveness, now named MDL-001, against a spectrum of viruses in laboratory-infected cells. Results confirmed its efficacy against influenza A and B, several coronaviruses linked to common colds and COVID-19, RSV, norovirus, and liver-impacting hepatitis B, C, and D.
MDL-001 also demonstrated beneficial effects in treating COVID-19 in mouse models, lowering viral levels in the lungs and reducing the weight loss associated with the disease. Haders intends to present these results at the upcoming European Society of Clinical Microbiology and Infectious Diseases meeting in Munich, Germany, in mid-April.
However, skepticism remains. Peter White of the University of New South Wales notes that other drugs targeting the Thumb-1 domain haven’t proved effective against all viruses; Model Medicines counters that MDL-001 docks at the site in a distinctive way that lets it combat a range of viruses. Daniel Rawle from the QIMR Berghofer Medical Research Institute shares the caution: “Many effective in vitro antiviral drugs fail in vivo.”
Model Medicines is organizing clinical trials for MDL-001, anticipated to start early next year, focusing first on assessing the drug’s safety. Previous trials in patients with breast cancer have affirmed its minimal side effects.
The burden of viral infections significantly impacts overall health and productivity, often forcing individuals to take sick leave. However, with rapid at-home treatment options like MDL-001, the landscape of self-managed antiviral care could change, particularly during future outbreaks of coronaviruses and influenza, Haders emphasizes.