The grand stone statues of Easter Island may have originated from diverse artistic and spiritual traditions, with multiple communities independently creating their own massive carvings, rather than through a centralized effort led by a powerful ruler. The finding comes from the most detailed mapping yet of the island’s main statue quarry.
Easter Island, or Rapa Nui, located in the Pacific Ocean, is believed to have been settled by Polynesian navigators around 1200 AD.
Archaeological observations indicate that the Rapa Nui were not politically unified, prompting discussions on whether the numerous moai statues were produced under a centralized authority.
The island had only one quarry, Rano Raraku, that provided the volcanic rock utilized for the statue carvings.
Carl Lipo and his team at Binghamton University in New York employed drones and advanced mapping technology to develop the first 3D representation of the quarry, which holds many incomplete moai. Lipo noted that earlier studies yielded varying results regarding the number of moai remaining at the site.
Lipo and his associates documented 426 features representative of the moai at different completion stages, 341 grooves indicating the planned carving blocks, 133 carved cavities for removing the statues, and five bollards likely used for lowering the moai into position.
It was also noted that the quarry was divided into 30 distinct working areas, each functioning independently with various carving methods, according to Lipo.
The idea that small factions of workers may have relocated the moai statues, along with prior evidence of separate territories marked by groups at freshwater sources, hints that the statue carvings stemmed from community-level competition rather than centralized governance, Lipo explained.
“Monumentalism signifies a competitive display among peer communities instead of top-down mobilization,” he stated.
Historians continue to discuss the alleged decline of the Rapa Nui, with some contending that resource over-exploitation resulted in a severe social breakdown, while others challenge this narrative.
Lipo argues that the collapse theory presumes a centralized leadership pushed for monument construction, leading to deforestation and social disintegration. “However, if monuments are decentralized and arise from community competition rather than intentional expansion, then deforestation cannot be attributed to egotistical leadership,” Lipo comments.
Nevertheless, some researchers are skeptical about this perspective. Dale Simpson, from the University of Illinois at Urbana-Champaign, concurs there wasn’t a singular overarching chief as seen in other Polynesian regions such as Hawaii and Tonga; however, he suggests clans were not as isolated as proposed by Lipo and others, indicating there must have been collaboration among the groups.
“I think they’ve had a bit too much Kool-Aid and haven’t fully considered the limiting factors in a confined area like Rapa Nui, where stone is paramount. It’s not feasible to carve moai within a single clan without interaction and stone-sharing,” he notes.
Jo Anne Van Tilburg from the University of California, Los Angeles, mentioned that further investigations are in progress to ascertain how the Rapa Nui exploited Rano Raraku, asserting that the conclusions drawn by Lipo’s team appear “premature and overstated.”
Our personal genome (an organism’s genetic information) contains remnants of viruses that once infected our ancestors.
No need to worry though. These viruses aren’t contagious like those that cause COVID-19 or the common cold; instead, they are sequences that have been integrated into our DNA over millions of years.
Most of these sequences come from a specific group of viruses known as retroviruses, which invade host cells and hijack them into producing the proteins the virus needs to replicate.
Sometimes, a retrovirus can insert itself into a sperm or egg cell, which allows it to propagate across subsequent generations.
While this occurrence is rare, its frequency increases over extensive periods of evolution. Currently, about 8 percent of our DNA is comprised of these viral remnants.
Viruses have subtly merged into our DNA over millions of years – Image credit: Science Photo Library
For many years, scientists believed these viral sequences were mostly insignificant, referring to them as “junk DNA” that merely existed within cells without serving any important purpose.
However, recent research has shifted this perspective. Modern iterations of these viral proteins have been found to play crucial roles in functions such as memory retention, the development of the placenta, and enhancing our immune system’s ability to combat harmful microorganisms.
Nonetheless, it’s not all positive. Certain viral DNA fragments are linked to various human diseases, including amyotrophic lateral sclerosis (ALS), certain cancers, and type 1 diabetes.
While they may not directly cause disease, they could play a role in the intricate biological processes that researchers are exploring.
This article addresses the question (submitted by Nick Conley via email): “Can a virus alter my DNA?”
While the precise amount of urine contributed by cetaceans to the ocean remains unclear, marine biologists have recently highlighted the crucial role whale urine plays in sustaining a healthy marine ecosystem by redistributing significant amounts of nutrients.
For instance, female humpback whales feed in the Gulf of Alaska and then travel thousands of miles to the Hawaiian Islands to give birth.
This is particularly important for newborn calves, which need warm, calm water to thrive because they have not yet built up a thick layer of insulating blubber. The most nutritious feeding grounds for whales, by contrast, lie in the cold, krill-laden waters of the polar regions.
Whales can produce hundreds of gallons of urine daily – Image credit: Getty
When whales head to their breeding areas, they typically cease feeding and rely on stored fat for energy. Consequently, the nutrients they consumed in high-latitude regions are released as urine and feces.
Particularly noteworthy is the significance of urine on this conveyor belt; a 2025 study revealed that gray, humpback, and right whales collectively transport nearly 4,000 tons of nitrogen annually.
In regions around the Hawaiian Islands, migrating whales can effectively double the nutrient influx into shallow waters.
This nutrient flow is critical as it stimulates the growth of phytoplankton, injecting energy into the marine food web.
The impact of this process was even greater prior to commercial whaling, when the nutrient transport via the Great Whale Conveyor Belt was likely three times more than it is today.
This article addresses the inquiry (made by Lou Grant in Birmingham): “What portion of the ocean consists of whale pee?”
A dog (Canis lupus familiaris) and a wolf (Canis lupus) can interbreed to create fertile offspring, but such hybridization is far less common than between the domestic and wild populations of other species. In a recent study, researchers from the American Museum of Natural History, the Smithsonian National Museum of Natural History, and the University of California, Davis, combined local ancestry estimation with phylogenetic analysis of the genomes of 2,693 ancient and modern dogs and wolves. They discovered that 64.1% of contemporary purebred dogs possess wolf ancestry in their nuclear genomes, stemming from admixture that occurred roughly 1,000 generations ago, while all analyzed free-ranging dog genomes showed signs of ancient wolf ancestry.
German shepherd puppy. Image credit: Marilyn Peddle / CC BY 2.0.
“Modern dogs, especially those kept as pets, seem quite distant from the often vilified wolves,” states Dr. Audrey Lin, a postdoctoral fellow at the American Museum of Natural History.
“However, certain wolf-derived traits are highly valued in our current dogs, and we have intentionally preserved them in this lineage.”
“While this research focuses on dogs, it reveals much about their wild relatives, the wolves.”
Dogs evolved from a population of gray wolves that has since gone extinct, during the late Pleistocene, approximately 20,000 years ago.
Though wolves and dogs inhabit overlapping areas and produce fertile offspring, instances of interbreeding are infrequent.
Aside from rare cases of intentional interbreeding, there is limited evidence of genetic exchange between the two groups following dog domestication, which separated their gene pools.
“Prior to this study, prevailing theories posited that for a dog to be classified as such, it would need to have minimal or no wolf DNA,” remarked Dr. Lin.
“Yet, upon examining the modern dog genome closely, we found wolf DNA present.”
“This indicates that the dog’s genome can incorporate wolf DNA to varying extents without losing its identity as a dog.”
The researchers scrutinized historical gene flow between dogs and wolves utilizing 2,693 publicly accessible genomes from wolves, purebred dogs, village dogs, and other canids from the late Pleistocene to the present, sourced from the National Center for Biotechnology Information and the European Nucleotide Archive.
The findings revealed that 64.1% of breed dogs possess wolf ancestry in their nuclear genomes, a result of crossbreeding occurring about 1,000 generations ago.
Moreover, all genomes from village dogs (free-ranging canines residing near human settlements) displayed detectable wolf ancestry.
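To make the headline percentages concrete, here is a toy sketch of how per-genome wolf-ancestry fractions might be summarized once local-ancestry calls are in hand. The genome labels, window coordinates, and data format are invented for illustration and are not taken from the PNAS study, whose pipeline is far more sophisticated.

```python
# Toy illustration only: summarising wolf ancestry from hypothetical local-ancestry calls.
from collections import defaultdict

# (genome_id, chromosome, window_start, window_end, assigned_ancestry) -- invented data
calls = [
    ("breed_dog_01", "chr1", 0, 1_000_000, "dog"),
    ("breed_dog_01", "chr1", 1_000_000, 2_000_000, "wolf"),
    ("breed_dog_02", "chr1", 0, 1_000_000, "dog"),
    ("breed_dog_02", "chr1", 1_000_000, 2_000_000, "dog"),
]

def wolf_fractions(calls):
    """Fraction of each genome assigned wolf ancestry, weighted by window length."""
    total = defaultdict(float)
    wolf = defaultdict(float)
    for genome, _chrom, start, end, ancestry in calls:
        total[genome] += end - start
        if ancestry == "wolf":
            wolf[genome] += end - start
    return {g: wolf[g] / total[g] for g in total}

fractions = wolf_fractions(calls)
# Here a genome "carries wolf ancestry" if any window is wolf-assigned; the study's own
# detection criteria are statistical rather than a simple threshold like this.
share = sum(f > 0 for f in fractions.values()) / len(fractions)
print(fractions, share)
```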
The Czechoslovakian wolfdog and Saarlos wolfdog, which were purposefully crossbred with wolves, exhibited the highest levels of wolf ancestry, ranging from 23% to 40% of their genomes.
The breeds considered most “wolf-like” include the Great Anglo-French Tricolor Hound (4.7% to 5.7% wolf ancestry) and the Shiloh Shepherd (2.7% wolf ancestry).
The Shiloh Shepherd is the result of breeding efforts that included wolf-dog hybrids aimed at producing healthier, family-friendly sheepdogs in the U.S., while the origins of the significant wolf ancestry in the Great Anglo-French Tricolor Hound (the prevalent modern hunting dog in France) remain enigmatic.
The Tamaskan is another “wolf-like” breed that emerged in the UK during the 1980s by selectively breeding huskies, malamutes, and others to achieve a wolf-like appearance, containing roughly 3.7% wolf ancestry.
Researchers identified several patterns within the data. Larger dogs and those bred for specific tasks, such as arctic sled dogs, “pariah” breeds, and hunting dogs, exhibited higher levels of wolf ancestry.
Terriers, gundogs, and scent hounds typically have the least wolf ancestry on average.
While some large guardian breeds have wolf ancestry, others, such as the Neapolitan Mastiff, Bullmastiff, and St. Bernard, showed no signs of wolf ancestry.
Interestingly, wolf ancestry was also detected in a variety of dog breeds, including the miniature Chihuahua, which has around 0.2% wolf ancestry.
“This shouldn’t surprise anyone who owns a Chihuahua,” Dr. Lin noted.
“What we’ve discovered is that this is actually common. Most dogs have a hint of ‘wolfishness’ in them.”
The authors also analyzed the frequency with which personality traits were assigned to breeds labeled with high versus low levels of wolf ancestry by Kennel Clubs.
Breeds with lower wolf ancestry were often described as “friendly,” followed by terms like “eager to please,” “easy to train,” “courageous,” “active,” and “affectionate.”
Conversely, dogs exhibiting higher wolf ancestry were more frequently characterized as “independent,” “dignified,” “alert,” “loyal,” “discreet,” “territorial,” and “suspicious of strangers.”
Traits such as “smart,” “obedient,” “good with kids,” “dedicated,” “calm,” and “cheerful” appeared with relative consistency across both groups of dogs.
The researchers clarified that these traits reflect a biased assessment of behavior and that it’s uncertain whether wolf genes directly influence these characteristics, though their findings lay the groundwork for future explorations in canine behavioral science.
Additionally, significant adaptations inherited from wolves were uncovered. For instance, wolf ancestry in village dogs is enriched at olfactory receptor genes, useful for locating human food waste, and gene variants inherited from Tibetan wolves help Tibetan mastiffs survive the low-oxygen conditions of the Tibetan Plateau and the Himalayas.
“Dogs are our companions, but it appears that wolves significantly influenced their evolution into the beloved partners we cherish today,” commented Dr. Logan Kistler from the National Museum of Natural History.
“Throughout history, dogs have tackled numerous evolutionary challenges that arise from living alongside humans, such as thriving at high altitudes, foraging for food around villages, and safeguarding their packs. They seem to leverage wolf genes as part of their adaptive toolkit for an ongoing evolutionary success story.”
For more details, check the findings published this week in Proceedings of the National Academy of Sciences.
_____
Audrey T. Lin et al. 2025. The legacy of genetic intertwining with wolves has shaped the modern dog. PNAS 122 (48): e2421768122; doi: 10.1073/pnas.2421768122
Paleontologists analyzed an exceptionally long sauropod trackway at the West Gold Hill dinosaur track site in Colorado, USA. Their findings suggest that the massive dinosaur that made it may have walked with a limp.
Aerial view of the West Gold Hill dinosaur track site in Colorado, USA. Image credit: USDA Forest Service.
Paleontologist Anthony Romilio from the University of Queensland and his team examined over 130 footprints along a 95.5-meter trail that dates back 150 million years.
“This is a remnant from the late Jurassic period, a time when long-necked dinosaurs like Diplodocus and Camarasaurus thrived across North America,” stated Dr. Romilio.
“This track is particularly special because it forms a complete loop.”
“Although the reason for the dinosaur’s turnaround remains unclear, this trajectory provides a rare chance to analyze how the substantial sauropod executed a sharp turn before returning to its original direction.”
“The scale of the West Gold Hill dinosaur track site necessitated a novel approach,” remarked Paul Murphy, a paleontologist from the San Diego Natural History Museum.
“Given the size of the tracks, capturing these footprints from the ground proved to be quite challenging.”
“We utilized a drone to photograph the entire track in high resolution.”
“These images can now be leveraged to create detailed 3D models that can be digitally examined in the lab with millimeter-level accuracy.”
The virtual model reconstructed the sauropod’s movement throughout the entire path.
“It became evident right away that this animal started moving northeast, looped around, and ultimately ended up facing the same direction,” Dr. Romilio explained.
“Within that circular path, we discovered subtle yet consistent indications of its behavior.”
“A notable observation was the variance in width between the left and right footprints, which changed from very narrow to distinctly wide.”
“This transition from narrow to wide footprints suggests that the width may naturally fluctuate as dinosaurs walked. This implies that short segments of seemingly uniform width could misrepresent their typical walking style.”
“We also noted a small but ongoing difference in stride length of roughly 10 cm (4 inches) between the left and right sides.”
“It’s challenging to determine if this signifies a limp or merely a preference for one side.”
“Many extensive dinosaur trails worldwide could benefit from this method to uncover previously hidden behavioral insights.”
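To illustrate what measurements like these involve, here is a minimal sketch of how left/right stride lengths and trackway width might be computed once footprints have been digitized from the drone-derived 3D model. The coordinates are invented, and the team’s actual photogrammetric workflow is far more involved.

```python
# Illustrative only: invented footprint coordinates standing in for digitised track data.
import math

# Hypothetical pes prints as (x, y) positions in metres, in walking order for each foot,
# with the trackway assumed to run roughly along the x-axis.
left_prints  = [(0.00, 0.00), (2.10, 0.05), (4.15, 0.10)]
right_prints = [(1.00, 0.90), (3.05, 0.95), (5.20, 1.00)]

def stride_lengths(prints):
    """Distance between successive prints of the same foot, i.e. one full stride."""
    return [math.dist(a, b) for a, b in zip(prints, prints[1:])]

left_strides = stride_lengths(left_prints)
right_strides = stride_lengths(right_prints)

# A persistent left/right difference of this kind is the signal interpreted above as a
# possible limp or side preference.
mean = lambda xs: sum(xs) / len(xs)
stride_asymmetry = mean(right_strides) - mean(left_strides)

# Trackway width: lateral offset between paired left and right prints.
widths = [abs(r[1] - l[1]) for l, r in zip(left_prints, right_prints)]

print(left_strides, right_strides, round(stride_asymmetry, 3), widths)
```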
The team’s study was published in the journal Geomatics.
_____
Anthony Romilio et al. 2025. Track by track: West Gold Hill Dinosaur Tracking Site (Upper Jurassic, Bluff Sandstone, Colorado) reveals sauropod rotation and lateralized gait. Geomatics 5 (4): 67; doi: 10.3390/geomatics5040067
Unexplained radiation surrounding the Milky Way may hint at dark matter’s composition
Trif/Shutterstock
A mysterious glow detected in the outer regions of the Milky Way may provide the first clues about the nature of dark matter, yet astronomers caution that it’s premature to draw any definitive conclusions.
Dark matter is theorized to account for 85% of the universe’s total mass, but scientists have struggled to identify the particles constituting it.
Among the potential candidates for dark matter are weakly interacting massive particles (WIMPs). These elusive particles are notoriously hard to detect as they seldom interact with normal matter but are believed to occasionally self-annihilate, creating bursts of high-energy radiation in the form of gamma rays.
If dark matter is spread throughout the galaxy as its gravitational effects indicate, and if it consists of WIMPs, we should observe gamma rays as these particles self-annihilate. For over a decade, astronomers have been investigating whether the anomalously high gamma-ray emission from the galactic center could signal this phenomenon, yet conclusive evidence remains elusive.
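For context, the textbook expectation (a standard relation, not a formula quoted in this article) ties the annihilation gamma-ray flux to the square of the dark matter density integrated along the line of sight, which is why a smooth, extended halo glow is the predicted signature:

```latex
% Standard WIMP self-annihilation flux (assuming a self-conjugate particle of mass m_chi):
\frac{d\Phi_\gamma}{dE}
  = \frac{\langle \sigma v \rangle}{8\pi\, m_\chi^{2}} \,\frac{dN_\gamma}{dE}
    \int_{\Delta\Omega}\int_{\text{l.o.s.}} \rho_\chi^{2}(r)\, \mathrm{d}l\, \mathrm{d}\Omega
% The rho^2 dependence means the signal traces where dark matter is densest, producing a
% diffuse glow brightest toward the inner halo rather than a single point source.
```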
Now, Tomonori Totani, a professor at the University of Tokyo, claims he may have detected such a signal emanating from the Milky Way’s outer halo, utilizing 15 years’ worth of observations from NASA’s Fermi Gamma-ray Space Telescope.
Totani devised a model predicting the expected gamma-ray radiation in this region based on established sources like stars, cosmic rays, and vast bubbles of radiation identified above and below the Milky Way. Upon subtracting this known radiation from the total observed by Fermi, he found a residual gamma-ray glow with an energy level around 20 gigaelectronvolts.
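The subtraction logic can be sketched in a few lines of Python. Everything below, the template shapes, amplitudes, and the injected 20 GeV bump, is invented for illustration; it is not Totani’s actual pipeline, only the general idea of fitting known components and inspecting what is left over.

```python
# Illustrative sketch: fit known gamma-ray components and look at the residual spectrum.
import numpy as np

energies = np.geomspace(1.0, 100.0, 30)  # GeV; binning chosen arbitrarily for the sketch

# Stand-in spectra for known emission (resolved sources, cosmic-ray interactions, bubbles).
templates = {
    "sources":     energies ** -2.2,
    "cosmic_rays": energies ** -2.7,
    "bubbles":     energies ** -1.9 * np.exp(-energies / 110.0),
}

def residual(observed, templates):
    """Least-squares fit of template amplitudes; return observed minus the fitted model."""
    design = np.column_stack(list(templates.values()))
    amplitudes, *_ = np.linalg.lstsq(design, observed, rcond=None)
    return observed - design @ amplitudes

# Fake "observed" spectrum: the known components plus an injected bump near 20 GeV,
# standing in for the real Fermi data, which are not reproduced here.
bump = 0.05 * np.exp(-0.5 * ((np.log(energies) - np.log(20.0)) / 0.4) ** 2)
observed = (2.0 * templates["sources"] + 1.5 * templates["cosmic_rays"]
            + 0.8 * templates["bubbles"] + bump)

excess = residual(observed, templates)
print(f"residual peaks near {energies[np.argmax(excess)]:.1f} GeV")  # ~20 GeV here
```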
This specific gamma-ray energy strongly aligns with the theoretically anticipated emission from WIMP self-annihilation, according to Totani. Although he admits it is too early to assert that this gamma-ray excess is definitively due to dark matter, he describes the findings as “the most promising candidate for radiation from dark matter known to date.”
“Though the research began with the aim of identifying dark matter signals, I felt the chances of success were like winning the lottery. When I first observed what seemed to be a signal, I approached it with caution,” says Totani. “However, after thoroughly checking everything and confirming its accuracy, I was filled with excitement.”
“This represents a significant result worthy of further investigation, but firm conclusions cannot be drawn at this stage,” states Francesca Calore from the French National Center for Scientific Research in Annecy. Accurately modeling all gamma-ray sources in the Milky Way, aside from dark matter, is quite complex, and she has not yet examined Totani’s models in depth.
Silvia Manconi of France’s Sorbonne University asserts that the results need additional scrutiny, and more robust models are essential to establish whether the signal is genuine. The absence of corresponding gamma-ray signals from other dark-matter-rich targets, such as dwarf galaxies, also needs to be explained, she says.
Many alternative radiation sources, including radio waves and neutrinos, will also need analysis to ensure the gamma rays aren’t being attributed to something else, says Anthony Brown from Durham University, UK. “Analyzing from just one perspective isn’t sufficient,” he states. “Dark matter necessitates an abundance of high-quality data.”
For nearly a century, dark matter has posed a significant enigma. Although it is thought to outweigh ordinary matter by roughly five to one, it remains invisible and undetectable by current technology.
A daring new analysis of 15 years of data from NASA’s Fermi Gamma-ray Space Telescope now claims to shed light on this mystery.
The latest research reveals the detection of a peculiar halo-like glow of gamma rays surrounding the Milky Way galaxy, with distinct peaks in energy that align closely with the signals predicted for a specific type of hypothetical dark matter particle.
These particles, referred to as weakly interacting massive particles (WIMPs), can generate gamma rays by annihilating one another.
“If this is validated, it would be the first instance where humanity has ‘seen’ dark matter,” stated Professor Tomonori Totani, an astronomer at the University of Tokyo and author of the study.
In an interview with BBC Science Focus, he described his initial skepticism: “When I first noticed what looked like a signal, I was doubtful, but after careful investigation I became convinced it was real. It was an exhilarating moment,” he shared.
However, despite the excitement surrounding the new signals, independent experts caution that this discovery is far from conclusive.
This possible breakthrough emerges nearly a century after Swiss astronomer Fritz Zwicky first proposed dark matter’s existence, after observing that the galaxies in the Coma cluster were moving too swiftly for their visible mass.
Totani’s study, published in the Journal of Cosmology and Astroparticle Physics, scrutinized 15 years of data from the Fermi telescope, focusing on the regions above and below the Milky Way’s main disk, known as the galactic halo.
After modeling and accounting for known sources of gamma rays, such as interstellar gas interactions, cosmic rays, and massive bubbles of high-energy plasma at the galaxy’s center, he identified a leftover component that shouldn’t exist.
“We detected gamma rays with photon energies of around 20 gigaelectronvolts (20 billion electronvolts), extending in a halo-like formation toward the Milky Way’s center,” Totani explained. “This gamma-ray-emitting component aligns with the expected shape of a dark matter halo.”
A gigaelectronvolt (GeV) represents a unit of energy utilized by physicists to quantify subatomic particles’ energy levels—approximately a billion times the energy that a single electron attains when traversing a 1-volt battery.
The potential dark matter signal identified by Toya sharply rises from a few GeV, peaks around 20 GeV, and subsequently declines, consistent with predictions for WIMPs, which possess about 500 times the mass of a proton.
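As a back-of-envelope translation of that mass into the same energy units (the 500-proton-mass figure is from the study as reported above; the proton rest energy of roughly 0.938 GeV is a standard constant):

```latex
m_{\mathrm{WIMP}}\,c^{2} \;\approx\; 500 \times 0.938\ \mathrm{GeV} \;\approx\; 470\ \mathrm{GeV}
% The annihilation photons form a continuum well below this mass-energy, which is
% consistent with a spectrum that rises from a few GeV and peaks near 20 GeV.
```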
This gamma-ray intensity map illustrates a signal that may originate from dark matter encircling the Milky Way halo. The gray horizontal bar in the central area represents the galactic plane, which was excluded from the analysis to avoid strong astrophysical radiation. – Photo credit: Tomonori Totani, University of Tokyo
In Totani’s perspective, this data significantly indicates the existence of dark matter. “This marks a crucial advancement in astronomy and physics,” he asserts.
Nevertheless, Jan Conrad, a professor of astroparticle physics at Stockholm University in Sweden and an independent expert in gamma-ray searches for dark matter, advises prudence.
“Making claims based on Fermi data is notoriously challenging,” he remarked to BBC Science Focus.
This isn’t the first instance of astronomers witnessing such phenomena; the story stretches back to 2009, shortly after the Fermi telescope’s launch. In that year, researchers identified an unexplained surplus of gamma rays emanating from the galactic center.
For years, this finding stood out as a compelling hint of dark matter. However, Conrad pointed out that even after 16 years, the scientific community has yet to arrive at a consensus about the signal’s dark matter roots.
“It was widely thought to be dark matter,” he says. “Despite accumulating data and enhanced methods since then, the question of whether it really is dark matter remains unresolved.”
Even now, researchers who have spent more than a decade studying the galactic center excess can neither definitively prove it is astrophysical in nature (originating from sources other than dark matter) nor confirm it is attributable to dark matter. The issue remains unsolved.
Conrad emphasized that the emerging signals from the halo are insufficiently studied and will likely necessitate many more years of investigation for verification. Both the new halo anomaly and the much-debated galactic center signal share a common challenge: noise interference.
In these regions, gamma rays potentially stemming from dark matter annihilation may also originate from numerous other, poorly understood sources—complicating efforts to reach definitive conclusions.
“The uncertainties surrounding astrophysical sources make it exceedingly difficult to assert strong claims,” Conrad stated.
Despite their differing confidence levels, both Totani and Conrad highlight the same forthcoming focus: dwarf galaxies.
These small, faint galaxies orbiting the Milky Way are believed to contain significant amounts of dark matter while exhibiting minimal astrophysical gamma-ray background, rendering them ideal for studying dark matter annihilation.
“If we detect a similar excess in dwarf galaxies, that would provide compelling evidence,” Conrad said. “Dwarf galaxies provide a much cleaner environment, allowing for potential confirmation.”
Totani concurred, noting, “If the results of this study are validated, it wouldn’t be surprising to observe gamma rays emitted from dwarf galaxies.”
The Cherenkov Telescope Array Observatory (CTAO) is the most sensitive ground-based gamma-ray observatory ever constructed, offering a powerful new approach to scrutinize whether this enigmatic signal is indeed dark matter. – Photo credit: Getty
Yet, the ultimate verification of Totani’s finding might come from closer to home. Experiments designed to detect dark matter directly are currently running in facilities deep underground around the world.
“If we were to observe a signal there that aligns with a WIMP of the same mass…that would present a robust argument, as it would be much cleaner,” Conrad pointed out.
In the coming years, the next-generation Cherenkov Telescope Array Observatory (CTAO) will significantly enhance sensitivity to high-energy gamma rays, enabling researchers to analyze halo signals with greater detail.
“Naturally, if this turns out to be true, it’s a significant discovery,” Conrad said. “The true nature of dark matter remains elusive. A clear signal indicating dark matter particles would be monumental. However, further research is essential to explore alternative explanations for this excess.”
A group of Japanese scientists exposed three structures of the model moss Physcomitrium patens, protonemata (juvenile moss tissue), brood cells (specialized stem cells activated under stress), and sporophytes (capsules enclosing the spores), to simulated space conditions to identify the most resilient. The hardiest of these, the spore-containing sporophytes, were then sent to the exterior of the International Space Station (ISS). After nine months in space, over 80% of the spores survived and retained their capacity to germinate. These findings highlight the ability of land plants like Physcomitrium patens to endure extreme environments, including those of space.
Physcomitrium patens spores demonstrate remarkable resilience to simulated space conditions. Image credit: Meng et al., doi: 10.1016/j.isci.2025.113827.
With the recent rapid changes in the global environment, exploring new avenues for the survival of life beyond Earth has become essential.
Understanding how Earth-origin organisms adapt to extreme and unfamiliar conditions, such as those found in space, is crucial for expanding human habitats on the Moon and Mars.
Researching the survival limits of organisms in both terrestrial and extraterrestrial conditions enhances our comprehension of their adaptability and prepares us for the challenges of ecosystem maintenance.
“Most living organisms, including humans, cannot endure even a brief exposure to the vacuum of space,” explains Dr. Tomomichi Fujita, a researcher at Hokkaido University.
“Yet, the moss spores maintained their vitality even after nine months of direct exposure.”
“This offers astonishing evidence that life forms evolved on Earth possess unique cellular mechanisms to withstand the challenges of space.”
In this study, Dr. Fujita and colleagues examined Physcomitrium patens, a well-studied model moss commonly known as spreading earthmoss, under simulated space conditions, which included high levels of ultraviolet radiation, extreme temperature fluctuations, and vacuum.
They assessed three of the moss’s structures, protonemata, brood cells, and sporophytes, to determine which is best suited for survival in space.
“We anticipated that the combination of space-related stressors, like vacuum, cosmic radiation, extreme temperature changes, and microgravity, would result in greater damage than any isolated stressor,” remarked Dr. Fujita.
The research revealed that UV light posed the greatest threat to survival, with sporophytes exhibiting the highest resilience among the three moss structures.
The juvenile protonemata could not tolerate elevated UV levels or extreme temperatures.
Brood cells fared considerably better, but the spores encased within sporophytes were roughly 1,000 times more resistant to UV light.
These spores survived and germinated after enduring temperatures as low as -196 degrees Celsius for over a week and withstanding heat up to 55 degrees Celsius for a month.
The scientists proposed that the protective structures surrounding the spores may absorb UV light while physically and chemically shielding the spores inside from damage.
This resilience is likely the result of evolutionary adaptations. Moss plants, which evolved from aquatic to terrestrial species approximately 500 million years ago, have survived multiple mass extinctions.
In March 2022, the researchers sent hundreds of sporophytes aboard the Cygnus NG-17 spacecraft to the ISS.
Upon arrival, astronauts affixed the sporophyte samples to the ISS’s exterior, exposing them to space for a total of 283 days.
The spores returned to Earth aboard SpaceX CRS-16 and were back in the laboratory for analysis in January 2023.
“We had anticipated the survival rate to be nearly zero, but the results were the opposite: the majority of spores survived,” said Dr. Fujita.
“We were truly astounded by the remarkable durability of these tiny plant cells.”
Over 80% of the spores survived the round trip to space, and all but 11% of the survivors were able to germinate once back in the lab.
The research team measured chlorophyll levels in the spores, discovering that all types exhibited normal levels, apart from a 20% reduction in chlorophyll a. Though chlorophyll a is sensitive to changes in light, this decrease did not appear to hinder the spores’ health.
“This study exemplifies the incredible resilience of life that has developed on Earth,” said Dr. Fujita.
Curious about the duration spores could survive in space, the researchers utilized pre- and post-expedition data to formulate a mathematical model.
They projected that the encased spores could endure up to 5,600 days, or around 15 years, under space conditions.
However, they emphasize that this estimate requires further validation through larger datasets to more accurately assess how long moss can thrive in space.
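The study’s extrapolation model is not spelled out in this article, so the following is purely an illustrative sketch: assume viability decays exponentially with exposure time, fit the rate to the reported nine-month survival, and extrapolate to an arbitrary viability floor. Only the 283-day exposure and the roughly 80% survival come from the text above; everything else is an assumption.

```python
# Illustrative only: a toy exponential-decay extrapolation, not the authors' model.
import math

days_exposed = 283          # exposure on the ISS exterior, from the article
viability_start = 1.00      # assume essentially full viability at launch
viability_end = 0.80        # "over 80%" survival after exposure

# v(t) = v0 * exp(-k t)  =>  k = ln(v0 / v(t)) / t
k = math.log(viability_start / viability_end) / days_exposed

def days_until(threshold):
    """Days of exposure until viability falls to `threshold` under this toy model."""
    return math.log(viability_start / threshold) / k

# An arbitrary 1% viability floor lands in the same few-thousand-day range as the
# ~5,600-day figure quoted from the study.
print(round(k, 6), round(days_until(0.01)))
```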
“Ultimately, we hope that this research paves the way for developing ecosystems in extraterrestrial environments like the Moon and Mars,” Dr. Fujita concluded.
“We desire that our moss research can serve as a foundation.”
For further details, refer to the paper published in the journal iScience.
_____
Meng Chang Hyun et al. The extreme environmental resistance and space survivability of the moss Physcomitrium patens. iScience, published online November 20, 2025; doi: 10.1016/j.isci.2025.113827
Few things in the universe are as mysterious as dark matter, an unseen, exotic form of matter believed to account for most of the mass within galaxies.
The hypothesis suggests that aligning our current physical theories with observed universe phenomena necessitates the presence of substantial volumes of invisible matter. Scientists are convinced that this “missing mass” is real due to its gravitational pull, although direct detection has eluded them; they can only infer its presence.
Nearly a century after dark matter was first hypothesized, a Japanese astrophysicist claims to have found the first concrete evidence of its existence: gamma rays emanating in a halo-like pattern from around the heart of the Milky Way.
“Naturally, we’re extremely enthusiastic!” said Tomonori Totani, a professor in the astronomy department at the University of Tokyo, in an email to NBC News. “While the research aimed at detecting dark matter, I thought the chances of success felt akin to hitting the jackpot.”
Totani’s assertion of being the first to identify dark matter is met with skepticism by some experts. Nonetheless, the findings, published on Tuesday in the Journal of Cosmology and Astroparticle Physics, shed light on the relentless pursuit of dark matter and the challenges of investigating the unseen in space.
Dark matter is estimated to constitute around 27% of the universe, whereas ordinary matter (like humans, objects, stars, and planets) makes up roughly 5%, according to NASA. The remainder consists of another enigmatic component known as dark energy.
Totani’s research drew on data from NASA’s Fermi Gamma-ray Space Telescope, concentrating on the region around the center of our galaxy. The telescope is adept at capturing a powerful form of electromagnetic radiation called gamma rays.
The idea of dark matter was first proposed by Swiss astronomer Fritz Zwicky in the 1930s, when he detected anomalies in the mass and movement of galaxies within the gigantic Coma cluster. The galaxies were moving faster than their visible mass could explain, implying that unseen matter was holding them together rather than letting them escape the cluster.
The subsequent theory introduced a truly extraordinary form of matter. Dark matter is undetectable because it does not emit, absorb, or reflect light. However, given its theoretical mass and spatial occupation in the universe, its presence can be inferred from its gravitational effects.
Various models strive to elucidate dark matter, but scientists contend that it comprises exotic particles that exhibit different behaviors compared to familiar matter.
One widely considered theory posits that dark matter consists of hypothetical particles known as WIMPs (weakly interacting massive particles), which have minimal interaction with ordinary matter. However, when two WIMPs collide, they can annihilate and emit potent gamma rays.
In his investigation, Totani identified a gamma-ray emission equating to about one millionth of the brightness of the Milky Way. The gamma rays also appeared spread out in a halo-like formation across extensive areas of sky. Had the emission come from discrete point sources, it would more likely indicate that black holes, stars, or other cosmic entities, rather than diffuse dark matter, were generating the gamma rays.
Gamma-ray intensity map covering roughly 100 degrees toward the galactic center. The gray horizontal line in the central section corresponds to the galactic plane, which was excluded from the analysis to avoid strong astrophysical radiation. Tomonori Totani / University of Tokyo
“To my knowledge, there is no known cosmic phenomenon that would produce radiation exhibiting the spherical symmetry and unique energy spectrum observed here,” Totani remarked.
However, certain scientists not associated with the study expressed doubts about the findings.
David Kaplan, a physics and astronomy professor at Johns Hopkins University, emphasized that our understanding of gamma rays is still incomplete, complicating efforts to reliably connect their emissions to dark matter particles.
“We don’t yet know all the forms of matter in the universe capable of generating gamma rays,” Kaplan indicated, adding that these high-energy emissions could also originate from rapidly spinning neutron stars or black holes that consume regular matter and emit energetic jets.
Thus, even when unusual gamma-ray emissions are identified, deriving meaningful interpretations is challenging, noted Eric Charles, a scientist at Stanford University’s SLAC National Accelerator Laboratory.
“There are numerous intricacies we don’t fully grasp, and we observe a plethora of gamma rays across extensive areas of the sky linked with galaxies. It’s particularly difficult to decipher what transpired there,” he explained.
Dillon Brout, an assistant professor in Boston University’s Department of Astronomy and Physics, remarked that the gamma-ray signals and halo-like formations discussed in the study appear in regions of the sky that are “incredibly challenging to model.”
“Therefore, any claims should be treated with utmost caution,” Brout told NBC News via email. “And, naturally, extraordinary claims necessitate extraordinary proof.”
Kaplan labeled the study as “intriguing” and “meriting further investigation,” but remained uncertain if subsequent analyses would substantiate the findings. Nonetheless, he anticipates that future advancements will allow scientists to directly validate dark matter’s existence.
“It would be a monumental shift as it appears poised to dominantly influence the universe,” he stated. “It accounts for the evolution of galaxies and, consequently, stars, planets, us, and is crucial for comprehending the universe’s origin.”
Totani himself acknowledged that further exploration is necessary to confirm or refute his claims.
“If accurate, the outcomes would have such significance that the research community would earnestly evaluate their legitimacy,” he noted. “While I have confidence in my findings, I hope other independent scholars can verify these results.”
A recently identified moon boasts an estimated diameter of 38 kilometers (23.6 miles) and a V magnitude of 28, marking it as the faintest moon ever found orbiting a trans-Neptunian object.
This image of Quaoar and its satellite Weywot was captured by the NASA/ESA Hubble Space Telescope on February 14, 2006. Image credit: NASA / ESA / Hubble / Michael E. Brown.
Discovered on June 4, 2002, Quaoar is a trans-Neptunian object that measures approximately 1,100 km (690 miles) in diameter.
Similar to the dwarf planet Pluto, this object is located in the Kuiper Belt, which is a region filled with icy debris and comet-like entities.
Also designated 2002 LM60, Quaoar orbits between 45.1 and 45.6 astronomical units (AU) from the Sun, completing a full orbit every 284.5 years.
In 2006, astronomers found Quaoar’s moon, Weywot. This moon has a diameter of 80 km (50 miles) and orbits Quaoar at a distance of about 24 Quaoar radii.
Recently, two rings, named Q1R and Q2R, were discovered encircling Quaoar.
“Over the past decade, stellar occultations have shown that rings can exist around small celestial bodies,” explained Benjamin Proudfoot, an astronomer at the Florida Space Institute, along with his colleagues.
“Of these small ring systems, the ring around Quaoar is perhaps the most enigmatic.”
“The two rings located so far are situated beyond Quaoar’s Roche limit and are not uniform along their lengths.”
“Quaoar’s outer ring, referred to as Q1R, seems to be partially confined by a mean-motion resonance with Quaoar’s moon Weywot and by a spin-orbit resonance arising from Quaoar’s triaxial shape.”
“The inner ring, Q2R, appears to be less dense, and its confinement is more ambiguous.”
“Recently, simultaneous dropouts recorded by two telescopes during a stellar occultation indicated the presence of a previously unknown moon or dense ring around Quaoar.”
“The dropout duration suggests a minimum diameter/width of 30 km.”
Artist’s rendition of Quaoar and its two rings, featuring Quaoar’s moon Weywot on the left. Image credit: ESA / Sci.News.
In a recent study, astronomers set out to determine the orbit of this new satellite candidate.
They discovered that the object likely follows a 3.6-day orbit, closely aligned with a 5:3 mean-motion resonance with Quaoar’s outermost known ring.
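For readers who want the orbital arithmetic: in a p:q mean-motion resonance the two periods are in a q:p ratio, and Kepler’s third law then fixes the ratio of orbit sizes. The assumption below, that the ring lies interior to the 3.6-day satellite orbit, is made for illustration.

```latex
% 5:3 mean-motion resonance: ring particles complete 5 orbits for every 3 of the moon.
\frac{T_{\text{ring}}}{T_{\text{sat}}} = \frac{3}{5}
\;\Rightarrow\;
T_{\text{ring}} \approx \tfrac{3}{5}\times 3.6\ \text{d} \approx 2.2\ \text{d},
\qquad
\frac{a_{\text{ring}}}{a_{\text{sat}}} = \left(\tfrac{3}{5}\right)^{2/3} \approx 0.71
% (Kepler's third law, T^2 proportional to a^3, for orbits around the same central body.)
```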
They also considered follow-up observations of the satellite using stellar occultations.
“Quaoar will be favorably positioned against the dense star fields of the Scutum cloud for the next decade, offering the best occultation opportunities of its 286-year orbit,” the researchers stated.
“Current ground-based and space telescopes may struggle to detect the newly identified moon due to its dimness (9 to 10 magnitudes fainter than Quaoar) and its small angular distance from Quaoar.”
“Our analysis of Webb/NIRCam images of the Quaoar system has not shown any definitive evidence of the satellite,” they remarked.
“Direct imaging with present technologies would require considerable telescope time to reacquire the satellite and pin down its orbital phase, even if it were bright enough to be seen.”
“However, future generations of telescopes will likely be able to detect it easily.” The discovery of the new moon suggests that the ring material around Quaoar was likely once part of a broad impact-generated disk, which may have evolved significantly since its formation, the researchers indicated.
“Studying the formation and evolution of Quaoar’s ring and moon system can yield valuable information about the origins of trans-Neptunian objects,” the researchers remarked.
“We advocate for further tidal, hydrodynamic, and collisional modeling of the Quaoar system.”
The principles of thermodynamics, particularly aspects like heat and entropy, provide valuable methods for assessing how far a system of ideal particles is from achieving equilibrium. Nevertheless, it’s uncertain if the existing thermodynamic laws adequately apply to living organisms, whose cells are complexly intertwined. Recent experiments involving human cells might pave the way for the formulation of new principles.
Thermodynamics plays a crucial role in living beings, as their deviations from equilibrium are critical characteristics. Cells, filled with energetic molecules, behave differently than simple structures like beads in a liquid. For instance, living cells maintain a “set point,” operating like an internal thermostat with feedback mechanisms that adjust to keep functions within optimal ranges. Such behaviors may not be effectively described by classical thermodynamics.
N. Narinder and Elisabeth Fischer-Friedrich from the Technical University of Dresden aimed to comprehend how the disequilibrium in living systems diverges from that in non-living ones. They carried out their research using HeLa cells, a line of cancer cells derived from Henrietta Lacks in the 1950s without her consent.
Initially, the scientists employed chemicals to halt cell division, then analyzed the outer membranes of the cells using an atomic force microscope. This highly precise instrument can engage with structures just nanometers in size, enabling researchers to measure how much the membranes fluctuated and how these variations were affected by interference with cell processes, such as hindering the development of certain molecules or the movement of proteins.
The findings showed that conventional thermodynamic models used for non-living systems did not fully apply to living cells. Notably, the concept of “effective temperature” was found to be misleading, as it fails to account for the unique behaviors of living systems.
Instead, the researchers emphasized the significance of “time reversal asymmetry.” This measure captures how biological events (like molecules repeatedly joining to form larger structures only to break apart again) look different when watched forwards versus backwards in time. These asymmetries are directly linked to the functional purposes of biological processes, such as survival and reproduction, according to Fischer-Friedrich.
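As a rough illustration of the idea (not the estimator used in the study), irreversibility can be quantified by counting how often a coarse-grained trajectory makes each transition versus its reverse: at equilibrium the counts balance (detailed balance), while a driven system shows a net imbalance. The toy three-state process below is invented.

```python
# Toy illustration of time-reversal asymmetry: compare forward vs reverse transition counts.
import numpy as np

rng = np.random.default_rng(0)

def biased_cycle(n_steps=100_000, bias=0.5):
    """3-state process: with probability `bias` step 'clockwise', otherwise jump randomly."""
    states = np.empty(n_steps, dtype=int)
    s = 0
    for t in range(n_steps):
        states[t] = s
        s = (s + 1) % 3 if rng.random() < bias else int(rng.integers(0, 3))
    return states

def irreversibility(states, n_states=3):
    """Entropy-production-like rate: sum over pairs of (n_ij - n_ji) * ln(n_ij / n_ji)."""
    counts = np.ones((n_states, n_states))  # +1 pseudocount avoids division by zero
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    total = 0.0
    for i in range(n_states):
        for j in range(i + 1, n_states):
            total += (counts[i, j] - counts[j, i]) * np.log(counts[i, j] / counts[j, i])
    return total / len(states)

print(irreversibility(biased_cycle(bias=0.5)))  # clearly positive: a driven, irreversible process
print(irreversibility(biased_cycle(bias=0.0)))  # close to zero: statistically time-reversible
```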
“In biology, numerous processes rely on a system being out of equilibrium. Understanding how far the system deviates is crucial,” states Chase Broedersz from Vrije Universiteit Amsterdam. The new findings provide a promising metric for assessing this deviation.
This development marks a significant stride toward enhancing our knowledge of active biological systems, observes Yair Shokef at Tel Aviv University. He notes the novelty and utility of the team measuring time-reversal asymmetry alongside other indicators of non-equilibrium simultaneously.
However, to understand life through the lens of thermodynamic principles, further advances are needed. Fischer-Friedrich and her team hope to formulate something akin to a fourth law of thermodynamics that applies specifically to living organisms. They are actively investigating physiological observables (key parameters measurable within cells) from which such laws could potentially be derived.
In a new paper published in Philosophical Transactions of the Royal Society B, researchers Gianmarco Maldarelli and Onur Güntürkün from Ruhr University Bochum emphasize three key areas where birds show significant parallels with mammalian conscious experience: sensory consciousness, its neurobiological foundation, and the nature of self-consciousness.
Maldarelli and Güntürkün demonstrate that there is increasing evidence that (i) birds possess sentience and self-awareness, and (ii) they also have the necessary neural structures for these traits. Image credit: Kutte.
First, research on sensory consciousness reveals that birds do not just automatically respond to stimuli; they also experience them subjectively.
Similar to humans, pigeons can alternate between different interpretations of ambiguous visual signals.
Moreover, crows exhibit neural responses that reflect their subjective perception rather than just the physical presence of a stimulus.
At times, crows consciously recognize a stimulus, while at other times, they do not; certain neurons activate specifically in correspondence to this internal experience.
Second, bird brains possess functional components that satisfy theoretical requirements for conscious processing, despite their differing structures.
“The caudolateral nidopallium (NCL), which is akin to the prefrontal cortex in birds, features extensive connectivity that allows for flexible integration and processing of information,” noted Güntürkün.
“The avian forebrain connectome, illustrating the complete flow of information among brain regions, shows numerous similarities to those of mammals.”
“As such, birds fulfill criteria outlined in many established theories of consciousness, including the global neuronal workspace theory.”
Third, more recent studies indicate that birds may exhibit various forms of self-awareness.
While certain corvid species have successfully passed the traditional mirror test, alternative ecologically relevant versions of the test have unveiled additional self-awareness types in other bird species.
“Research has demonstrated that pigeons and chickens can differentiate their reflections in mirrors from real-life counterparts and respond accordingly,” explained Güntürkün.
“This indicates a fundamental sense of situational self-awareness.”
The results imply that consciousness is an older and more prevalent evolutionary trait than previously believed.
Birds illustrate that conscious processing can occur without a cerebral cortex, achieving similar functional solutions through different brain architectures.
_____
Gianmarco Maldarelli and Onur Gunturkun. 2025. Conscious birds. Philosophical Transactions of the Royal Society B 380 (1939): 20240308; doi: 10.1098/rstb.2024.0308
The wolf, the wild ancestor of dogs, stands as the sole large carnivore domesticated by humans. Nonetheless, the exact nature of this domestication remains a topic of debate: was it the result of direct human control over wild wolves, or a gradual adaptation of wolf populations to human environments? Recent archaeological findings in the Stora Förvar cave on the Swedish island of Stora Karlsö, located in the Baltic Sea, have revealed the remains of two canids with genetic ties to gray wolves. This island, measuring just 2.5 km², has no native land mammals, like its neighbor Gotland, so any mammalian presence must have been introduced by humans.
Canadian Eskimo Dog by John James Audubon and John Bachman.
“The discovery of wolves on such a remote island was entirely unexpected,” remarked Dr. Linus Girdland-Flink, a researcher from the University of Aberdeen.
“They not only had genetic links indistinguishable from other Eurasian wolves but also seemed to coexist and feed alongside humans in areas that were only reachable by boat.”
“This paints a complex picture of the historical dynamics between humans and wolves.”
Genomic analysis of the canid remains indicates they are wolves, not dogs.
However, their traits suggest a level of coexistence with humans.
Isotope analysis of their bones indicates a diet high in marine proteins, such as seals and fish, mirroring the diet of the humans on the island, suggesting they were likely fed.
Furthermore, these wolves were smaller than typical mainland counterparts, and one individual demonstrated signs of low genetic diversity—a common outcome due to isolation or controlled breeding.
These findings challenge long-standing notions regarding the dynamics between wolves and humans and the domestication of dogs.
While it is unclear if these wolves were domesticated, confined, or managed, their presence in human-occupied areas suggests deliberate and ongoing interactions.
“The fact that it was a wolf and not a dog was a complete surprise,” stated Dr. Pontus Skoglund from the Francis Crick Institute.
“This provocative case suggests that under certain conditions, humans may have kept wolves in their habitats and found them valuable.”
“The genetic findings are intriguing,” noted Dr. Anders Bergström from the University of East Anglia.
“We discovered that the wolf with the most complete genome showed less genetic diversity than any ancient wolf previously analyzed.”
“This resembles what is observed in isolated or bottlenecked populations, or in domesticated species.”
“Although we cannot completely dismiss the idea that low genetic diversity may occur naturally, it implies humans were likely interacting with and managing wolves in ways not previously considered.”
One Bronze Age wolf specimen also presented advanced pathology in its limb bones, which would have restricted its mobility.
This suggests care or adaptation to an environment where large prey hunting was unnecessary for survival.
Professor Jan Storå of Stockholm University stated: “The combined data offers new and unexpected perspectives on human-animal interactions during the Stone and Bronze Ages, especially regarding wolves and dogs.”
“These findings imply that prehistoric interactions between humans and wolves were more intricate than previously understood, involving complex relationships that extend beyond simple hunting or avoidance, hinting at new aspects of domestication unrelated to modern dogs.”
A study detailing this research was published on November 24th in the Proceedings of the National Academy of Sciences.
_____
Linus Girdland-Flink et al. 2025. A gray wolf in the anthropogenic setting of a small prehistoric Scandinavian island. PNAS 122 (48): e2421759122; doi: 10.1073/pnas.2421759122
As we grow older, our brains undergo significant rewiring.
Recent studies indicate that this transformation takes place in various stages, or “epochs,” as our neural structures evolve, altering how we think and process information.
For the first time, scientists have pinpointed four key turning points in the typical aging brain: ages 9, 32, 66, and 83. During each of these phases, our brains display distinctly different structural characteristics.
The findings, published Tuesday in Nature Communications, suggest that the human brain does not merely peak and then decline with age. In fact, the research indicates that the interval between ages 9 and 32 is the only period in which our neural networks become increasingly efficient.
In adulthood, from 32 to 66 years, the structure of the average brain stabilizes without significant modifications, leading researchers to believe that intelligence and personality tend to plateau during this time.
Following another turning point, from age 83 and beyond, the brain increasingly relies on specific regions as connections between them slowly deteriorate.
“It’s not a linear progression,” comments lead author Alexa Mousley, a postdoctoral researcher at the University of Cambridge. “This marks an initial step in understanding how brain changes differ with age.”
These insights could shed light on why certain mental health and neurological issues emerge during specific rewiring phases.
Rick Betzel, a neuroscience professor at the University of Minnesota and not a part of the study, remarked that while the findings are intriguing, further data is necessary to substantiate the conclusions. He cautioned that the theory might face challenges over time.
“They undertook a very ambitious effort,” Betzel said about the study. “We shall see where things stand in a few years.”
For their research, Mousley and colleagues examined MRI diffusion scans (images illustrating water molecule movement in the brain) of around 3,800 individuals, ranging from newborns to 90-year-olds. Their objective was to map neural connections at varying life stages.
In the brain, bundles of nerve fibers that convey signals are encased in fatty tissue called myelin, somewhat like insulation around wiring. Water molecules in the brain tend to diffuse along these fibers, allowing researchers to trace neural pathways.
“We can’t open up the skull…we depend on non-invasive techniques,” Betzel mentioned, discussing this form of neuroscience research. “We aim to determine the location of these fiber bundles.”
The study used these MRI scans to chart the neural networks of an average individual across the lifespan, pinpointing where connections strengthen or weaken. The five “eras” described in the paper reflect the patterns of neural connections the researchers observed.
They propose that the initial stage lasts until age nine, during which both gray and white matter rapidly increases. This phase involves the removal of redundant synapses and self-reconstruction.
Between ages 9 and 32, there is an extensive period of rewiring. The brain is characterized by swift communication across its regions and efficient connections.
Most mental health disorders are diagnosed during this interval, Maudsley pointed out. “Is there something about this second phase of life that might predispose individuals to mental health issues?”
From ages 32 to 66, the brain reaches a plateau. It continues to rewire, but this process occurs at a slower and less dramatic pace.
Subsequently, from ages 66 to 83, the brain undergoes “modularization,” where neural networks split into highly interconnected subnetworks with diminished central integration. By age 83, connectivity further declines.
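To make terms like “efficiency” and “modularization” concrete, here is an illustrative sketch using standard graph measures from networkx on stand-in connectomes. The random matrices are placeholders for the study’s age-binned, group-averaged diffusion-MRI networks, which are not reproduced here; turning points correspond to where such curves change character across ages.

```python
# Illustrative only: random stand-in connectomes, not the study's data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def toy_connectome(n_regions=40, density=0.2):
    """Random symmetric adjacency matrix standing in for one age group's average connectome."""
    upper = np.triu((rng.random((n_regions, n_regions)) < density).astype(int), k=1)
    return nx.from_numpy_array(upper + upper.T)

def describe(graph):
    """Global efficiency (how directly regions reach one another) and modularity
    (how strongly the network splits into subnetworks)."""
    efficiency = nx.global_efficiency(graph)
    communities = nx.community.greedy_modularity_communities(graph)
    return efficiency, nx.community.modularity(graph, communities)

# One toy network per "era"; in the study these would be real age-binned connectomes.
for era in ["0-9", "9-32", "32-66", "66-83", "83+"]:
    eff, mod = describe(toy_connectome())
    print(era, round(eff, 3), round(mod, 3))
```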
Betzel expressed that the theory presented in this study is likely reflective of people’s experiences with aging and cognition.
“It’s something we naturally resonate with. I have two young kids, and I often think, ‘They’re transitioning out of toddlerhood,'” Betzel remarked. “Science may eventually uncover the truth. But are they precisely at the correct age? I’m not sure.”
Ideally, researchers would gather MRI diffusion data on a large cohort, scanning each individual across their lifespan, but that was unfeasible decades ago due to technological constraints.
Instead, the team amalgamated nine diverse datasets containing neuroimaging from prior studies, striving to harmonize them.
Betzel noted that these datasets vary in quality and methodology, and attempts to align them may obscure essential variations and introduce bias into the findings.
Nonetheless, he acknowledged that the paper’s authors are “thoughtful” and proficient scientists who did their utmost to mitigate that risk.
“Brain networks evolve throughout life, that’s undeniable. But whether there are five precise moments of transition remains an intriguing open question.”
The wiring of our neurons evolves over the decades
Alexa Mousley, University of Cambridge
Our brain’s functionality isn’t static throughout our lives. We know that our capacity for learning and the risk of cognitive decline fluctuate from infancy to our 90s. Recently, scientists may have uncovered a possible reason for this change. The wiring of our brains seems to experience four key turning points at ages 9, 32, 66, and 83.
Previous studies indicate that our bodies undergo three rapid aging cycles around the ages of 40, 60, and 80. However, the complexity of the brain complicates our understanding.
The brain consists of distinct regions that communicate through white matter tracts. These tracts are wire-like structures formed by long, slender projections known as axons, which extend from neurons, or brain cells. These connections significantly influence cognitive functions, including memory. But it has been unclear how substantially this wiring changes across a lifetime. “No one has combined multiple metrics to characterize stages of brain wiring,” says Alexa Mousley at the University of Cambridge.
In an effort to bridge this knowledge gap, Mousley and her colleagues examined MRI scans of roughly 3,800 individuals from the UK and US, primarily white, spanning ages from newborns to 90 years. These scans were previously gathered as part of various brain imaging initiatives, most of which excluded individuals with neurodegenerative diseases or mental health conditions.
The researchers found that, across a lifespan reaching 90 years, brain wiring typically progresses through five significant stages, separated by four main turning points.
In the initial stage, from birth to age nine, the white matter tracts between brain areas seem to become longer, more intricate, and less efficient. “It takes time for information to travel between regions,” explains Mousley.
This may be due to the abundance of connections in our brains as young children. As we age and gain experience, we gradually eliminate unused connections. Mousley notes that the brain prioritizes making broader connections, useful for activities such as learning the piano, though at the expense of efficiency.
During the second stage, from ages 9 to 32, this trend appears to reverse, potentially driven by the onset of puberty and the hormonal shifts that shape brain development. “Suddenly, your brain’s connections become more efficient. Connections become shorter, allowing information to travel more swiftly,” says Mousley. This could enhance skills such as planning and decision-making, along with cognitive abilities like working memory.
The third stage, which spans from 32 to 66 years, is the longest phase. “During this stage, the brain continues to change, albeit at a slower rate,” Mousley explains. Specifically, she notes that connections between regions tend to become less efficient over time. “It’s unclear what exactly triggers this change; however, the 30s often involve significant lifestyle alterations, like starting a family, which may play a role,” she adds. This inefficiency might also stem from general physical wear and tear, as noted by Katia Rubia from King’s College London.
From ages 66 to 83, the connections between neurons in the same brain area tend to remain more stable than those between different regions. “This is noteworthy, especially as the risk of developing conditions like dementia increases during this period,” Mousley remarks.
In the final stage, from ages 83 to 90, connections between brain regions weaken and rely more heavily on “hubs” that link multiple areas. “This indicates that there are fewer resources available to maintain connections at this age, leading the brain to depend on specific areas to serve as hubs,” Mousley explains.
Understanding these alterations in the brain could provide insights into why mental health issues arise, typically before the age of 25, and why individuals over 65 are particularly vulnerable to dementia, she states.
“It’s vital to comprehend the normal stages of structural changes in the brain throughout the human lifespan, so future research can explore deviations that occur in mental health and neurodegenerative disorders,” Rubia notes. “Grasping the causes of these deviations can assist us in pinpointing treatment strategies. For instance, we might examine which environmental factors or chemicals are responsible for these differences and discover methods to counteract them through treatments, policies, and medications.”
Nevertheless, Rubia emphasizes the need for further research to determine whether these findings apply to a more ethnically and geographically diverse population.
Grain cultivation can produce excess food that can be stored and taxed.
Luis Montaña/Marta Montagna/Science Photo Library
The practice of grain cultivation likely spurred the formation of early states that functioned like protection rackets, as well as the need for written records to document taxation.
There is considerable discussion on how large, organized societies first came into being. Some researchers argue that agriculture laid the groundwork for civilization, while others suggest it emerged from necessity as hunter-gatherer lifestyles became impractical. However, many believe that enhanced agricultural practices led to surpluses that could be stored and taxed, making state formation possible.
“Through the use of fertilization and irrigation, early agricultural societies were able to greatly increase productivity, which in turn facilitated nation building,” says Kit Opie from the University of Bristol, UK.
However, the timelines for these developments do not align precisely. Evidence of agriculture first appeared about 9,000 years ago, with the practice independently invented at least 11 times across four continents. Yet, large-scale societies didn’t arise until approximately 4,000 years later, initially in Mesopotamia and subsequently in Egypt, China, and Mesoamerica.
To explore further, Opie and Quentin Atkinson of the University of Auckland, New Zealand, employed a statistical method inspired by phylogenetics to map the evolution of languages and cultures.
They combined linguistic data with anthropological databases from numerous preindustrial societies to investigate the likely sequence of events, such as the rise of the state, taxation, writing, intensive agriculture, and grain cultivation.
Their findings indicated a connection between intensive agriculture and the emergence of states, though the causality was complex. “It appears that the state may have driven this escalation, rather than the other way around,” Opie notes.
Additionally, they observed that states were significantly less likely to emerge in societies where grains like wheat, barley, rice, and corn were not cultivated extensively; in contrast, states were much more likely to develop in grain-dominant societies.
The results suggested a frequent linkage between grain production and taxation, with taxation being uncommon in grain-deficient societies.
This is largely because grain is easily taxed; it is cultivated in set fields, matures at predictable times, and can be stored for extended periods, simplifying assessment. “Root crops like cassava and potatoes were typically not taxed,” he added. “The premise is that states offer protection to these areas in exchange for taxes.”
Moreover, Opie and Atkinson discovered that societies without taxation rarely developed writing, while those with taxation were far more likely to adopt it. Opie hypothesizes that writing may have been developed to record taxes, following which social elites could establish institutions and laws to sustain a hierarchical society.
The results further indicated that once established, states tended to cease the production of non-cereal crops. “Our evidence strongly suggests that states actively removed root crops, tubers, and fruit trees to maximize land for grain cultivation, as these crops were unsuitable for taxation,” Opie asserted. “People were thus coerced into cultivating specific crops, which had detrimental effects then and continues to impact us today.”
The shift to grain farming coincided with Neolithic population booms, but it has also been linked to subsequent population busts and to declines in general health, stature, and dental health.
“Using phylogenetic methods to study cultural evolution is groundbreaking, but it may oversimplify the richness of human history,” notes Laura Dietrich from the Austrian Archaeological Institute in Vienna. Archaeological records indicate that early intensified agriculture spurred sustained state formation in Southwest Asia, yet the two developments diverged markedly in Europe, a discrepancy she considers an important open question.
David Wengrow points out, “From an archaeological perspective, it has been evident for years that no single ‘driving force’ was responsible for the earlier formation of states in different global regions.” For instance, he states that in Egypt, the initial development of bureaucracy appeared to be more closely related to the organization of royal events than to the need for regular taxation.
For many years, researchers have been intrigued by two massive structures hidden deep beneath the Earth’s surface. These anomalies might possess geochemical characteristics that differ from the surrounding mantle, yet their source remains unclear. Geodynamicist Yoshinori Miyazaki from Rutgers University and his team offer an unexpected explanation regarding these anomalies and their significance in influencing Earth’s capacity to sustain life.
This diagram shows a cross-section that reveals the interior of the early Earth, featuring a hot molten layer situated above the core-mantle boundary. Image credit: Yoshinori Miyazaki/Rutgers University.
The two enigmatic structures, known as large low-shear-velocity provinces and ultra-low velocity zones, lie at the boundary between Earth’s mantle and core, approximately 2,900 km (1,800 miles) beneath the surface.
Large low-shear-velocity provinces are vast, continent-sized masses of hot, dense rock.
One lies beneath Africa, while the other sits beneath the Pacific Ocean.
Ultra-low velocity zones are thin patches of melt that cling to the core like puddles of molten rock.
Both structures significantly slow seismic waves and display unusual compositions.
“These are not random, odd phenomena,” explained Dr. Miyazaki, co-author of the paper, which was published in the journal Nature Geoscience.
“They represent traces of Earth’s primordial history.”
“Understanding their existence could help us unravel how our planet formed and what made it habitable.”
“Billions of years in the past, the Earth was covered by an ocean of magma.”
“While scientists anticipated that as the mantle cooled, it would establish distinctive chemical layers—similar to how frozen juice separates into sweet concentrate and watery ice—seismic surveys have shown otherwise. Instead, large low-shear-velocity provinces and ultra-low velocity zones appear as irregular accumulations in the Earth’s depths.”
“This contradiction sparked our inquiry. When starting with a magma ocean and performing calculations, the outcome does not match the current observations in the Earth’s mantle. A critical factor was missing.”
The researchers propose that over billions of years, elements such as silicon and magnesium may have leached from the core into the mantle, mixing with it and hindering the development of pronounced chemical layers.
This process could explain the odd structure of the large low-shear-velocity provinces and ultra-low velocity zones, which may be the solidified remnants of a basal magma ocean contaminated by core material.
“What we hypothesized is that this material could be leaking from the core,” Dr. Miyazaki noted.
“Incorporating core components might account for our current observations.”
“This discovery goes beyond merely understanding the chemistry of the deep Earth.”
“Interactions between the core and mantle may have shaped the Earth’s cooling process, volcanic activity, and atmospheric evolution.”
“This could help clarify why Earth possesses oceans and life, while Venus is a scorching hothouse and Mars a frozen wasteland.”
“Earth has water, life, and a relatively stable atmosphere.”
“In contrast, Venus’ atmosphere is over a hundred times thicker than Earth’s and is mainly carbon dioxide, while Mars’ atmosphere is much thinner.”
“While we do not fully comprehend why this is the case, the processes occurring within the planet—its cooling and layer evolution—could be a significant part of the explanation.”
By synthesizing seismic data, mineral physics, and geodynamic modeling, the authors argue that the large low-shear-velocity provinces and ultra-low velocity zones offer crucial insights into Earth’s formative processes.
These structures may also contribute to volcanic hotspots like those in Hawaii and Iceland, thereby connecting deep Earth dynamics to the planet’s surface.
“This study exemplifies how the integration of planetary science, geodynamics, and mineral physics can aid in unraveling some of Earth’s long-standing enigmas,” said co-author Dr. Jie Deng, a researcher at Princeton University.
“The notion that the deep mantle may still retain the chemical memory of ancient core-mantle interactions provides fresh perspectives on Earth’s unique evolution.”
“Every new piece of evidence contributes to piecing together Earth’s early narrative, transforming scattered hints into a more coherent picture of our planet’s development.”
“Despite the limited clues we have, we are gradually forming a significant narrative,” Dr. Miyazaki remarked.
“With this research, our confidence in understanding Earth’s evolution and its distinctiveness can now be bolstered.”
_____
J. Deng et al. 2025. Heterogeneity in the deep mantle formed through a basal magma ocean contaminated by core materials. Nature Geoscience 18, 1056-1062; doi: 10.1038/s41561-025-01797-y
NASA’s STEREO (Solar TErrestrial RElations Observatory), the NASA/ESA SOHO (Solar and Heliospheric Observatory), and NASA’s PUNCH (Polarimeter to UNify the Corona and Heliosphere) missions have the rare ability to observe regions of the sky close to the Sun, which enabled them to monitor 3I/ATLAS as it passed behind the Sun from Earth’s perspective.
3I/ATLAS, moving at roughly 209,000 km (130,000 miles) per hour, is visualized in a series of colorized, stacked images captured from September 11 to 25, 2025, using the Heliospheric Imager-1 instrument aboard NASA’s STEREO-A spacecraft. Image credit: NASA / Lowell Observatory / Qicheng Zhang.
STEREO monitored the interstellar comet 3I/ATLAS between September 11 and October 2, 2025.
STEREO’s primary mission is to examine solar activity and its effects across the solar system; it is one of several NASA spacecraft whose comet observations offer insights into the objects’ size, physical characteristics, and chemical makeup.
Initially, it was believed that comet 3I/ATLAS would be too dim for STEREO’s instruments, but advanced image processing using the visible-light telescope Heliospheric Imager-1 and the stacking of images revealed 3I/ATLAS effectively.
By overlaying multiple exposures, the team produced clearer images in which the comet appears slightly brighter at the center.
This image of 3I/ATLAS combines observations from the NASA/ESA SOHO mission between October 15 and 26, 2025. Image credit: NASA / ESA / Lowell Observatory / Qicheng Zhang.
The SOHO spacecraft managed to catch a glimpse of 3I/ATLAS from October 15 to 26, 2025.
During this time frame, the LASCO instrument suite aboard SOHO spotted the comet crossing its field of view from around 358 million km (222 million miles) away, more than twice Earth’s distance from the Sun.
SOHO orbits at Sun-Earth Lagrange Point 1, a gravitational equilibrium point approximately 1.6 million km (1 million miles) closer to the Sun than Earth along the Sun-Earth line.
Members of the SOHO team also utilized stacking techniques to create images of 3I/ATLAS.
In this image, 3I/ATLAS is clearly visible as a bright object in the center, created by consolidating observations from NASA’s PUNCH mission conducted from September 20 to October 3, 2025. Image credit: NASA/Southwest Research Institute.
The PUNCH mission observed 3I/ATLAS from September 20 to October 3, 2025.
These observations indicated that the comet’s tail extended slightly to the lower right.
During this period, the comet was so dim that the PUNCH team was uncertain if the spacecraft would be able to detect it well, given its primary focus on studying the Sun’s atmosphere and solar wind rather than comets.
However, by collecting multiple observations, 3I/ATLAS and its tail became distinctly visible.
“We’re truly pushing the limits of this system,” stated Dr. Kevin Walsh, a planetary scientist at the Southwest Research Institute who led the PUNCH observations of comets.
The recently identified moon has an approximate diameter of 38 kilometers (23.6 miles) and a V magnitude of 28, making it the faintest moon ever found orbiting a trans-Neptunian object.
This image of Quaoar and its satellite Weywot was captured by the NASA/ESA Hubble Space Telescope on February 14, 2006. Image credit: NASA / ESA / Hubble / Michael E. Brown.
Discovered on June 4, 2002, Quaoar is a trans-Neptunian body approximately 1,100 km (690 miles) in diameter.
Similar to the dwarf planet Pluto, Quaoar is located within the Kuiper Belt, a frigid region populated with comet-like objects.
Quaoar, also designated 2002 LM60, orbits between 45.1 and 45.6 astronomical units (AU) from the Sun, completing an orbit every 284.5 years.
In 2006, astronomers confirmed Quaoar’s moon Weywot, which measures about 80 km (50 miles) in diameter and orbits at a radius of roughly 24 Quaoar radii.
Recently, two rings, designated Q1R and Q2R, were identified surrounding Quaoar.
“Stellar occultations over the past decade have revealed the presence of rings around several small celestial bodies,” said Benjamin Proudfoot, an astronomer at the Florida Space Institute, and his colleagues.
“Among these small ring systems, the ring around Quaoar is notably enigmatic.”
“The two rings discovered so far lie well beyond Quaoar’s Roche limit and are heterogeneous.”
“Quaoar’s outer ring, dubbed Q1R, seems to be at least partially confined by a mean-motion resonance with Quaoar’s moon Weywot, as well as by a spin-orbit resonance linked to Quaoar’s triaxial shape.”
“The inner ring, Q2R, appears less dense, and its confinement is less certain.”
“Recently, simultaneous dropouts observed by two telescopes during a stellar occultation indicated the existence of a previously unknown dense ring, or a moon, around Quaoar.”
“The length of the dropout suggests a minimum diameter, or width, of 30 km.”
Artist’s depiction of Quaoar and its two rings, with Quaoar’s satellite Weywot on the left. Image credit: ESA/Sci.News.
In a recent study, astronomers sought to further characterize the orbit of this new satellite candidate.
They determined that the object is likely on a 3.6-day orbit, close to a 5:3 mean-motion resonance with Quaoar’s outermost known ring.
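As a rough back-of-the-envelope illustration (ours, not the study’s), and assuming the candidate moon orbits outside the ring so that it has the longer of the two periods, a 5:3 mean-motion resonance relates the orbital periods as

$$\frac{P_{\mathrm{moon}}}{P_{\mathrm{ring}}} \approx \frac{5}{3} \quad\Rightarrow\quad P_{\mathrm{ring}} \approx \frac{3}{5}\times 3.6\ \mathrm{days} \approx 2.2\ \mathrm{days},$$

meaning material in the outermost ring would complete five orbits for every three orbits of the candidate moon under this assumption.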
Additionally, they explored the potential for observing satellites through further stellar occultations.
“Quaoar will be well-positioned within the constellation Scutum for the next 10 years, providing the best occultation opportunities of its 286-year orbit,” the researchers stated.
“Current ground-based and space-based telescopes will struggle to detect the newly discovered moon, given its faintness (9 to 10 magnitudes fainter than Quaoar) and its small angular separation from Quaoar.”
“Our analysis of Webb/NIRCam images of the Quaoar system did not reveal any convincing evidence of the satellite,” they added.
“Direct imaging with existing equipment would require considerable telescope time to blindly reacquire the satellite’s orbital phase, even if the satellite were detectable.”
“However, future generations of telescopes will likely be able to observe it with ease.” The discovery of this new moon suggests that Quaoar’s rings may have originally formed from a broad impact-generated disk and may have undergone significant evolution since their creation, according to the researchers.
“Studying the formation and evolution of this ring and moon system will yield valuable insights into the development of trans-Neptunian objects,” they remarked.
“We advocate for advanced modeling of tides, hydrodynamics, and collisions in the Quaoar system.”
Conscious experience often fills our lives with joy: the feel of sunlight on the skin, the sound of birdsong, the pleasure of the moment. But it also brings pain. I recently fell down the stairs and my knee hurts; at times I feel pessimistic and distressed. Why have living beings evolved conscious experience at all, encompassing not just positive experiences but also pain and suffering? Dr. Albert Newen from Ruhr-Universität Bochum and Dr. Carlos Montemayor from San Francisco State University suggest distinguishing three fundamental forms of phenomenal consciousness: basic arousal, general attention, and reflexive (self-)consciousness.
Some scholars believe that consciousness is a fundamental property of the universe. Image credit: NASA / ESA / JPL-Caltech / STScI / Sci.News.
“From an evolutionary standpoint, basic arousal was the first to develop, providing the fundamental ability to place the body in a state of alert in life-threatening situations, enabling organisms to survive,” Dr. Newen stated.
“Pain serves as a highly effective means of detecting bodily harm and the related threat to life.”
“This often triggers survival mechanisms such as fleeing or freezing.”
The subsequent evolutionary stage is the emergence of general attention.
This allows you to concentrate on a single item even when overwhelmed with information.
For example, if we see smoke while someone is speaking to us, our focus shifts entirely to the smoke in search of its source.
“This enables us to learn about new correlations. Initially, it establishes a basic causal relationship: smoke comes from a fire and indicates its location,” Dr. Montemayor remarked.
“Furthermore, targeted attention allows us to discern complex scientific relationships.”
Humans, along with certain animals, then develop reflexive (self-)consciousness.
This capability allows for a nuanced reflection not only on ourselves but also on our past and future.
We can create a self-image and incorporate it into our actions and plans.
“Reflexive consciousness, in its fundamental form, developed alongside the two primary forms of consciousness,” Dr. Newen explained.
“In such instances, conscious experience is less about perceiving the surroundings and more about consciously acknowledging aspects of oneself.”
“This encompasses not just the state of your body, but also your perceptions, feelings, thoughts, and actions.”
“A simple example would be recognizing oneself in a mirror, which is a form of reflexive consciousness.”
“Children begin to develop this ability by 18 months, and some animals such as chimpanzees, dolphins, and magpies have demonstrated this as well.”
“The core function of reflexive conscious experience is to enhance our ability to integrate into society and collaborate with others.”
The team’s paper will be published in Philosophical Transactions of the Royal Society B.
_____
Albert Newen and Carlos Montemayor. 2025. Three types of phenomenal consciousness and their functional roles: development of the ALARM theory of consciousness. Phil. Trans. R. Soc. B 380 (1939): 20240314; doi: 10.1098/rstb.2024.0314
Jelly-like midsections, thunderous thighs, and muffin tops — derogatory terms abound for the parts of ourselves we feel insecure about. Many cultures view fat as, at best, mere insulation or an obstacle to be eliminated. However, it’s time to shift this perspective.
While excessive body fat is linked to various health issues such as cancer, heart disease, and type 2 diabetes, it’s noteworthy that not all individuals with obesity experience these adverse effects. This indicates a more complex scenario at play. Our comprehensive cover story reveals that fat is far from being a passive entity. Instead, it functions as a vital, dynamic organ that collaborates with the brain and bones to support overall health.
This essential reevaluation of fat allows us to perceive obesity as a form of organ dysfunction rather than a moral failing. Such a change in perspective can shift the dialogue from stigmatization and fat-shaming to developing effective treatments for obesity. Current research is exploring innovative methods to “reprogram” dysfunctional fat cells to enhance health and even transform “unhealthy” obesity into less harmful variations.
Fat is a crucial and vibrant part of the body, functioning as an organ that helps maintain our well-being
Encouragingly, this transformative approach does not necessitate drastic weight loss. Many advantages of contemporary weight loss medications seem to arise from enhancing the function and distribution of fat rather than merely promoting weight reduction.
Realizing this transformation could revolutionize not only health outcomes but also perceptions of what constitutes a healthy body shape. Yet, the phenomenal success of GLP-1 medications poses a risk of undermining the fat-positive movement and re-igniting outdated moral assessments regarding body size and self-discipline.
However, if fat can indeed be reprogrammed, more individuals may lead longer, healthier lives without the burden of self-consciousness about their size. Understanding the biology of fat and its interactions with the body is the first step towards this goal.
COP30 President André Corrêa do Lago (centre) alongside UN Climate Change Executive Secretary Simon Stiell (left)
Pablo Porciuncula/AFP via Getty Images
The COP30 climate summit held by the United Nations in Brazil faced severe challenges, including heavy rainfall, protests, and an electrical fire in part of the venue. The concluding session was briefly halted over objections to the perceived weakness of the final text.
Despite these hurdles, the global climate process carried on, with nearly every nation except the United States engaging in 12 days of talks in the Amazon to reach a consensus agreement.
Notably, the final agreement omitted any mention of fossil fuels, responsible for a significant portion of greenhouse gas emissions, despite a prior commitment made at COP28 in Dubai to transition away from such energy sources. More than 80 nations at COP30 pushed for a detailed roadmap for moving away from fossil fuels, but oil-exporting nations blocked the key clause under a process that requires consensus among all 194 countries.
“An agreement born out of climate change denial is a failed agreement,” remarked Colombia’s representative Diana Mejía, backed by delegates from Panama and Uruguay, who voiced frustration that Brazil had put the text forward without addressing their comments.
Brazil argued it was unaware of the request but committed to helping draft a roadmap for transitioning away from fossil fuels outside the UN’s framework.
“It’s akin to designing a board game,” commented Natalie Jones of the International Institute for Sustainable Development, reflecting on the stalled transition roadmap. “We’re already playing, yet some are still deliberating over the rule set.”
The final decision, named “Global Mutirão” after an Indigenous Brazilian term for “collective effort,” at least signaled that international collaboration on climate has withstood some severe tests this year, as UN Climate Change Executive Secretary Simon Stiell said in his closing remarks.
President Donald Trump has again pulled the United States, the world’s second-largest emitter, out of the COP process, and Argentina threatened to do the same, raising alarm about a potential collapse of the annual negotiations. At other global conferences this year, the US has sought to derail talks on cutting shipping emissions and reducing plastic pollution.
Corporate entities, industry coalitions, and non-profits have also begun retreating from addressing climate change, with Bill Gates suggesting a focus on poverty and health instead of emissions at COP30.
A decade after the Paris Agreement was struck at COP21 with the aim of keeping global warming well below 2°C above pre-industrial levels, the world is currently on track for around 2.6°C of warming, down from the roughly 4°C projected before the agreement.
In a letter to the UN last year, leading scientists and diplomats expressed concerns that the COP process is “no longer fit for purpose.” However, one of the letter’s signatories, former Irish president Mary Robinson, commented post-COP30 that many nations are moving forward “during a time when multilateralism is under stress.”
The nations reaffirmed their collective commitment to the Paris Agreement and to the findings of the Intergovernmental Panel on Climate Change. A G20 summit declaration issued the same day also backed climate pledges, even though the US stayed away. Joanna Depledge, a COP historian at the University of Cambridge, described it as “a significant pushback against Trump.”
This conveys a strong message to businesses, investors, and local authorities, according to her.
As foreign aid budgets decline and the U.S. eliminates aid agencies, low-income nations are expressing dissatisfaction with historically large polluters for not aiding them in coping with climate challenges. COP30 acknowledged the necessity to devise a “just transition mechanism” for support, also promising to triple adaptation funding, though the specifics remain vague, and the original deadline of 2030 has been postponed to 2035.
“Beyond the just transition mechanism… there’s little to celebrate,” said Harjeet Singh of the Satat Sampada Climate Foundation, which aids climate-vulnerable populations. “We should have aimed higher.”
COP30, convened in Belém at the Amazon’s edge, did not achieve consensus on a plan to halt and reverse deforestation, despite the efforts of over 90 nations. Prior to the summit, however, Brazil launched the Tropical Forest Forever Facility, an investment initiative rewarding countries for maintaining forest areas.
Brazil and other backers have so far contributed $6.6 billion to the fund, far below the $25 billion target. The fund’s operational guidelines need tightening, said Kate Dooley at the University of Melbourne, though she noted it represents a welcome shift away from carbon offsets that deliver no real climate benefit.
“Brazil’s leadership on deforestation could be among the top outcomes from COP30,” remarked Marco Duso, a sustainability consultant at Ernst & Young. “And this leadership is resonating on the global stage.”
Climate change activists march on the sidelines of the COP30 summit in Belém, Brazil
Pablo Porciuncula/AFP via Getty Images
A decade following the Paris Agreement, there should be a significant leap in climate initiatives. Yet, in the past four years, there has been scant advancement, highlighted by the latest COP summit, which did not make substantial progress in phasing out fossil fuels or curbing deforestation. What went wrong?
I cannot provide a clear answer. However, as the planet continues to warm and the consequences become increasingly dire, I fear our responses are leaning toward irrationality instead of rationality. If true, the resulting climate impacts may be far worse, and the decline of our global civilization could become a more plausible scenario than previously imagined.
Let’s revisit the 2015 Paris Agreement. The concept of an international climate accord, wherein each nation would establish its own greenhouse gas emission targets, seemed to me incredibly naive. The ambitious 1.5 degrees Celsius target was a stark shift from prior plans. Advocates claimed progress would be made incrementally through a “ratchet mechanism,” allowing nations to enhance their commitments over time.
I remained skeptical. I left Paris believing the agreement was largely a façade. My expectation was that it would have little immediate effect, but that action would ramp up as the consequences of warming became undeniable. In essence, reason would eventually prevail.
Yet, the opposite has occurred. Based on current policies, the Climate Action Tracker estimated back in 2015 that the world was on course for approximately 3.6°C of warming by 2100. By 2021, that figure was revised to around 2.6°C—a significant improvement, suggesting Paris was making strides.
However, the most recent Climate Action Tracker report prior to the COP30 summit presents grim findings. For four consecutive years, there has been “little or no measurable progress.” The report states, “Global progress remains stagnant.” Although a handful of countries are genuinely advancing, others are stalling or reversing their climate efforts.
While the increase in renewable energy generation is surpassing expectations, it’s counterbalanced by substantial funds still being allocated to fossil fuels. Simply harnessing cheap solar energy won’t suffice. The proliferation of solar installations can lead to diminishing returns on profits. Moreover, although producing green electricity is manageable, progress in more challenging sectors like agriculture, aviation, and steel manufacturing remains inadequate.
In addition, the issue is not solely the failure to reduce emissions; we are also ill-equipped to handle what’s coming. We continue constructing cities on sinking land adjacent to rising seas. As noted in an April report, “Adaptation progress is either too slow, stagnant, or misdirected,” a sentiment echoed by the UK’s Climate Change Committee.
The pressing question is why climate action has plateaued without intensification. In some regions, this is strikingly due to political leaders who either disregard climate change as a priority or blatantly deny it, such as seen with the US’s withdrawal from the Paris Agreement.
Even those governments that vocalize climate change as a priority are taking minimal action, often citing more immediate concerns like the cost of living crisis. However, this crisis is intertwined with climate issues, as escalated severe weather patterns fuel rising food prices. As the climate continues to warm, the repercussions on food production and the broader economy will likely intensify.
Will we reach a point where governments are paralyzed on climate action by the sheer cost of defending major cities from rising seas? Will voters keep backing climate change deniers despite their fears about the state of the world? Most people worldwide say they support stronger climate action.
The notion that mounting evidence will lead leaders to correct course appears ever more naive. We are navigating a strange reality, one in which even the CDC has been caught up in misinformation, baseless anti-vaccination movements undermine public health amid measles outbreaks, and some politicians suggest that hurricanes stem from climate manipulation.
As we continue to break temperature records year after year, the reality of climate change has never been clearer. But perhaps that is part of the problem. The philosopher Martha Nussbaum has argued that fear can drive detrimental behavior, prompting people to abandon rational thought and favor short-term relief over long-term benefit. Research likewise suggests that environmental stress can lead people to act less rationally.
People often leap from acknowledging difficulties to declaring imminent doom. To be clear, we are not doomed. But the longer rational thought is sidelined, the graver the consequences will become. Perhaps what we are witnessing is a transient response to the aftermath of the pandemic and the war in Ukraine. Or perhaps something more troubling is unfolding.
The origins of the sperm swimming mechanism date back to ancient times.
Christoph Burgstedt/Alamy
The evolutionary roots of sperm can be traced to the unicellular forerunners of all existing animals.
Nearly all animals pass through a single-celled stage in their life cycle, and sexual reproduction involves two kinds of sex cells, or gametes. Eggs are large cells that carry genetic information and the nutrients needed for early development, while sperm transport genetic material from one organism to another to fertilize eggs and create new life.
“Sperm plays a crucial role in the process that allows life to be transmitted from generation to generation,” says Arthur Matt at the University of Cambridge. “It carries the legacy of over 700 million years of evolutionary history and is likely linked to the origins of animals themselves. Our aim was to explore this extensive evolutionary narrative to understand the origins of sperm.”
Matt and his team utilized an open science dataset containing information about sperm proteins from 32 animal species, including humans. They combined this data with the genomes of 62 organisms, including various related single-cell groups, to track the evolution of sperm across different animal lineages.
The research revealed a “sperm toolkit” of about 300 gene families that make up an ancestral core sperm genome shared across animals.
“We have now identified numerous significant advancements in sperm mechanisms occurring long before multicellular animals emerged, even before the sperm themselves,” explains Matt.
This indicates that the sperm mechanics, represented by a “flagellum that propels a single cell,” were already evolving prior to the development of multicellular organisms.
In other words, our ancient progenitors were single-celled oceanic swimmers, and the sperm toolkit was already present in those swimming unicellular predecessors long before animals arose.
“Animals evolved multicellularity and cellular differentiation, but they did not create sperm from nothing. They repurposed the body structure of their swimming forebears as the foundation for sperm,” states Matt. “In essence, sperm are not a novel creation of multicellular organisms but are constructed upon the designs of a single-celled organism repurposed for reproduction.”
The study also indicated that the major evolutionary innovations behind the huge variety of sperm seen today mainly affected the cells’ heads, while the tails have changed relatively little since the common ancestor.
Fertilization can occur in many different ways: some sperm must reach an egg inside a body, while others swim through open water, notes Adria Leboeuf, also at the University of Cambridge. “Finding eggs in these different settings presents unique challenges and requires specialized machinery,” she explains. “However, the tail remains well-conserved, since it must be capable of swimming in all of these environments.”
“This illustrates how evolution can modify existing structures instead of creating mechanisms from scratch,” says Jenny Graves, from La Trobe University in Melbourne, Australia.
Life encompasses more than mere figures, yet it often seems otherwise in today’s world. We exist in a time dominated by wearable tech, health tracking, and extreme optimization.
With just a few unobtrusive devices, driven individuals can transform themselves into intelligent data compilers.
We can keep an eye on blood oxygen levels, breathing rates, blood sugar, REM sleep, skin temperature, heart rate variability, body composition, and an array of other biomarkers regularly.
If desired, you can document your meals, mood, menstrual cycles, and even bowel habits.
The goal is to have access to all this information so we can enhance and extend our lives. But how do we extract significance from it?
How can we gain genuine health insights without dedicating hours to computations and organization? Because aside from a few bored billionaires, most of us don’t view our living spreadsheets as truly valuable.
Fortunately, researchers at Northwestern University in the US have some exciting news. In 2025, they identified a way to combine two commonly measured health indicators to give us deeper insight into daily fitness and long-term health risks.
The daily heart rate per step (DHRPS) is a straightforward measure. Simply divide your average daily heart rate by your average step count.
Yes, you’ll need to constantly track both metrics using a health monitor, such as an Apple Watch or Fitbit (the latter being utilized in the research), but the calculations are done automatically.
In just 2 seconds, you can uncover critical information about your cardiovascular health.
“We discovered that [the DHRPS measurement] has a stronger correlation with type 2 diabetes, heart failure, and myocardial infarction, or heart attack,” said Flynn Chen, the lead author of the paper. “It’s significantly more informative than tracking heart rate or step count alone.”
Improving Your Score
Here’s the breakdown: Suppose your average heart rate for the month is 80 beats per minute, and you walk an average of 6,000 steps daily. Your DHRPS score would then be 0.01333.
Now, if you boost your step count to an average of 10,000 steps per day over the following month, your DHRPS should drop to 0.008. In this case, a lower score is preferable.
In their study, Chen and colleagues monitored over 7,000 Fitbit users across five years, during which they recorded more than 50 billion steps.
Taking more steps can effectively benefit your overall health – Photo credit: Getty
The researchers categorized participants into three groups based on their DHRPS scores: low (below 0.0081), moderate (above 0.0081 and below 0.0147), and high (above 0.0147).
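To make the arithmetic concrete, here is a minimal Python sketch (ours, not the researchers’ code) of the DHRPS calculation and the banding described above; the function names are illustrative, and the cut-offs are the values quoted in this article.

```python
# Minimal illustrative sketch of the DHRPS metric described in the article.
# Function names are our own; the band cut-offs (0.0081 and 0.0147) are the
# values quoted above, not an official specification.

def dhrps(avg_daily_heart_rate_bpm: float, avg_daily_steps: float) -> float:
    """Daily heart rate per step: average daily heart rate divided by average daily steps."""
    return avg_daily_heart_rate_bpm / avg_daily_steps

def dhrps_band(score: float) -> str:
    """Classify a DHRPS score into the low/moderate/high bands quoted in the article."""
    if score < 0.0081:
        return "low"
    if score < 0.0147:
        return "moderate"
    return "high"

# Worked example from the article: 80 bpm average heart rate at 6,000 steps/day
# gives roughly 0.0133 (moderate); raising activity to 10,000 steps/day gives
# 0.008 (low) - a lower score is better.
for steps in (6_000, 10_000):
    score = dhrps(80, steps)
    print(f"{steps} steps/day -> DHRPS {score:.4f} ({dhrps_band(score)})")
```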
The simplest way to alter your score is by increasing your step count, Chen suggests.
“Numerous established studies indicate that daily step count is an independent risk factor for cardiovascular disease and overall mortality,” he adds.
“Our ongoing research reveals that heart rate in relation to step count may be an even stronger independent risk factor for cardiovascular disease than step count alone.
“By increasing your step count, you not only pursue the 10,000 steps daily goal, but also improve both metrics simultaneously.”
Chen advises that you need at least a week’s worth of consistent data from your smartwatch or tracker for a meaningful DHRPS score.
The Future of Heart Rate per Step
Since the release of this study, the health tracking community has started utilizing these insights, potentially leading to further advancements as more data becomes available.
“A crucial aspect is that our metrics correlate with VO2 max scores,” Chen mentions.
This is significant because VO2 max measures the highest rate of oxygen consumption during exercise, providing valuable insights regarding your aerobic capacity and metabolic health.
The challenge lies in accurately measuring VO2 max, as it typically requires a treadmill stress test, and such tests have limited availability.
If DHRPS proves to be a reliable indicator of VO2 max, it could serve as another method to simplify health data access for everyone—no spreadsheets needed.
This year’s Atlantic hurricane season produced three Category 5 storms (some of the most potent hurricanes ever documented), yet none made landfall on US soil, and there was an unusual lull during the typically most active stretch of the season. Together, these elements added up to what many are calling a “screwball” season.
That is how atmospheric scientist Phil Klotzbach described it.
“It’s been quite an unusual year,” noted Klotzbach, a hurricane researcher at Colorado State University. “Characterizing this year’s patterns has been challenging.”
Hurricane season officially ends on November 30th. In some ways, 2025 fits the pattern expected as climate change progresses: the late-season hurricanes that did form often escalated rapidly, producing some of the most intense storms on record.
In many respects, it was simply puzzling. Although fewer hurricanes developed than anticipated, nearly all that did reached major storm status. For the first time in a decade, the U.S. mainland avoided any landfalls, underscoring the unpredictable nature of hurricane seasons, despite improvements in forecast accuracy. This is particularly true in a warming climate.
Hurricanes will occur less frequently but with greater intensity.
In May, National Oceanic and Atmospheric Administration forecasters predicted a stronger-than-usual season, estimating six to ten hurricanes, including at least three major storms classified as Category 3 or higher, with winds of 111 miles per hour or more.
Klotzbach’s own forecast was similar, as were those from other hurricane-monitoring organizations; everyone was on the same page.
Ultimately, while the number of hurricanes was lower than expected, four of the five that formed (Erin, Gabrielle, Humberto, Imelda, and Melissa) were classified as major.
Hurricane Imelda impacted Bermuda on October 1st. NOAA
“This marks the highest rate seen in the past 50 years,” remarked Brian McNoldy, a hurricane researcher at the University of Miami’s Rosenstiel School of Ocean, Atmospheric and Earth Sciences.
Additionally, three of those storms reached the Category 5 level, the pinnacle of hurricane intensity.
Despite the limited number of storms, forecasters’ predictions of an above-average season held true, as measured by a metric called accumulated cyclone energy, which gauges the total intensity and duration of tropical cyclones throughout the season.
Klotzbach had estimated the accumulated cyclone energy would reach 125% of the 30-year average; the season concluded at 108%. This indicates that, though fewer storms formed, each one was particularly powerful.
“It wasn’t about quantity this season; rather, it was about intensity,” he commented.
Klotzbach noted that nine of the last ten Atlantic hurricane seasons have been more active than average, attributing this trend to rising ocean temperatures and the La Niña cycle, which generally weakens the upper-level winds that inhibit hurricane formation.
McNoldy, who meticulously tracks Atlantic Ocean temperatures, said 2025 was “unusually warm.”
“Regardless of the storms we experienced, there was undoubtedly a significant amount of fuel available,” McNoldy said. Heat from the ocean promotes evaporation, driving warm, moist air upward and leading to convection. For hurricanes to develop, ocean temperatures must be at least 79 degrees Fahrenheit.
“Plants lack ears and brains, so they can’t experience music like we do…”
Credit: Michele Cornelius/Alamy
Do you serenade your plants? As a botanist passionate about houseplants, I often get asked this. The idea of playing music for plants gained traction in the 1960s, alongside the rise of “music for plants” albums, and it’s making a comeback online. But what does current research reveal about this enduring topic?
Clearly, plants lack ears or brains, so they cannot enjoy music in the way humans do. However, a growing body of recent research indicates that they can detect vibrations in their environment and adjust their behavior accordingly. For instance, mouse-ear cress exposed to the sound of caterpillars chewing produced higher levels of defensive bitter toxins. Astonishingly, plants can differentiate between the vibrations caused by munching insects and those from wind or insect mating calls, activating their defenses only when genuinely threatened.
Moreover, plants react to the sounds of opportunity. Certain flowers, like those of tomatoes, blueberries, and kiwis, ignore the buzzing of non-pollinating bees and release pollen only when stimulated by the vibrations of the right pollinators. This response can be rapid: evening primrose flowers produce nectar richer in sugar within three minutes of being played the sound of bees in flight. Researchers have even reported that pea plants can shift their root growth toward the sound of flowing water.
Nonetheless, as anyone who’s heard a seven-year-old on a recorder can attest, there’s a significant distinction between noise and “music.” Experiments aimed at assessing music’s impact on plant growth yielded mixed results. A recent study found certain music tracks enhanced lettuce growth significantly, while alfalfa showed no improvement.
Another investigation into background noise discovered that sage and marigold plants exposed to 16 hours of continuous traffic noise daily exhibited notably reduced growth. Could this continual noise be obstructing plants’ ability to perceive vital sound cues? At this stage, that remains uncertain.
The takeaway? Recent studies reveal that plants are not entirely oblivious to sound; in fact, they are significantly impacted by it. Yet, much about the specifics remains unclear, so we can’t definitively predict which sounds, at what frequencies or volumes, will yield desired results. So before you consider blasting Katy Perry for your plants’ benefit, remember that they might not appreciate it—and neither will your neighbors.
James Wong is a botanist and science writer focused on food crops, conservation, and environmental issues. Trained at the Royal Botanic Gardens in Kew, London, he personally owns over 500 houseplants in his compact apartment. Follow him on X and Instagram @botanygeek.
Beatie Wolfe (left) and Brian Eno prepare for the launch of their latest album.
Cecily Eno
liminal, Brian Eno and Beatie Wolfe, Verve Records
One sunny October day, I found myself in a field in New Jersey, gazing up at a giant metallic marvel. I was at the Holmdel Horn Antenna, and I can confidently say it was the most peculiar album launch I’ve ever experienced. Beside me stood Nobel Prize laureate Robert Wilson, the astronomer who reshaped our understanding of the universe. In 1964, he and his colleague Arno Penzias uncovered the cosmic microwave background radiation (CMB), a faint energy signature permeating the cosmos and a significant confirmation of the Big Bang theory.
Now joining that cosmic radiation is liminal, the third installment in a trilogy of albums by ambient music pioneer Brian Eno and conceptual artist Beatie Wolfe. Wolfe and Eno call their album “dark matter music,” a fitting description for the enigmatic yet captivating tracks it contains. “This album ties everything together, bringing forth the unseen elements surrounding us,” says Wolfe. Eno elaborates: “There’s a notion that the universe teems with entities we cannot perceive.”
Wilson and his colleague Greg Wright repurposed the Holmdel Horn, transforming the 16-ton antenna from a receiver into a transmitter. We leaned over the signal modulator, trying to catch Wolfe’s resonant voice coming through the tiny apparatus. “Beatie’s voice possesses a beautiful, rich undertone that’s often elusive,” Wilson noted. Through the horn, though, the authentic recording comes through, even if it was inaudible from where I stood.
The beam width is around 1 degree, Wilson explained, and the signal spreads and weakens as it travels. He said the album’s signal is strong enough to be picked up in low Earth orbit, but by the time it reaches the Moon, it will be drowned out by the CMB.
Brian Eno says the album evokes the notion that the universe brims with things we cannot detect
Wright and Wilson pointed the horn skyward, ready to transmit liminal to the stars. The album depicts a surreal landscape, with layered synths and guitars creating lush ambient tracks, interspersed with songs that showcase Wolfe’s poignant vocals. The immersion is hard to articulate: listening felt like slipping off a ship and drifting, liberated, into the vastness of the ocean.
Following the release of the first two albums earlier this year, Luminal and Lateral, this work completes the trilogy. “Frequently, when I revisit my work, I struggle to recall how I crafted it,” Wolfe admits. “Including who actually generated the sounds,” Eno adds. “It’s akin to having an intriguing dialogue with someone; you often forget the nuances of how it unfolded.”
The album flows like a conversation, transitioning between dynamic yet tense tracks such as matrix, coupled with foreboding robotic lyrics amidst a whirlwind of drones. Then it evolves into something all-encompassing and deeply evocative, epitomized by little boy—Eno’s favored track.
“Over the past 70 or 80 years, the most significant development in music has been the ability to create new sonic realms that only exist in a fictional sense,” he explains. “One could employ a year-long reverb or fabricate an infinitely expansive space. What we aim to explore is these novel environments and the experience of existing within them.”
While it’s common to label ambient music as “otherworldly,” liminal offers more than that. Its edges lack polish, rendering the human voice and imperfections audible. “Recognizing that different individuals contributed to these creations was crucial,” says Eno. “Interestingly, this view contributes to my skepticism about AI. While I admire AI-generated content, I often feel a void when I realize it was produced by a machine.”
When I inquired whether they believed someone in space might hear their music after transmitting it, they surprised me by revealing they don’t really consider their audience during the creative process. “The beauty of this music lies in the fact that we weren’t focused on anyone while crafting it. We created it simply because it felt enjoyable, thrilling, and exploratory,” Wolfe reflects.
“Play is integral to science, just as it is to art. All the scientists I know are driven by their fascination. It’s the same underlying motivation, as they feel they’re uncovering something profound and significant.”
I recall Wilson, standing in the room where he transformed our comprehension of the cosmic timeline, smiling at his laptop as he discussed the state of music today. That music is now stretching out beyond the Moon, mingling with the rest of the dark matter on its journey toward the constellation Canis Major.
These ‘murder koalas’, or marsupial lions, are the highlight of the show
Apple TV
In 1999, the BBC introduced Walking with Dinosaurs, pioneering a new format of wildlife “documentaries” showcasing long-extinct species. As a fan of this genre, I found Prehistoric Planet: Ice Age, a production by BBC Studios for Apple TV, to be exceptional.
The earlier Prehistoric Planet series brought dinosaurs vividly to life. Now, this third installment highlights the remarkable mammals that inhabited Earth until relatively recently.
The visuals are breathtaking. You could easily mistake the extinct creatures on screen for real footage, especially their incredibly lifelike eyes.
There were occasional awkward moments in the animals’ movements, but my discerning son remarked, “The only unreal thing is how stunning it looks.”
Paleontologists who previewed the trailer seem genuinely impressed. Ultimately, if you’re at all intrigued by extinct species, Prehistoric Planet: Ice Age is a must-watch.
What I particularly appreciate about this series is its breadth; it’s not solely focused on woolly mammoths fleeing saber-toothed tigers. Iconic Ice Age animals are featured, including giant sloths, woolly rhinos, giant armadillos, scimitar-toothed cats, and Columbian mammoths.
This series explores not just the icy polar regions, but also global ecosystems, showcasing many lesser-known species—including some I had never heard of. The animal deemed the “king of beasts” in Ice Age Africa came as a complete surprise.
Prehistoric Planet: Procoptodon, the giant ice age kangaroo
Apple TV
Another standout was the “murder koala”, or marsupial lion (Thylacoleo). A study published just this month found that koalas are its closest living relatives, and the way the animal is portrayed suggests the producers were aware of this finding in advance. Other Australian creatures, such as the massive marsupial Diprotodon, also make an appearance.
Prehistoric Planet: Ice Age Woolly Mammoth
Apple TV
Additionally, there are charming moments, like a squirrel trying to eat a fruit resembling a giant cannonball, reminiscent of the animated film series Ice Age.
I found the change from David Attenborough to Tom Hiddleston as narrator to be somewhat distracting, as Loki’s voice felt out of place at times.
Interestingly, the series avoids graphic content, perhaps considering a younger audience. I’ll refrain from specifics to avoid spoilers, but I was quite surprised by this approach.
My primary critique is that the final segment discussing the science is brief. I would have preferred more insights from the featured experts, particularly regarding the evidence and rationale behind the actions depicted. Many New Scientist readers might agree with this sentiment, although it could just be my perspective.
While the opening science segment explains why the Ice Ages persisted for so long, it curiously omits carbon dioxide’s role. Falling CO2 was crucial in initiating these Ice Ages, and CO2 feedbacks significantly amplified the effects of orbital variations.
Lastly, keep an eye out for the dire wolves. I’ve written extensively about claims of reviving the dire wolf via gene editing of the gray wolf, and about the misconceptions stemming from the fantasy direwolves of Game of Thrones. This series, by contrast, offers a high-quality, accurate artistic representation of the real animal.
Its dire wolves aren’t just large white wolves: the portrayal captures their distinctive head shape and brownish fur, and this science-based depiction of an extinct creature is a remarkable achievement.
Prehistoric Planet: Ice Age Direwolf
Apple TV
For me, the portrayal of extinct animals on screen represents a critical approach to de-extinction. As we approach the end of a lengthy Ice Age, we face the stark reality that there’s no longer a habitat for these extraordinary species on our planet.
There’s no denying that protein has become a major industry nowadays. A glance at the aisles of your neighborhood grocery store reveals numerous products highlighting their protein content, whether they originate from natural sources like meat and dairy or from processed items such as breakfast cereals and pasta.
Additionally, protein powders are available for those wishing to enhance their protein intake or source protein from non-animal origins, including fitness enthusiasts and vegans.
However, a concerning new report discloses that some of these powders contain another substance alongside protein: lead. Given this revelation, how concerned should you be about protein powder?
Lead Levels
Consumer Reports, an independent nonprofit organization in the United States that assesses the quality of consumer products, evaluated 23 different protein powder and shake formulations.
Their findings, revealed in October, were alarming. More than two-thirds of the products contained lead levels per serving that exceeded what Consumer Reports’ food safety experts deem safe for daily consumption.
Worryingly, certain products contained 10 times the maximum daily intake that Consumer Reports’ experts consider safe.
At first glance, the levels of lead found in items meant for human consumption might appear dangerously high. However, it’s important to remember that Consumer Reports sets a relatively low daily dietary limit of 0.5 micrograms (μg) per day, whereas the U.S. Food and Drug Administration (FDA) has a limit of 12.5 μg per day.
Protein powders are made from proteins sourced from animals like casein and whey derived from milk, or from plant sources like soy, pea, and hemp. Source: Getty
Why is there such a significant difference between these recommendations? “My assumption is that Consumer Reports employs much lower benchmark levels than the FDA to address regulatory gaps,” says Dr. Kathryn Schilling, Assistant Professor of Environmental Health Sciences at Columbia University, USA.
This regulatory gap exists because supplements like protein powders do not fall under the categories of food or drugs in the United States. They are classified as dietary supplements and regulated by different FDA guidelines under the Dietary Supplement Health and Education Act of 1994 (DSHEA).
“There are no federal restrictions on heavy metals in supplements in the United States, and manufacturers aren’t required to demonstrate their products’ safety prior to market entry,” Schilling points out. “Given that research shows there is no safe threshold for lead, Consumer Reports may have established its own targets purely for health protection.”
In the UK and Europe, however, protein powders are considered food rather than dietary supplements, which mandates adherence to standard food safety regulations, including regular contaminant testing. But does this guarantee that UK protein powders are free of lead?
“No,” Schilling asserts. “Even with stricter supervision, trace levels can still emerge.”
The Danger
As Schilling emphasizes, no level of lead is safe. This is echoed by both the World Health Organization (WHO) and environmental health research in which Schilling was involved.
Toxic heavy metal exposure can have severe consequences on vital organs, including the brain, heart, liver, and kidneys; the documented harm is well-established.
For instance, a major U.S. study published in The Lancet Public Health tracked the blood lead levels of 14,000 adults over a 20-year period. Researchers found that individuals with elevated blood lead levels were 37% more likely to die from any cause and 70% more likely to die of heart disease than those with lower levels.
The body retains lead in the calcified tissues of bones and teeth, where it can build up and remain for decades. Source: Getty
Similarly, the WHO estimated in 2019 that excessive lead exposure led to over 300,000 deaths from strokes worldwide. Lead can harm blood vessel linings, resulting in inflammation, plaque accumulation, and high blood pressure. This is why the American Heart Association lists lead as a risk factor for cardiovascular diseases.
One of lead’s most insidious characteristics, apart from the damage it inflicts, is its tendency to persist in the body over extended periods.
“When lead enters the body, it accumulates in bones, teeth, and other tissues,” Schilling explains. “It can remain trapped in the skeleton for 10 to 30 years, gradually re-entering the bloodstream.”
Unfortunately, this coincides with the fact that even minimal lead consumption can result in bodily harm. Even microgram amounts of lead ingested daily are associated with increased risks of heart disease, kidney issues, and high blood pressure.
As noted earlier, the body eliminates lead at a sluggish pace. Therefore, consistent small amounts can accumulate more rapidly than they can be reduced.
Metal Detection
Consumer Reports’ analysis also revealed that the two protein powders containing the highest lead concentrations (up to 6.3 μg and 7.7 μg per serving) were plant-based products.
“There’s a scientific explanation for why some plant-based protein powders exhibited elevated metal levels,” Schilling states.
“Plants like peas, soybeans, and hemp have a tendency to absorb metals from the soil. If lead is present in even small amounts in the soil or irrigation water, the plants will take it up during growth.
“When these plants undergo processing, the metals from the original plants become concentrated in the final protein powder. Thus, the findings by Consumer Reports are plausible. However, their study examined only 23 products, leaving us unaware of the cultivation locations or manufacturing processes of the powders.”
Soy is a vital ingredient in many vegan and vegetarian supplements due to its high protein content. Source: Getty
“Lead is persistent in soil, dust, and outdated infrastructure,” Schilling notes. “It can still intrude into our homes, water supplies, and food; its prevalence in the environment makes it nearly impossible to eliminate all exposure.”
In fact, a 2019 FDA study estimated that the average American adult is exposed to as much as 5.3 micrograms of lead daily through dietary sources alone. If you inadvertently include a scoop of high-lead protein powder in this total, you could easily surpass FDA limits without even being aware of it.
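To put those figures side by side, here is a minimal back-of-the-envelope sketch in Python using only the numbers quoted in this article; it is an illustration of the arithmetic, not a formal exposure assessment.

```python
# A back-of-the-envelope check using only the figures quoted in this article.
BACKGROUND_DIET_UG = 5.3    # FDA estimate of average adult daily dietary lead (micrograms)
FDA_LIMIT_UG = 12.5         # FDA daily limit cited above
CR_LIMIT_UG = 0.5           # Consumer Reports' level of concern cited above

# Lead per serving of the two highest-testing powders in the Consumer Reports results
worst_servings_ug = [6.3, 7.7]

for serving_ug in worst_servings_ug:
    total_ug = BACKGROUND_DIET_UG + serving_ug
    print(
        f"{BACKGROUND_DIET_UG} ug (diet) + {serving_ug} ug (one serving) = {total_ug:.1f} ug/day, "
        f"which is {total_ug / FDA_LIMIT_UG:.0%} of the FDA limit and "
        f"{total_ug / CR_LIMIT_UG:.0f}x the Consumer Reports level"
    )
```

With the worst-testing powder, a single serving on top of an average diet already nudges past the FDA’s 12.5 μg figure, and both scenarios sit more than 20 times above Consumer Reports’ far stricter threshold.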
Even more troubling, Schilling warns that high lead levels in protein powders have long been recognized in the U.S. “We’ve encountered reports like this repeatedly, and little has changed,” she states. “It’s not merely an issue with a single brand or batch; it represents a systemic contamination and oversight problem.”
So, given all this information, how concerned should you be about lead in your protein shakes and powders?
“Protein powder is just one aspect of the bigger picture,” Schilling concludes. “The essential message is not to panic after just one shake, but to acknowledge that even small amounts of lead from various sources can accumulate, highlighting the necessity for enhanced monitoring to remove lead from the products people regularly use.”
The Moon was created through a massive collision between the proto-Earth and the ancient protoplanet Theia. A recent study by a collaborative team of scientists from the United States, Germany, France, and China analyzed iron isotopes in lunar samples, Earth rocks, and meteorites believed to represent the isotope reservoir from which both Theia and early Earth may have formed. Their findings indicate that Theia and most of Earth’s constituent materials originated from the inner solar system, suggesting that Theia formed closer to the sun than Earth.
Artist’s impression of the collision between proto-Earth and Theia. Image credit: MPS/Mark A. Garlick.
“The composition of a body reflects its entire formation history, including its origin,” said Dr. Torsten Kleine, lead author of the study from the Max Planck Institute for Solar System Research.
“The ratio of specific metal isotopes within the body is particularly insightful.”
“Isotopes are different versions of the same element, varying only in neutron count in the atomic nucleus, which affects their weight.”
“In the early solar system, the distribution of isotopes was likely not uniform. For instance, at the solar system’s outer edges, isotopes existed in proportions that differed from those found near the Sun.”
“Thus, the isotopic makeup of a body holds clues about the origins of its components.”
The authors measured iron isotopes in Earth and Moon rocks with exceptional accuracy in this study.
The research involved 15 terrestrial rocks and six lunar samples collected by Apollo astronauts.
The measurements show that Earth and Moon rocks are indistinguishable in their iron isotope ratios, a result that aligns with earlier findings for chromium, calcium, titanium, and zirconium.
However, this very similarity makes it difficult to draw direct conclusions about Theia.
The multiplicity of potential collision scenarios also complicates matters.
While most models suggest that the Moon is largely composed of Theia material, it’s also plausible that it consists primarily of early Earth’s mantle material, or a mix of both Earth and Theia rocks.
To explore Theia’s characteristics, researchers employed a method akin to reverse engineering.
They analyzed the isotope ratios of contemporary Earth and Moon rocks to infer the size and composition of Theia, as well as the early Earth composition that resulted in the current state.
The study examined not only iron isotopes but also those of chromium, molybdenum, and zirconium.
Different elements provide insights into various phases of planetary formation.
Before the catastrophic collision with Theia, a sorting process was occurring within the early Earth.
As the iron core formed, elements like iron and molybdenum were sequestered there, almost completely removing them from the rocky mantle.
Thus, the iron found in Earth’s mantle today may have arrived post-core formation, potentially aboard Theia.
Other elements, like zirconium, which did not sink into the core, encapsulate the entire history of Earth’s formation.
Some mathematically feasible compositions of Theia and early Earth can be dismissed as unlikely.
“The most credible scenario suggests that the majority of components in Earth and Theia originated from the inner solar system,” stated Dr. Timo Hopp, a researcher at the University of Chicago and the Max Planck Institute.
“Earth and Theia were likely neighbors.”
“While the early Earth’s composition can be explained primarily through known meteorite mixtures, the same does not hold for Theia.”
“Distinct classes of meteorites formed in various regions of the outer solar system.”
“These provide a reference for the materials accessible during the early formation of Earth and Theia.”
“However, Theia’s composition may also include previously unidentified substances.”
“We hypothesize that this material originated closer to the Sun than Earth’s building blocks did.”
“Thus, our calculations imply that Theia formed nearer to the Sun than our planet did.”
The results were published in this week’s issue of the journal Science.
_____
Timo Hopp et al. 2025. Theia, the impactor that formed the Moon, originated from within the solar system. Science 390 (6775): 819-823; doi: 10.1126/science.ado0623
The notion that humans might use chemical signals known as pheromones for communication has intrigued scientists and the general public alike for many years, leading to numerous investigations aimed at discovering evidence.
Pheromones are well-documented in the animal kingdom. Ants use chemical trails for navigation and communication, dogs mark their territory with scent signals, and moths emit airborne pheromones to attract partners.
However, the question of whether humans share this capability is much more complex. Can one person elicit a physical or emotional reaction in another without their awareness? Might this influence attraction?
After over six decades of research, the answers remain uncertain, but recent findings indicate we might be getting closer to understanding this phenomenon.
First Whiff
In 1959, Adolf Butenandt and his team identified the first known pheromone, bombykol, a chemical released by female silk moths to attract males.
Shortly after, scientists introduced the term “pheromone” to describe chemical signals emitted by one individual that trigger distinct responses in another of the same species.
This discovery opened the door to exploring potential human equivalents.
One of the earliest notable claims regarding human pheromones was put forth by Martha McClintock in 1971. Her study involving 135 women residing in university dorms suggested their menstrual cycles seemed to synchronize throughout the year.
This phenomenon, termed the “McClintock effect,” was widely regarded as evidence supporting the existence of human pheromones. However, subsequent studies did not replicate these findings and revealed that any apparent synchronization could be attributed to chance.
For many years, researchers have concentrated on four primary chemicals believed to be human pheromones. Androstenone and androstenol are thought to influence social perception and sexual attraction.
Androstadienone has been investigated for its impact on mood and alertness in women, while estratetraenol is believed to affect men’s perceptions of women.
Nonetheless, none of these substances have been definitively established as true human pheromones.
The doses used in studies are often much higher than what the body naturally produces, leading to less reliable outcomes. Furthermore, many experiments suffer from design flaws and weak statistics, resulting in inconsistent and inconclusive findings.
T-Shirt Test
Whenever human pheromones come up in conversation, Professor Claus Wedekind’s “sweaty T-shirt” study from 1995 is likely to be mentioned.
In this experiment, women were asked to smell T-shirts worn by men and indicate their preferences.
Interestingly, women who were not on birth control were more inclined to like the scents of men whose immune system genes (MHC genes) differed most from their own.
This preference aligns with evolutionary theory, as choosing mates with varied immune genes can enhance resistance to diseases in offspring.
This study has been replicated and is frequently hailed as a compelling instance of human chemical signaling, wherein body odor conveys social or biological information.
Yet, the scents involved in this research do not adhere to the strict definition of pheromones.
Most of the odor in sweat comes from a small number of underarm bacteria on your T-shirt, not pheromones. – Photo credit: Getty
For one thing, a person’s complex “smell print” consists of multiple chemicals rather than a single compound. For another, pheromones trigger automatic, unconscious responses such as hormonal changes and instinctive behaviors, whereas this kind of scent perception is subjective and conscious, shaping personal preferences.
Invisible Clues
Although the T-shirt study does not clarify the role of pheromones in humans, some scientists believe that research in this area is far from complete.
Among them is Dr. Tristram Wyatt, a senior research fellow at the University of Oxford’s Department of Zoology, who has dedicated his career to studying the evolution of pheromones.
“If we consider humans as just another animal, it would be surprising to think we do not communicate chemically,” he explains. “For instance, our body odor evolves during puberty and becomes even more pronounced as we reach sexual maturity.
“In other animals, such odors frequently convey critical signals, so it is highly possible that humans emit similar signals; we just haven’t established this scientifically yet.”
The queen bee releases a pheromone that inhibits the reproduction of all other females in the hive – Photo credit: Getty
Even with this potential, pinpointing human pheromones has proven extraordinarily challenging.
“Studying human pheromones is akin to searching for a needle in a haystack,” Wyatt remarks. “Humans release thousands of odor molecules, making it difficult to identify which one triggers certain effects.
“Moreover, our reactions to odors are influenced by cultural, emotional, and individual differences, rendering our responses highly variable. Without reliable bioassays that provide clear, measurable reactions to odors, it is nearly impossible to pinpoint genuine pheromones.”
Another problem is reproducibility; many pheromone studies are based on small sample sizes, which makes their results statistically unreliable and susceptible to false positives.
Early research often lacks strict controls, and the field faces publication bias, increasing the likelihood of positive results being published.
The outcome? An evidentiary basis that appears more robust than it truly is. It comprises a collection of intriguing yet unreliable findings, with only a few holding up under repeat testing.
The Scent is Hot
Despite years of challenges, Wyatt remains hopeful, particularly about recent advances in research, including a French study that may represent the closest step toward identifying a human pheromone.
This investigation centered on secretions from Montgomery’s glands (small glands around the nipples that release tiny droplets during breastfeeding) in nursing mothers.
Researchers found that when newborns were exposed to the scent of these secretions, they instinctively turned their heads, displayed suckling behavior, and began searching for the nipple.
“This is the most exciting lead we’ve encountered to date,” says Wyatt. “Babies respond to these secretions even if they come from a different mother.
“Such a universal, instinctive reaction is precisely what we expect from an authentic pheromone. If we can identify the specific compound responsible, we might finally establish the first verified human pheromone.”
A recent breakthrough in pheromone research occurred in 2023 at the Weizmann Institute of Science in Israel. Researchers studied the effects of tears from women.
Men who smelled tears shed by a woman during a sad film showed decreased testosterone levels, and brain scans indicated changes in areas linked to both aggression and olfactory processing.
The study also revealed four receptors in the nose capable of detecting chemical signals in tears, and researchers are currently working to identify the specific compounds in tear fluid that elicit this response, potentially leading to compounds that mitigate aggression.
Recent research indicates that chemicals in women’s tears significantly affect men’s testosterone levels – Image courtesy of Getty Images
Nevertheless, while there is evidence that humans utilize scent in both social and sexual contexts, it has yet to be definitively proven that pheromones play a role in human communication.
“To conclusively ascertain whether human pheromones exist, rigorous research is necessary,” Wyatt asserts.
“This entails clear testing with consistent responses, larger and better-designed studies, and moving beyond the same old unproven molecules. Only diligent, evidence-driven research will yield real answers.”
“The quest for genuine human pheromones is just at its inception,” he concludes. “With the proper guidance, we could finally be on the brink of an exciting discovery.”
A submerged “storm” is eroding the ice shelf that shields Antarctica’s Thwaites “doomsday” glacier, prompting concerns that scientists may be underestimating future sea level rise.
These storm-like currents, referred to as “submesoscale” features, can extend up to 10 kilometers wide and begin to form when waters of different temperatures and densities meet in the open ocean, much as hurricanes arise where contrasting air masses mix in the atmosphere. Like hurricanes, these currents can surge toward the coast, which in Antarctica is largely fringed by ice shelves—floating extensions of glaciers that project tens of kilometers into the ocean.
“Their movements are so unpredictable that halting them is quite challenging,” states Mattia Poinelli from the University of California, Irvine. “The only course of action is for them to become trapped beneath the ice.”
Poinelli and colleagues’ modeling indicates that these submesoscale features were responsible for one-fifth of the total ice melt beneath the Thwaites and nearby Pine Island ice shelves over a nine-month period. The research marks the first attempt to quantify the influence of these storms across an entire ice shelf.
Ice shelves play a crucial role in hindering the movement of glaciers into the sea and shielding them from wave erosion. The vulnerable Thwaites Glacier annually loses 50 billion tons of ice and could raise sea levels by 65 centimeters if it collapses.
In the Antarctic waters, hundreds of meters of cold, fresh water float above warmer, saltier, deeper water. When a storm becomes enveloped within a cavity beneath an ice shelf, its swirling motions push cold surface water away from the center of the vortex, pulling warmer, deeper water into the cavity and melting the ice shelf from below.
This triggers a feedback mechanism where the melting cold freshwater interacts with the warmer, saltier water, amplifying the rotation of the underwater storm and increasing melting.
In 2022, a deep-sea float measuring temperature, salinity, and pressure was “captured” by a large rotating eddy trapped beneath the Stancomb-Wills ice tongue, at another location along the Antarctic coast. Using the data retrieved from the captured float, Katherine Hancock of Florida State University and her team estimated that the eddy causes 0.11 meters of melting per year beneath the ice tongue.
“This highlights the importance of understanding rotating eddies beneath ice shelves,” says Hancock.
The smaller submesoscale storms from Poinelli’s research are likely causing similar effects, she adds, indicating that swirling water bodies of varying sizes are contributing to significant ice melting. “There’s a need for more precise quantification,” Hancock emphasizes.
As temperatures rise and additional fresh snowmelt escapes from Antarctica, these underwater storms may increase in intensity, possibly leading to greater sea level rise than currently anticipated.
Tiago Dotto of Britain’s National Oceanography Centre stated that the “unexpected” findings necessitate further observations beneath the ice shelf.
“Considering the shifts in wind patterns and sea ice around Antarctica, how much are we genuinely overlooking by not monitoring these smaller scales?” he questions.
Artist’s Impression of Population III Stars in the Early Universe
NOIRLab/NSF/AURA/J. da Silva/Space Engine/M. Zamani
The James Webb Space Telescope (JWST) offers astronomers a unique opportunity to explore distant galaxies dating back to the early universe. Some of these galaxies exhibit chemical signatures that may suggest the presence of exotic supermassive stars, possibly weighing up to 10,000 times as much as our Sun.
These enormous stars are puzzling, as our current understanding suggests that stars in the nearby universe generally have a maximum size limit. “Our models for galaxy evolution are predicated on the assumption that stars cannot exceed around 120 solar masses,” explains Devesh Nandal at the Harvard-Smithsonian Center for Astrophysics, Massachusetts. “While we had theorized about stars potentially larger than this, there were no observational data to validate it.”
That all changed recently. Nandal and his team analyzed JWST observations of a distant galaxy dubbed GS 3073 and discovered that its chemical signature contained an unexpectedly high concentration of nitrogen.
Elevated nitrogen levels have also been noted in several other remote galaxies, but in most of them the concentrations aren’t high enough to cause ambiguity and can be attributed to certain classes of relatively ordinary stars or other cosmic phenomena. That isn’t the case for GS 3073, Nandal asserts: its nitrogen levels are simply too high.
There is a hypothetical category of early, metal-free stars referred to as Population III stars, which models indicate could grow to enormous sizes. Simulations suggest that if such stars form, they would produce significantly more nitrogen than typical stars. Nandal and his co-researchers concluded that only a handful of supermassive Population III stars—ranging from 1,000 to 10,000 solar masses—could account for the excess nitrogen observed in GS 3073. “Our research provides the most compelling evidence yet for the existence of Population III supermassive stars in the early universe,” he declares.
However, some scholars question whether supermassive Population III stars are really the only explanation for these data. “Population III should be linked with an environment where elements heavier than helium are scarce,” notes Roberto Maiolino of the University of Cambridge. “Conversely, GS 3073 is a fairly chemically mature galaxy, which makes it seem ill-suited to the environments typically associated with Population III.”
On the other hand, John Regan from Maynooth University in Ireland suggests that this may simply be an unusual galaxy. “When we look back at the early universe, what we see are incredibly strange, exotic galaxies. It’s challenging to assert that we shouldn’t expect the formation of supermassive stars simply because it’s peculiar; you just claimed these galaxies are quite bizarre,” he states.
If these colossal stars truly exist, they may unlock mysteries related to the formation of supermassive black holes in the universe’s distant past. Should they originate from supermassive stars instead of conventional stars, we could gain critical insights into how these black holes achieved their immense sizes in what appears to be a relatively brief time frame.
Confirming the existence of supermassive stars in GS 3073 and other nitrogen-rich galaxies from the early Universe is complex, and additional discoveries of these chemical signatures may be necessary. “It’s quite challenging to bolster the argument for their existence; establishing definitive signatures is difficult,” Regan lamented. “Nonetheless, this indication is incredibly robust.”
Our bodies comprise various soft, hard, and intricate components. What should we do when these components fail or don’t meet our needs? Medicine provides several solutions, including dentures, skin, heart, and hair transplants, but don’t expect an instant replacement.
In Replaceable You: Adventures in Human Anatomy, popular science author Mary Roach explores the most intriguing historical and current efforts to repair, replace, or enhance our body parts.
These efforts range from dentures designed like mouth piercings, lab-grown anuses, to gene-edited pig hearts, each delivered with a humor that had me laughing, wincing, and holding my breath throughout the pages.
Roach, drawn to the “human element of exploration,” shares engaging tales as she travels the globe to meet surgeons, scientists, patients, and other individuals at the forefront of body modification.
Her bold and often cheeky questions animate these encounters. For example, during a dinner discussion about gut-derived vaginas with her surgeon, she mentions that intestinal tissue generally contracts to aid in food movement.
“That could be advantageous for partners who have penises, right?” she quips. “It’s not overly aggressive,” the surgeon replies, sipping his Chianti.
Roach embraces self-experimentation, visiting a hair transplant surgeon and persuading him to relocate hair follicles from her head to another body area. Her goal? To gaze in wonder at the few long strands that might sprout on her legs. While the transplant fails, she quickly dives into the trials of growing hair from stem cells. Spoiler: we’re not there yet.
One significant innovation Roach covers is ostomy, where surgeons create openings in the abdomen for waste drainage into an external pouch. She speaks with individuals who use stoma bags due to conditions like Crohn’s disease and colitis, which can lead to inflammation and frequent bowel movements, complicating life outside the home. Roach highlights the importance of reducing stigma around ostomies and discusses the remarkable technology supporting this procedure.
As expected from a book on body part replacement, there’s a chapter dedicated to 3D printed organs. Roach approaches this topic thoughtfully, noting that it’s not merely about feeding cells into a printer. Most organs consist of multiple cell types that must be arranged with precise specifications, and printed tissues often lack the authentic properties that remain elusive for researchers.
I highly recommend this book to anyone curious about the human body. However, be advised—some vivid surgical descriptions are included. (If that’s not your cup of tea, feel free to skip the next paragraph.) At one point, Roach compares the tubes of fat and blood pulled from patients to “raspberry smoothies.” Additionally, when a leg implant is affixed to the femur, it sounds like “tent stakes collapsing.”
Such sensory details might not appeal to everyone, but for those willing to confront the raw, sinewy, and delicate reality of our bodies, this book serves as a profound reminder of our complexity and depth. I certainly walked away feeling grateful for all that I have.
Unusual marks found on rocky surfaces in Italy may have been created by a group of sea turtles reacting to an earthquake around 83 million years ago.
Extreme climbers stumbled upon a peculiar feature in a restricted area on the slopes of Monte Conero along Italy’s east coastline.
Over 1,000 prints are evident in two distinct spots. One location is situated over 100 meters above sea level, while the other is a ledge that collapsed onto La Vera Beach. These limestone rocks were formed from fine sediments that settled on the shallow ocean floor during the Cretaceous era.
The climbers’ photographs were subsequently shared with Alessandro Montanari of the Coldigioco Geological Observatory in Italy and his colleagues. The scientists were then granted permission by the Conero Regional Park authority to explore the area both on foot and with drones.
Montanari mentioned that while it is challenging to identify which animal made the marks, the only two types of vertebrates inhabiting the ocean then were fish and marine reptiles. The researchers dismissed fish, plesiosaurs, and mosasaurs, leading to the conclusion that sea turtles are the most probable culprits.
Given the dynamic nature of the ocean floor, the prints must have been buried almost immediately after formation to remain intact, potentially occurring during an earthquake.
“[It may have been] the powerful earthquake that frightened the poor animals, which were peacefully residing in their nutrient-rich shallow-water habitat,” states Montanari.
“In panic, they swam towards the open sea on the west side of the reef, leaving paddle impressions on the soft seabed.”
However, the notion of a turtle swarm remains speculative, and the team is eager to collaborate with ichnologists, specialists in analyzing fossilized tracks, for the next phase of their research.
Anthony Romilio, a researcher from the University of Queensland in Australia, claims that if these marks indeed are from sea turtles, they would be “potentially the most numerous in the world.”
Nevertheless, he has yet to visit the site or view high-resolution images and doubts the prints belong to sea turtles. “The surface patterns do not exhibit the spacing, rhythm, or anatomy expected in a sea turtle’s flipper stroke,” he comments. “I suspect they are abiotic formations rather than biological in origin.”
Dinosaur hunting in Mongolia’s Gobi desert
Join an exciting and unique expedition to uncover dinosaur remains in the expansive wilderness of the Gobi Desert, a renowned hotspot for paleontology.
Jonathan Cohen of Quantum Machines presenting at the AQC25 conference
Quantum Machines
Classical computers are emerging as a critical component in maximizing the functionality of quantum computers. This was a key takeaway from this month’s assembly of researchers who emphasized that classical systems are vital for managing quantum computers, interpreting their outputs, and enhancing future quantum computing methodologies.
Quantum computers operate on qubits—quantum entities manifesting as extremely cold atoms or miniature superconducting circuits. The computational capability of a quantum computer scales with the number of qubits it possesses.
Yet qubits are delicate and require meticulous calibration, monitoring, and control. Without this, computations can go awry and the devices become far less useful. To manage qubits effectively, researchers rely on classical computing methods, and the AQC25 conference, held on November 14th in Boston, Massachusetts, addressed these challenges.
Sponsored by Quantum Machines, a company specializing in controllers for various qubit types, the AQC25 conference gathered over 150 experts, including quantum computing scholars and CEOs from AI startups. Through numerous presentations, attendees elaborated on the enabling technologies vital for the future of quantum computing and how classical computing sometimes acts as a constraint.
Per Shane Caldwell, sustainable fault-tolerant quantum computers designed to tackle practical problems are only expected to materialize with a robust classical computing framework that operates at petascale—similar to today’s leading supercomputers. Although Nvidia does not produce quantum hardware, it has recently introduced a system that links quantum processors (QPUs) to traditional GPUs, which are commonly employed in machine learning and high-performance scientific computing.
Even when everything operates optimally, the results from a quantum computer are readouts of the quantum properties of its qubits. To be useful, that data must be translated into conventional formats, a step that again relies on classical computing resources.
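As a toy illustration of what that translation step can look like, here is a generic sketch, not any specific vendor’s readout pipeline, in which ordinary classical code tallies raw measurement shots (bitstrings invented for this example) into counts and estimated probabilities.

```python
from collections import Counter

# Toy illustration of classical post-processing: raw measurement "shots" from a
# quantum processor arrive as bitstrings, and classical code turns them into
# counts and estimated probabilities. (Generic sketch; the shots are made up.)
shots = ["00", "11", "00", "11", "10", "00", "11", "11"]

counts = Counter(shots)
total = len(shots)
probabilities = {bits: n / total for bits, n in counts.items()}

print(counts)         # Counter({'11': 4, '00': 3, '10': 1})
print(probabilities)  # {'00': 0.375, '11': 0.5, '10': 0.125}
```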
Pooya Ronagh from the Vancouver-based startup 1QBit discussed this translation process and its implications, noting that the performance of fault-tolerant quantum computers can often hinge on the operational efficiency of classical components such as controllers and decoders. This means that whether a sophisticated quantum machine takes hours or days to solve a problem might depend significantly on its classical components.
In another presentation, Benjamin Lienhardt from the Walther Meissner Institute for Low Temperature Research in Germany presented findings on how classical machine learning algorithms can help interpret the quantum states of superconducting qubits. Similarly, Mark Saffman from the University of Wisconsin-Madison highlighted the use of classical neural networks to enhance the readout of qubits made from ultracold atoms. Researchers broadly agreed that non-quantum devices are instrumental in unlocking the potential of various qubit types.
IBM’s Blake Johnson shared insights into a classical decoder his team is developing as part of an ambitious plan to create a quantum supercomputer by 2029. This endeavor will employ unconventional error correction strategies, making the efficient decoding process a significant hurdle.
“As we progress, the trend will shift increasingly towards classical [computing]. The closer one approaches the QPU, the more you can optimize your system’s overall performance,” stated Jonathan Cohen from Quantum Machines.
Classical computing is also instrumental in assessing the design and functionality of future quantum systems. For instance, Izhar Medalcy, co-founder of the startup Quantum Elements, discussed how an AI-powered virtual model of a quantum computer, often referred to as a “digital twin,” can inform actual hardware design decisions.
Representatives from the Quantum Scaling Alliance, co-led by 2025 Nobel Laureate John Martinis, were also present at the conference. This reflects the importance of collaboration between quantum and classical computing realms, bringing together qubit developers, traditional computing giants like Hewlett Packard Enterprise, and computational materials specialists such as the software company Synopsys.
The collective sentiment at the conference was unmistakable. The future of quantum computing is on the horizon, bolstered significantly by experts who have excelled in classical computing environments.
Recent ultraviolet (UV) images from the Imaging Ultraviolet Spectrograph (IUVS) on NASA’s MAVEN (Mars Atmosphere and Volatile Evolution) orbiter have provided unique insights into the interstellar comet 3I/ATLAS, offering details about its chemical composition and the amount of water vapor released as it warms under the Sun. These findings will aid scientists in understanding the past, present, and future of 3I/ATLAS.
This ultraviolet image displays the coma of 3I/ATLAS as observed on October 9, 2025, by NASA’s MAVEN spacecraft utilizing its IUVS camera. The brightest pixel in the center marks the comet’s location, while the surrounding bright pixels show the presence of hydrogen atoms emanating from the comet. Image credit: NASA/Goddard/LASP/CU Boulder.
MAVEN captured images of 3I/ATLAS over a span of 10 days starting September 27, 2025, using the IUVS instrument in two distinct ways.
Initially, IUVS generated multiple images of the comet across several wavelengths, akin to using various filters on a single camera.
Subsequently, high-resolution UV images were obtained to identify the hydrogen emitted by 3I/ATLAS.
Analyzing these images together allows researchers to pinpoint various molecules and gain a deeper understanding of the comet’s makeup.
“The images gathered by MAVEN are truly astounding,” remarked Dr. Shannon Curry, MAVEN’s principal investigator.
“The detections we observe are significant, and we have merely begun our analysis journey.”
This annotated composite image highlights hydrogen atoms from three origins, including 3I/ATLAS (left), captured by NASA’s MAVEN orbiter on September 28, 2025, using an IUVS camera. The bright stripe on the right corresponds to hydrogen released from Mars, while the dark stripe in the center represents interplanetary hydrogen present in the solar system. Image credit: NASA/Goddard/LASP/CU Boulder.
The IUVS data also provide an estimated upper limit on the comet’s ratio of deuterium to ordinary hydrogen, a value that is crucial for tracing a comet’s origin and evolution.
During the comet’s closest approach to Mars, Curry and her team used IUVS’s more sensitive channel to map various atoms and molecules, such as hydrogen and hydroxyl, within the comet’s coma.
Further examination of the comet’s chemical makeup could shed light on its origins and evolutionary journey.
“I experienced a rush of adrenaline when I saw what we had documented,” stated Dr. Justin Deighan, MAVEN’s deputy principal investigator and a member of the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder.
“Every observation we make about this comet will enhance our understanding of interstellar objects.”
The ecological shifts experienced on Easter Island (Rapa Nui) represent one of the most illustrative yet contentious examples in environmental archaeology. The discussion centers on the role of the Polynesian rat (Rattus exulans) in the island’s deforestation, an event that wiped out an estimated 15 million to 19.7 million Rapa Nui palms (Paschalococos disperta) between approximately 1200 and 1650 AD.
Easter Island, known as Rapa Nui to its early inhabitants, is one of the most isolated inhabited islands in the world. It lies approximately 3,512 km from the west coast of Chile, and the nearest inhabited island, Pitcairn, is about 2,075 km away to the west. For reasons still unclear, the early Rapa Nui people began carving giant statues from volcanic rock. These monumental statues, known as moai, are among the most remarkable ancient artifacts ever discovered. Image credit: Bjørn Christian Tørrissen / CC BY-SA 3.0.
These majestic trees can survive for up to 500 years, but are slow-growing, taking around 70 years to mature and bear fruit.
By the time Europeans arrived in 1722, very few palm trees remained, and by the time European naturalists took a close interest in the island’s ecology, the trees had all but disappeared.
“European accounts often describe islands devoid of trees, yet they also mention palm trees and their fronds,” notes Carl Lipo, a professor at Binghamton University.
“It’s uncertain whether they used this term to denote other types of trees.”
When exploring new islands, Polynesians transported various subsistence items such as taro, sweet potatoes, bananas, yams, dogs, chickens, and pigs, along with the omnipresent Polynesian rat.
Unlike the Norway rat (brown rat), which was introduced after European contact, the Polynesian rat is a smaller, more arboreal species that favors the tree canopy, and it provides a wealth of information for researchers.
“Their genetics showcase unique haplotypes due to the ‘founder effect’,” explains Professor Lipo.
“The genetic diversity of rats as they traverse the Pacific allows us to trace human migrations and the frequency of these settlements.”
How these rats came to be aboard Polynesian outrigger canoes is debated. Were they stowaways, or were they intentionally brought along as a backup food source? Ethnographic evidence leans toward the latter.
“After European arrival, a naturalist collecting specimens for the British Museum encountered a man carrying a rat and was told it was destined for lunch.”
Additionally, rat bones have been uncovered in midden deposits, or ancient refuse piles, on various Pacific islands.
Upon their arrival at Rapa Nui around 1200 AD, the rats discovered a predator-free paradise filled with their preferred foods.
Their population surged into the millions within a few years, as they can breed multiple times annually.
“The palm fruit was like candy to the rats. They turned into a significant food source,” Professor Lipo commented.
Rapa Nui’s palm trees had coevolved with birds and did not develop the boom-and-bust production cycle that would have enabled some nuts to withstand rodent exploitation.
As a result, rats consumed the palm fruit, preventing the next generation of trees from establishing.
Simultaneously, humans cleared land for sweet potato fields. This dual pressure led to the deforestation now characteristic of the island.
Alongside plants and animals, Polynesians also incorporated practices such as slash-and-burn agriculture to enhance soil fertility.
Old volcanic islands like Rapa Nui possess poor soil, and rainfall depletes nutrients.
Clearing or burning parts of the forest temporarily rejuvenates soil quality.
Once nutrients are exhausted, farmers relocate, the land recuperates, and trees regrow.
“This pattern is also observable in New Guinea and other regions across the Pacific,” Professor Lipo mentions.
“However, in Rapa Nui, the slow growth of trees and the rats consuming coconuts inhibited regrowth.”
Eventually, the islanders shifted to a farming technique that utilized stone mulch to enrich their crops.
While the reduction of palm forests marked a significant ecological transformation, it was not a disaster solely orchestrated by humans.
The islanders’ survival did not hinge on the palm trees; rather, it depended on the availability of cleared land for agriculture.
Moreover, palms are not hardwoods; they belong to the grass family and do not provide material for canoes, homes, or fuel.
“The loss of palm forests is unfortunate, yet it wasn’t catastrophic for the people,” states Professor Lipo.
“They didn’t rely on them for survival.”
Though some palms may have persisted into European colonization, the introduction of sheep farming in the 19th century likely sealed their extinction, as any remaining seedlings would be consumed by sheep.
Ironically, the Polynesian rat suffered a fate similar to the palm trees: on most islands it was outcompeted by Norway rats or preyed upon by non-native species such as hawks.
Despite changes in species, islanders still discuss the rodents’ cyclical population booms and severe declines.
The narrative of Rapa Nui exemplifies unintended consequences as well as resilience and adaptability in one of the most remote inhabited islands, with its closest neighbor situated 1,931 km (1,200 miles) away.
“A more nuanced perspective on environmental change is essential,” says Professor Lipo.
“We are integral to the natural world and often modify it for our benefit; however, this does not necessarily imply we are creating an unsustainable environment.”
The findings are published in the Journal of Archaeological Science.
_____
Terry L. Hunt and Carl P. Lipo. 2025. Re-evaluating the role of Polynesian rats (Rattus exulans) in the deforestation of Rapa Nui (Easter Island): Faunal evidence and ecological modeling. Journal of Archaeological Science 184: 106388; doi: 10.1016/j.jas.2025.106388
By utilizing data from the NASA/ESA/CSA James Webb Space Telescope along with ESO’s Very Large Telescope (VLT), two separate teams of astronomers have captured mid-infrared images of four intricate spirals of dust encircling a pair of aging Wolf-Rayet stars in the system known as Apep (2XMM J160050.7-514245).
Webb’s mid-infrared images reveal four coiled dust shells surrounding the two Wolf-Rayet stars of Apep. Image credit: NASA / ESA / CSA / STScI / Yinuo Han (California Institute of Technology) / Ryan White (Macquarie University) / Alyssa Pagan (STScI).
Wolf-Rayet stars represent a rare class of massive stars in which some of the universe’s earliest carbon was forged.
There are estimated to be only around 1,000 of these stars in the Milky Way galaxy, which contains hundreds of billions of stars in total.
Among the many Wolf-Rayet binaries observed so far, Apep stands out as the only known system in our galaxy in which both stars are Wolf-Rayet stars.
In a recent study, astronomer Ryan White from Macquarie University and his team set out to refine the orbital characteristics of the Wolf-Rayet stars in the Apep system.
They integrated precise ring position measurements from the Webb images with the shell’s expansion rate obtained over eight years of VLT observations.
“This is a unique system with a very extended orbital period,” White mentioned.
“The next longest orbit for a dusty Wolf-Rayet binary is roughly 30 years, while most orbits tend to span between 2 and 10 years.”
One of the team’s papers was published concurrently in the Astrophysical Journal alongside another study led by astronomer Yinuo Han from the California Institute of Technology.
“Observing the new Webb data felt like stepping into a dark room and flipping on a light switch. Everything became visible,” Dr. Han remarked.
“Dust is abundant throughout the Webb image, and telescope observations indicate that much of it is fragmenting into repeating and predictable structures.”
Webb’s observations yielded unprecedented images. It produced a clear mid-infrared image revealing a system of four swirling spirals of dust, each expanding in a consistent pattern. Ground-based telescopes had only identified one shell prior to Webb’s discoveries.
By merging the Webb imagery with several years of VLT data, the team refined the orbital period of the star pair to roughly 190 years.
During this remarkably long orbit, the stars approach each other closely for about 25 years, and it is during this close passage that dust forms.
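The logic behind that refinement can be captured in a single division: each close approach launches one dust shell, so the angular spacing between consecutive shells divided by their angular expansion rate yields the orbital period, with no need to know the system’s distance. Below is a minimal sketch of that arithmetic; the numbers are hypothetical, chosen purely for illustration rather than being the team’s measured values.

```python
def orbital_period_years(shell_spacing_arcsec: float, expansion_rate_arcsec_per_year: float) -> float:
    """One dust shell is launched per orbit, during the stars' close approach, so
    consecutive shells are separated by (expansion rate x orbital period). Dividing
    the angular spacing between shells by the angular expansion rate therefore gives
    the period, without needing the distance to the system."""
    return shell_spacing_arcsec / expansion_rate_arcsec_per_year

# Hypothetical numbers for illustration only (not the measured Apep values):
# shells 15 arcsec apart, expanding at about 0.079 arcsec per year -> ~190-year orbit.
print(orbital_period_years(15.0, 0.079))  # ~190
```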
Additionally, Webb’s observations confirmed that a third star is gravitationally bound to the system.
The dust expelled by the two Wolf-Rayet stars is being carved up by this third star, a massive supergiant, which punches holes in the dust cloud as it sweeps along its wide orbit.
“Webb has provided us with the ‘smoking gun’ evidence to confirm that a third star is gravitationally linked to this system,” Dr. Han noted.
Researchers had been aware of a possible third star since the VLT observed the system’s bright inner shell in 2018, but Webb’s findings refined the geometric model and cemented the connection.
“We unraveled several mysteries with Webb,” Dr. Han added.
“The lingering mystery remains the precise distance from Earth to the star, which will necessitate further observations.”
_____
Ryan M.T. White et al. 2025. Snake eating its own tail: Dust destruction in the Apep colliding-wind nebula. ApJ 994, 121; doi: 10.3847/1538-4357/adfbe1
Yinuo Han et al. 2025. JWST reveals the formation and evolution of dust in Apep, a colliding-wind binary. ApJ 994, 122; doi: 10.3847/1538-4357/ae12e5