Astronomers have mapped the vertical structure of Uranus's ionosphere for the first time, uncovering unexpected temperature peaks, a decline in ion density, and enigmatic dark regions shaped by the planet's unusual magnetic field. The findings, drawn from nearly a full day of observations with the NIRSpec instrument aboard the NASA/ESA/CSA James Webb Space Telescope, confirm a decades-long cooling trend in Uranus's upper atmosphere and offer an unprecedented look at how the ice giant interacts with its space environment differently from other bodies in the solar system.
Tiranti et al. mapped the vertical structure of Uranus’s upper atmosphere, revealing variations in temperature and charged particles across different heights. Image credits: NASA / ESA / CSA / Webb / STScI / P. Tiranti / H. Melin / M. Zamani, ESA & Webb.
Uranus’s upper atmosphere remains one of the least understood components in our solar system, despite its critical role in elucidating the interactions between the giant planet and its space environment.
Astronomer Paola Tiranti from Northumbria University and her team dedicated nearly an entire day to observing Uranus with Webb’s NIRSpec instrument.
They successfully measured the vertical structure of the ionosphere, the electrically charged layer of the atmosphere where auroras occur.
“This is the first time we’ve been able to visualize Uranus’s upper atmosphere in three dimensions,” Tiranti remarked.
“Utilizing Webb’s sensitivity, we can investigate how energy migrates upward through the planet’s atmosphere, and even observe the effects of its magnetic field.”
Measurements revealed temperature peaks at approximately 3,000 to 4,000 km in altitude, while ion density peaked around 1,000 km and was significantly lower than models had predicted.
Webb also identified two bright bands of auroral emission located near Uranus’s magnetic poles, along with an unexpected area of depleted emission and density, likely tied to the planet’s unusual magnetic field geometry.
These discoveries confirm a long-term cooling trend in Uranus’ upper atmosphere and highlight new structures shaped by its magnetic environment.
These findings offer critical benchmarks for future missions and enhance our comprehension of how giant planets—both within and beyond our solar system—maintain the energy balance in their upper atmospheres.
“Uranus’ magnetosphere is one of the most peculiar in the solar system,” Tiranti emphasized.
“Its tilt and offset from the planet’s rotational axis cause its auroras to be distributed in a complex fashion across the surface.”
“Webb has provided insights into how deeply these effects penetrate into the atmosphere.”
“By detailing Uranus’s vertical structure so thoroughly, Webb aids in our understanding of the energy balance of the ice giant.”
“This represents a significant step toward characterizing giant planets beyond our solar system.”
For further details, refer to the results published in the journal Geophysical Research Letters.
_____
Paola I. Tiranti et al. 2026. JWST uncovers the vertical structure of Uranus’ ionosphere. Geophysical Research Letters 53 (4): e2025GL119304; doi: 10.1029/2025GL119304
Cancer disrupts multiple layers of the biological blueprint, including the order of DNA sequences and the chemical markers on DNA known as DNA methylation. In cancer patients, tumor samples obtained from areas like the colon or skin contain a blend of healthy cells, which exhibit normal levels of methylation, alongside cancer cells that show abnormal methylation patterns. This mixture complicates doctors’ efforts to differentiate between the two and identify which methylation signals are genuinely sourced from the tumor.
Moreover, harvesting tumors directly often necessitates painful surgical procedures. Some scientists propose using blood samples as an alternative for initial diagnosis. However, blood samples generally face the same challenge, frequently containing only minute traces of cancer DNA.
Traditionally, scientists have averaged the methylation levels of numerous DNA fragments from patient samples to estimate the proportions of cancerous and normal DNA present. Unfortunately, this conventional approach discards valuable information about rare and subtle disruptions to DNA. Researchers in Germany and Belgium contend that this missing information is vital for the early detection and diagnosis of cancer. Consequently, they have introduced a new analytical tool, named MethylBERT, that analyzes DNA methylation on individual DNA sequences, ensuring these subtle details are preserved.
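As a concrete illustration of the averaging approach described above, a sample's mean methylation can be modeled as a linear mix of tumor and normal reference levels. This is a minimal sketch under that assumption; the function name and reference values are invented for illustration, not taken from the study.

```python
# Minimal sketch of the conventional "bulk average" estimate: the sample's
# mean methylation is modeled as a linear mix of known tumor and normal
# reference levels, and the tumor fraction is solved from that single average.
# All values and names here are illustrative, not from the study.

def bulk_tumor_fraction(sample_avg: float, tumor_ref: float, normal_ref: float) -> float:
    """Solve sample_avg = f*tumor_ref + (1-f)*normal_ref for f, clamped to [0, 1]."""
    f = (sample_avg - normal_ref) / (tumor_ref - normal_ref)
    return max(0.0, min(1.0, f))

# Tumor reference 0.9, normal reference 0.1, observed average 0.3
# -> roughly 25% of the DNA is estimated to be tumor-derived.
f = bulk_tumor_fraction(0.3, tumor_ref=0.9, normal_ref=0.1)
```

A single average like this is exactly what discards read-level detail: two samples with very different per-fragment methylation patterns can produce the same mean.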
The team built MethylBERT on the same transformer architecture that powers modern language models such as ChatGPT, re-engineering it to read the language of DNA and its methylation signals rather than human language. Each DNA sequence served as a concise “sentence” for the model to analyze when discerning the differences between tumor and normal DNA.
The researchers trained MethylBERT in two phases. Initially, they exposed it to a template dataset derived from the human reference genome, which helped the model recognize patterns in DNA sequences independent of methylation or disease information. This step is akin to teaching students to read using only the letters that form words, without additional context. The model became adept at distinguishing various three-letter DNA combinations, recognizing that certain bases, particularly C and G among A, T, C, and G, appear in specific patterns. The pre-training step proved crucial; omitting it would prevent the model from accurately distinguishing cancer cells from normal cells.
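The three-letter combinations mentioned above can be produced with a simple overlapping-window tokenizer. This sketch is illustrative; the function name and details are assumptions, not the tool's actual implementation.

```python
# Illustrative 3-mer tokenizer: breaks a DNA read into the overlapping
# three-letter "words" a BERT-style model consumes. Function name and
# details are assumptions, not the actual MethylBERT code.

def to_3mers(seq: str) -> list[str]:
    """Return every overlapping 3-letter token in a DNA sequence."""
    seq = seq.upper()
    return [seq[i:i + 3] for i in range(len(seq) - 2)]

tokens = to_3mers("ATCGCG")
# tokens == ["ATC", "TCG", "CGC", "GCG"]
```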
In the second phase, they fine-tuned the pre-trained model using DNA sequences from actual cancerous and healthy samples, teaching the model to identify known tumor-specific methylation patterns. This strategy parallels instructing students on grammar, which adds context and meaning to words. The model learned that certain DNA regions exhibit high methylation levels in tumors and low or negligible methylation in normal cells, or vice versa. They devised a system that generates a probability score, indicating how likely each DNA fragment originates from tumor or normal tissue.
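One simple way to turn those per-fragment probability scores into a sample-level estimate is to count the reads the model calls tumor-derived. This is a hedged sketch of that idea, not necessarily the estimator the authors use.

```python
# Illustrative aggregation of per-read tumor probabilities: call each read
# tumor or normal at a threshold, then report the tumor-read share as a
# rough tumor-fraction estimate. Not the paper's actual estimator.

def tumor_fraction(read_probs: list[float], threshold: float = 0.5) -> float:
    """Share of reads whose tumor probability exceeds the threshold."""
    if not read_probs:
        return 0.0
    tumor_reads = sum(1 for p in read_probs if p > threshold)
    return tumor_reads / len(read_probs)

# 2 of these 5 reads score above 0.5 -> estimated tumor fraction 0.4
frac = tumor_fraction([0.9, 0.1, 0.8, 0.2, 0.3])
```

Unlike the bulk average, this read-level view retains which individual fragments carried the abnormal signal, which matters when tumor DNA is only a tiny trace of the sample.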
The team evaluated MethylBERT against existing methods using simulated DNA sequence data of varying complexity. Their method accurately detected cancer DNA even at genomic locations with few sequence reads, where traditional methods often falter. They also identified very small quantities of tumor DNA in the blood of colorectal and pancreatic cancer patients, further validating its applicability to non-invasive cancer detection.
Scientists noted that training models on human genome data is time-consuming, so they assessed whether a model trained on the mouse genome could analyze human cancer samples. Remarkably, the mouse-trained model performed nearly as well as the human-trained model when applied to human cancer data, resulting in only minor differences in the probability distribution. The researchers attributed this efficacy to the consistent organization of DNA across mammals, enabling models to transfer knowledge from one organism to another.
The researchers concluded that MethylBERT can identify cancer DNA in sequence data obtained from any sequencing platform, irrespective of the complexity of the methylation signal or the size of the tumor DNA in the sample. They also cautioned that the current version requires substantial computational resources for training and operation and have already commenced development on a more efficient iteration.
Beneath the Earth’s surface lies a largely unexplored ecosystem known as the critical zone. This unique region of soil stretches from the Earth’s surface to the base of the groundwater zone, acting as a dynamic interface where rock, water, air, and life converge. Although these deep soils hold far less carbon and fewer nutrients than surface soils, the microbial communities found in them are remarkably diverse. Scientists are still uncovering how these microorganisms manage to thrive under such nutrient-scarce conditions.
To explore how microbes survive in the critical zone, researchers focused on a little-known group of bacteria identified in deep soils around the world. Known as CSP1-3, this candidate bacterial phylum was first discovered in 2006 in a geothermal system in Yellowstone National Park. Since then, these bacteria have been found in various oxygen-limited and nutrient-poor environments, yet their exact role and characteristics remain mysterious.
Researchers collected samples from seven deep soil cores reaching depths of up to 20 meters (approximately 65 feet) in Shaanxi province, China, and in western Iowa, USA. By extracting and sequencing environmental DNA from these samples, they pieced together draft genomes of the microorganisms inhabiting these depths. Through metagenomic analyses, they aimed to uncover where CSP1-3 microbes live, what they consume, how they cycle nutrients, and which adaptations facilitate their survival.
Analysis revealed CSP1-3 bacteria were abundant in deeper soils, comprising over 10% of all microorganisms found in 30 out of 86 soil layers below 5 meters (16 feet). In some layers, such as those at 17 meters (56 ft) and 22 meters (72 ft) deep, CSP1-3 accounted for up to 60% of the microbial population. Using DNA copy-counting methods, researchers estimated that nearly 50% of CSP1-3 cells in these deep soils were actively replicating.
Based on the assembled metagenomes, the research indicated that CSP1-3 bacteria utilize a flexible metabolism to thrive in deep soils. They identified genes that allow these bacteria to alternate between two methods of obtaining energy: autotrophy, which involves producing their own food, and heterotrophy, which entails consuming organic matter from their environment. This adaptability, referred to as mixotrophy, allows them to respond to varying nutrient availability.
Additionally, researchers uncovered genes enabling CSP1-3 bacteria to utilize diverse energy sources such as carbon monoxide (CO) and hydrogen gas (H2), both prevalent in deep soils. They also identified genes allowing these microbes to generate energy under varying oxygen conditions, an advantage in environments where oxygen levels fluctuate. Genes for synthesizing sugars such as trehalose contribute further to their endurance in resource-limited conditions, alongside genes linked to carbon, nitrogen, and sulfur cycling.
The team analyzed 521 genomes from diverse environments globally, including aquatic habitats, topsoil, and deep soil, to trace the evolutionary lineage of CSP1-3. Genome analysis indicated that these bacteria’s ancestors originated in aquatic settings before transitioning to topsoil and ultimately to deep soil, with significant genomic changes that augmented their carbohydrate and energy metabolism to facilitate adaptation to terrestrial ecosystems.
The researchers concluded that CSP1-3 bacteria are evolutionarily suited to thrive in deep, nutrient-poor soils thanks to their specialized metabolism and low-energy survival strategies. They posited that CSP1-3 plays a crucial role in energy and nutrient cycling, potentially influencing global environmental processes by enhancing soil fertility and nutrient availability and thereby stabilizing deep soil ecosystems. The ability of these microorganisms to draw energy from gases in nutrient-deficient environments offers compelling insight into survival under extreme conditions. However, further investigation is needed to fully understand how these deep soil microbes shape soil chemistry and ecosystem functions over time.
Detecting decay in meat is often challenging. Fresh-looking meat inside a sealed package can conceal harmful microorganisms. Food poisoning affects millions of people globally each year, with more than 200 diseases linked to unsafe food.
Consumers unknowingly ingest spoiled meat containing biogenic amines (BAs). Food inspectors traditionally detect these compounds through direct sampling and extensive lab analysis. However, once meat is packaged for retail, such testing becomes time-consuming and impractical, making spoilage hard to identify.
Researchers from the China Institute of Food Science and Technology have devised a novel approach for visually detecting spoilage inside sealed food packages. They utilized tiny carbon-based particles known as carbon dots, just thousandths of the width of a human hair across. These nanoscale dots absorb ultraviolet light and emit visible fluorescence, with the color depending on their chemical environment. Although most carbon dots emit blue-green light, researchers are working to shift this fluorescence to a more noticeable red hue for easier identification.
The team synthesized the carbon dots by dissolving citric acid and o-phenylenediamine (OPD), a nitrogen-rich compound known for enhancing red fluorescence, in ethanol. They heated this mixture at 220 °C (428 °F) for six hours and then purified it by centrifugation and filtration. To fine-tune the dots’ fluorescence, the researchers incorporated different halogens, developing OPD variants containing fluorine, chlorine, bromine, and iodine.
For sensitivity testing, researchers added up to 50 milligrams per liter (mg/L) of BAs to each carbon dot solution. They noted distinct fluorescence color changes after mixing for five minutes, with the chlorinated variant displaying the most pronounced transformation from orange-red to yellow. This reaction is attributed to BAs interacting with chlorinated carbon dots, altering their surface properties and resulting in color changes. Consequently, chlorinated carbon dots were identified as optimal indicators for visual BA detection. The biosensor was created by soaking filter paper in a 5 mg/mL chlorinated carbon dot solution for 30 minutes, followed by a 15-minute drying process at 37 °C (99 °F).
To evaluate real-world effectiveness, the researchers placed pork, beef, and mutton in separate plastic trays, attaching the biosensor underneath the lid. They sealed the trays and stored them at 25 °C (77 °F) under ultraviolet light. As a control, a similar tray was prepared containing only a moist sponge and the biosensor, without meat. Results indicated that the biosensors in the pork and mutton trays turned bright yellow after 24 hours, while the beef biosensor changed color after 36 hours. The control biosensor exhibited no noticeable change.
Additionally, the team developed a smartphone app for color analysis, which processes images and reports color values. The app computes numerical ratios between the red, green, and blue color components, enabling objective assessment of the color changes linked to spoilage. The researchers then compared these values with total volatile basic nitrogen (TVB-N), a widely used index of meat spoilage and freshness. They found a strong linear correlation between TVB-N values and their data, confirming that the biosensor’s color changes reliably indicated spoilage.
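The kind of ratio computation such an app performs can be sketched in a few lines. The specific red-to-green ratio below is an assumption for illustration, not necessarily the exact metric used in the study.

```python
# Hedged sketch of a color-ratio readout: average the R, G, B channels over
# the sensor region, then track the red-to-green ratio, which falls as the
# fluorescence shifts from orange-red (R >> G) toward yellow (R ~ G).
# The exact ratio used in the study is an assumption here.

def mean_rgb(pixels):
    """Average the R, G, B channels over a list of (r, g, b) pixels."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b

def red_green_ratio(pixels):
    """R/G ratio of the averaged color; lower values suggest spoilage."""
    r, g, _ = mean_rgb(pixels)
    return r / g

fresh = [(220, 110, 30), (210, 100, 25)]    # orange-red: red dominates green
spoiled = [(230, 210, 40), (225, 205, 35)]  # yellow: red and green comparable
assert red_green_ratio(fresh) > red_green_ratio(spoiled)
```

Correlating such a ratio against measured TVB-N values is then a standard linear-fit exercise.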
In conclusion, the research team created an efficient process for producing color-changing carbon dots that function as visual spoilage sensors. Integrated into food packaging, they enable real-time assessment of meat freshness using only ultraviolet light and a smartphone. This technology holds potential to enhance food safety, improve supply chain management, and reduce food waste.
A significant, long-term study indicates that engaging in brain-training video games may provide protection against dementia for decades. Experts deem this the most compelling evidence to date that cognitive training can yield enduring alterations in brain function.
“This is quite unexpected,” remarked Marilyn Albert, director of the Alzheimer’s Disease Research Center at Johns Hopkins University. “It’s not at all what I anticipated.”
This groundbreaking study, published Monday in the journal Alzheimer’s & Dementia: Translational Research & Clinical Interventions, follows the Advanced Cognitive Training for Independent and Vital Older Adults (ACTIVE) trial.
The researchers discovered that participants who engaged in up to 23 hours of a specialized cognitive training known as speed training over a three-year span exhibited a striking 25% decrease in the risk of developing Alzheimer’s disease and other forms of dementia during a follow-up period of 20 years.
The ACTIVE study was a comprehensive randomized controlled trial funded by the National Institutes of Health (NIH), involving around 3,000 participants aged 65 and older, hailing from six geographic regions and showing no prior major cognitive impairment. About 25% of participants were minorities, and the majority were women.
Women are especially vulnerable to Alzheimer’s disease, developing dementia at nearly double the rate of men.
Initially, study participants trained twice a week in sessions of 60 to 75 minutes, for up to 10 sessions over five weeks. Approximately half of each training group then received booster sessions over the following three years, bringing total training time to as much as 23 hours.
Researchers monitored medical records through Medicare to track dementia diagnoses in participants throughout the 20-year follow-up. Various forms of dementia, including Alzheimer’s disease, vascular dementia, and frontotemporal dementia, were aggregated into one category.
Participants who underwent speed training along with booster sessions exhibited a 25% lower risk of being diagnosed with dementia compared to the control group, while those who did not receive additional training showed no benefits.
“The findings suggest that a relatively small input of effort can yield substantial benefits over the long term,” stated Dr. Richard Isaacson, a preventive neurologist at the Neurodegenerative Disease Institute in Boca Raton, Florida, who was not involved in this study.
Dr. Thomas Wisniewski, chair of the Department of Cognitive Neurology at New York University Langone Health, praised the study results as “remarkable,” asserting this is the strongest evidence to support cognitive training’s efficacy.
“This is the first conclusive documentation in a randomized controlled trial indicating that some forms of cognitive training can diminish dementia risk,” added Wisniewski, who was also not involved in the study.
Participants were assigned to one of three cognitive training programs: speed training, memory training, and reasoning training, with a control group that received no training.
Dr. Sanjla Singh, a physician-scientist and lecturer in neurology at Harvard Medical School, explained that speed training focuses on enhancing the brain’s ability to process visual information quickly and effectively. This involves quickly identifying items on a screen and making corresponding decisions.
Albert compares this thought process to the situational awareness required when driving. “When we’re driving and must pay attention to multiple things happening around us, we need to discern what’s relevant and what’s not,” she elaborated.
In memory training, participants learned to memorize a series of words and strategies for retaining story details, such as creating mental images and associations.
Reasoning training involved exercises aimed at enhancing problem-solving skills based on identifiable patterns, such as recognizing sequences in letters or numbers.
However, no significant protective effect against dementia was observed in those who participated in memory and reasoning training alone.
Researchers remain uncertain about why speed training proved beneficial while the other forms did not; one theory relates to the distinction between implicit and explicit learning.
Implicit learning refers to acquiring unconscious habits and skills, like riding a bike. In contrast, explicit learning entails consciously memorizing facts, such as vocabulary from flashcards.
Albert noted that implicit and explicit learning processes engage different regions of the brain.
“Once the brain adapts to these skills, the changes can persist even without ongoing practice,” Singh remarked. “For example, a child can learn to ride a bike in around 10 hours, and that skill lasts a lifetime.”
Screenshot from the Double Decision game. BrainHQ
Speed training is similarly thought to foster long-term alterations in the brain, a phenomenon defined by neuroplasticity—the brain’s capacity to adapt and reconfigure itself in response to lifelong learning.
Dr. Kellyanne Niotis, a preventive neurologist and clinical assistant professor of neurology at Weill Cornell Medical College, stated that speed training can significantly impact cognitive reserve—the brain’s ability to withstand dementia’s effects, which builds over time through various factors, including education, mentally engaging activities, and social engagement.
“We believe this visual processing speed training engages broader neural networks, thereby enhancing the brain’s resilience and cognitive reserve,” she explained.
Another hypothesis for the efficacy of speed training is its adaptive nature, meaning the difficulty escalates according to an individual’s performance. Those who initially excelled quickly progressed to more challenging tasks, a feature not seen in other forms of training.
Should I start speed training?
The speed training used in this study was devised by psychologists Karlene Ball and Daniel Roenker with support from an NIH grant. The program has since been refined and is now available as an exercise called “Double Decision” on BrainHQ, an online subscription platform.
BrainHQ’s Double Decision game (available in various difficulty levels). BrainHQ
Based on the study results, Albert recommends this training for individuals aged 65 and older, akin to the study’s demographic.
However, early signs of Alzheimer’s disease can emerge decades before symptoms appear, suggesting that people in their 40s or 50s might also see protective benefits; still, Albert cautioned against drawing early conclusions about the advantages for younger individuals.
While these trial results are promising, experts emphasize that Alzheimer’s disease and other types of dementia are multifaceted, and no singular solution exists.
“Every individual possesses a brain that can be at risk for Alzheimer’s disease, and it’s crucial to prioritize brain health,” Isaacson urged.
Fortunately, a number of factors are associated with a lower risk of developing dementia. The 2024 Lancet Commission report suggests that nearly half of all dementia cases could be delayed or prevented by addressing specific risk factors.
Niotis advises individuals to take the following steps:
Ensure regular hearing assessments.
Manage metabolic risk factors such as cholesterol, blood sugar, and blood pressure.
Correct vision issues, as vision loss is a known risk factor for dementia.
Exercise regularly; physical activity enhances blood circulation and nourishes the brain. Isaacson also suggests combining cognitively stimulating activities with exercise, such as walking during meetings or doing cognitive training while riding a stationary bike.
Emerging research also indicates that the shingles vaccine might protect the brain against cognitive decline.
A comprehensive study from 2025 published in Nature revealed that individuals vaccinated against shingles were 20% less likely to develop dementia over a seven-year follow-up period than those who were unvaccinated.
The Food and Drug Administration (FDA) announced on Tuesday that it is taking steps toward potentially banning BHA, a food additive used in various processed foods, including meats and breads.
Butylated hydroxyanisole (BHA) has been a part of our food supply for decades. The FDA first designated this chemical as “generally recognized as safe” in 1958 and approved it as a food additive in 1961. BHA is primarily used to prevent fats and oils from spoiling and can be found in products like frozen foods, breakfast cereals, cookies, ice cream, and certain meat items.
The FDA has stated that it will initiate a new safety review of BHA, addressing long-standing concerns regarding its potential carcinogenic effects in humans.
In the 1990s, the National Toxicology Program classified BHA as “reasonably anticipated to be a human carcinogen” based on animal studies. It is also listed as a carcinogen under California’s Proposition 65.
Studies linking BHA to cancer rely primarily on animal data from the 1980s and 1990s; few studies have involved human subjects.
As part of its review, the FDA is issuing information requests, inviting both the public and industry to submit data regarding the use of BHA and its safety profile.
Health and Human Services Secretary Robert F. Kennedy Jr. stated, “This reassessment signifies the end of the ‘trust us’ era in food safety.”
This review is consistent with Kennedy’s “Make America Healthy Again” agenda, which aims to reduce harmful chemicals in the food supply.
Last year, Kennedy announced intentions to eliminate all artificial colors from the food supply by the year’s end, citing claims that these colors contribute to behavioral issues in children, including hyperactivity. The FDA notes this connection is being monitored but is not established.
In response, the FDA has approved more extensive use of “natural” dyes such as beetroot red and spirulina extract, a color additive sourced from algae.
Marion Nestle, a professor emeritus at New York University specializing in nutrition and public health, expressed her desire to understand how the FDA plans to assess the safety of BHA.
Nestle noted that previous toxicity studies on BHA largely depended on laboratory tests and animal studies, which may not effectively translate to human health outcomes.
She added that conducting research directly on human subjects would be impractical, costly, and ethically challenging.
Despite these challenges, Nestle commended the FDA’s decision to initiate a new safety review of BHA, noting that the additive has for years been on the “avoid” list of the Center for Science in the Public Interest, an organization that tracks food safety.
“It’s time for the FDA to address it,” said Nestle. “It will be intriguing to see what the reviewers conclude.”
As of now, the Consumer Brands Association, an industry group, has not responded to requests for comment.
On Thursday, the Environmental Protection Agency (EPA) is set to repeal the legal framework that empowers it to regulate greenhouse gas emissions.
“President Trump and EPA Administrator Lee Zeldin will officially rescind the 2009 Obama-era endangerment finding,” said White House Press Secretary Karoline Leavitt during a press briefing on Tuesday. “This marks the largest deregulatory initiative in American history, projected to save Americans $1.3 trillion in regulatory costs.”
The EPA’s 2009 decision, known as the endangerment finding, identified greenhouse gases such as carbon dioxide and methane as key contributors to global warming that endangers public health and welfare. The finding is the legal foundation for greenhouse gas regulations under the Clean Air Act, and it underpins mandatory emissions reporting for fossil fuel companies, among other rules.
If it survives anticipated legal challenges from environmental groups, the measure could dismantle most U.S. policies aimed at mitigating climate pollution.
Details of the rule revoking the finding have not yet been released. However, in a draft rule issued in August, the EPA proposed eliminating all greenhouse gas emissions standards for vehicles. Leavitt indicated that the deregulation would lower the prices of cars, SUVs, and trucks, hinting that the final version may also roll back vehicle emissions requirements.
Additional climate regulations may also face repeal: in June, EPA Administrator Lee Zeldin proposed a rule to revoke carbon dioxide standards for power plants. The EPA is also re-evaluating other policies linked to the endangerment finding, including regulation of methane, a potent greenhouse gas.
EPA Administrator Lee Zeldin at an event at the White House in 2025. Jacquelyn Martin / AP file
In a briefing last month prior to the EPA’s announcement, Manish Bapna, President and CEO of the Natural Resources Defense Council, labeled the expected repeal as “the largest assault on federal authority to combat the climate crisis in U.S. history.”
“From the devastating floods in Texas and North Carolina to the catastrophic fires around Los Angeles and the unprecedented heat waves every summer, more people are experiencing the consequences of these human-induced disasters,” Bapna remarked. “Repealing the endangerment finding would represent a complete denial of these events and of the reality of climate change.”
Conversely, the Heartland Institute, a conservative think tank, commended the impending regulatory changes.
“The Obama administration’s assertion that carbon dioxide endangers human health is scientifically flawed and pure political maneuvering,” said the think tank’s president, James Taylor.
The endangerment finding, issued during President Barack Obama’s first term, is now under scrutiny, with the EPA stating that it “improperly analyzes the scientific record” and that its scientific basis is overly pessimistic and unsubstantiated.
In a preliminary draft of the rule, the EPA argued that the endangerment finding overstates the risk of heat waves, overpredicts warming trends, and overlooks supposed benefits of increased carbon emissions, such as enhanced plant growth. Many scientific organizations dispute these claims.
The agency has also noted that court rulings since 2009, like West Virginia v. EPA, have already curtailed its ability to regulate greenhouse gases. This Supreme Court decision stated that the EPA lacks broad authority to transition energy production from coal to cleaner alternatives.
Much of the reasoning behind the proposed rule rests on a contentious report commissioned by Energy Secretary Chris Wright. Recently, a judge determined that Wright and the Department of Energy violated transparency laws in creating and managing the working group that produced it.
It remains unclear whether the final rule will maintain the same rationale or modify its justification based on public feedback.
Scientific organizations opposing the EPA’s draft rule focused their criticism on the DOE report, which suggested that rising carbon dioxide levels could promote a “greening” effect and claimed that discernible trends in extreme weather events are lacking, making such events hard to attribute to climate change amid factors including “natural climate variability and data limitations.”
“Human actions are altering the climate more rapidly than ever, leading to severe impacts on people and the ecosystems we depend on,” one such organization added, noting that greenhouse gas concentrations are at their highest levels in the past 800,000 years.
“Climate change is a direct catalyst for rising global temperatures, heat waves, sea level rise, ocean acidification, and is intensifying extreme weather events such as hurricanes, floods, wildfires, and droughts.”
Separately, a group of 85 climate scientists released a rebuttal arguing that the DOE report pervasively misrepresents research and fails to meet the standards appropriate for informing policy decisions.
According to Copernicus, the European Union’s climate monitoring service, last year was the third warmest on record. The last 11 years have marked the warmest period in modern recorded history.
During President Donald Trump’s administration, the EPA aggressively rolled back numerous environmental protections. Zeldin previously promised in a Wall Street Journal editorial that he was “putting a dagger into the heart of the religion of climate change.”
However, reversing the endangerment finding is likely to instigate a significant legal confrontation.
The Natural Resources Defense Council has vowed to battle the EPA “every step of the way.” David Doniger, an attorney with the group, asserted that defending the rule change in court would be “impossible” given the overwhelming evidence that greenhouse gas pollution is exacerbating climate change and intensifying disasters like wildfires, floods, and heat waves.
On Thursday, President Donald Trump declared that the Environmental Protection Agency (EPA) is revoking a critical scientific finding, in effect for almost 20 years, that underpins rules reducing heat-trapping pollution from vehicles, refineries, and factories.
This significant reversal of the so-called endangerment finding could drastically alter U.S. policies designed to combat climate change.
The 2009 EPA finding concluded that global warming, driven by greenhouse gases like carbon dioxide and methane, threatens the health and welfare of both present and future generations.
“We are officially ending the so-called endangerment finding, a catastrophic Obama-era policy,” President Trump stated during a press conference. “There was no factual or legal basis for this decision. Fossil fuels, in fact, have saved millions of lives and lifted billions out of poverty globally.”
Prominent environmental organizations are challenging the government’s revocation of the endangerment finding and are gearing up for legal action.
Traffic moves along a road near Royal Dutch Shell and Valero Energy’s Norco refinery during a power outage caused by Hurricane Ida in LaPlace, Louisiana, in August 2021. Luke Sharrett/Bloomberg via Getty Images file
The finding underpinned the EPA’s authority to regulate greenhouse gas emissions from vehicles and power plants, and it required companies to report their emissions, consistent with the Clean Air Act.
The Supreme Court’s 2007 ruling in Massachusetts v. EPA affirmed the agency’s authority to regulate greenhouse gases, highlighting the severe and well-recognized harms linked to climate change, and led to the 2009 endangerment finding.
According to the White House and EPA, this reversal marks “the largest deregulatory action in U.S. history.”
This initiative is one of the Trump administration’s most significant efforts to unwind climate action, coinciding with the U.S. retreat from the 2015 Paris Agreement and its expected withdrawal from the United Nations Framework Convention on Climate Change.
President Trump has previously called climate change a “hoax” and cut nearly $8 billion in funding for renewable energy projects in October, though a court later found some of the cancellations illegal. Recently, the Department of Energy announced a $175 million investment to extend the lifespan of six coal-fired power plants, underscoring its continued support for coal.
According to the European Union’s Copernicus Climate Change Agency, last year was the third warmest on record, and the past 11 years have been the hottest ever documented.
EPA Administrator Lee Zeldin engages with residents and business owners impacted by the Palisades fire in Los Angeles on February 4. Mario Tama/Getty Images
President Trump and EPA Administrator Lee Zeldin also announced the elimination of all greenhouse gas emissions standards for vehicles.
“We are reversing the unreasonable endangerment finding and abolishing unnecessary emissions standards imposed on vehicle models and engines from 2012 to 2027 and beyond,” President Trump affirmed.
The EPA intends to continue regulating pollutants from tailpipe emissions that affect air quality, including carbon monoxide, lead, and ozone.
Former President Barack Obama said that abandoning these standards would make Americans “less safe, less healthy,” hinder efforts against climate change, and benefit only the fossil fuel industry.
The U.S. Climate Alliance, headed by California Governor Gavin Newsom and Wisconsin Governor Tony Evers, criticized the repeal as “illegal, dismissive of fundamental science, and disconnected from reality.”
Multiple organizations, including the American Lung Association and the American Public Health Association, have pledged to sue over what they call an unlawful repeal.
“As an organization dedicated to public health, we reject this unwarranted repeal,” they declared in a statement.
Manish Bapna, president of the Natural Resources Defense Council, remarked that the repeal is “a windfall for the fossil fuel sector” and that they are prepared for a legal fight.
“We will oppose this action because it lacks scientific support, is economically detrimental, and is illegal. We’ll see the government in court,” he stated.
This legal struggle could extend for years, as the government attempts to justify the repeals in the face of robust scientific evidence regarding climate change’s dangers.
Michael Gerrard, founder of Columbia University’s Sabin Center for Climate Change Law, noted that the fate of the repeal could hinge on the Supreme Court, which may need to overturn 16 years of established precedent.
“The 2007 ruling was a 5-4 decision; all five justices in the majority are no longer in office. Of the dissenting justices, three are still serving,” Gerrard explained. “Typically, courts require a comprehensive explanation and supporting documentation when an agency makes such significant changes.”
Megan Greenfield, a partner at Jenner & Block who oversaw EPA rulemaking during the Biden administration, stated that the current administration may face challenges in court due to existing legal precedents and compelling scientific evidence highlighting climate change’s effects. She emphasized that the administration must demonstrate adherence to proper procedures when issuing regulations.
“Regulatory processes usually require around three years, but this rule was finalized in about a year,” she mentioned. “Only after rigorous compliance can more complex legal issues be addressed.”
As of 4 p.m. ET Thursday, the EPA had yet to publish the final text of the rule and did not respond to inquiries regarding its expected release.
The agency contended in a draft proposal released in August that the finding overstated the risks of heat waves, overpredicted global warming, and underestimated the advantages of increased carbon emissions, like enhanced plant growth. Most independent scientific organizations have dismissed these claims.
“EPA’s 2009 Endangerment Finding stems from extensive research,” stated the American Geophysical Union on Thursday. “To override such a landmark scientific and legal determination is a denial of conclusive science, an ignorance of current struggles, and a direct threat to our collective future.”
The administration has also signaled plans to revisit other regulations that rely on endangerment findings, including rules on methane, a potent greenhouse gas.
Interior Secretary Doug Burgum proclaimed on Fox Business that reversing the finding would breathe new life into the coal industry.
“CO₂ [carbon dioxide] was never a pollutant; this whole situation is an opportunity to rejuvenate clean, beautiful American coal,” he stated.
Four new crew members, including two from the United States, received a warm welcome upon their arrival at the International Space Station (ISS) on Saturday.
The spacecraft, transporting NASA astronauts Jessica Meir and Jack Hathaway, European Space Agency astronaut Sophie Adenot, and Russian cosmonaut Andrei Fezyaev, docked with the ISS at 3:16 p.m. ET.
“Everyone arrived safely. We have been looking forward to this moment for a long time,” commented Sergei Kud-Sverchkov, a Roscosmos cosmonaut currently aboard the station.
The Dragon spacecraft was propelled into orbit by a SpaceX Falcon 9 rocket early Friday morning.
“We’re thrilled to be here and ready to get to work,” Meir said after meeting the ISS crew. “We made it. We’re here. We love you.”
Later, Adenot mentioned how much she enjoyed the journey.
“It was quite a ride, but it was a lot of fun,” she remarked. “Seeing the Earth from above is mesmerizing; you can’t distinguish any lines or boundaries.”
They arrived at an unusually quiet orbital laboratory.
Originally, the four crew members were expected to overlap in space with the departing Crew-11 team. However, that group had to return to Earth early due to medical issues. (NASA has maintained privacy regarding the identities of the affected astronauts.)
The Crew-11 astronauts departed on January 14, leaving behind NASA astronaut Chris Williams and Russian cosmonauts Kud-Sverchkov and Sergei Mikayev on the ISS.
The four new arrivals are designated Crew-12, increasing the ISS’s occupancy to seven astronauts.
“Floating in zero gravity is an incredible experience,” Hathaway said after greeting the station crew. “The journey was fantastic, shared with great friends on Crew-12.”
A time-exposure shot of a SpaceX Falcon 9 rocket launch from Pad 40 at Cape Canaveral Space Force Station on Friday. John Rau/AP
The crew launched aboard a SpaceX Falcon 9 rocket at 5:15 a.m. ET from Cape Canaveral Space Force Station in Florida.
NASA delayed the launch by two days due to high winds affecting the flight path earlier in the week. The agency continuously monitors weather conditions for safe ascent and emergency scenarios.
Recently, a Falcon 9 incident during an unmanned mission to deploy SpaceX’s Starlink satellites prompted NASA to review safety findings before this launch.
Following the Feb. 2 incident, SpaceX paused launches for an investigation with the Federal Aviation Administration (FAA). The FAA later permitted SpaceX to resume operations, successfully deploying Starlink satellites thereafter.
NASA officials confirmed in a recent press conference that there have been no significant issues while the ISS has been understaffed, allowing a relaxed timeline for the arrival of new crew members.
“We anticipate additional support soon, but will launch when ready,” stated Dina Contella, NASA’s deputy director of ISS programs at the Johnson Space Center.
Crew-12 members, from left, Andrei Fezyaev, Jack Hathaway, Jessica Meir, and Sophie Adenot during a press conference at NASA. NASA
The Crew-12 members are slated to stay at the ISS for approximately eight months, conducting scientific research that includes growing food in space, examining how microgravity affects blood flow, and studying bacteria linked to pneumonia. NASA says this work will support future missions to the Moon and Mars and provide benefits for humanity on Earth.
This mission marks Hathaway and Adenot’s first spaceflight, while Fezyaev is on his second journey. Meir previously spent 205 days aboard the ISS beginning in 2019 and made history with fellow astronaut Christina Koch during NASA’s first all-female spacewalk. Koch is also part of the Artemis II mission around the Moon, set to launch in March.
On Saturday, Meir expressed her awe at the collaborative spirit that has made the ISS a beacon of human achievement.
“This represents a commitment from five nations, underpinned by trust, collaboration, and powered by science, innovation, and curiosity that has been upheld for decades,” she stated before entering the ISS. “Looking back at Earth from these windows, we are reminded that cooperation is not just possible, but essential. There are no borders in space, and hope transcends all.”
The wet dress rehearsal officially commenced on Tuesday evening and extended into Wednesday, with the team powering up both the rocket and spacecraft and charging flight batteries. The crucial part of the test began on Thursday morning, when mission managers approved fueling of the Space Launch System (SLS) rocket.
At around 10:30 a.m. ET, loading of liquid hydrogen and liquid oxygen into the rocket’s core stage began. The rocket was filled with over 700,000 gallons of cryogenic propellant, and mission managers ran a countdown toward a simulated launch time of 8:42 p.m. ET.
The fueling test appeared to proceed smoothly, with NASA running through the final 10 minutes of the countdown twice. The count was first paused at approximately T minus 1 minute and 30 seconds; the clock was then reset to T minus 10 minutes and run down to T minus 33 seconds, just before the moment of liftoff.
These pauses were designed to demonstrate that the rocket’s systems were functioning as expected during critical countdown phases, when automated systems assume control of the vehicle. They also allowed mission managers to rehearse various scenarios, including resolving issues that require investigation or aborting a launch due to technical difficulties or adverse weather.
NASA announced significant findings on Thursday regarding a failed Boeing flight to the International Space Station (ISS) in 2024, which left two astronauts stranded for months.
The investigation outcomes were critical of both Boeing and NASA, highlighting issues such as inadequate testing, communication breakdowns, and leadership failures.
The report categorized the incident as a “Type A mishap,” NASA’s highest classification, reserved for accidents that pose severe risks, including significant economic loss and potential fatalities. The designation was previously applied to the tragic loss of Space Shuttle Columbia and its seven crew members in 2003.
NASA Administrator Jared Isaacman, who assumed office in December, stated at a press conference, “We brought our crew home safely, but the path we took did not reflect the best of NASA.” He noted that this incident has fostered a “culture of mistrust.”
The Starliner mission, designed to last approximately eight days, aimed to validate Boeing’s Starliner spacecraft for transporting NASA astronauts to and from the ISS. Launched in June 2024 with astronauts Butch Wilmore and Suni Williams aboard, the mission quickly encountered issues.
Shortly after liftoff, mission managers identified a helium leak within the capsule’s propulsion system, leading to multiple thruster failures as the spacecraft attempted to dock with the ISS.
After extensive testing, NASA decided to return the Starliner capsule to Earth without its crew. Consequently, Wilmore and Williams remained aboard the ISS for over nine months, awaiting a ride home.
NASA astronauts Suni Williams and Butch Wilmore at Cape Canaveral Space Force Station, Florida, before boarding Boeing’s CST-100 Starliner in 2024. Miguel J. Rodriguez Carrillo/AFP via Getty Images file
NASA’s comprehensive report illustrates the growing distrust between NASA and Boeing, citing a “chaotic meeting schedule” during the mission and a willingness among managers on both sides to overlook risks.
While the investigation highlighted Boeing’s shortcomings in producing and testing the Starliner spacecraft, Isaacman emphasized that NASA’s civilian crew program also bears responsibility.
“Boeing built the Starliner, but NASA approved it and launched two astronauts into space,” he clarified, stating that NASA “must acknowledge our mistakes to ensure they are not repeated.”
NASA Deputy Administrator Amit Kshatriya further emphasized that both NASA and Boeing’s actions compromised the safety of Wilmore and Williams.
“We failed them,” Kshatriya asserted at the news conference. “We must recognize our responsibility to them and to all future crews.”
In response, Boeing expressed gratitude for NASA’s thorough investigation, noting that significant progress has been made in addressing the technical challenges and cultural changes within the team since the incident.
To bring Williams and Wilmore home, NASA enlisted SpaceX, which returned them in a Dragon capsule alongside NASA astronaut Nick Hague and Russian cosmonaut Alexander Gorbunov at the end of that crew’s six-month mission on the ISS. They landed safely in March.
Boeing’s Starliner spacecraft docks at the ISS on July 3, 2024. NASA via AP
Wilmore retired from NASA in August after 25 years, having spent 464 days in space. Williams announced her retirement last month after a remarkable 27-year career and 608 days in space.
In late 2024, NASA officials confirmed they were collaborating with Boeing to enhance the Starliner’s thrusters and that corrective actions would follow the investigation’s release.
Isaacman stated that NASA “will not allow new crew members aboard Starliner until the underlying technical problems are identified and resolved.”
Boeing developed the Starliner as part of NASA’s Commercial Crew Program, initiated in 2011 to provide crew transport after the retirement of NASA’s space shuttles. Competitor SpaceX has flown its Crew Dragon spacecraft to the ISS regularly since 2020.
The report is the latest in a series of challenges for Boeing. Before the Starliner crisis in 2024, the company faced intense scrutiny over failures involving its 737 Max airliners.
Boeing’s Starliner program got off to a difficult start: its uncrewed debut in 2019 was cut short by a software error that prevented docking at the ISS. After delays caused by propellant valve issues, Boeing finally demonstrated a successful docking and return to Earth in 2022.
NASA is set to launch four astronauts on the highly anticipated Artemis II mission, scheduled for March 6. This groundbreaking flight will take astronauts around the moon, marking a historic return to lunar exploration.
The launch date was confirmed after NASA successfully loaded the Space Launch System (SLS) rocket with over 700,000 gallons of cryogenic propellant and completed a comprehensive fueling test that simulated nearly every step of the countdown and launch-day procedure.
A successful wet dress rehearsal indicates that astronauts could be just two weeks away from visiting the moon for the first time in over half a century.
The Artemis II mission will be historic, as it will be the first time NASA’s Space Launch System rocket and Orion capsule carry humans. The mission is set to last 10 days, during which astronauts will journey farther from Earth than any humans have ever traveled.
Thursday’s extensive refueling test signaled significant progress for NASA. This was the second attempt at a wet dress rehearsal; the first was halted on February 2 due to a hydrogen fuel leak detected in the rocket’s rear. This issue led mission managers to abandon all launch windows for February.
Lori Glaze, acting deputy administrator for NASA’s Exploration Systems Development Mission Directorate, emphasized that the March 6 launch depends on completing necessary work on the launch pad and the thorough evaluation of the wet dress rehearsal results.
The mission team plans to hold a flight readiness review next week, where NASA managers and executives will officially certify the rocket and spacecraft for flight.
“Everything is set in front of us,” Glaze stated at a press conference on Friday. “If we can get through these final preparations, we are in a strong position to target March 6.”
In the interim between the first and second wet dress rehearsals, engineers addressed earlier leaks by replacing two seals in the fuel supply line and conducting repairs and tests on the launch pad. Artemis launch director Charlie Blackwell-Thompson reported that the seals are now “rock solid” after the recent repairs.
“Overcoming this wet dress rehearsal milestone was crucial for our progress,” she noted.
The Artemis II crew consists of NASA astronauts Reid Wiseman, Christina Koch, and Victor Glover, and Canadian astronaut Jeremy Hansen. While they did not participate in the wet dress rehearsal, several crew members were present at Kennedy Space Center in Florida during the test.
“I had the opportunity to speak with Reid Wiseman, Christina Koch, and Jeremy Hansen,” Glaze shared. “They are extremely enthusiastic about the possibility of a March launch.”
To ensure their health ahead of the mission, the astronauts will undergo quarantine in Houston starting Friday afternoon. They will arrive in Florida about five days before the launch and continue their pre-flight quarantine at Kennedy Space Center.
“While space travel serves as a backdrop, it is not central to the Star Trek narrative.” A scene from Star Trek: Deep Space Nine
Everett Collection Inc/Alamy
The current socio-political landscape in America is filled with contrasts. As I reflect on my day, I find myself wondering whether construction workers are being called up for government projects. Meanwhile, dinner plans loom, prompting me to suggest to my partner that he pick up some fresh vegetables, even as he frets about being stopped by U.S. Immigration and Customs Enforcement on his way home. I am meant to engage in scientific inquiry and broadcast the marvels of the universe, yet my focus often shifts to grim realities like children in detention camps. Despite attempts to slash NASA’s funding, the agency has so far withstood the cuts, though its workforce has dwindled significantly over the years.
The very week this article circulates, NASA is poised to launch astronauts on an unprecedented mission around the Moon, part of the Artemis program leading to potential human landings on the Moon and beyond. This program is widely viewed as a crucial milestone towards sending humans to Mars. At a SpaceX event, with U.S. Department of Defense officials present, Elon Musk expressed his vision of sending humans to new planets, closely aligning with the aspirational themes found in the Star Trek universe. Enthusiasm is high, as we anticipate that these missions will propel us towards a utopia in space exploration.
What a captivating idea! However, the reality may be starkly different. Indeed, one might argue that many fans attending Star Trek conventions deeply misunderstand the series, revealing a disconnect with its core messages. If they truly grasped the themes of the Star Trek universe, they would recognize that the 2020s parallel a disheartening chapter of its fictional history. The Bell Riots of 2024 depict a rebellion against oppressive governance amid staggering wealth inequality, and the Trek timeline foresees humanity surviving another world war in which soldiers are coerced into committing atrocities.
Strikingly, the parallels between that fiction and current events resonate. In this narrative, the figures promoting militarized space endeavors are not the heroes but the villains. These proponents misread their own roles, failing to understand that the core essence of Star Trek is not reaching distant planets but humanity’s journey toward self-improvement through collaboration, grappling with weighty ethical dilemmas, and fostering a society built on broadly socialist principles, in which everyone’s needs are met.
“In Star Trek, the individuals advocating for militarized corporate strategies are depicted as the antagonists.”
Could venturing to Mars pave the way for this enlightenment? Perhaps, in another timeline, such endeavors would embody a quest to embrace “infinite diversity in infinite combinations,” as the Vulcan philosophy puts it. We have successfully dispatched numerous uncrewed missions to Mars, unveiling a wealth of astonishing discoveries about the planet’s past and its potential to have harbored other life forms.
Nonetheless, Mars presents challenges as a human habitat. It is inhospitable, cold, and dry, posing formidable obstacles to establishing a presence there. Even granting the hopeful vision of a peaceful human expedition, the harsh reality is that Mars is fraught with dangers. The thin atmosphere makes breathing impossible, and any attempt to alter it could still prove hazardous. Dust and silica in Martian soil can inflict severe damage on human lungs, mirroring the afflictions suffered by miners.
Many might dismiss this, thinking, “I won’t be inhaling dirt!” However, Mars is notorious for its colossal dust storms that would infiltrate any human habitat. Such conditions would make it increasingly difficult to maintain a livable environment. The sheer volume of resources required to create a sustainable habitat on Mars is staggering, as launching these supplies into space is a monumental task.
In conclusion, the pursuit of colonizing Mars may not be a practical endeavor. Instead, let us cherish our own remarkable planet, Earth. While we may not have treated it with the respect it deserves, there is still time for change. This vision is at the heart of Star Trek: not about fleeing to a technologically advanced future, but about cultivating the capacity to honor the extraordinary vessel we call home.
What I’m Reading I found Fara Dabhoiwala’s “What Is Free Speech? The History of a Dangerous Idea” fascinating.
What I’m Watching I admire Gina Yashere and Kelis Brooks’ work on “Star Trek: Starfleet Academy.”
What I’m Working On Currently, we’re navigating the complexities of daily life amidst governmental turbulence.
Chanda Prescod-Weinstein is an Associate Professor of Physics and Astronomy at the University of New Hampshire, and the author of Turbulent Universe as well as the upcoming book The Ends of Space and Time: Particles, Poetry, and the Boogie of Cosmic Dreams.
How can I ensure my data is protected? As a young Black physician engaged in clinical research, I hear this question frequently in discussions with Black communities in Africa and the Caribbean about participating in genetic research. The roots of the mistrust are not hard to find.
Consider the notorious Tuskegee syphilis study, in which Black men were left untreated so that disease progression could be observed, even after effective treatments became available. Henrietta Lacks’ cells, meanwhile, were taken without her consent, fueling extensive research worldwide and generating profit, while her family received no compensation even for their own healthcare needs. This history has fed the perception that Black people are treated as mere research subjects.
In research, it’s understood that quality data is crucial for effective medicine. Unfortunately, Black individuals, along with other underrepresented populations, including non-Europeans and older adults, are often underrepresented in clinical studies. Comprehensive disease understanding requires research across all affected groups to develop inclusive tests and treatments.
Looking ahead, the medical system is shifting towards a genetics-centered approach in patient care. This precision medicine paradigm opts for individualized treatment based on genetic information to enhance prevention and therapeutic efficacy.
However, research from institutions like the University of Exeter and Queen Mary University of London reveals significant gaps in our genetic understanding, particularly in relation to non-European populations. Their findings suggest certain genetic traits in Black people could hinder the accuracy of standard diabetes diagnostic tests, potentially delaying treatment. To bridge this gap, it is essential to foster trust and increase Black participation in research.
Current research frameworks often unintentionally exclude certain demographics. For instance, if recruitment materials are only available in English or if hiring occurs solely during conventional business hours, valuable contributors may be overlooked. Additionally, relying exclusively on hospitals and universities ignores community hubs like churches and barbershops where people congregate. Recognizing social contexts is vital for effective outreach.
Academic institutions now acknowledge that varying communities necessitate tailored approaches that merge cultural proficiency with scientific rigor. This balance empowers communities and enables research to translate into actionable changes through informed policy and accessible healthcare. It’s essential for researchers to resonate with the communities they serve, fostering trust and relevance through shared experiences.
To address these challenges, researchers must prioritize community involvement from inception rather than merely soliciting input at the end of the process. Funding organizations should integrate community engagement into their budgets, ensuring that incorporating patients and communities becomes a staple in research. This participatory approach can enhance representation among underrepresented groups and ultimately benefit public health. Moreover, researchers must demonstrate reciprocity by contributing to community wellbeing through shared resources and programs.
If you’re interested in participating in research, there are many ways to get involved, from clinical trials to surveys. Every contribution counts.
Dr. Drews Adade is a clinical researcher based in London.
The Beauty From Ryan Murphy and Matthew Hodgson, exclusively on Disney+/FX
The series Beauty (Disney+/FX), created by acclaimed producer Ryan Murphy and co-creator Matthew Hodgson, reveals its intentions from the very first scene. Amidst glamorous models on the Paris catwalk, one character, Ruby (played by Bella Hadid), becomes dangerously desperate for hydration, resorting to shocking measures to quench her thirst.
This plot twist may intrigue some viewers, but may also deter others. Murphy’s established fame for groundbreaking shows like Glee and American Horror Story sets a high expectation for this series. In Beauty, FBI agents uncover a deadly drug and a new sexually transmitted disease within the fashion industry’s glamorous facade. However, the series ultimately falls short.
Murphy’s work has long been associated with body horror, revealing uncomfortable truths hidden within its provocative themes. Unfortunately, Beauty merely glosses over these issues, reducing its critical commentary to superficial critiques, especially regarding the use of medications like Ozempic.
The series struggles to embody the transgressive essence of body horror. Its unoriginality stems not only from its comic-book origins but also from its predictable narrative.
Comparisons can be drawn between Beauty and David Cronenberg’s iconic film The Fly, despite their differing storylines. In The Fly, scientist Seth Brundle (played by Jeff Goldblum) embarks on romantic and scientific pursuits, ultimately leading to horrifying transformations.
Jeff Goldblum as Seth Brundle in David Cronenberg’s The Fly
Photo: 20th Century Fox/Album/Alamy
The Fly masterfully explores themes of intimacy and horror, deftly blending romance with the grotesque while also addressing underlying societal anxieties. In contrast, although Beauty attempts to engage with similar themes, its execution often feels forced and shallow.
Characters in Beauty navigate discussions about health and identity, reminiscent of Seth Brundle’s plight, yet the messaging comes across as overly didactic.
In conclusion, while Beauty touches on vital topics, it lacks the profound narrative power found in Cronenberg’s work, ultimately emphasizing the necessity of original storytelling in tackling contemporary issues.
Recommended: The Substance
The Substance, Coralie Fargeat
While I had mixed feelings about this film, a standout scene featuring Elisabeth Sparkle (Demi Moore) transforming through the titular substance makes it worth a watch. Despite its shortcomings, it succeeds with the body-horror themes that Beauty fumbles.
Bethan Ackerley is an associate editor at New Scientist. She is passionate about science fiction, comedy, and all things spooky. Follow her on Twitter @inkerley
Cornflowers and Poppies: Once Regarded as ‘Nuisance Weeds’
Credit: Heather Drake/Alamy
One prevalent myth in traditional gardening is that weeds thrive only in poor soil. The belief is that enhancing soil fertility will banish weeds, offering a simple solution for gardeners—just enrich the soil with nutrients. This notion is appealing; however, let’s examine the facts.
Firstly, what is the actual definition of “weed”? The term “weed” encompasses any plant species growing in undesirable areas, rather than a specific group of related plants. This classification can seem arbitrary and culturally influenced.
Many infamous weeds serve dual purposes, being both valued plants in certain contexts and unwanted ones in others. Take dandelions, for example. They are the most recognized species on herbicide labels in the UK, yet in Singapore, where they are deemed invasive, seeds can fetch nearly $100 in online auctions.
In fact, many of the world’s most invasive plant species were initially introduced as ornamental garden plants. This overlap complicates the clear distinction between “weeds” and decorative plants, suggesting that the term may be losing its relevance.
Commonly recognized weeds often share a vigorous growth pattern. Their rapid establishment, easy reproduction, and adaptability to diverse conditions enable them to flourish in unwanted places. These traits often make them the first colonizers in disturbed or neglected soils, where other species struggle to establish themselves. However, thriving in poor environments doesn’t mean they prefer it.
So, where does the idea that weeds signify poor fertility originate? Like many gardening myths, there is a kernel of truth here. Enhancing soil fertility can allow a broader variety of plants to thrive, diminishing the competitiveness of resilient pioneer species. This was notably observed in European farmland during the 20th century, when synthetic fertilizers boosted grass growth and drove out troublesome weeds like cornflowers and poppies, pushing some of these species to the brink of extinction in England. Ironically, these same plants are now cherished as attractive wildflowers.
So where does this perspective leave us? Given our ever-evolving views on plants, it’s clear that weeds are not reliable indicators of soil quality but rather reflect human preferences and societal trends.
James Wong is a botanist and science writer with a focus on food crops, conservation, and environmental issues. With training from the Royal Botanic Gardens in Kew, London, he has over 500 houseplants in his compact apartment. Follow him on X and on Instagram @botanygeek.
The realm of personalized medicine has witnessed considerable hype but minimal tangible benefits. Numerous companies aim to analyze your biomarkers and suggest tailored nutrition plans, all at a premium price. However, genuine advancements in personalized medicine are still on the horizon.
Despite this, the concept holds significant potential. Each individual possesses unique genetics and microbiomes, influencing health outcomes widely. Additionally, personal habits play a critical role in overall wellness.
This week’s articles highlight two pertinent examples. Nearly everyone encounters the Epstein-Barr virus during their lifetime. However, as our reports indicate, certain genetic mutations inhibit some individuals from effectively clearing the virus, potentially linking it to autoimmune conditions like multiple sclerosis. Concurrently, some people show resistance to protein misfolding associated with Alzheimer’s disease.
“Identifying individuals most likely to respond to treatment is crucial”
Grasping these disease mechanisms necessitates a comprehensive understanding of human biological diversity. This involves gathering extensive data, ranging from DNA analysis to immune responses, to unveil the underlying mechanisms affecting various individuals.
Furthermore, precision in clinical trial planning is essential. A one-size-fits-all approach to treatment is no longer feasible, as patient reactions can vary significantly. Therefore, pinpointing those who are most likely to benefit from specific treatments is paramount.
Progress is already being made in cancer treatment. Although we generally label tumors as “cancer,” they are distinctly different and require tailored treatment strategies. There isn’t a singular “cure for cancer”; multiple solutions exist.
Although these challenges are considerable, now is the opportune moment to tackle them for the advancement of treatments for diseases like Alzheimer’s and multiple sclerosis.
Ancient Inuit Circular Tents Found on Isbjørne Island
Credit: Matthew Walls, Marie Christ, Pauline Knudsen
Some 4,500 years ago, Paleo-Inuit people embarked on a historic journey to a remote island off Greenland’s northwest coast. The expedition entailed crossing more than 50 kilometers of open sea, one of the longest known maritime voyages undertaken by early Arctic peoples.
Archaeologists believe these intrepid sailors were the first people ever to reach the isolated islands. John Derwent of the University of California, Davis, who was not involved in the study, provided outside commentary on the findings.
In 2019, Matthew Walls and a team from the University of Calgary, Canada, explored the Kittisut Islands, also known as the Carey Islands, northwest of Greenland. These islands lie within the Pikialasorsuaq polynya, a region of open ocean surrounded by sea ice that has persisted for approximately 4,500 years.
The research focused on three main islands: Isbjørne, Mellem, and Nordvest, revealing five sites with a total of 297 archaeological features. The most significant findings were on the Isbjørne beach terraces, where the team uncovered the remnants of 15 circular tents, each with a central hearth and an interior divided in two by stones. These distinctive “bilobed” structures are emblematic of the Paleo-Inuit, the first settlers of northern Canada and Greenland.
Radiocarbon dating of thick-billed murre wing bones found within one of the tent rings indicated the bones are between 4,400 and 3,938 years old. This confirms that humans occupied the Kittisut Islands shortly after the formation of the polynya.
“We have nesting colonies of thick-billed murres,” Walls noted. The early settlers likely harvested the birds’ eggs and hunted them for food, and they probably pursued seals as well.
The Paleo-Inuit had already reached Greenland by this time and likely travelled to Kittisut from the Greenland coast, a minimum crossing of about 52.7 kilometers. However, due to prevailing winds and currents, they most likely set sail from a more northerly location, resulting in a longer but safer journey. Ellesmere Island lies to the west of Kittisut, but it is farther away and presents challenging navigational conditions.
The only comparable journey known in Arctic prehistory was the 82-kilometer crossing of the Bering Strait from Siberia to Alaska, likely first accomplished over 20,000 years ago, with the Diomede Islands serving as a midway stopping point.
“Crossing that expanse required advanced watercraft,” Derwent emphasizes. Settling Kittisut would have required vessels larger than single-person kayaks. “You can’t transport children and the elderly safely in a kayak,” he explained. The Paleo-Inuit likely used larger boats capable of carrying nine or ten individuals.
Despite extensive studies, no boat wrecks have yet been uncovered on Kittisut Island, and few such finds exist in the Arctic region. “Their vessels would have been skin-on-frame designs similar to those utilized by later Inuit communities,” noted Walls.
The initial Paleo-Inuit settlers likely played a vital role in shaping the Kittisut ecosystem. By carrying marine nutrients onto land, they fertilized the barren soil and fostered plant growth on the islands. “There’s a diverse plant life there, reliant on human involvement in nutrient cycling between marine and terrestrial systems,” Walls said.
Arctic Cruise with Dr. Russell Arnott: Svalbard, Norway
Join marine biologist Russell Arnott for an unforgettable ocean expedition to the North Pole.
Even the smoothest surfaces can exhibit friction due to electron interactions. However, recent advancements present a technique for reducing or completely eliminating this electronic friction, empowering the development of more efficient and durable devices.
Friction can hinder movement and waste energy, yet it is also essential to everyday tasks like walking or striking a match. In mechanical systems such as engines, friction not only expends energy but also accelerates wear, necessitating lubricants and surface treatments. Nevertheless, because every object harbors countless interacting electrons, some degree of friction may always remain regardless of mitigation strategies.
Xu Zhiping and fellow researchers at Tsinghua University in China have developed an innovative method to manage this “electronic friction.” Their apparatus consists of two layers of graphite paired with a semiconductor made from molybdenum and sulfur, or from boron and nitrogen.
These materials excel as solid lubricants, showcasing near-zero mechanical friction when in motion against each other. This focus allowed researchers to explore a less apparent factor: electronic friction, which contributes to energy loss during the layers’ movement. Xu elaborated, “Even with entirely smooth surfaces, mechanical activity can disturb the ‘sea’ of electrons within the material.”
To confirm their focus on electronic friction, the team initially analyzed how the electronic state of the semiconductor reacted to energy depletion during sliding. They subsequently explored various methods for controlling this phenomenon.
By applying pressure to their device, they succeeded in quieting the sea of electrons: squeezing the layers together allows electrons in adjacent layers to share states, minimizing energetically costly interactions. Additionally, introducing a “bias voltage” enabled them to fine-tune the motion of these electrons.
By adjusting the voltage across different segments of the device, researchers could influence electron flow, effectively reducing electronic friction and allowing for a dynamic control mechanism instead of a simple on-off switch.
Jacqueline Krim noted that the initial study on electron friction dates back to 1998 when her North Carolina State University team utilized superconducting materials—perfect electrical conductors at extremely low temperatures—to observe energy loss. Research has since evolved, offering new avenues for modulation without necessitating material replacement or additional lubricants, she commented.
Krim envisions a scenario akin to adjusting the friction of your shoe soles via a smartphone app when transitioning from icy sidewalks to carpeted rooms. “Our objective is real-time remote control, eliminating downtime and material waste. Achieving this goal necessitates materials that react to external magnetic fields producing the desired levels of friction,” she explained.
Xu acknowledged the complexities involved in managing all forms of friction within a device, noting that a rigorous mathematical model correlating these frictions is yet to be established. Nevertheless, he expressed optimism regarding their findings, suggesting that if electronic friction primarily drives energy waste and wear, their approach could hold considerable promise.
Striking Resemblance between Paul Erdős and Jeff Goldblum
Public domain; Matt Baron/BEI/Shutterstock
In my latest mathematics column, I present an exciting idea: Hollywood should create a comedic biopic about Paul Erdős, one of history’s greatest mathematicians.
Why does Erdős, pronounced “air-dish,” deserve such recognition? With approximately 1,500 published papers, he is arguably the most prolific mathematician of all time. Known for his innovative collaborations, Erdős made significant contributions to various mathematical fields, including probability, number theory, and graph theory.
Born in Hungary in 1913, Erdős had a nomadic lifestyle, often traveling without a permanent residence. Following the rise of Nazism in Europe, he relocated to the United States in 1938. However, due to his connections to communist sympathizers, he faced entry issues in the 1950s and 1960s. He famously carried a suitcase of his belongings and visited fellow mathematicians, offering to collaborate with the phrase “My brain is open.” His unique approach allowed him to work on groundbreaking mathematics.
Many fascinating stories about Erdős are chronicled in The Man Who Loved Only Numbers, a biography by Paul Hoffman. I first encountered the book as a teenager and believe its potential to captivate a broader audience is unfortunately overlooked. Therefore, I’m launching a campaign to cast Jeff Goldblum in the lead role.
Why Goldblum? He bears a striking resemblance to Erdős and has already portrayed a mathematician, Ian Malcolm, in the Jurassic Park franchise. More than that, Goldblum’s quirky eccentricity aligns perfectly with Erdős’ unique lifestyle.
Erdős had unconventional views on religion; he described himself as an atheist yet often spoke about God, referring to Him as the “Supreme Fascist,” or “SF” for short. He also spoke of “The Book,” a magical volume he imagined to contain the most elegant proof of every mathematical theorem.
His linguistic quirks were equally captivating. He called children “Epsilon,” a nod to the Greek letter representing small quantities in mathematics. Friends who left mathematics were, in his eyes, “dead,” while those who actually passed away were simply “gone.” He humorously remarked, “A mathematician is a device that turns coffee into theorems,” a quote borrowed from colleague Alfred Rényi. I can easily envision Goldblum delivering those lines.
An intriguing aspect of Erdős’ legacy is the concept of the “Erdős number.” This measure indicates the collaborative distance between mathematicians: those who co-authored a paper with him have an Erdős number of 1, and others take higher numbers according to their distance in the co-authorship chain. My Erdős number is 3, having quoted Terence Tao from UCLA in my writing.
This concept mirrors the “Six Degrees of Kevin Bacon” game. Goldblum holds a Bacon number of 1 because he and Bacon both appeared in the mockumentary Tour de Pharmacy. I only discovered this connection while advocating for my biopic project.
Some individuals hold both Erdős and Bacon numbers, bridging the worlds of mathematics and film. The minimum recorded Erdős-Bacon number is 3, held since 1997 by mathematician Daniel Kleitman, who appeared in Good Will Hunting.
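An Erdős number (and likewise a Bacon number) is simply a shortest-path distance in a collaboration graph, which breadth-first search computes directly. A minimal sketch over an invented toy graph, not real co-authorship data:

```python
from collections import deque

def collaboration_distance(coauthors, source, target):
    """Shortest co-authorship distance from `source` to `target`.

    `coauthors` maps each author to the set of people they have
    written a paper with. The distance from "Erdos" is an author's
    Erdős number; an unreachable author has no number (None).
    """
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        author, dist = queue.popleft()
        for coauthor in coauthors.get(author, ()):
            if coauthor == target:
                return dist + 1
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append((coauthor, dist + 1))
    return None  # no chain of co-authorship connects the two

# Hypothetical graph: A wrote with Erdos, B with A, C with B.
graph = {
    "Erdos": {"A"},
    "A": {"Erdos", "B"},
    "B": {"A", "C"},
    "C": {"B"},
}
```

In this toy graph, `collaboration_distance(graph, "Erdos", "C")` is 3, a three-step chain of the kind that underlies every Erdős number.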
While Erdős’ eccentricities paint a charming picture, it’s important to acknowledge his flaws. The Man Who Loved Only Numbers touches upon his problematic attitudes towards gender, as he often referred to women and men in derogatory ways. He was, however, more than willing to collaborate with female mathematicians.
While dreaming of an Erdős biopic raises the concern of reinforcing the “absent-minded professor” stereotype, I argue that current mathematical biopics, like A Beautiful Mind, are serious dramas. A comedic portrayal has yet to be attempted.
Moreover, Erdős left behind numerous open mathematical problems, many offering monetary rewards for solutions. A film could inspire a new generation of puzzle enthusiasts and spark interest in mathematics—an endeavor Erdős would surely endorse. Jeff, if you (or your agent!) are reading this, let’s connect. I’m ready to collaborate on this exciting project!
Cognitive ‘speed training’ can reduce the risk of a dementia diagnosis by 25%, according to a groundbreaking randomized controlled trial. This study is the first of its kind to assess the effectiveness of an intervention for dementia.
“Skepticism surrounded brain training interventions for years, but this study provides clear evidence of their benefits,” says Marilyn Albert from the Johns Hopkins University School of Medicine.
The brain training sector has faced controversy, especially after companies overstated claims about preventing cognitive decline. In 2014, around 70 scientists signed an open letter stating that no conclusive evidence existed that brain training leads to significant real-world changes or enhances brain health, a position later contested by another letter signed by over 100 scientists.
Now, a comprehensive 20-year study with 2,832 participants aged 65 and older indicates that specific cognitive exercises may yield tangible benefits.
Participants were divided into three intervention groups and a control group. One group underwent speed training with a computer task called “Double Decision,” where cars and road signs briefly appeared, challenging participants to recall details after they disappeared. This adaptive task increases in complexity as users improve.
The other two groups focused on memory and reasoning training aimed at enhancing cognitive skills.
Each group completed two sessions per week for five weeks, with about half receiving booster sessions and additional training at one-year and three-year intervals.
After twenty years, evaluations of U.S. Medicare claims revealed that participants who completed speed training with booster sessions had a 25% lower risk of an Alzheimer’s diagnosis or related dementias than those in the control group. Other groups without boosters showed negligible changes in risk, which Albert describes as “truly amazing.”
“The study’s rigorous methodology is commendable,” notes Torkel Klingberg from Karolinska Institutet, Stockholm. “The impressive 20-year follow-up and the significant reduction in dementia risk are crucial findings.”
However, Walter Boot from Weill Cornell Medical College cautions that measuring numerous outcomes over two decades can lead to coincidental findings. “While the results may suggest significance, they should be interpreted cautiously,” he adds.
Double Decision: A Cognitive Training Program
BrainHQ
The mechanism behind the effectiveness of speed training is still being explored. One theory suggests it relies on implicit learning, which can entail long-lasting changes without conscious effort, according to Albert.
Étienne de Villers-Sidani from McGill University explains that brief, intense experiences can lead to significant, enduring changes in the brain, much like how a traumatic event can instill lasting fears.
This training may enhance the brain’s cognitive reserve, a potential buffer against cognitive decline. Albert notes that enhanced brain connectivity could improve attention division, facilitating daily activities and fostering physical activity and social engagement—key factors for sustained brain health.
The authors propose that results from the booster sessions suggest a dose-dependent effect of speed training. Bobby Stoyanowski from the Ontario Institute of Technology emphasizes the need for future research into optimal training levels: “What is the right amount of training to maximize benefits?”
In summary, Andrew Budson from Boston University advises against isolating oneself to play speed training games endlessly. Instead, engaging in activities that promote implicit learning—like learning new skills or sports—may provide long-term cognitive benefits while being enjoyable.
Recent studies show that many pet foods, especially fish-based varieties, contain concerning levels of PFAS (per- and polyfluoroalkyl substances), in some cases exceeding the safety limits the European Food Safety Authority advises for human consumption.
The research highlights the urgent need to enhance monitoring of harmful contaminants in pet products and better understand the associated risks to our furry companions, as emphasized by Kei Nomiyama from Ehime University, Japan.
“While we don’t suggest an immediate health crisis, our findings reveal significant knowledge gaps,” Nomiyama states. “Pet owners should focus on ingredient composition and consider diversifying protein sources to mitigate potential exposure risks.”
PFAS are synthetic chemicals widely used in various products and can remain in the environment for extensive periods, sometimes for hundreds or thousands of years. Studies indicate that individuals repeatedly exposed to PFAS may face increased risks of liver damage, certain cancers, and other serious health conditions. Although the impact on pets remains an underexplored area, existing research on cats has linked certain PFAS to liver, thyroid, kidney, and respiratory diseases.
Nomiyama and his team observed that persistent organic contaminants were prevalent in pet food. Given the ubiquity of PFAS worldwide, particularly in aquatic environments, they sought to identify the presence of these contaminants in pet foods.
To conduct their research, the team analyzed PFAS concentrations in popular wet and dry pet foods (48 dog foods and 52 cat foods) available in Japan between 2018 and 2020. Using the average food intake and body weight of dogs and cats, they estimated daily PFAS ingestion for each product.
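A dose estimate of this kind reduces to one line of arithmetic: measured concentration times daily food intake, divided by body weight. A minimal sketch with invented numbers; the study’s actual concentrations and intake figures are not reproduced here:

```python
def estimated_daily_intake(conc_ng_per_g, food_g_per_day, body_kg):
    """Estimated dose in nanograms of PFAS per kg of body weight per day."""
    return conc_ng_per_g * food_g_per_day / body_kg

# Hypothetical cat: 60 g of wet food per day at 1.2 ng/g PFAS, 4 kg body weight.
dose = estimated_daily_intake(1.2, 60.0, 4.0)  # 18.0 ng per kg per day
```

Comparing a figure like this against a regulator’s tolerable daily intake is how the researchers judged which products exceeded the human limits.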
Alarmingly, some products had moderate to high PFAS levels, frequently surpassing the daily intake limits for humans as established by the European Food Safety Authority (EFSA).
Among dog foods, the highest PFAS concentrations were noted in Japanese grain-based products, likely due to agricultural runoff and fish byproducts. Conversely, meat-based products generally had lower PFAS levels, with certain Japanese and Australian brands showing no detectable PFAS.
For cat food, fish-based items sourced from Asia, the U.S., and Europe, especially wet food from Thailand, exhibited the highest PFAS levels.
“The ocean often acts as a repository for numerous synthetic chemicals,” Nomiyama warned. “In essence, PFAS can accumulate and escalate through aquatic food webs.”
Regional variations may reflect historical and current PFAS production patterns, alongside differences in raw material sourcing. Nevertheless, PFAS contamination is a global challenge. “A more harmonized global monitoring approach would be beneficial,” notes Nomiyama.
The EFSA refrained from commenting specifically on study results but indicated that proposed human safety limits should not be directly applied to other animal risk assessments.
Nomiyama concurs, stressing that the findings indicate alarmingly high PFAS levels that warrant further development of risk assessments for pets.
“Companion animals inhabit the same environments as us and serve as indicators of chemical exposure in numerous ways,” he explains. “Understanding contaminant levels in pet foods isn’t merely an animal health concern; it also aids in comprehending broader environmental contamination pathways. Ongoing evaluation of long-term exposure and species-specific toxicity in companion animals is crucial.”
Haakon Ostad Langberg, from Akvaplan-niva, a Norwegian nonprofit research institute, said the results align with expectations. “These substances are distributed globally, with some PFAS known for their persistence and potential to bioaccumulate in food webs,” he stated.
“The more pressing issue is that PFAS are pervasive, exposing both people and animals from various sources,” added Langberg. “These compounds are present across all environmental media and numerous products, leading to cumulative exposure. This study offers significant data in addressing that widespread challenge.”
A newly discovered group of bacteria that thrives in the gut microbiomes of healthy individuals may play a crucial role in maintaining overall health.
About 4,600 species of bacteria inhabit our gut, impacting a range of bodily functions from our immune response to sleep patterns and mental health risks.
Interestingly, around two-thirds of these species fall into the “hidden microbiome,” many of which cannot be cultured in laboratories or even named. We only identify them through genomic analysis. “Are these species merely bystanders, or do they contribute to human health?” questions Alexandre Almeida, a researcher at Cambridge University.
To delve deeper, Almeida and his team analyzed genetic markers of bacteria across a comprehensive study involving over 11,000 participants from 39 countries, primarily across Europe, North America, and Asia.
Approximately half of the participants were healthy, while the other half had one of 13 conditions, including obesity, chronic fatigue syndrome, and inflammatory bowel disease.
The analysis revealed that 715 bacterial species are linked to specific health conditions; 342 were more abundant in unhealthy individuals, while 373 were prevalent in those who were healthy.
Among these, a prominent genus named CAG-170 consistently correlated with better health outcomes. “Across various conditions, CAG-170 levels were markedly higher in healthy individuals compared to those with diseases,” Almeida explains.
In another aspect of the study, Almeida’s team explored bacterial species that indicate a healthy gut microbiome versus one characterized by dysbiosis.
“CAG-170 once again showed a significant correlation,” Almeida adds. “Higher CAG-170 levels corresponded with a balanced and healthier gut microbiome.”
To understand CAG-170’s role, the researchers examined its genome, identifying genes linked to metabolic pathways capable of producing elevated vitamin B12 levels and breaking down various carbohydrates and fibers.
While CAG-170 itself doesn’t utilize vitamin B12, Almeida suggests that other bacteria frequently found alongside CAG-170 likely benefit from it. “CAG-170 seems to adopt a collaborative role, providing metabolic support to its microbial companions.”
This study marks a vital step in understanding which components of the gut microbiome contribute to health and disease. Research led by Nicola Segata at the University of Trento recently characterized a healthy gut microbiome but didn’t thoroughly explore how these bacteria provide health benefits.
Determining whether high CAG-170 levels are a health cause or consequence remains challenging. Almeida emphasizes the need for further research to assess whether introducing CAG-170 can mitigate certain health risks.
“The human microbiome and body are intricately linked, and should be considered a unified complex system,” Segata states. “Instead of seeking direct causality, we need to explore the holistic relationship between microbial and bodily health, including diet’s role.”
Professor Segata advocates for follow-up studies incorporating nutritional clinical trials to evaluate the dietary factors that influence both microbiome composition and human health.
From Almeida’s perspective, CAG-170 holds potential in two ways: as a biomarker for gut health and as a foundation for new probiotics aimed at enhancing overall well-being.
The potential for CAG-170 as a probiotic candidate is promising, yet cultivating it in the laboratory remains a significant challenge. “Identifying optimal foods and prebiotic supplements to increase CAG-170 levels may be a more attainable goal than developing probiotic products,” Segata notes.
However, genomic insights offer guidance on practical applications. Since CAG-170 bacteria appear unable to produce arginine, supplementing with this amino acid might promote their growth and presence in the gut.
This image combines views from the Hubble and Keck II telescopes. The diagonal galaxy in the foreground serves as a gravitational lens, causing a distorted image of the background galaxy H1429-0028.
Credit: NASA/ESA/ESO/WM Keck Observatory
Astronomers have identified an unprecedented microwave beam, akin to a laser, emitted from two colliding galaxies. This discovery, the brightest and most distant recorded, marks a significant milestone in our understanding of cosmic phenomena.
The generation of laser light involves stimulating atoms into a high-energy state. When photons interact with these excited atoms, they induce the release of additional photons, leading to a chain reaction. The result is a coherent light beam with uniform frequency.
Similarly, during galactic collisions, compressed gas triggers star formation and enhanced luminosity. As light travels through dust clouds, it can excite hydroxyl molecules, each made of one hydrogen and one oxygen atom, into a high-energy state. When these molecules are stimulated by radio waves, potentially from a supermassive black hole, they can release concentrated beams of microwave radiation known as masers.
Recently, Roger Dean and researchers from the University of Pretoria discovered the brightest and most distant maser in galaxy H1429-0028, approximately 8 billion light-years from Earth. Gravitational lensing, caused by a massive galaxy, distorts the light from H1429-0028, acting like a cosmic magnifying glass.
Using the MeerKAT telescope, a network of 64 radio dishes working in concert, Dean and his team searched for galaxies rich in hydrogen gas, which emits at distinctive frequencies. When they focused on H1429-0028, they detected an unusually strong radiation signal, indicating the presence of powerful masers.
“Upon checking the frequency of 1667 megahertz, we immediately recognized a significant signal. What was once a mere observation transformed into a record-breaking discovery,” Dean recalls.
These extraordinary light emissions could be classified as gigamasers, far exceeding the brightness of typical megamasers found closer to the Milky Way, with an intensity approximately 100,000 times that of an ordinary star, tightly concentrated in a minuscule region of space.
Future enhancements, including the development of the South African Square Kilometer Array, will be capable of detecting even more distant masers, poised to revolutionize our understanding of cosmic history. As Matt Jarvis from Oxford University notes, these masers may offer insights into the merger processes of some of the universe’s earliest galaxies.
“To acquire accurate data about these ancient galactic mergers, we require continuous radio and infrared emissions, primarily sourced from heated dust enveloping forming stars,” Jarvis explains. “The intricate physical conditions needed to produce masers originate from these galactic collisions.”
Explore Astronomy in Chile
Discover Chile’s astronomical wonders. Experience the world’s most advanced observatories while stargazing under unparalleled skies.
Artist’s Impression of the Black Hole Collision Producing GW250114
A. Simonette/Sonoma State University, LIGO-Virgo-KAGRA Collaboration, University of Rhode Island
The groundbreaking collision of two black holes provides an exceptional opportunity for scientists to validate Einstein’s theory of general relativity, demonstrating the accuracy of physicists’ predictions once more.
In 2025, an international network of gravitational wave detectors, built around state-of-the-art laser interferometers, recorded a significant distortion in space-time designated GW250114. The event is attributed to the merger of two black holes.
These advanced detectors, such as the US Laser Interferometer Gravitational-Wave Observatory (LIGO) and Italy’s Virgo detector, have achieved unprecedented sensitivity since LIGO’s first detection in 2015. Consequently, GW250114 offers the clearest and most detailed data on gravitational wave phenomena to date, serving as a unique testing ground for well-established physical theories.
Recently, researchers applied data from GW250114 to test a theorem Stephen Hawking posited over half a century ago. The theorem holds that the surface area of the event horizon of a merged black hole cannot be smaller than the combined horizon areas of its progenitor black holes. The findings confirmed Hawking’s prediction with near certainty.
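Stated compactly: for non-spinning (Schwarzschild) black holes the horizon area scales with the square of the mass, and Hawking’s theorem requires the merged hole’s horizon area to be at least the sum of its progenitors’ areas (spinning holes obey the same inequality with a modified area formula):

```latex
A \;=\; 16\pi \left( \frac{G M}{c^{2}} \right)^{2},
\qquad
A_{\mathrm{final}} \;\ge\; A_{1} + A_{2}
```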
Keefe Mitman and his team at Cornell University took this analysis a step further by assessing whether black hole mergers comply with Albert Einstein’s theoretical framework.
Einstein’s equations describe how massive objects move through and curve space-time. By solving these equations for the merging black holes, researchers can visualize the dynamics: the black holes spiral together, accelerate, collide, release substantial energy, and subsequently resonate at distinct frequencies, akin to a bell chiming after a strike.
These frequencies, referred to as ringdown modes, were relatively faint in prior gravitational wave events, obscuring the complex structures foreseen by Einstein. However, GW250114 generated enough amplitude to effectively validate the predicted oscillation patterns. Mittmann and his colleagues utilized simulations based on Einstein’s equations to estimate the intensity and frequencies of the black hole’s oscillations. The actual measurements closely aligned with these predictions.
“The amplitudes of the data we measured align remarkably well with the predictions of numerical relativity,” Mittmann confirms. “Einstein’s equations may be complex to solve, yet the correlations observed at the detector validate general relativity.”
“The conclusion is clear: Einstein’s predictions still hold true,” states Laura Nuttall from the University of Portsmouth, UK. “All observations correspond to Einstein’s assertions regarding gravity.”
Despite the impressive amplitude of GW250114, the signal is still faint enough that Mittmann’s team could not rule out deviations from Einstein’s predictions smaller than about 10 percent. This limitation primarily reflects current detector sensitivities and should shrink as gravitational wave detection technology evolves. Any real deviation from Einstein’s theory would show up as a persistent discrepancy.
“As we catalog more events or observe larger singular events, the measurement error margins can approach zero—or diverge,” Mittmann notes. “A divergence would be considerably more intriguing.”
Discover the Mysteries of the Universe: Cheshire, England
Join a weekend experience with some of science’s leading minds and delve into the mysteries of the universe, featuring an immersive tour of the iconic Lovell Telescope.
Updated on February 11, 2026
Amended information regarding the characteristics of ringdown modes in prior gravitational wave events.
The trigeminal nerve is a critical target in migraine treatment.
Jitendra Jadhav/Alamy
There is a new wave of migraine treatments on the horizon, focusing on a previously overlooked neural pathway that may provide relief. Understanding various migraine mechanisms is essential, given that migraines affect over 1 billion people globally, especially those who do not respond to standard therapies.
Despite past failures in drug trials, skepticism about this neural pathway’s significance is fading. Recent placebo-controlled studies call for a reevaluation of earlier assumptions about its role in migraine treatment.
Messoud Ashina and his team at the University of Copenhagen investigated substance P, a neuropeptide linked to migraines. This molecule, released by the trigeminal nerve, causes pain through blood vessel dilation and inflammation in the meninges, thus amplifying pain signals.
Recent findings show that substance P injections induce headaches, with 71% of non-migraine individuals exhibiting dilation of the superficial temporal artery, a response similar to that seen in migraine sufferers, validating substance P’s role in these conditions.
Following the late-1990s dismissal of substance P as a viable target for migraine drugs, largely due to previous drug failures, Ashina’s team proposed that targeting a single receptor, the neurokinin-1 receptor (NK1-R), was too simplistic. It is now known that substance P interacts with multiple receptors, including MRGPRX2, enhancing pain signals.
“Previous trials failed because they targeted NK1-R alone,” Ashina explains. Michael Moskowitz at Harvard recognized the trigeminal nerve’s pivotal role in migraines. “Blocking substance P’s broad effects could open new therapeutic doors. With our evolving knowledge, it’s time to revisit this strategy.”
Current advancements allow for monoclonal antibodies that block substance P directly. This class of drug has already proven effective against another migraine target, calcitonin gene-related peptide (CGRP), and researchers are also exploring pituitary adenylate cyclase-activating polypeptide (PACAP).
Recently, Danish pharmaceutical company Lundbeck presented initial findings from a randomized controlled trial on an anti-PACAP monoclonal antibody called Bocnevert, which reportedly decreased monthly migraine days compared to a placebo. “This data is a positive development,” says Lars Edvinsson from Lund University. Full results are expected to be shared at an upcoming conference.
With this shift in focus, there’s potential to reduce reliance on CGRP inhibitors, which have transformed migraine management since their U.S. approval in 2018, effectively halving migraine days for many. However, 40% of users still struggle.
“While CGRP drugs are effective for many, they are not universal,” says Peter Goadsby from King’s College Hospital, who collaborated on CGRP research in the 1990s. “Finding new solutions for the millions still underserved remains a pressing challenge.”
Further research is expected on the impact of inhibiting these peptides. “Substance P, CGRP, and PACAP interact with the meningeal vessel wall but do so uniquely, so there is room for optimism,” Moskowitz adds. A combination approach targeting multiple pathways may enhance treatment efficacy for non-responders.
However, it is uncertain whether drugs targeting substance P and PACAP will eclipse the effects of CGRP antagonists, which are released in higher quantities from the trigeminal nerve. “I do not believe that these alternatives can fully replace CGRP’s impact,” Edvinsson states.
Currently, I am reading The Big Oyster: History on the Half Shell, a captivating account that chronicles New York City’s rich relationship with oysters. As a local resident, I was only vaguely aware of how significant the oyster population was to the city and the restoration efforts that are underway.
Upon the arrival of Europeans in the early 1600s, they were astonished by the oysters, which were reportedly the size of their feet. The Lenape Indians consumed so many oysters that they created massive shell heaps, referred to by archaeologists as middens.
Even today, construction workers frequently encounter these ancient shell mounds while excavating for subway tunnels and railroads.
In his book, journalist Mark Kurlansky intricately weaves together historical narratives, archaeological findings, and urban records, illustrating New York City’s transformation from a natural haven to a bustling concrete metropolis. This new perspective has profoundly altered my view of the city.
The Impact of Climate Change: Increased Frequency of Disasters
Source: Associated Press/Alamy
More than a decade after the 2015 Paris Climate Conference, it appears we remain stagnant in climate action efforts. While the rise of electric vehicles and the dominance of renewable energy over coal present positive trends, fossil fuel companies are still expanding and global emissions exceed 41 gigatonnes of CO2 annually.
At the Paris Conference, a hopeful vision emerged: nations committed to restricting the increase in global temperatures to 1.5 degrees Celsius above pre-industrial levels. Despite this ambition, little has changed in a decade. The framework to determine when we exceed this temperature threshold may not be confirmed until 2040, long after it’s already transpired.
The crucial 1.5°C threshold has become synonymous with dangerous climate change, significantly influencing global climate policy. Warnings about exceeding this limit’s risks have not translated into the aggressive emissions reductions that science necessitates.
But why the inaction? The core issue is the misconception that 1.5°C is a target to aim for instead of a limit we must prevent crossing. In 2015, global average temperatures had only risen by 1°C, suggesting ample time to react. This false sense of security allowed governments and fossil fuel industries to argue for a status quo while still contributing 37 gigatonnes of CO2 to our atmosphere.
As we inch closer to the 1.5 degrees Celsius mark, debates continue about alternative indicators to measure our progress. Options like the rate of renewable energy adoption have been proposed, but the most pressing indicator remains the global temperature rise — a crucial standard that reflects climate system responses and allows for comparisons to historical episodes of rapid warming.
Some advocate for considering 1.6°C or 1.7°C as new thresholds, as every fraction of a degree is critical. However, this approach is flawed; it risks becoming another target rather than a limit, and given the current rate of temperature increase (0.27°C per decade), we might surpass these figures as soon as the mid-2030s. Swift action on emissions is unlikely to keep us below these revised limits.
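As a rough check on the mid-2030s figure, the arithmetic can be sketched in a few lines. Only the 0.27°C-per-decade rate comes from the text; the assumed current warming of about 1.3°C and the 2025 start year are illustrative assumptions, and the linear extrapolation ignores any change in the warming rate.

```python
# Rough, linear crossing-year estimate.
RATE_PER_YEAR = 0.27 / 10    # °C per year, from the observed per-decade rate
CURRENT_WARMING = 1.3        # °C above pre-industrial (assumed for illustration)
CURRENT_YEAR = 2025          # assumed reference year

def crossing_year(threshold_c):
    """Year a warming threshold is reached if the current rate continues."""
    return CURRENT_YEAR + (threshold_c - CURRENT_WARMING) / RATE_PER_YEAR

for t in (1.5, 1.6, 1.7):
    print(f"{t:.1f} °C crossed around {crossing_year(t):.0f}")
```

Under these assumptions, 1.6°C falls in the mid-2030s and 1.7°C around 2040, which is why shifting the goalposts by a tenth of a degree buys only a few years.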
The reality is that redefining the threshold could make matters worse, tying climate policy to yet another limit that is likely to be broken. Instead, we should focus on tracking the rise in average global temperature clearly and visibly. First, we need a reliable methodology that allows us to track this figure in near real time, without a decade-long wait. Climate scientist Richard Betts and his colleagues from the Met Office have already developed an effective approach.
Next, we require a visual representation that resonates with the public. Imagine a global thermometer that updates annually, akin to the Bulletin of the Atomic Scientists’ Doomsday Clock. Such a periodic event could highlight the gradual increase in global temperatures, emphasizing crossing or approaching critical thresholds, thereby communicating the urgent need for action against escalating climate threats.
Bill McGuire serves as an Emeritus Professor of Geophysics and Climate Hazards at University College London. His forthcoming book, The Fate of the World: The History and Future of the Climate Crisis, will be published by HarperNorth in May.
Both artists and astronomers play a crucial role in transforming our observations of the universe into compelling narratives. The exhibit Cosmos: The Art of Observing the Universe at the Royal West of England Academy in Bristol, UK, explores this fascinating process.
“We recalibrate our perceptions through prolonged gazing,” says the exhibition curator, artist Ione Parkin. This exhibition, running until April 19, invites visitors to dive into their own observational journey, merging art and science in unique insights.
The image above illustrates how Janet Kerr collaborated with communities in Iceland, Greenland, the Shetland Islands, and Somerset to create stunning solargraphs that capture the sun’s path over months of exposure.
This work by Alex Hartley intricately intertwines solar panels with photographs of Neolithic standing stones, illustrating the continuity of solar technology from ancient to contemporary times.
Parkin’s vibrant paintings swirl in red, orange, and bright white, evoking the dynamic nature of superheated plasma from the sun’s surface.
Finally, Michael Porter’s Impossible Landscape explores the realms beyond empirical knowledge, blending familiar geological textures with otherworldly aesthetics, prompting viewers to dream beyond the observable universe.
Understanding Ultra-Processed Foods: A Key to Preventing Premature Aging
In recent months, I attempted to coin a new term to describe the modern influences accelerating aging, such as obesity, stress, heatwaves, and environmental pollution. I suggested labeling our current situation as an “aging environment,” inspired by the commonly understood concept of an “obesogenic” environment. Unfortunately, my term hasn’t gained traction, but there’s another critical aspect that requires attention—ultra-processed foods (UPFs).
What Are Ultra-Processed Foods?
For those unfamiliar, ultra-processed foods are pre-packaged items that undergo extensive manufacturing, commonly containing refined ingredients like sugars, fats, and proteins, along with potentially harmful synthetic additives such as dyes and preservatives. Typically low in essential nutrients like fiber and vitamins, these foods are high in fat, salt, and sugar. Common examples include:
Microwave meals
Salty snacks
Mass-produced breads
Sugary drinks
Instant noodles
Ice cream and candy
Baked goods
Processed meats
Condiments like mayonnaise and ketchup
Rising Consumption of Ultra-Processed Foods
Over the past five decades, UPFs have increasingly dominated Western diets. In high-income countries, including the UK, over half of caloric intake now comes from these harmful foods. While the trend has plateaued in recent years, global demand for UPFs remains high, largely due to their convenience and affordability.
Health Risks Linked to UPFs
Research has consistently shown that a high intake of UPFs correlates with a range of chronic health issues, including:
Obesity
Cancer
Type 2 diabetes
Cardiovascular disease
Inflammatory bowel disease
Fatty liver disease
Kidney disease
Moreover, a growing body of evidence indicates that high UPF consumption increases overall mortality risk. Studies conducted in Spain, France, and the US found that individuals with the highest UPF intake were significantly more likely to die compared to those with lower consumption.
UPFs and Premature Aging
Recent research points to a strong connection between UPFs and premature aging. A 2024 study examined the diets of 16,055 U.S. adults aged 20 to 79, revealing that a higher percentage of calories from UPFs corresponded to accelerated biological aging. Specifically, every 10% increase in caloric intake from UPFs was associated with a 0.21-year increase in biological age.
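Taken at face value, the reported association is linear, so the gap between two diets can be sketched directly. The example percentages below are hypothetical; only the 0.21-years-per-10-percentage-point figure comes from the study described above.

```python
def upf_age_acceleration(upf_calorie_share_pct):
    """Extra biological age (years) relative to a 0% UPF diet,
    using the study's reported 0.21 years per 10 percentage points."""
    return (upf_calorie_share_pct / 10) * 0.21

# Hypothetical comparison: 50% of calories from UPFs vs 20%.
difference = upf_age_acceleration(50) - upf_age_acceleration(20)
print(f"{difference:.2f} years older biologically")  # prints 0.63
```

Note this is a population-level association, not a prediction for any individual, and says nothing about causation.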
Though skeptics may question the accuracy of biological age measurement, it is crucial to note that these studies compare groups rather than individuals, mitigating measurement biases. Even modest increases in biological age have been linked to higher risks of chronic disease and mortality.
Implications and Future Research
While studies like NHANES primarily snapshot dietary impacts, they suggest that UPFs contribute significantly to the aging environment alongside other factors like obesity and environmental stressors. Researchers debate whether it’s the poor nutritional quality of UPFs or the processing methods that cause accelerated aging.
Despite the unknowns, two substantial studies across diverse populations consistently link high UPF consumption to accelerated aging. The takeaway is clear: if possible, avoid ultra-processed foods.
While navigating a world saturated with UPFs is challenging, prioritizing whole, real foods remains beneficial. Let’s raise awareness and combat the aging environment we live in.
Exoplanet K2-18b has generated immense intrigue due to hints of potential life; however, an extensive analysis of radio signals revealed no evidence of an advanced civilization.
In 2025, Nikku Madhusudhan and researchers from the University of Cambridge reported that K2-18b, located 124 light-years away, may exhibit traces of dimethyl sulfide (DMS) molecules in its atmosphere. Given that a significant amount of DMS on Earth is produced by biological processes, Madhusudhan and his team suggested these signals might indicate signs of life on K2-18b.
Further observations, however, indicated that the DMS signal could be attributed to non-biological sources. Current scientific consensus holds that K2-18b is abundant in water, potentially featuring oceans or a water-laden atmosphere.
Madhusudhan and fellow researchers are now exploring the possibility of intelligent extraterrestrial life on K2-18b by searching for radio signals akin to those humans have been broadcasting since the 1960s.
Using the Very Large Array in New Mexico and the MeerKAT radio telescope in South Africa, researchers observed K2-18b over multiple orbits around its star, focusing on radio frequencies similar to those emitted from Earth. The observations were sensitive enough to detect a transmitter with strength comparable to that of the now-defunct Arecibo radio telescope in Puerto Rico.
After meticulously eliminating potential terrestrial interference sources, the researchers found no signals indicating that K2-18b possesses a powerful radio transmitter. Some of the researchers were unavailable to comment to New Scientist on their findings.
“If a beacon akin to Arecibo were continuously transmitting from K2-18b, we likely would have detected it,” said Michael Garrett from the University of Manchester, UK.
“Of course, a lack of detection does not imply the absence of life; it simply restricts a specific and likely rare type of signal: a continual, relatively narrowband radio transmitter operating within the observed frequency range,” Garrett explained. “Civilizations, should they exist, might not utilize radio technology in this manner or may transmit intermittently, directionally, or at lower power levels. Furthermore, in aquatic environments, very low-frequency radio waves could be more common.”
Garrett posits that while alien water worlds may support simple life forms, the absence of exposed land could complicate the evolution of advanced intelligent life capable of developing technology. “The pathway to establishing complex infrastructure could vastly differ from what we have experienced on Earth.”
Explore Mysteries of the Universe in Cheshire, England
Join some of science’s brightest minds for a weekend exploration of the universe’s mysteries. The program includes a tour of the iconic Lovell Telescope.
Used electric vehicle (EV) batteries have the potential to fulfill two-thirds of China’s grid storage requirements by storing energy when renewable sources are plentiful and delivering power during peak demand periods.
During times when the wind isn’t blowing and the sun isn’t shining, the generation of renewable energy may decline, risking supply shortages, particularly during peak demand times in the mornings and evenings, as well as in winter. Typically, natural gas and coal plants compensate for this gap. Countries like China, the USA, the UK, and Australia are constructing large-scale battery-based grid storage solutions to harness renewable energy for later use.
As electric vehicle adoption rises, experts like Ma Ruifei from Tsinghua University argue that repurposed EV batteries can be integrated into the power grid, accelerating the transition to a carbon-neutral power system more affordably. Their research indicates that used batteries could meet 67% of China’s power grid storage needs by 2050, while simultaneously reducing costs by 2.5%.
EV batteries naturally degrade over time with repeated charging and discharging cycles and are often discarded once they reach about 80% of their original capacity. Although this degradation impacts the vehicle’s range and acceleration, it has minimal effect on grid storage applications, where multiple batteries are charged and discharged over extended periods.
“It still retains ample power, and when utilized for storage, its degradation is relatively slow,” says Gil Lacey from Teesside University, UK.
“Materials that are costly to mine and process for batteries should not be wasted when the cells still have 80% usable capacity,” asserts Rhodri Jarvis from University College London. “There’s significant interest in utilizing second-life battery packs, not only for cost reduction but also for enhancing sustainability.”
In a related study, researchers have drawn differing conclusions regarding whether energy storage using used batteries is more cost-effective than new lithium-ion batteries, whose prices are steadily decreasing.
However, with the increasing popularity of electric vehicles, used batteries may become a more economical option. More than 17 million electric vehicles were sold in 2024, accounting for about 20% of global car sales, with nearly two-thirds of them purchased in China.
The study projects that in a scenario where various battery chemistries are procured across China and utilized at 40% of their original capacity, second-life grid storage will grow significantly after 2030, as the demand for new batteries stabilizes. By 2050, total capacity is anticipated to reach 2 terawatts.
In a contrasting scenario that relies solely on new batteries and pumped hydro storage (where water is pumped into a reservoir and released to drive turbines), the total capacity would only reach about half of this figure.
Second-life battery storage remains largely untested; however, US startup Redwood Materials has implemented a 63-megawatt-hour project using 10-year-old car batteries to power a data center in Nevada. The company claims its system is priced under $150 per kilowatt-hour and can deliver power for over 24 hours, exceeding the capabilities of new lithium-ion batteries.
Nonetheless, used batteries must be sorted and grouped by similar capacity. Otherwise, the management system must be able to bypass individual batteries, or the whole group will stop charging once the weakest battery reaches full capacity.
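The weakest-cell effect described above can be sketched numerically. This is a minimal illustration: the pack capacities are invented, and the model ignores voltages, charge rates, and all other real-world details.

```python
# Minimal sketch of why capacity matching matters for second-life packs:
# without per-pack bypass, a group stops charging when its weakest pack
# is full, stranding capacity in the healthier packs.
def usable_charge(capacities_kwh, can_bypass):
    """Energy (kWh) the group can absorb in one charge cycle."""
    if can_bypass:
        # Each pack can be charged to its own limit.
        return sum(capacities_kwh)
    # Every pack stops when the weakest one reaches full capacity.
    return min(capacities_kwh) * len(capacities_kwh)

# Hypothetical second-life packs of mismatched capacity (kWh).
packs = [42.0, 38.5, 30.0, 40.2]
print(round(usable_charge(packs, can_bypass=False), 1))  # 120.0
print(round(usable_charge(packs, can_bypass=True), 1))   # 150.7
```

Sorting packs into groups of similar capacity narrows the gap between these two figures, which is why grading used batteries matters even when a bypass-capable management system is available.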
Furthermore, damaged batteries need to be identified, and every several hundred cells must be equipped with temperature and voltage sensors. Overheating can result in significant fire hazards.
“The risks are obviously elevated, so ensuring safety, isolation, balance, and implementing robust risk-reduction measures is crucial,” Lacey emphasizes.
Time crystals present a remarkable concept in quantum physics. New research indicates that these intriguing materials could play a pivotal role in the development of ultra-accurate clocks.
All crystals are characterized by a repeating structure. Traditional crystals consist of atoms organized in a repeated pattern, while time crystals exhibit structures that repeat over time. Observing a time crystal reveals a consistent repetition of configurations. This cyclical behavior occurs naturally, not because the material is forced, but because it represents its lowest energy state, much like ice is the stable phase of cold water.
Ludmila Viotti and a team from Italy’s Abdus Salam International Centre for Theoretical Physics have demonstrated that time crystals could serve as excellent components for precise quantum timekeeping devices.
The researchers performed a mathematical analysis of systems with up to 100 quantum mechanical particles. Each particle displayed two states defined by its quantum spin properties, akin to how a coin has two sides. The specific spin system they investigated can exist as either a time crystal or a conventional phase that lacks spontaneous time oscillation, providing potential for clock functions in either form. The study compared the accuracy of timekeeping using spins in both the time crystal and normal phases.
As Viotti explains, “In the normal phase, seeking finer temporal resolutions results in exponentially decreased accuracy. However, the time crystal phase offers significantly improved precision at the same resolution.” For instance, standard spin-based clocks tend to lose accuracy when measuring seconds over minutes, a challenge that could be mitigated with time crystal configurations.
Mark Mitchison, a researcher at King’s College London, acknowledges the promising applications of time crystals in horology but notes that rigorous evaluations of their advantages have been scarce. His research group has previously established that random sequences can function as clocks. However, systems that maintain self-sustaining oscillations inherently possess a more clock-like nature.
“While time crystals have been theorized for nearly a decade, the methods to utilize them remain unclear,” remarks Krzysztof Sacha from Jagiellonian University in Poland. “Just as regular crystals find diverse applications in both jewelry and computing, we anticipate that time crystals will pave the way for similarly innovative technologies.”
While time crystals may not surpass the accuracy of today’s leading atomic clocks, they could offer viable alternatives to satellite-based timekeeping systems like GPS, which are vulnerable to interference. Additionally, clocks based on time crystals may lay the foundation for sensitive magnetic field sensors, as minor magnetic disruptions can affect clock performance, according to Mitchison.
Despite the potential, Viotti emphasizes that extensive research is needed before practical implementation. She indicates that their spin system should undergo comparisons with other accurate clock systems and require experimental validation involving real spins.
Recent findings highlight the emergence of early mining and hunting tools.
Raul Martin/MSF/Science Photo Library
Subscribe to Our Human Story, a monthly newsletter exploring revolutionary archaeology. Sign up today!
In headlines about human evolution, terms like “oldest,” “earliest,” and “first” dominate. I’ve authored numerous articles featuring these phrases.
This isn’t just an attention-grabbing tactic; it serves a purpose. When researchers identify evidence suggesting a species or behavior predates previous estimates, it elucidates our understanding of timelines and causations.
For instance, it was once believed that all rock art originated no earlier than 40,000 years ago, attributed solely to Homo sapiens, as Neanderthals were thought to have vanished by then. New evidence suggests that some prehistoric art predates this threshold, indicating Neanderthal artistic expression.
The past month has unveiled a flurry of “earliest” discoveries, prompting reflections on the reliability of such timelines. How can we ascertain the true age of early technologies?
Let the Exploration Begin!
During excavations in southern Greece, archaeologists unearthed two wooden tools estimated to be about 430,000 years old—possibly the oldest known wooden tools. One is believed to be a drilling rod, while the function of the other remains uncertain.
These tools are close in age to the previous record holders, including the Clacton spear from Britain, approximately 400,000 years old, and wooden spears found in Schöningen, now reassessed to nearly 300,000 years old.
Bone tools also emerged in Europe during this epoch. For instance, in Boxgrove, England, remnants from an elephant-like creature, possibly a steppe mammoth, were fashioned into hammers. These elephant bones date back 480,000 years, marking the oldest known utilization of elephant bone in Europe. However, in East Africa, ancient humans were crafting tools from elephant bones over 1.5 million years ago—perhaps much earlier.
Shifting our chronological lens, a recent discovery in Xigou, central China, reported a collection of 2,601 stone artifacts dating between 160,000 and 72,000 years ago, featuring composite tools attached to wooden handles—possibly the earliest evidence of such technologies in East Asia.
Moreover, an archaeological revelation in South Africa indicated that 60,000 years ago, early humans employed poisoned arrows for hunting, as evidenced by five arrowheads lined with toxic plant fluids.
Each of these findings carries deeper implications.
Examining the Past
Traces of plant toxins discovered on arrow points
Marlize Lombard
The oldest verified wooden tools we have may not represent the absolute earliest. Preservation issues plague prehistoric wooden artifacts; they tend to decay, leading to gaps in the historical record.
According to Katerina Harvati, who directs the wooden tools excavation, people likely used such tools well before 400,000 years ago, but earlier examples remain undiscovered.
Woodworking is simpler than stone knapping, and since chimpanzees can fashion rudimentary wooden tools, it is plausible that wooden tools were among humanity’s earliest technologies. Finding a million-year-old wooden tool would be astonishing, but not entirely implausible.
Consequently, significant narratives on human technological advancements shouldn’t solely pivot on the age of the earliest wood tools. Confidence in tool usage timelines necessitates rigorous investigation into various age groups.
As for poisoned arrows, these are recognized as the earliest validated forms of poisoned arrowheads. Nonetheless, designs akin to contemporary poisoned arrows have been identified from tens of thousands of years ago. Like wood, poison’s organic nature leads to rapid decay.
We can be reasonably confident here. Poisoned arrows are a composite technology that emerged late in the evolutionary timeline; we would not expect them from early hominins such as Ardipithecus or Australopithecus.
Turning to prehistoric art, we find a wealth of complexity.
Exploring Prehistoric Graffiti
Hand stencils from a cave in Indonesia
Ahdi Agus Oktaviana
While cave paintings are iconic, other forms like carvings and engravings offer their own challenges in dating. If a sculpture is buried in sediment, its age can usually be determined based on sediment analysis. However, dating cave art proves trickier. Charcoal-based works that are less than 50,000 years old offer more reliable carbon dating, whereas those beyond this window yield inconclusive results.
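The roughly 50,000-year ceiling on radiocarbon dating follows from simple decay arithmetic, sketched below using the standard carbon-14 half-life of about 5,730 years; the sample ages are arbitrary examples.

```python
# Why radiocarbon dating fades beyond ~50,000 years: after many half-lives
# of carbon-14, almost none of the original isotope remains to measure.
HALF_LIFE = 5730  # carbon-14 half-life in years

def fraction_remaining(age_years):
    """Fraction of the original carbon-14 left after age_years."""
    return 0.5 ** (age_years / HALF_LIFE)

for age in (10_000, 50_000, 70_000):
    print(f"{age:>6} years: {fraction_remaining(age):.3%} of C-14 left")
```

At 50,000 years only a fraction of a percent of the original carbon-14 survives, so measurement noise and contamination swamp the signal; beyond that window, other methods such as uranium-series dating of mineral crusts take over.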
Recently, hand stencils found in caves on the Indonesian island of Sulawesi were dated to at least 67,800 years old, rivaling a similar stencil in Spain attributed to Neanderthals, arguably the oldest rock art known.
Notably, the phrase “at least” matters significantly in this context. Dating relies on surface rock layers created through mineral deposits, which are only minimally informative. The artworks beneath could be much older.
The goal here isn’t to assert that we lack all knowledge, but rather, we possess a wealth of understanding, much of it newly uncovered in the last two decades. We must strive for a coherent timeline in human evolution and cultural development while acknowledging uncertainties.
In paleontology, having numerous specimens enhances reliability. Instead of studying charismatic prehistoric animals like dinosaurs, paleontologists often focus on smaller organisms that leave abundant fossil records, enabling deeper insights into their evolutionary progress.
However, in human evolution, the fossil record is uneven. Individual hominid species may number in the dozens, yet the early specimens remain scarce, hindering our understanding of their longevity and geographical spread. The relationship between evolved species also eludes clarity amidst possible complicated derivations.
Conversely, stone tool records are extensive, dating back to the 3.3-million-year-old Lomekwian stone tools in Kenya. Older tools may yet turn up, though early hominins such as Orrorin (6 to 4.5 million years ago) and Ardipithecus (5.8 to 4.4 million years ago) likely spent most of their time in trees, making tool-making unlikely.
Wooden tools present their own challenges. Our knowledge remains limited and fragmented, largely due to preservation issues. A reliable timeline for the evolution of wooden tools seems elusive.
When it comes to ancient art, the challenges are primarily technical. Preserved artworks are available, yet accurate dating techniques are limited. Creating a chronology for artistic development poses immense challenges, although advancements in technology may facilitate progress over time. With any luck, by retirement, I hope to have a clearer understanding of the evolution of ancient human artistic practices.
In essence, all narratives about human evolution are, to some degree, provisional. This holds true across paleontological studies, especially for narratives with more uncertainty. The timeline of non-avian dinosaur extinction is quite clear-cut; however, human evolution allows for more variability. Further excavations and improved dating methods should refine our understanding, but some uncertainties may remain.
Neanderthals, the Origins of Humanity, and Cave Art: France
From Bordeaux to Montpellier, embark on a fascinating journey through time as you explore southern France’s significant Neanderthal and Upper Paleolithic sites.
A newborn marsupial, weighing less than a grain of rice, has been photographed for the first time crawling towards its mother’s pouch. This remarkable observation highlights the unique gestation and development process of marsupials.
Unlike placental mammals, which give birth to more developed young, marsupials experience a brief gestation period before their young must navigate to the mother’s pouch to continue their growth.
According to Brandon Menzies from the University of Melbourne, this remarkable process remains largely undocumented for many of Australia’s rarer marsupials, even those kept in captivity. Menzies and his team care for several hundred fat-tailed dunnarts (Sminthopsis crassicaudata) and aim to work with Colossal Biosciences on efforts to resurrect the extinct Tasmanian tiger.
Despite establishing the colony decades ago and monitoring female fertility closely, the exact details of how marsupials give birth and the young’s attachment to the teats have never been documented before now.
Menzies explained that the phenomenon is difficult to observe: there is no pregnancy test for the species, the animals are nocturnal, and births occur at night. Within a 12-to-24-hour window, a litter of newborns is born, and the young take just 30 minutes to reach the pouch.
Adult Fat-tailed Dunnarts
Emily Scicluna
In 2024, researchers noted blood in an enclosure. An examination revealed tiny newborns, just 5 milligrams each, making their way towards their mother’s pouch.
“We observed them waving, crawling, and wriggling,” Menzies stated. “It’s a freestyle swim type of crawl, similar to a commando crawl.”
Young Dunnarts in Their Mother’s Pouch
Emily Scicluna
Realizing this was a groundbreaking moment, Menzies captured 22 seconds of footage before carefully returning the mother to her enclosure. The team believes gravity plays a crucial role in guiding the young towards the pouch.
Researchers estimate the newborns achieved around 120 movements per minute while crawling.
Reaching a nipple is just the first challenge. Many marsupials, including fat-tailed dunnarts, produce more offspring than they have nipples to nurse them: a female can give birth to up to 17 pups but can rear only 10, while the Tasmanian devil can produce as many as 30 pups with just four nipples.
Menzies expressed amazement that fat-tailed dunnarts can give birth to such mobile pups merely 14 days after conception. It was previously believed that these tiny babies could not enter the pouch without maternal assistance.
“The ability to crawl independently into the pouch underscores the remarkable developmental capabilities of this species,” he remarked. “Just a week ago, these were fertilized eggs consisting of mere cells.”
Fossil Hunting in the Australian Outback
Embark on an extraordinary adventure through Australia’s fossil frontier. Once a shallow inland sea, eastern Australia has transformed into a fossil hotspot. Over 13 memorable days, journey deep into the hinterland and uncover secrets from Earth’s ancient past.
Discover the Massive Solar Power Plant of Port Augusta, South Australia
Brooke Mitchell/Getty Images
As South Australia progresses toward achieving its ambitious goal of operating entirely on solar and wind energy, electricity prices have decreased by one-third over the past year, making them the lowest in Australia. This state serves as a leading example of the economic advantages linked to large-scale grid decarbonization.
Tim Buckley, an independent energy analyst at Climate Energy Finance, an Australian think tank, says South Australia has emerged as a global leader in the renewable energy transition. “While this transformation comes with challenges, we take pride in our successes. South Australian consumers are now reaping the benefits of consistently low electricity prices,” he states.
In the final quarter of 2025, South Australia generated an impressive 84% of its electricity from solar and wind sources. This exceptional percentage makes it the highest among major global power grids, with plans to reach 100% by the end of next year.
This renewable energy push is driving electricity prices down. The Australian Energy Market Operator (AEMO) reported that South Australia’s average wholesale electricity price fell by 30% in the last quarter of 2025 compared with the previous year. Consequently, the state now ties with Victoria, which also has a significant share of renewable energy, for the cheapest electricity in Australia.
This is a significant achievement for the South Australian government, which was previously criticized for rising electricity prices during its rapid renewable rollout. Electricity bills occasionally surged because the state relied on costly gas-fired generation when wind and solar output dwindled. Gas producers charged high prices to meet this sporadic demand, a problem exacerbated when Australian natural gas prices surged by as much as 500% following Russia’s invasion of Ukraine.
To reduce price fluctuations, South Australia has built seven large grid-scale batteries, each comparable in size to a football field. These batteries store energy generated by nearby solar and wind farms on sunny, windy days and provide backup power when wind and solar output falls short. Two new batteries introduced in 2025 contributed significantly to lowering energy costs.
The success of these batteries has motivated other Australian states to invest in similar technology. Last week, the consultancy Rystad Energy noted in a report that utility-scale batteries are no longer merely a supplementary technology in Australia’s energy landscape but are actively replacing gas-fueled generation across several states. Australia is consequently becoming a global showcase for the effectiveness of battery storage.
A new massive wind farm in South Australia, named Goyder South, which opened in October, is also contributing to lower electricity prices. With a capacity of 412 megawatts, it is the largest in the state and is expected to boost wind energy generation by 20%. Buckley emphasizes, “Basic economics indicates that increasing supply leads to reduced prices.”
AEMO’s report also noted that wholesale electricity prices were negative for approximately 48% of the time in the past quarter, meaning the state was producing more electricity than it consumed; during these periods of excess generation, suppliers are effectively paid to halt production.
For instance, in November, South Australia achieved a remarkable milestone when renewable energy met 157% of its electricity demand. At such times, some wind and solar farms are temporarily disconnected from the grid, while batteries store the surplus energy for later use or export to neighboring Victoria.
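The arithmetic behind these headline figures is simple enough to sketch. The snippet below is purely illustrative: the 157% and 48% values come from the article, while the half-hourly price series is invented for the example.

```python
# Illustrative sketch only: the 157% demand figure and the idea of
# negative-price intervals come from the article; the toy price data
# below is invented.

def renewable_surplus(generation_mwh: float, demand_mwh: float) -> dict:
    """Split renewable output into demand met locally and the surplus
    available for battery storage or export to a neighboring state."""
    met = min(generation_mwh, demand_mwh)
    surplus = max(generation_mwh - demand_mwh, 0.0)
    return {"met": met, "surplus": surplus,
            "share_of_demand": generation_mwh / demand_mwh}

# November example: renewables met 157% of demand, so for every
# 100 MWh consumed, 57 MWh were left over to store or export.
split = renewable_surplus(generation_mwh=157.0, demand_mwh=100.0)
print(split)

def negative_price_share(prices: list[float]) -> float:
    """Fraction of settlement intervals with a negative wholesale price,
    i.e. times when generators are effectively paid to switch off."""
    return sum(p < 0 for p in prices) / len(prices)

toy_prices = [-12.0, -5.0, 30.0, 85.0, -1.0, 40.0]  # invented $/MWh values
print(f"{negative_price_share(toy_prices):.0%} of intervals negative")
```

The same split explains why batteries lower prices: the surplus that would otherwise force disconnections is absorbed and resold when output dips.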
Many households in South Australia are also reducing their dependency on grid electricity by generating their own power. Over half of the residences now have solar panels installed, harnessing daytime sunlight for energy. Additionally, nearly 50,000 households have invested in home storage batteries, charged during the day and utilized after sunset. Since the launch of a 30% rebate on household batteries in July 2025, South Australia leads the nation in per capita home battery installations.
In December, the state concluded agreements to construct two additional large-scale wind farms, vital steps toward achieving its goal of 100% net renewable energy by next year. “We are well on our way to achieving this target, and the new wind farms will play a critical role in this success,” Buckley stated.
Innovative Gene Therapy Delivered as a Mist for Lung Cancer Treatment
Nico De Pasquale Photography/Getty Images
An innovative inhaled gene therapy targeting lung cancer is rapidly advancing toward potential approval following encouraging results from clinical trials.
Dr. Wen Wee Ma at the Cleveland Clinic presented the findings at a recent American Society of Clinical Oncology conference in Chicago, stating, “Very encouragingly, this proves our hypothesis: the lung tumors actually shrank.”
This groundbreaking treatment employs a virus to introduce immune-boosting genes into lung cells, enhancing their natural ability to combat tumors. Unlike traditional gene therapies, which often replace defective genes, this method focuses on modifying existing lung cells.
The unique inhalation delivery method represents a significant advancement in cancer treatment. “This is a completely different approach to anti-cancer treatment,” said Ma. Directly targeting the lungs enhances the efficiency and effectiveness of the treatment, particularly since lung cancer is notoriously difficult to treat with standard oral or intravenous therapies.
The therapy uses a harmless, modified herpes virus to deliver two critical genes into lung cells: those encoding interleukin-2 and interleukin-12. These proteins, naturally produced by the body, help inhibit tumor growth, but tumors often suppress their production, and the gene therapy aims to restore it.
Since 2024, clinical trials have been ongoing with patients suffering from advanced lung cancer who have exhausted all other treatment options. The therapy is administered via a fine mist inhaled directly into the lungs.
At the oncology conference, Ma reported that the gene therapy has successfully reduced lung tumor sizes in three out of eleven trial participants, while also halting growth in another five. Although some patients reported side effects, such as chills and vomiting, no severe safety issues were noted.
Based on these promising outcomes, the U.S. Food and Drug Administration recently granted “Regenerative Medicine Advanced Therapy Designation” to the gene therapy, facilitating expedited approval processes for patient access.
However, it is important to note that this gene therapy is specifically designed for lung tumors and does not address tumors that have metastasized to other body parts. To expand its efficacy, Ma and his team are exploring combinations with immunotherapy and chemotherapy in a trial involving approximately 250 patients.
Krystal Biotech, the developer of this gene therapy, previously introduced the first FDA-approved gene therapy targeting the skin, using a similar modified herpes virus to treat patients with recessive dystrophic epidermolysis bullosa, a rare skin condition. The company is also developing inhaled gene therapies for cystic fibrosis and alpha-1 antitrypsin deficiency, both inherited lung diseases.
The ancestors of Britain’s Bell Beaker people inhabited wetlands and heavily relied on fishing.
Sheila Terry/Science Photo Library
Analysis of ancient DNA has unveiled the origins of a fascinating group that emerged in Britain around 2400 BC and, within just a century, all but completely displaced the builders of Stonehenge.
This group is associated with the Bell Beaker culture, which emerged in Western Europe during the Early Bronze Age and is named after the distinctive pots its people left behind. While the culture was previously thought to have stemmed from Portugal or Spain, recent research indicates that the people who populated Britain originated in the delta regions of Northwest Europe, across the North Sea. Remarkably, this resilient group maintained aspects of its hunter-gatherer lifestyle and ancestry for thousands of years, despite the spread of early farming communities across Europe.
David Reich and his team from Harvard University analyzed the genomes of 112 individuals who lived in the present-day Netherlands, Belgium, and western Germany between 8500 and 1700 BC.
“The Netherlands was once considered a mundane place, with every square inch traversed millions of times. Yet, it reveals itself as one of the most intriguing areas in Europe.”
The DNA sequenced in Reich’s lab indicates that this population emerged from the Rhine-Meuse delta, bordering the Netherlands and Belgium. This group derived from resourceful hunter-gatherer communities, thriving on fish, waterfowl, game birds, and diverse plant life found in the flooded wetlands surrounding these expansive rivers.
Neolithic farmers originating in Anatolia began to expand throughout Europe around 6500 BC, likely because their agricultural advantage allowed them to raise larger families than hunter-gatherers could. This led to the near disappearance, or significant dilution, of hunter-gatherer genetic ancestry in the regions where farmers settled.
However, research reveals that these wetlands served as zones where farmers’ genetic influx remained minimal for thousands of years. The dynamic, often flooded environments of rivers, swamps, dunes, and peat bogs posed significant challenges for early farmers, yet offered abundant opportunities for those adept at surviving in such terrains, as noted by Luc Amkreutz at the National Archaeological Museum in Leiden, Netherlands. “These hunter-gatherers charted their course from a position of strength.”
Genetic testing indicates that, despite maintaining their hunter-gatherer lifestyle, the wetland people gradually mixed with farmers through intermarriage. Their Y chromosomes, passed down male lineages, remained local, while their mitochondrial DNA and X chromosomes show a steady influx of genetic contributions from farmers’ daughters. “This revelation was unexpected for us,” remarks Evelyn Altena of Leiden University Medical Center. “Without DNA, this knowledge would remain elusive.”
Reich posits that this interaction was likely peaceful, with men remaining at their homesteads while women migrated. An element of conflict cannot be ruled out, however, and the extent of any reciprocal exchange remains uncertain because DNA preserves poorly in the drier regions where the farmers settled.
Bell Beaker Pottery from Germany
Peter Endig/DPA Picture Alliance/Alamy
Archaeological findings indicate that, over time, these hunter-gatherers adopted pottery techniques, cultivated grains, and domesticated animals, yet they retained core aspects of their original way of life.
Then, circa 3000 BC, a nomadic group known as the Yamnaya began migrating west from the vast steppes of modern Ukraine and Russia. Their interactions with eastern European farmers gave rise to the Corded Ware culture, named for its pottery decorated with cord impressions. Although their descendants spread throughout much of Europe, they had minimal influence on the delta region.
Excavations revealed a skeleton from this era that bore the Yamnaya Y chromosome alongside pots, some evidently used for cooking fish. This exemplifies how wetland inhabitants creatively integrated foreign objects into their traditional practices, though overall, very few people bore steppe ancestry.
The dynamics shifted with the arrival of the Bell Beaker culture around 2500 BC. This group, carrying a mix of steppe and farmer ancestry, introduced steppe genes into the wetland peoples’ DNA, though notable portions of both hunter-gatherer and early farmer ancestry, approximately 13 to 18 percent, were retained. The wetland communities may have begun to fade into history from that point onwards, yet the saga was far from over.
Human remains analyzed from Oostwoud, Netherlands
North Holland Archaeological Depository (CC by 4.0)
Recent studies reveal that those who arrived in Britain around 2400 BC bore an almost identical genetic mixture of Bell Beaker and wetland-community ancestry. Within a century, they had largely or entirely replaced the Neolithic farmers who constructed Stonehenge. “Our model shows that at least 90 percent, and up to 100 percent, of the original ancestry vanished from Britain,” observes Reich.
It remains uncertain if this transition commenced with the influx of the Bell Beaker culture or if other groups preceded them. Before their arrival, Britons commonly cremated their deceased, resulting in minimal DNA preservation.
Regardless, the extent of change was “so dramatic that it defies belief,” according to Reich. The rapid population replacement has captured archaeologists’ attention since it was first suggested in a 2018 study. Reich theorizes that a plague-like disease, perhaps brought over from continental Europe, may have played a role, with the native British population being more susceptible to such ailments.
Team members contend that religious fervor likely did not drive the transition, notes Harry Fokkens of Leiden University. “Monuments like Stonehenge and Avebury continued to see use and expansion even after their creators disappeared.”
Michael Parker Pearson from University College London is intrigued by the ways in which the new inhabitants adopted British monument styles, like henges and stone circles, whilst simultaneously introducing new lifestyles, including different pottery and clothing styles.
The Bell Beakers also introduced metalworking to Britain, with certain gold ornaments discovered in Beaker tombs in England bearing striking similarities to those found in Belgium.
Discover the Origins of Humanity: A Gentle Walk Through Prehistoric Times in South-West England
Immerse yourself in the fascinating early human eras of the Neolithic, Bronze Age, and Iron Age on this special walking tour.
Almost everyone has tales of lost love or romantic rejection, and psychologist Paul Eastwick is no exception. As an undergraduate at the turn of the millennium, he fell for a student named Anna, a stunning, tall aspiring poet fluent in Russian. While he may have seen himself as more of a “6” to her “9,” they spent some time together before he was “friend-zoned,” and she ultimately pursued relationships elsewhere.
Eastwick, now a psychology professor at the University of California, Davis, has coined the term “EvoScript” for a prevalent view of dating in which such rejection seems inevitable. In this dating “marketplace,” individuals possess distinct “mate values” based on factors like looks, intelligence, and social status, and selectively pair with the highest-value partners they can attract for the best possible offspring. Navigating this marketplace, he notes, produces a hierarchy of potential partners: “Either find your place and stay put, or run wild like Icarus.”
While this view claims support from the psychological literature and has become widely accepted in popular culture, Eastwick’s informative new book, Bonds through Evolution: What We Get Wrong About Love and Connection, argues that it is fundamentally flawed.
“Passion tends to fade merely weeks after potential romantic partners connect”
Many experiments supporting the EvoScript view assessed mate value by having participants rate images of unfamiliar individuals. In these instant assessments, people often agree on who is attractive, suggesting an innate ranking based on genetic traits. But this approach ignores the fact that first impressions can shift after personal interaction. Through studies that take far more time and effort, Eastwick and his team demonstrated that as people genuinely get to know each other, perceptions of their mate value rapidly diverge.
In essence, supposed mate value can be fleeting. As Eastwick summarizes, “Even if I find you attractive, there’s only a 53% chance that others will concur.” This could be disheartening news for those who consider themselves physically appealing. He continues by stating that “Potential romantic partners seem to lose their allure just weeks following their meeting.”
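The “53 per cent” figure is about rater agreement: when many people rate the same individuals, only part of the variance in ratings is shared consensus, and the rest is personal taste. A minimal illustrative sketch of that decomposition follows; the ratings matrix is invented, and the calculation is a simplified stand-in for the statistical models used in such studies.

```python
# Illustrative only: the ratings matrix is invented; the decomposition
# mirrors the consensus-vs-personal-taste logic behind the 53% figure.
from statistics import mean, pvariance

# rows = raters, columns = people being rated (scores out of 10)
ratings = [
    [7, 3, 8, 5],
    [6, 4, 9, 4],
    [8, 2, 7, 6],
]

# Consensus component: variance of each target's average rating.
target_means = [mean(col) for col in zip(*ratings)]
consensus_var = pvariance(target_means)

# Total variance across all individual ratings.
all_scores = [s for row in ratings for s in row]
total_var = pvariance(all_scores)

# Share of rating variance that raters agree on; the remainder is
# idiosyncratic taste (whether others "concur" with your judgment).
consensus_share = consensus_var / total_var
print(f"shared 'consensus' share of variance: {consensus_share:.0%}")
```

With real data the consensus share is far lower than in this tidy toy matrix, which is exactly Eastwick’s point: much of what looks like a universal ranking is individual taste.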
Eastwick proposes that compatibility ultimately plays a crucial role in determining who we love, albeit challenging to foresee. Although individuals can readily articulate preferences—such as being drawn to extroverted or adventurous people—his research indicates these traits have minimal impact on actual relationship choices. Intriguingly, we are more likely to be content with partners exhibiting three unrelated traits: being friendly, intelligent, and successful. “What truly counts,” he notes, “is not matching a worn-out checklist, but rather the feelings stirred within you,” which are fostered through chaotic conversations.
Similarly, Justin Garcia, executive director at the Kinsey Institute, reaches a comparable conclusion in his recent publication, Intimate Animals. Although Garcia employs the market-based vocabulary Eastwick challenges, he acknowledges that first impressions surrounding dating abilities can mislead. “We quickly judge partnerships appearing mismatched at first sight, yet the overall value of each partner is considerably more intricate than we assume,” he argues.
Both authors highlight the significance of “self-expansion” in intimate relationships. Garcia emphasizes that personal growth, new experiences, and fresh viewpoints often prove attractive in partnerships.
These insights resonate with both seasoned and novice daters. While online dating has broadened the pool of potential partners, choices often stem from superficial evaluations that evolve once mutual acquaintance deepens. Consequently, many face disappointments prior to finding “the one” (or at least “the right one”).
Considering compatibility’s importance, Eastwick suggests giving most individuals at least three chances before forming a judgment about whether to continue dating. He states, “Third impressions generally offer a more reliable predictor than much of the currently tested information.” He also encourages creative encounters beyond traditional settings like dinners or drinks, urging couples to explore diverse activities such as roller skating, karaoke, or chocolate tastings as a means of assessing compatibility.
Continuing to nurture real-life friendships is equally important. Evidence shows that we are significantly more inclined to find love with someone we are familiar with rather than a total stranger. Social connections, at the very least, can yield numerous advantages, enhancing both physical and mental wellness.
For these reasons, Eastwick recommends maintaining a positive relationship with dating partners. Reflecting on his experiences with Anna, he realized that platonic relationships are indeed attainable. After a difficult period, his emotions for her faded, paving the way for friendship and an expanded social circle. “The joy of broadening your connections is incredibly fulfilling, and Anna appreciated that,” he concluded. It appears that the friend zone may not be such a negative space after all.
After numerous books advocating cynical strategies for “playing” the dating game, it’s refreshing to encounter two works that present evidence-based optimism regarding our chances of discovering love that resonates with our true selves. Embrace opportunities to connect with others, remain honest and respectful, and observe how feelings evolve. It’s straightforward, yet these simple strategies might just elevate your love life.
David Robson is the author of The Laws of Connection: 13 Social Strategies that Will Change Your Life.
3 Essential Reads on Relationships
Find Love: Navigating modern relationships and discovering your ideal partner by Paul C. Brunson
Is it increasingly challenging to find romance in the 21st century? Tinder’s scientific advisor elaborates on evolving ideals and highlights common pitfalls in our search for love.
This book provides evidence-based techniques for fostering mutual growth in long-term relationships, including strategies for enhancing communication and tackling inevitable challenges.
Single at Heart: Embracing the power, freedom, and joy of single living by Bella DePaulo
Society often emphasizes the need to pair up; however, as social psychologist DePaulo illustrates, an increasing number of individuals find joy in singlehood. This myth-busting exploration stands as a counter to the frenzy surrounding Valentine’s Day.