Sneezing and coughing are prevalent symptoms of hay fever
Mohammad Hosein Safaei/Unsplash
Individuals suffering from hay fever may find relief with a novel “molecular shield” designed to stop pollen from penetrating the nasal lining, likely with fewer side effects than traditional treatments.
Hay fever is an allergic response triggered by pollen interacting with IgE antibodies found in the nose, mouth, and eyes, leading to inflammation and symptoms like sneezing and itching. Common treatments, such as antihistamines and steroids, help reduce inflammation but often come with side effects, including drowsiness.
Seeking alternatives, Kaissar Tabynov at the Kazakh National Agrarian Research University and his team first collected blood samples from mice. They then isolated antibodies that did not participate in the allergic response but could bind to major mugwort pollen allergens, a major trigger of hay fever. This binding prevented the allergens from connecting with IgE antibodies in laboratory tests. “It acts as a molecular shield,” Tabynov explains.
To evaluate the shield’s effectiveness, the researchers induced mugwort pollen allergies in 10 mice by injecting them with allergens and chemicals to stimulate an immune response.
After a week, they administered small amounts of liquid containing the pollen-blocking antibodies into the noses of half the mice, gradually increasing the dose over five days. The other mice received saline. An hour after each dose, the mice were exposed to mugwort pollen at concentrations similar to those encountered during peak pollen season, according to Tabynov.
After the final pollen exposure, the mice receiving the antibody treatment showed an average of 12 nose rubs over five minutes, in stark contrast to 92 in the saline group.
The researchers also aimed to diminish inflammation, and confirmed their success by imaging nasal tissue collected from the mice at the study’s conclusion. The imaging revealed that the treatment had systemic as well as localized effects. “Our research is the first to show that allergen-specific monoclonal antibodies can be administered intranasally to achieve both local and systemic protection against plant pollen allergies,” states Tabynov.
While the researchers did not assess potential side effects, they do not anticipate the adverse reactions associated with oral hay fever treatments, since the antibodies act at the site of allergen entry.
“This study represents a significant breakthrough and underscores the promise of intranasal therapies for allergic rhinitis [hay fever]. It lays the groundwork for early clinical trials exploring this method in humans,” remarks Sayantani Sindher from Stanford University in California.
Nonetheless, translating success in mice to human applications may prove challenging, and the antibodies will need to be modified to ensure they do not provoke an unexpected immune response in humans, Tabynov notes. If all goes well, the team hopes to advance this method to a nasal spray for human use within the next two to three years, he adds.
Such sprays could also address additional pollen types responsible for hay fever. “We envision a future where tailored antibody sprays can be made for individuals with sensitivities to different pollen varieties,” muses Tabynov.
Paleontologists have extracted ancient enamel protein sequences from a fossilized tooth of Epiaceratherium sp., a rhinocerotid that lived in the High Arctic of Canada between 24 and 21 million years ago (early Miocene). The recovered sequences enabled researchers to determine that this ancient rhino diverged from other rhinocerotids between the Eocene and Oligocene epochs, approximately 41-25 million years ago. Additionally, the findings illuminate the split between the two principal rhino subfamilies, Elasmotheriinae and Rhinocerotinae, indicating a more recent divergence than previously thought, around 34-22 million years ago.
Reconstruction of three extinct rhinoceros species: foreground features a Siberian unicorn (Elasmotherium sibiricum), accompanied by two Merck rhinos (Stephanorhinus kirchbergensis); In the distant background is a wooly rhino (Coelodonta antiquitatis). Image credit: Beth Zaiken.
Dr. Mark Dickinson and his team from the University of York investigated the teeth of Epiaceratherium sp. using a method known as chiral amino acid analysis, which reveals how proteins have degraded, and been preserved, over time.
By assessing the degree of protein breakdown and comparing it with previously studied rhino material, they confirmed that the amino acids originated from the teeth themselves, not from later contamination.
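Chiral analysis works because amino acids in living tissue are almost exclusively the left-handed (L) form and slowly convert to the right-handed (D) form after death. Below is a minimal sketch of the standard reversible first-order racemization kinetics; the rate constant is purely illustrative, not a value from the study, and real rates depend on the amino acid, temperature, and burial environment.

```python
import math

def dl_ratio(t_years, k=1e-8, dl0=0.0):
    """D/L ratio after t years of racemization.

    Standard reversible first-order kinetics: the quantity
    ln[(1 + D/L) / (1 - D/L)] grows linearly with time at rate 2k.
    k here is an illustrative rate constant, not one from the paper.
    """
    c = math.log((1 + dl0) / (1 - dl0)) + 2 * k * t_years
    # invert: D/L = (e^c - 1) / (e^c + 1)
    return (math.exp(c) - 1) / (math.exp(c) + 1)

# Cold, stable settings like the High Arctic imply a small k, so even
# after ~21 million years the D/L ratio stays well below equilibrium (1.0).
for t in (1e6, 21e6, 100e6):
    print(f"{t:>11,.0f} years: D/L ≈ {dl_ratio(t):.3f}")
```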
“It’s astounding that these techniques allow us to revisit the past and delve deeper,” Dr. Dickinson remarked.
“Armed with our understanding of ancient proteins, we can now pose intriguing new questions regarding the evolution of ancient life on Earth.”
The rhinoceros holds particular significance as it is currently categorized as an endangered species. Exploring its extensive evolutionary history offers vital insights into how past environmental shifts and extinctions have influenced present biodiversity.
Historically, scientists have depended on the morphology of fossils or, more recently, ancient DNA (aDNA) to reconstruct the evolutionary narratives of long-extinct species.
Nonetheless, aDNA typically does not last more than a million years, constraining its utility in unraveling deep evolutionary history.
Traces of ancient proteins have been detected in even older fossils, but sequences complete enough for robust reconstructions of evolutionary lineages had previously been limited to samples no more than about 4 million years old.
The latest research significantly broadens this temporal scope, indicating that proteins may endure across extensive geological timescales under optimal conditions.
“Success in analyzing ancient proteins from such old specimens provides fresh perspectives for scientists globally, who possess remarkable fossils in their collections,” stated Dr. Fazeera Munier of York University.
“This crucial fossil aids our understanding of the distant past.”
The results were published in the journal Nature this week.
____
R.S. Patterson et al. Phylogenetically significant proteins from the Early Miocene. Nature, published online July 9, 2025; doi: 10.1038/s41586-025-09231-4
A spokesman for Greene stated that lawmakers have been “discussing this matter for quite some time” and asserted that the bill is unrelated to the floods in Texas.
In a follow-up email, Greene communicated with Zeldin and expressed encouragement over his actions.
“This is an uncontrolled experiment conducted in the atmosphere without consent. It’s reckless, dangerous, and must be halted,” she stated in an email.
Burchett’s office did not immediately respond to inquiries for comment.
Following hurricanes Milton and Helene, NOAA issued a factsheet in October 2024 aiming to debunk “weather modification claims” that emerged after the two storms impacted Florida and North Carolina. The agency declared it would not “fund or engage in cloud seeding or any weather modification projects.”
Zeldin’s reference to more fringe theories regarding extreme weather coincides with the Trump administration’s reduction in climate change research funding and the removal of a website hosting the government’s climate assessment. President Donald Trump referred to climate change as a hoax, despite scientists uncovering stronger evidence linking the intensity and frequency of extreme weather to global warming.
Decades of research on weather modification have often fueled conspiracy theories.
From 1962 to 1982, NOAA participated in a project called Stormfury, which investigated whether hurricane intensity could be altered. The project did not achieve its goals and was ultimately discontinued, and NOAA has not undertaken similar research since, according to the factsheet.
Cloud seeding is a weather modification technology currently utilized. This practice has existed since the 1950s and typically involves dispersing silver iodide into clouds to extract moisture from the atmosphere, resulting in additional precipitation. Presently, cloud seeding programs are mainly focused on enhancing water supplies in western states. Companies are required to notify authorities before implementing such measures.
“Cloud seeding doesn’t generate water; it aids surrounding clouds in releasing 5-15% of their moisture. However, Texas was already experiencing 100% humidity, extreme moisture, and storms. The clouds didn’t require assistance,” Cappucci stated.
The proliferation of these claims coincides with escalating threats directed at meteorologists.
Geoengineering is a legitimate scientific field; however, assertions regarding its capability to control significant weather patterns and generate adverse weather are unfounded. Most geoengineering techniques remain theoretical and untested, with federal researchers making only tentative steps to evaluate their viability. Atmospheric scientists report no evidence of any large-scale programs.
Last year, in Alameda, California, a small test project in geoengineering, referred to as Marine Cloud Brightening, was disrupted by community protestors, despite researchers demonstrating its safety.
Psychotherapist Jonathan Alpert described how conspiracy theories tend to surge, particularly during moments of weather events that leave individuals feeling powerless.
“Conspiracy theories offer emotionally gratifying narratives. They restore a sense of control by framing phenomena as intentional actions by powerful entities rather than unpredictable chaotic events,” Alpert told NBC News. “In this context, believing ‘someone is doing this to us’ is more bearable than facing the idea that ‘no one is in charge.'”
While some interpret the EPA’s actions as a sign of transparency, others view it merely as a recent political maneuver to sidestep critical environmental issues.
“Some individuals question whether birds are real or not. Will that become your next focus?” Congressman Don Beyer, D-Va., remarked in response to Zeldin’s comments on Thursday morning. He went on to comment on X regarding the EPA guidelines: “How much taxpayer money will be expended on this?”
The flooding began in Texas before rains hit North Carolina, New Mexico, and Illinois.
In just one week, at least four events classified as 1,000-year rainfalls occurred across the United States; each such event has only about a 0.1% chance of occurring at a given location in any given year.
“It’s rare for these intense rainfall events to occur in any given year,” stated Kristina Dahl, vice president of science at Climate Central.
Some experts noted that this cluster is a striking statistical observation, likely linked to climate change, and that such events may become more frequent.
Last week, heavy rains led to catastrophic flash floods in central Texas, claiming at least 120 lives across six counties. The Guadalupe River near Kerrville rose over 20 feet within just 90 minutes, causing widespread destruction.
Days later, Tropical Storm Chantal brought heavy rain to North Carolina, with reports of severe flooding in the central region, where some locations received nearly 12 inches of rain within a mere 24 hours. Local officials are still assessing the death toll from the Thursday floods amidst ongoing monitoring.
In New Mexico, three individuals lost their lives on Tuesday when a devastating flash flood swept through the remote village of Ruidoso, situated approximately 180 miles south of Albuquerque.
On the same day in Chicago, 5 inches of rain fell in merely 90 minutes around Garfield Park, necessitating multiple rescue operations in the west side of the city.
While experts acknowledge that 1,000-year floods are statistically rare, they also highlight that significant rain events happen every year in the U.S.
“The probability for any specific location is only 0.1% annually, meaning it’s highly unlikely to experience such an event in your area, yet they do occur somewhere in the country each year,” explained Rus Schumacher, director of the Colorado Climate Center at Colorado State University.
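A minimal sketch of that arithmetic, assuming purely for illustration that the country can be treated as a few thousand independent locations (the location counts are assumptions, not figures from the article):

```python
# Chance that at least one 1-in-1,000-year rainfall happens somewhere,
# given N independent locations, each with p = 0.001 per year.
p = 0.001
for n in (1, 100, 5000):
    p_any = 1 - (1 - p) ** n  # complement of "no event anywhere"
    print(f"{n:>5} locations: {p_any:.1%} chance per year")
# With thousands of locations, an event "somewhere" is near-certain
# every year, even though each individual spot remains very unlikely.
```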
He emphasized that climate change is likely to increase the frequency of these extreme flood incidents.
While pinpointing the exact impact of climate change on specific weather events can be challenging, scientists concur that a warmer atmosphere leads to more intense rainfall and severe storms.
“This area demonstrates a strong correlation because the underlying physics is relatively straightforward,” Schumacher noted.
A warmer environment can retain more water, leading to storms that can unleash vast amounts of rain. Research suggests that for every degree Fahrenheit that the planet warms, the atmosphere can hold about 3% to 4% more moisture.
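As a rough consistency check, the commonly cited Clausius-Clapeyron rate of about 7% more moisture per degree Celsius converts to just under 4% per degree Fahrenheit, matching the article's range. The 7% rate is a standard approximation, not a number from the article:

```python
# Convert the commonly cited Clausius-Clapeyron rate (~7% more moisture
# per degree Celsius of warming) into a per-degree-Fahrenheit rate.
rate_per_C = 0.07
rate_per_F = (1 + rate_per_C) ** (5 / 9) - 1  # 1 °F = 5/9 °C
print(f"~{rate_per_F:.1%} more moisture per °F")  # ≈ 3.8%
```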
“It’s mathematically certain that as the atmosphere retains more water, it can release more during storms,” stated Dave Gouchs, a hydrologist who directs forecast services for a company based in Mammoth Lakes, California, focusing on snow and water resource measurements.
However, terrain also plays a critical role during heavy rainfall events, Gouchs added.
In Texas, the hills and canyons are particularly prone to flash flooding, as the thin soil above the bedrock limits water absorption, according to Gouchs.
In New Mexico, the village of Ruidoso was severely affected by last year’s wildfires, leaving burn scars that exacerbate runoff and heighten the risk of flash floods.
The recent events highlight the devastating consequences of climate change on extreme weather, as well as the urgent need for community protection measures both before and after such incidents, remarked Dahl from Climate Central.
She emphasized that recovery efforts could take years, with ongoing public health implications that may last even longer.
“These events come and go in the news cycle. We move on to the next story before fully grasping the impact,” Dahl pointed out. “For those affected, it’s easy to forget that healing from such events is a prolonged process.”
Approximately 500,000 stars illuminate this section of the Milky Way galaxy
NASA, ESA, CSA, STScI, and S. Crowe (University of Virginia).
One significant challenge in discussing space and spacetime is the difficulty of grasping the vastness of the universe. It can be a struggle just to comprehend the scale of our solar system. For instance, if we model the sun as being 1 centimeter in diameter, Pluto would need to be positioned 42 meters away! This distance is far greater than most homes can accommodate.
However, our solar system is quite small when compared to the scale of the Milky Way. Beyond the fact that our galaxy resides within an unseen halo of dark matter that extends far beyond what we can see, the Milky Way itself is immense; it would take light about 100,000 years to cross it. In contrast, light travels from the sun to Pluto in only 5.5 hours.
Notice that I’ve switched from everyday distance measures to units based on the speed of light: about 100,000 light years, equivalent to 9.46 × 10^20 meters. How can one visualize such vastness? Any everyday comparison, such as the scale of a ballroom, falls short. And the Milky Way is diminutive compared to the entire universe; it isn’t even considered a particularly large galaxy, with our neighboring Andromeda being twice its width.
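The numbers in that scale model are easy to verify. Here is a minimal sketch using standard astronomical values:

```python
# Shrink the sun to a 1-centimeter ball and see where Pluto lands,
# then convert the Milky Way's diameter into meters.
SUN_DIAMETER_M = 1.39e9        # meters
SUN_PLUTO_M = 5.9e12           # meters (about 39.5 au)
LIGHT_YEAR_M = 9.46e15         # meters

scale = 0.01 / SUN_DIAMETER_M  # model meters per real meter
print(f"Pluto in the model: {SUN_PLUTO_M * scale:.0f} m away")  # ~42 m

milky_way_m = 100_000 * LIGHT_YEAR_M
print(f"Milky Way diameter: {milky_way_m:.3g} m")               # ~9.46e20 m
```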
Moreover, spacetime is continuously expanding. This expansion doesn’t influence distances within gravity-bound regions like our solar system or the Milky Way, nor does it set the distance between gravitationally bound neighbors such as the Milky Way and Andromeda. Those two galaxies are actually moving towards one another, but the eventual collision will resemble a gentle dance rather than a catastrophic crash, and it is at least 4.5 billion years away!
However, on a grander scale, spacetime extends, causing clusters of galaxies to drift apart. This phenomenon is known as the Hubble expansion and implies that many measurements of spatial distance are subject to change. Billions of years down the line, future observers will have different calculations due to the expanding gap between us and the Virgo galaxy cluster.
Typically, these figures inspire awe, but they inevitably invite skepticism. A common question is how we ascertain these measurements. The answer lies in a “ladder” of measurements that astronomers use. Often, distances can be determined through objects with known brightness, such as certain types of stars.
Why don’t distant galaxies appear blurry, considering the expansion of space-time?
The simplest method employs Cepheid variable stars, which pulsate periodically, to calculate distances. These stars are effective over a specific range, after which another method is needed. Over the past three decades, astronomers have relied on specific types of supernovae, as they understand how their light behaves during the expansion of space-time. Other techniques also exist, like measuring the properties of bright red giant stars.
We possess a high level of confidence in our ability to measure long distances. However, we recognize why some readers raise questions about this process. One inquiry pertains to what happens to light as the universe expands. The standard view in cosmology is that, as space-time expands, light waves stretch, leading to a redshift, much like the pitch of a receding siren drops. As previously noted, measuring this redshift is crucial for using supernovae to calculate distances.
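Cosmological redshift has a simple bookkeeping form: the wavelength of light stretches by the same factor as space itself. A minimal illustration follows; the wavelengths are made up for the example, not taken from any real supernova:

```python
# Redshift z relates observed and emitted wavelengths:
#   1 + z = lambda_observed / lambda_emitted
# For cosmological redshift, 1 + z also equals the factor by which
# the universe has expanded since the light was emitted.
lam_emitted = 500.0   # nm, hypothetical spectral line at the source
lam_observed = 650.0  # nm, hypothetical measured wavelength

z = lam_observed / lam_emitted - 1
print(f"z = {z:.2f}: space has stretched by a factor of {1 + z:.2f}")
print(f"each photon now carries {1 / (1 + z):.1%} of its original energy")
```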
Redshift indicates that light has lower energy than it did previously. However, there’s no apparent place for this “lost” energy to go, raising doubts. In Newtonian physics, energy must be accounted for, but this isn’t necessary in general relativity. In essence, the mechanisms that enable us to measure vast distances contradict our everyday understanding of how energy behaves in the universe.
Another related question from readers involves images of distant galaxies, like the first photo from the new Vera C. Rubin Observatory. Shouldn’t galaxies appear blurry due to the expansion of space-time?
It’s important to clarify that “observing” the expansion of space-time isn’t like watching an F1 race. It’s more akin to viewing an F1 race that unfolds over billions of years; the vast distances make the galaxies appear practically stationary. The only indicators we have of their separation are measurements like redshift, which simply track how light stretches over distances—not real-time observations of a galaxy’s motion.
I genuinely enjoy these types of questions as they delve into the nuances of how science communicators engage with their audiences. I appreciate that New Scientist readers challenge these metaphors to their limits!
Chanda’s Week
What I’m reading
Alice’s Adventures in Wonderland, and a lot about the reasons behind its popularity.
What I’m seeing
I finally enjoyed viewing Station Eleven.
What I’m working on
I’ve been pondering a lot about the true nature of quantum fields. Curious!
Chanda Prescod-Weinstein is an associate professor of physics and astronomy as well as a core faculty member in women’s studies at the University of New Hampshire. Her latest book is The Disordered Cosmos: A Journey into Dark Matter, Spacetime, and Dreams Deferred.
Concept illustration for the US DARPA Liberty Lifter initiative
Aurora Flight Sciences
This isn’t quite a boat or an airplane; it’s an ocean-skimming craft known as an ekranoplan.
Echoing Cold War-era Soviet technology, these substantial craft are resurfacing as both China and the US explore modern adaptations amid rising military tensions in the Pacific Ocean.
The large sea skimmer resembles an aircraft, but as Malcolm Davis of the Australian Strategic Policy Institute explains, “it operates similarly to a fast naval vessel, gliding just above the water’s surface.” These vehicles leverage the “ground effect”: a cushion of air between the low-flying craft and the ocean enhances lift and decreases drag.
Ocean skimmers typically outpace conventional ships (approaching aircraft speeds) and can potentially evade surface and aerial radar, Davis notes. This capability allows for covert and expeditious transport of goods or troops over substantial oceanic distances typical of the Indo-Pacific area, or for surprising enemy naval forces with missile strikes.
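The drag saving behind the ground effect can be sketched with the textbook induced-drag formula. The ground-effect factor below is an illustrative assumption, since the real reduction depends strongly on how low the craft flies relative to its wingspan:

```python
import math

def induced_drag_coeff(cl, aspect_ratio, e=0.8, ground_factor=1.0):
    """Classic induced-drag estimate: CDi = CL^2 / (pi * e * AR).

    ground_factor < 1 models flight in ground effect; the 0.5 used
    below is an illustrative assumption, not a measured value for
    any real ekranoplan.
    """
    return ground_factor * cl**2 / (math.pi * e * aspect_ratio)

cl, ar = 1.0, 6.0
free_air = induced_drag_coeff(cl, ar)
in_effect = induced_drag_coeff(cl, ar, ground_factor=0.5)
print(f"free air CDi: {free_air:.4f}, in ground effect: {in_effect:.4f}")
# Lower induced drag at the same lift is what lets these craft cruise
# fast and efficiently just above the water.
```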
This technology became notorious during the Cold War, when the Soviet Union developed ekranoplans, notably a prototype dubbed the “Caspian Sea Monster.” However, due to funding issues and limited practical utility, the designs were never fully realized, according to Davis. Renewed interest in sea skimmers aligns with China’s military ambitions to assert its influence over Taiwan and the South China Sea.
Since the early 2000s, China has been working on prototypes of ocean skimmers, says Ben Lewis, an independent defense analyst based in Washington, DC. A photograph circulating online in June 2025 shows a large waterborne craft with four jet engines mounted on its wing, as reported by Naval News. China is also seeking expertise from Russian technologists involved in ekranoplan designs during the Soviet era, as highlighted by the New York Times.
Similarly, the US Defense Advanced Research Projects Agency (DARPA) had been funding the Liberty Lifter project since 2022, aimed at developing analogous seaplanes. However, this program concluded in June 2025 without yielding a successful craft; instead, DARPA intends to leverage lessons from Liberty Lifter to encourage private sector involvement and broaden military applications.
On a different note, US company Regent Craft is currently testing an all-electric sea glider variant of this technology for commercial potential, which has piqued the interest of the US Marines.
As manufacturing and technological advancements continue, these ocean skimmers “may present a cost-effective alternative to more expensive traditional aircraft,” according to Brendan Mulvaney of the US Air Force’s China Aerospace Studies Institute in Alabama. However, he cautions that “they won’t be the backbone of any military force and are unlikely to survive in high-intensity engagements.” Conditions in regions like the Taiwan Strait can also complicate their operation, notes Lewis.
Nevertheless, these sea skimmers could contribute to a broader Chinese military strategy to counter the US-allied navy projected to support Taiwan, argues Davis. The US is responding by fostering military partnerships with regional allies such as South Korea, Japan, and the Philippines, while also bolstering military presence on Pacific Islands as bases. Lewis points out that the possibility of conflict has escalated the need for innovative capabilities to gain an “additional edge.”
Feedback delivers the latest updates in science and technology from New Scientist, covering trending topics in the field. If you have stories that might captivate our readers, feel free to email Feedback@newscientist.com.
The Dream of Electricity
Recently, Feedback was struck by the plethora of conference invitations we’ve received. Many come from organizers who operate under the pretense of contributing to science journalism, often resulting in underwhelming proposals about advances in G protein signaling, new discoveries related to mollusk biology, and so forth. However, one invitation stood out among the rest: an event taking place in Shaoxing, China.
Its opening line reads: “Love and Sex with Robots”. This is the 12th edition of the international congress of that name, slated for June 2026.
Before you conjure visions of a cybernetic utopia or dystopia, remember this is an academic conference, albeit one with TED Talk-level hype. They profess to be “preparing for an extraordinary convergence of visionary scientists, renowned researchers, and innovative thinkers who are redefining human intimacy with pioneering robotics and AI.” Participants can expect “incredible revelations, ground-breaking demonstrations, and provocative discussions exploring the future of love, relationships, and technology.”
While researching the conference online, we discovered that there is a “Supreme Council” guiding its vision and direction, composed entirely of male members. The “Supreme Leader”—and no, we did not make this title up—is David Levy, who might be recognizable to New Scientist readers as the author of the 2007 book Love and Sex with Robots. Our reviewer pointed out that Levy’s tendency to focus on the physical aspects of robotics sometimes leads him to ludicrous conclusions, making his arguments hard to take seriously.
Regardless, the organizers know what they’re doing. The invitation confidently states, “This is a meeting that the entire world is buzzing about,” and indeed, Feedback is discussing it.
Drone Defense
Recently, New Scientist contributor David Hambling examined an interesting question (June 21st, p. 8): how to combat drones, whether with countermeasures that can knock them out of the sky, physical barriers like nets, or electronic measures to disable threats. Ultimately, he found the situation somewhat complex.
Reader Robert Bull wrote in to point out that both the problem and a possible solution had already been described by Robert Bunker, an expert in security and counter-terrorism.
A press release directed us to the journal Frontiers in Psychology and a study titled “More Dreams of the Rarebit Fiend: The Correlation of Food Sensitivity with Sleep and Dreams”. If you’re puzzled over the mention of rarebit, you might not be familiar with Welsh cuisine; the paper describes it as “spicy melted cheese on toast”.
The authors were interested in whether specific foods genuinely impact sleep, as folklore suggests. They surveyed 1,082 individuals online and found that around one in five participants believed certain foods influenced their sleep quality, with some claiming they affected dreams as well. At this juncture, Feedback was less than impressed, finding it hard to get excited about self-reported beliefs.
However, the paper dives deeper—perhaps too deep. Researchers found a notable link between reports of vivid nightmares and instances of lactose intolerance, suggesting that individuals with lactose intolerance may experience more nightmares due to aftereffects of consuming cheese.
This revelation certainly caught Feedback’s interest. Of course, lactose is the sugar inherent in milk, which until relatively recently could only be digested by infants. Over centuries, certain populations developed the ability to digest lactose as adults. Those lacking this trait tend to suffer from lactose intolerance when consuming dairy.
Interestingly, most cheeses have minimal lactose content, as the cheese-making process effectively removes it; this could have contributed to cheese’s early popularity among dairying communities. Thus, it’s doubtful that lactose-intolerant individuals would suffer greatly from cheese-induced nocturnal troubles.
What a twist this journey took! We thought this would be a whimsical tale about cheese, but instead we’ve uncovered a nuanced discussion of dietary effects on sleep. Feedback will continue to keep an eye on these small but nagging concerns. Be warned!
Have you shared your feedback?
You can send your stories to Feedback through email at feedback@newscientist.com. Please include your home address. You can also find this week’s and past feedback on our website.
The space surrounding Jupiter is among the most extreme environments in our solar system, and the plasma present there is equally remarkable, exhibiting wave patterns never seen before.
Robert Lysak, from the University of Minnesota, studies auroras. On Earth these are captivating displays of green and blue light; near Jupiter’s poles, their counterparts glow mainly in ultraviolet light that is invisible to the naked eye.
To comprehend the auroras on this distant planet, it’s vital to grasp the intricacies of the plasma that generates these lights, the soup of charged particles that envelops the planet. Insights gathered from NASA’s Juno spacecraft have led Lysak and his team to identify that Jupiter’s auroral plasma resonates with a novel type of wave.
This newly identified wave is a combination of two well-characterized types of plasma waves: the Alfvén wave, which arises from the motion of ions, and the Langmuir wave, which corresponds to the movement of electrons. Lysak points out that since electrons are much lighter than ions, these two kinds of waves typically oscillate at vastly different frequencies.
However, the environment near Jupiter’s poles possesses conditions ideal for both waves to oscillate together. This is enabled by the low density of the plasma in that region and the strong magnetic field exerted by the planet.
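The frequency mismatch, and why Jupiter's polar conditions can close it, can be sketched with the standard expressions for the Alfvén speed and the electron plasma frequency. The density and magnetic field values below are illustrative assumptions, not Juno measurements:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
EPS0 = 8.854e-12       # vacuum permittivity, F/m
M_P = 1.673e-27        # proton mass, kg
M_E = 9.109e-31        # electron mass, kg
Q_E = 1.602e-19        # elementary charge, C
C = 2.998e8            # speed of light, m/s

# Illustrative values for a tenuous, strongly magnetized polar region.
n = 1e6                # particles per cubic meter (assumed)
B = 1e-3               # tesla (assumed)

v_alfven = B / math.sqrt(MU0 * n * M_P)
v_alfven_rel = v_alfven / math.sqrt(1 + (v_alfven / C) ** 2)  # relativistic cap
f_plasma = math.sqrt(n * Q_E**2 / (EPS0 * M_E)) / (2 * math.pi)

print(f"naive Alfven speed: {v_alfven:.2e} m/s (formally exceeds c)")
print(f"relativistic Alfven speed: {v_alfven_rel / C:.2f} c")
print(f"electron plasma frequency: {f_plasma / 1e3:.1f} kHz")
# With so few ions and such a strong field, the Alfven speed approaches
# the speed of light, letting ion- and electron-scale waves overlap.
```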
“The plasma characteristics observed are truly unique when compared to those in other parts of our solar system,” states John Leif Jorgensen at the Technical University of Denmark. With Juno’s data uncovering new wave patterns, he believes we could learn more about the magnetic attributes of distant exoplanets by looking for similar signals.
Juno is currently in orbit around Jupiter, with Lysak noting that if its mission is extended, it could provide unparalleled insights into the giant planet and its complexities. This mission, however, is one among several that may face cuts due to proposed NASA budget reductions.
“Discontinuing missions while they are yielding valuable data would be a significant setback for our field,” concludes Lysak.
Inhaled insulin, specifically Afrezza, effectively manages blood glucose levels in children with type 1 diabetes, similar to injected insulin. Afrezza is already approved for use in adults with both type 1 and type 2 diabetes in the US, and the manufacturer is looking to gain approval for pediatric use.
Type 1 diabetes occurs when the body cannot produce insulin, the hormone responsible for regulating blood sugar. Individuals with this condition typically require daily insulin injections. However, managing blood sugar levels can be challenging, particularly after meals or following exercise.
Dr. Michael Haller from the University of Florida, who has served on an advisory board for Afrezza, had explored the potential of inhaled insulin to enhance glycemic control in adults and wanted to know whether it could work as well for children as traditional injections. A study was conducted with 230 participants aged 4 to 17, including both type 1 and type 2 diabetes patients requiring insulin.
All participants were on a basal insulin regimen, administered once or twice daily to maintain baseline levels. Additional rapid-acting insulin was generally required before meals. In the 26-week trial, some children utilized Afrezza as their rapid-acting insulin, while others continued with injectable insulin.
Results indicated that both insulin types achieved comparable blood glucose control. These findings were presented at the American Diabetes Association conference in Chicago in June.
“This suggests that Afrezza could be a preferable option for patients due to the delivery method, particularly for those with needle anxiety,” Dr. Haller states. “More importantly, it provides patients with additional strategies for managing a complex condition.”
While some users experienced coughing with the inhaled version, it resolved once they acclimated. However, Afrezza is not recommended for individuals with chronic lung issues like asthma.
Dr. Kathryn Sumpter from the University of Tennessee Health Science Center suggests that inhaled insulin may benefit certain diabetes patients, particularly children who often forget to take their medication before meals. Nonetheless, she believes that many would prefer the injected form, especially for younger children needing precise dosing.
MannKind Corporation intends to seek regulatory approval for pediatric usage of Afrezza in the United States, as noted by Dr. Haller.
Microglia are specialized immune cells in the brain
Science Photo Library/Alamy
Replacing immune cells in the brain has halted the advancement of a rare and terminal brain disorder known as ALSP, paving the way for future clinical trials targeting other neurological ailments.
Extensive research indicates that impaired microglia—specialized immune cells within the brain—play a role in various neurological disorders, including Alzheimer’s disease and schizophrenia. The term ALSP stands for adult-onset leukoencephalopathy with axonal spheroids and pigmented glia, characterized by mutations in genes responsible for the survival of these cells, resulting in a reduced number of microglia and leading to progressive cognitive decline. Currently, no effective treatment exists for this fatal illness.
To address this, Bo Peng at Fudan University in China and his team employed a novel treatment called microglia replacement therapy. Prior experiments in rodents have shown that transplanted stem cells, which are capable of developing into different cell types, can effectively replace microglia. However, existing microglia must first be eliminated to make room, which can be achieved using drugs that block a protein microglia depend on for survival.
Pursuing this avenue, Peng and his colleagues conducted initial tests on five mice with genetic mutations analogous to those that cause ALSP. Because the mutations had already depleted the animals’ microglia, the researchers did not need to remove the cells with drugs. They then transplanted stem cells from healthy mice into the affected mice. Fourteen months later, treated mice exhibited approximately 85% more microglia in their brains compared to six untreated mice harboring the same mutation. Notably, the treated mice also demonstrated improvements in motor function and memory.
Encouraged by these promising findings, the researchers extended the treatment to eight individuals diagnosed with ALSP, transplanting stem cells from healthy donors. One year post-treatment, brain scans revealed minimal changes in participants compared to scans taken before the procedure. In contrast, four untreated individuals displayed significant brain deterioration and lesions over the same period. This implies that microglia replacement therapy effectively halted the progression of the disease.
At the study’s outset, all participants underwent cognitive assessments using a 30-point scale, where a decrease in score indicated cognitive decline. Reassessments a year later showed that, on average, scores remained stable for those who received the microglia replacements.
These results point to microglial replacement therapy being a potentially effective solution for ALSP. However, since this represents the inaugural human trial, “we remain unaware of any potential side effects,” comments Peng. “Given the rapidly progressive and lethal nature of this disease, prioritizing benefits over possible side effects might be crucial.”
Chris Bennett from the University of Pennsylvania cites the historical use of stem cell transplants for treating neurological disorders. “It has demonstrated effectiveness, particularly through microglia replacement,” he states. Recent FDA approvals for two similar therapies addressing other rare brain conditions further support this. “While prior studies may not have used this exact terminology, they effectively addressed similar conditions,” Bennett elaborates. “I’d describe this as a smart and innovative application of stem cell transplants. Nonetheless, microglia replacement therapy has been evolving for decades.”
Despite this, the results underscore the broader implications of microglial replacement therapy. Experts believe this strategy could one day address more prevalent brain disorders. For example, certain genetic mutations significantly heighten Alzheimer’s disease risk and affect microglial function. Replacing these malfunctioning cells with healthy human equivalents could offer a promising avenue for treatment.
Rare diseases often elude early diagnosis, remaining undetected until significant organ damage occurs. Recently, UK Health Secretary Wes Streeting announced a 10-year initiative to integrate genetic testing for specific rare conditions into the standard neonatal screening process across the UK. This approach aims to ensure early intervention before symptoms manifest, aligning with similar programs under way in places like the US and Australia. Yet questions arise about the scientific validity of such measures.
The genome, akin to a book written in an unfamiliar language, is only partially understood. Decades of research on high-risk families have shed light on some genetic mutations, but there remains limited knowledge about the implications of population-level genetic testing for those at low risk. While this screening may prove advantageous for certain children and families, it might also lead to unnecessary tests and treatments for others.
Many genetic conditions involve more than just a single genetic mutation. For example, individuals with a variant of the HNF4A gene and a strong family history of a rare form of diabetes have a 75% risk of developing the condition; conversely, those with the same variant but without a family history face only a 10% risk. It is misleading to assume genetic variants behave uniformly across all populations. Perhaps families carrying the HNF4A variant lack other unrecognized protective genes, or specific environmental factors might interplay with genetic risks to lead to diabetes.
The proposed neonatal screening program presupposes that genetic variants linked to diseases signify equally high risks for all, which is rarely the case. The exploration of disease-related variations in healthy populations is just starting. Until this research is thorough, we will not know how many individuals carry a variant that does not result in illness, possibly due to other protective factors. Should we really subject newborns to genetic hypotheses?
Furthermore, ethical concerns emerge from this initiative. How do we secure informed consent from parents when testing for hundreds of conditions simultaneously? In the near future, a genetic database encompassing all living individuals could become a reality—what safeguards will exist for its use and protection?
Screening newborns is not new, but the scope of conditions included in this initiative, the complexity of interpreting results, and the sensitivity of the information gathered pose unique challenges. I worry that parents may feel compelled to accept the test, yet not all uncertainties will be appropriately managed. I fear that important early life stages could become burdened with unnecessary hospital visits. Additionally, the pressure on parents and pediatricians to decide on potentially invasive testing for healthy infants is concerning.
A prudent step would be to gather more data on the prevalence and behavior of genetic mutations in the wider population before utilizing genetic testing as a speculative screening tool for children. The potential benefits may be overshadowed by significant risks.
Dean Spears and Michael Geruso (Bodley Head (UK); Simon & Schuster (US))
Current estimates suggest that four-fifths of all humans who will ever be born have already come into existence. The global number of births peaked at 146 million in 2012 and has been on a decline ever since, indicating that the world population is set to peak and decrease by the 2080s.
This decrease won’t be gradual. Fertility rates are already below replacement level in several nations, including China and India, leading to a rapid decline in population as quickly as it rose. This new controversial book argues that the planet could hold fewer than two billion people in the coming centuries.
In no plausible future, argue economists Dean Spears and Michael Geruso in After the Spike: The Risks of Global Depopulation and the Case for People, will people worldwide choose to have more children than needed to replace themselves, so a drastic population reduction lies ahead.
You might consider this a positive development. Could it help alleviate pressing environmental challenges? Not according to the authors. They assert that while population size does hold significance, adjusting other factors, such as the speed of global warming, is even more critical. The chance to lessen our carbon footprint through population reduction has mostly passed.
Spears and Geruso highlight numerous advantages of a large population. More individuals can lead to greater innovation and economies of scale, making technologies like smartphones feasible. “The abundance of neighbors enhances our potential,” they state.
Thus, their perspective is not about reducing the global population but rather stabilizing it. The challenge lies in the fact that even with the right political determination, the path to achieve this is unclear.
As we become more affluent, we are increasingly hesitant to give up career and leisure opportunities for parenthood.
The authors contend that while some government strategies may yield short-term results, no country has sustainably altered long-term demographic trends. Consider China’s one-child policy: it is often credited with curtailing population growth, but did it genuinely do so? Spears and Geruso present data on China’s fertility alongside that of its neighbors before, during, and after the policy, and invite readers to spot any discernible difference.
Efforts to reverse the declining fertility rates have also faced failure, they argue. In Romania, after the ban on abortion in 1966, birth rates surged but soon declined again. Sweden’s approach has been to incentivize through subsidies for childcare, yet its fertility rates remain below replacement level.
Attempts to boost fertility with financial incentives are likely doomed to fail, according to Spears and Geruso. While some claim that they would have more children if financial means allowed, the reality is that as people gain wealth, the tendency to have fewer children increases.
According to the authors, the issue is not what people can afford but what they must give up: as affluence grows, there is a reluctance to sacrifice careers and leisure for childbearing. Even technological advancements are not expected to change this trajectory, they conclude.
This book presents an unwaveringly optimistic viewpoint regarding many issues, but it acknowledges the complexity of stabilizing population levels. It effectively demonstrates that dire predictions of widespread famine with population growth have proven incorrect and suggests long-term trends toward healthier, longer lives remain possible. “Fears of a depleted, overpopulated future are outdated,” they argue.
But is that truly the case? Spears and Geruso also emphasize that food prices play a key role in determining hunger levels, yet it’s worth noting that food prices are presently rising as a consequence of escalating climate change. For a substantial portion of the population, uncertainty persists regarding whether conditions will continue to improve.
This book is undoubtedly provocative, and it is not an easy read as Spears and Geruso work through their central arguments. But if you assume that the consequences of a declining population are simple to understand, or that the trend is a positive one, this book is essential reading.
New Scientist Book Club
Do you enjoy reading? Join a welcoming community of fellow book enthusiasts. Every six weeks, we explore exciting new titles, with members receiving exclusive access to book excerpts, author insights, and video interviews.
Path of interstellar comet 3I/ATLAS through the solar system
NASA/JPL-Caltech
An interstellar object currently traversing our solar system may be one of the oldest comets ever observed.
Comet 3I/ATLAS was identified earlier this month near Jupiter’s orbit, moving at approximately 60 km per second and estimated to be about 20 km across. It is the third recognized interstellar object to pass through our solar system, and it will pass near Mars in October before swinging around the sun and heading back out into interstellar space.
Matthew Hopkins from the University of Oxford and his team utilized data from ESA’s Gaia spacecraft, which has catalogued billions of stars in our galaxy, to simulate the comet’s speed and trajectory and reveal its likely point of origin. It appears to have emerged from the Milky Way’s thick disk, a population of stars that is about 13 billion years old.
“Objects from the thick disk tend to be quicker,” explains Hopkins. The previous two identified interstellar objects, Oumuamua in 2017 and Comet Borisov in 2019, were slower: “Their velocities aligned with expectations for thin disk objects.”
Modeling by the team indicates that 3I/Atlas may have originated from a star nearly 8 billion years old, potentially twice the age of our sun, hinting at it being one of the oldest comets ever witnessed. “This might be the oldest comet I’ve encountered,” Hopkins states. Interstellar objects are typically ejected early during a star’s lifecycle and are often propelled by interactions with massive planets.
Hopkins mentioned that ancient stars are likely to possess lower metallicity compared to our sun, implying that these comets might have a higher water content. If this hypothesis holds, we may witness significant water activity from the comet as it nears the sun in the upcoming months.
This could be our first encounter with material from another star of its kind, providing insights into pristine matter that has existed for billions of years, unaltered since before Earth’s formation. “I believe many interstellar objects we’ve encountered are our first meetings with stars, even those that are 8 billion years old,” Hopkins asserts. “They have likely traversed vast distances through empty space before approaching us.”
Horsetails, peculiar plants that have existed since the dawn of terrestrial animals, can drive the water inside them to remarkable isotopic extremes, resembling water from meteorites more than typical groundwater. Not only do they play a crucial role in today’s ecosystems, their fossilized remnants also provide insights into Earth’s ancient climate and hydrological systems during the age of dinosaurs.
Almost every oxygen atom in water contains eight neutrons, though rare heavy isotopes possess nine or ten. When water evaporates, lighter isotopes escape more readily than their heavier counterparts, shifting their ratios in predictable ways. Researchers can use this to trace the origin of a specific water sample: whether it came from groundwater or fog, how quickly it moved through plants, and the humidity levels those plants experienced in the past.
Nevertheless, due to the minimal presence of heavier isotopes, acquiring reliable data on how these ratios fluctuate can be quite challenging, making it hard for scientists to draw definitive conclusions.
While examining water samples from desert plants, Zachary Sharp at the University of New Mexico and his colleagues discovered discrepancies between the observed data and the outcomes anticipated by laboratory models.
Sharp and his team believe they have addressed the issue through a remarkable plant known as horsetail, which has been on Earth since the Devonian period approximately 400 million years ago and features segmented, hollow stems. “It’s a tall cylinder with countless holes, evenly spaced, a marvel of engineering,” states Sharp. “We couldn’t replicate this design in our lab.”
As water flows through each segment of a horsetail stem, it undergoes repeated distillation. Sharp and his colleagues collected water samples at various points along the stems of smooth horsetail (Equisetum laevigatum) growing near the Rio Grande in New Mexico.
By the time the water reaches the top of the stem, its isotopic composition differs markedly from other terrestrial waters. “If you encountered this sample, you would suspect it came from a meteorite, as it doesn’t look like it comes from Earth [based on the oxygen isotope ratios],” Sharp remarked during a presentation at the Goldschmidt geochemistry conference in Prague, Czech Republic, on July 7.
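The segment-by-segment enrichment Sharp describes behaves like repeated Rayleigh distillation. Here is a toy sketch; the per-step evaporated fraction and enrichment factor are assumptions for illustration, not the team's measured values:

```python
# Toy Rayleigh-style distillation up a segmented stem.
# delta-18O is in per mil; each segment evaporates part of its water,
# preferentially losing the light isotope, so the water left behind
# grows isotopically heavier.

def rayleigh_step(delta, f, epsilon=9.0):
    """Delta-18O of the water remaining after a fraction (1 - f) evaporates.

    Rayleigh form: (1000 + delta) * f**(alpha - 1) - 1000, with
    alpha = R_vapor / R_liquid ≈ 1 - epsilon/1000 < 1, so the exponent
    is negative and the residual water is enriched.
    """
    return (1000.0 + delta) * f ** (-epsilon / 1000.0) - 1000.0

delta = -10.0  # assumed starting groundwater value, per mil
for segment in range(1, 21):
    delta = rayleigh_step(delta, f=0.5)  # half the water lost per segment
    if segment % 5 == 0:
        print(f"segment {segment:2d}: delta-18O ≈ {delta:+.1f} per mil")
# After ~20 segments the water is enriched by well over 100 per mil,
# far outside the range of ordinary terrestrial waters.
```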
These horsetail analyses enable Sharp and his team to ascertain the variations in the water’s isotopic ratios under near-ideal conditions, allowing them to enhance model accuracy with these values.
By reassessing desert plant data with these refined models, previously inexplicable observations suddenly made sense. Sharp posits that these findings could illuminate other challenging observations, especially in arid regions.
Ancient horsetails, which reached heights of 30 meters, far surpassing their descendants today, would have produced even more extreme isotopic ratios and could serve as a key to understanding ancient water systems and climates, according to Sharp. Small, sand-like silica grains known as phytoliths form within horsetail stems, can endure to the present day, and may carry unique isotopic signatures influenced by atmospheric humidity, which affects the evaporation rate. “This could serve as a paleohygrometer [humidity indicator], which is fascinating,” Sharp concludes.
Actor Orlando Bloom recently made headlines when it was reported that he was compensated a staggering £10,000 ($13,600) for the removal, separation, and filtration of his blood.
This dramatic treatment underscores an escalating concern about a disquieting reality: these minuscule particles are nearly impossible to avoid.
Research indicates that microplastics are present everywhere, from the heights of Mount Everest to the depths of our brains. Their omnipresence, including in media coverage, raises pressing public concerns about the safety of having microscopic plastic flakes adrift in our bodies.
Once thought of as harmless, microplastics are now linked to various illnesses. But at this early stage, with little scientific consensus, how worried should we be about their impact on our bodies? And is there really any justification for lining up to “clean” our blood?
Plastic Proof
The term “microplastic” refers to plastic particles or fibers smaller than 5mm (0.19 inches). These particles are often minuscule, necessitating a microscope for proper observation.
Scientists also use the term “nanoplastic” for particles smaller than 0.001mm (39.4 microinches), which are difficult to detect even with advanced microscopy. Evidence suggests they can be released from plastic materials and disseminate into their environments.
My research group focuses on quantifying plastic and other particles in the air we breathe, both indoors and outdoors. In London, we have observed that airborne microplastics can penetrate deep into our lungs.
To determine the presence of microplastics in the body, whole tissues or blood fragments are processed and filtered to concentrate the microplastic content. Analysis is conducted using chemical techniques that quantify plastic in a sample, or through physical and chemical methods, which count the number of plastic particles (along with their size and shape).
Each method has its merits, but they all share similar drawbacks. Modern laboratories are rife with microplastic pollution, laden with plastic consumables and the personnel that handle them.
This means that the very process of extracting and testing microplastic samples can lead to contamination. Consequently, samples often reveal microplastic particles that were previously considered too large to be absorbed and distributed throughout the body.
Some reports indicate that humans might consume an equivalent of one teaspoon of plastic daily.
Generally, particles smaller than 0.001mm (39.4 microinches) can traverse the lungs and enter the bloodstream. This occurs through the thin alveolar tissue in the lungs that separates the air-filled alveolar sacs from the small surrounding capillary blood vessels.
In the intestines, these minute particles can enter the lymph system, the body’s waste-removal network. From there, the tiniest particles may pass into the bloodstream, while larger ones become trapped in the intestinal lining.
Thus, lab contamination may account for the larger plastics detected within the body.
Another complication arises because some biological components within samples emit signals resembling those of plastic. In particular, fat can mimic the signals of common plastics such as polyethylene. If samples are not meticulously processed, this can lead to exaggerated estimates of the plastics present.
Taking all of this into account, the assumed high levels of microplastics in our bodies may be overstated. Variations in estimates range from nanograms to milligrams, influenced by factors like study methodology, location, tissue type, and analysis techniques.
Recent, more stringent research suggests an estimated 0.15µg (0.00000015g) of plastic per milliliter of blood, which adds up, across the whole bloodstream, to less than the weight of a single human hair.
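To put that concentration in perspective, here is a back-of-envelope total, assuming a typical adult blood volume of about five liters (an assumption, not a figure from the study):

```python
# Scale the reported blood concentration up to a whole body.
conc_ug_per_ml = 0.15     # micrograms of plastic per milliliter (reported)
blood_volume_ml = 5_000   # assumed typical adult blood volume

total_ug = conc_ug_per_ml * blood_volume_ml
print(f"total: {total_ug:.0f} µg ≈ {total_ug / 1000:.2f} mg of plastic")
# ~750 µg, i.e. less than a milligram in the entire bloodstream.
```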
Moreover, this study predominantly focuses on polystyrene, the easiest microplastic to analyze.
Plastic People
Considering these levels, it may be more critical to focus on where microplastics accumulate in our bodies rather than their sheer quantity.
Nonetheless, accurately measuring microplastic accumulation in various body parts presents challenges. A recent study posits that the brain is a notable accumulation point, containing on average around 4.5 bottle caps’ worth of plastic.
Not only are these levels considerably high, but the detected plastics largely consist of polyethylene, which poses complications in measurement due to its interaction with fat.
Hundreds of millions of tons of plastic are produced annually – Pexels
Polyethylene is the most widely produced plastic globally, with approximately 120 million tons manufactured each year, representing 25% of all plastics. Thus, it’s logical to find a higher concentration of this type in our bodies. However, the brain is rich in fat, making false positives a potential concern.
Furthermore, the research suggests that plastic levels in the brain surpass those in the liver, an organ responsible for cleansing blood. One would expect the body’s filtration organ to hold the higher concentration, which makes the finding surprising.
Most studies investigating microplastics in human tissues focus on broad tissue-wide samples. This results in a lack of critical context regarding whether microplastics are embedded within cells or merely passing through.
Plastic Pure
Regardless of the exact measurements, public anxiety about microplastics remains high. Around two-thirds of 30,000 survey respondents from 31 countries express concern about microplastics in their bodies.
If you aim to minimize exposure to microplastic contamination, consider adopting a few lifestyle changes. Opt for natural fiber-based textiles in your home and clothing, avoid plastic packaging whenever feasible (especially when heat is involved), and refrain from running along busy streets, to dodge tire-wear particles from traffic.
However, projections indicate that microplastic releases may rise 1.5-2.5 times by 2040. It’s likely that technology will soon emerge, claiming to eradicate microplastic invaders from our bodies.
Therapeutic apheresis — a medical process that separates blood and selectively removes harmful substances before returning the cleaned blood to the patient — has recently been commercialized for the removal of microplastics from the bloodstream.
However, there is scant public documentation on this microplastic removal method. A German study indicated that “microplastic-like” particles were detected in a patient’s plasma following the procedure. Without adequate lab controls and details regarding detected particle sizes, interpreting the significance of these findings is challenging.
Additionally, our understanding of the specific behavior of microplastics within the body remains limited. We lack clarity on whether they circulate freely in our plasma, adhere to red blood cells, or are contained within immune cells in the bloodstream.
In the absence of concrete evidence on the types of microplastics in our bodies, their pathways, or their interactions within the body, evaluating the health implications of these “blood-cleaning” efforts becomes nearly impossible.
Moreover, additional concerns may arise during treatment. One study documented 558 microplastics released from the cannula over a 72-hour period.
With all this taken into account, I intend to steer clear of the sci-fi-style blood-washing services beloved of Hollywood until further studies clarify the impact of microplastics on our bodies and provide insight into where they go and what they do.
The journey of animal life, including humans, began approximately 540 million years ago during the Cambrian Period. Since most Cambrian organisms lacked hard skeletons, paleontologists investigating this era depend heavily on fossils that preserve soft tissues and internal organs. Recently, a research team from Yunnan University and the University of Oxford uncovered preserved animal fossils in a set of previously neglected rocks in China, unveiling new insights into Cambrian life.
The fossils belong to the Chengjiang Biota and were found in a distinct section of Chinese rocks known as the Yu’anshan Formation, which largely comprises mudstone formed in deep water. Mudstone is particularly effective at preserving the remains of deceased animals and plants.
Scientists identified two mudstone types in the Yu’anshan Formation: the event mudstone beds and the darker background mudstone beds. Past paleontologists primarily collected fossils from the event mudstone beds, and finds from the background mudstone beds were notably scarce.
However, the researchers discovered that background mudstone beds preserve soft tissue more effectively than event mudstone beds. They found fossilized muscles, eyes, nervous systems, and gastrointestinal tracts of deceased animals within the background mudstone beds. The team noted that such soft structures are delicate and seldom preserved.
Additionally, the researchers identified a new assemblage of fossils of deeper-water creatures entombed in the background mudstones. These animals had previously gone undiscovered because event mudstone beds mainly preserve shallow-water species. Between 2008 and 2018, the team gathered 1,328 fossil specimens representing 25 species from the background mudstone beds, primarily bottom-dwelling animals such as sponges and anemones, collectively known as benthos. The most prevalent group was the euarthropods, relatives of spiders, crabs, and similar creatures.
For fossil analysis, the team utilized a scanning electron microscope, measuring fossil chemistry by focusing a beam of high-energy electrons on small areas and analyzing the resulting X-ray emissions through energy-dispersive X-ray spectroscopy. They found that fossils from background mudstone beds contained significantly more carbon than those from event mudstone beds, while event-bed fossils were richer in iron.
The researchers interpreted these chemical discrepancies as evidence of different fossilization processes in the two bed types. They proposed that fossils in the event mudstone formed when soft animal tissues were replaced by the iron mineral pyrite, through a process termed pyritization. This process draws iron from the adjacent rock, explaining why event mudstone beds and their fossils are iron-rich.
Conversely, they suggested that in the background mudstones, soft tissues were transformed into a thin carbon film, leaving an outline of the organism in the stone. This process, referred to as carbonization, does not involve iron uptake, which is why those rocks and their fossils are iron-poor.
The researchers proposed the preservation variances between the two mudstone formations could provide insights about the environments in which the organisms perished. Pyritization suggests that the animals from event beds died in shallow, oxygen-rich waters before being washed into deeper areas. In contrast, the organisms in the background mudstone beds lived and died in deeper waters, reflecting their lifestyle in their preservation. Some were scavenged while others were swiftly buried and fully preserved.
In summary, the researchers concluded that their novel fossil discoveries significantly advance understanding of the Chengjiang Biota. The fossils also offer new knowledge about ancient life forms and their habitats, and should help paleontologists unravel the lifestyles of Cambrian animals and their evolutionary progression toward modern species.
The examination of Northwest Africa (NWA) 16286 reveals a lunar meteorite with a distinctive chemical profile, offering new perspectives on the evolution of the moon’s interior and emphasizing the enduring nature of its volcanic activity.
Backscattered electron images of NWA 16286 samples. Image credit: Joshua Snape/University of Manchester.
Discovered in Africa in 2023, NWA 16286 is one of only 31 lunar basalts officially identified on Earth.
The distinct composition of the 311-gram meteorite, featuring melted glassy pockets and veins, indicates it was likely struck by an asteroid or meteoroid on the lunar surface before being ejected and eventually landing on Earth.
A recent study by researchers at the University of Manchester supports the theory that the moon has maintained internal heat production processes responsible for lunar volcanic activity across various stages.
Lead isotopic analyses suggest the rock is among the youngest basaltic lunar meteorites identified on Earth, dating back approximately 2.35 billion years, a period from which lunar samples are scarce.
The sample’s unique geochemical profile distinguishes it from those brought back by previous lunar missions, indicating that its chemical characteristics likely result from lava flows that solidified after ascending from the moon’s depths.
“While the moon rocks returned from sample return missions provide valuable insights, they are limited to the immediate areas around those landing sites,” stated Dr. Joshua Snape from the University of Manchester.
“In contrast, this sample could originate from impact craters located anywhere on the moon’s surface.”
“Thus, there is a unique coincidence with this sample. It fortuitously landed on Earth, unveiling secrets about lunar geology without the need for an extensive space mission.”
The sample contains notably large crystals of olivine and is classified as olivine basalt, characterized by medium titanium levels and high potassium content.
Alongside the sample’s atypical age, researchers found that the lead isotopic composition of the rock — a geochemical signature preserved when it formed — points to an internal lunar source with an unusually high ratio of uranium to lead.
These chemical markers can assist in identifying the mechanisms behind the moon’s prolonged internal heat production.
“The sample’s age is particularly intriguing as it fills a billion-year gap in the history of lunar volcanism,” Dr. Snape noted.
“It is younger than the basalts collected during the Apollo, Luna, and Chang’e 6 missions, yet significantly older than the more recent rocks retrieved by China’s Chang’e 5 mission.”
“Its age and composition indicate that volcanic activity persisted throughout this entire timeframe, and our analysis suggests a potentially continuous process of heat generation from the decay of radioactive elements over extended periods.”
“Moon rocks are a rarity, making it always exciting to acquire samples that stand out from the norm.”
“This specific rock presents new constraints on the timing and nature of volcanic activity on the moon.”
“We still have much to learn about the lunar geological history. Further analyses to trace surface origins will inform where future sample return missions might be directed.”
Researchers have identified protein sequences within the dense enamel of fossil teeth from ancient elephant and rhinoceros relatives and other material collected at the Buluk and Loperot sites in the Turkana Basin, Kenya.
The Turkana Basin, within the East African Rift System, preserves fossil communities dating back more than 66 million years. Green et al. collected powdered enamel samples for paleoproteomic analysis from large herbivores spanning the early Pleistocene back to the Oligocene (29 million years ago). Image credit: Green et al., doi: 10.1038/s41586-025-09040-9.
“Teeth are the rocks in our mouths,” stated Dr. Daniel Green, a researcher at Harvard and Columbia University.
“They represent the most complex structures created by animals; hence, it’s possible to find teeth that are 100 million years old, offering geochemical records of animal life.”
“This includes insights into their diets, hydration, and habitats.”
“Previously, we believed that mature enamel, being the hardest part of teeth, should contain very little protein.”
Yet, by employing liquid chromatography tandem mass spectrometry (LC-MS/MS), a powerful proteomic technique, the researchers uncovered remarkable protein diversity in tooth enamel.
“The method comprises multiple stages where peptides are sorted according to size or chemistry, enabling detailed sequential analysis at unprecedented resolution,” explains Dr. Kevin Uno from Harvard and Columbia University.
“Recent findings indicate that there are dozens, potentially hundreds, of different proteins present in tooth enamel,” remarked Dr. Green.
Having established that many proteins exist in modern teeth, the researchers turned to fossils of ancient elephant and rhinoceros relatives and related material.
As herbivores, these creatures had large teeth for grinding their plant-based diets.
“These mammals could have enamels measuring 2-3 millimeters in thickness, providing ample material for investigation,” Dr. Green noted.
“Our discovery — peptide fragments and amino acid chains representing proteins spanning around 18 million years — stands to transform the field.”
“No one has previously identified peptide fragments of such antiquity.”
The oldest previously published findings date back around 3.5 million years.
“The newly identified peptides encompass a diverse array of proteins, representing what is known as the proteome,” Dr. Green remarked.
“One reason we are thrilled about these ancient teeth is that we lack a complete proteome for all proteins that could potentially be extracted from the bodies of these extinct elephants and rhinos, yet we can identify distinct groups.”
“Such collections could yield more information from these groups than from a single protein alone.”
“This research opens a new chapter for paleontology, enabling scientists to reconstruct the molecular and physiological traits of extinct species, moving beyond just bones and morphology,” stated Dr. Emmanuel Ndiema, a researcher at the National Museums of Kenya.
“These peptide fragments can be utilized to delve into the relationships among ancient animals, much like contemporary methods that map human DNA relations.”
“Though a few animals analyzed in studies are completely extinct without living descendants, in theory, proteins could be extracted from their teeth and added to a phylogenetic tree,” Dr. Green elaborated.
“This information may clarify long-standing debates among paleontologists concerning the relationships among various mammalian lineages, utilizing molecular evidence.”
The results were published today in the journal Nature.
____
Daniel R. Green et al. Diverse enamel proteomes from the East African Rift spanning 18 million years. Nature, published online July 9, 2025; doi: 10.1038/s41586-025-09040-9
Central Texas had weathered deadly flooding before, including in 1987, and some residents felt they could withstand whatever fury Mother Nature delivered. Then this month’s devastating flash floods dumped an astonishing volume of rain on the area in a matter of hours, resulting in over 100 fatalities.
Before 2021, the typically temperate Pacific Northwest and western Canada might have seemed safe from a killer heat wave, but they were not exempt. Tropical Hawaii once seemed far removed from drought-driven wildfires. That changed too. And many inland communities in North Carolina considered hurricanes a coastal dilemma until the remnants of Hurricane Helene roared in unexpectedly last year.
The wreckage of a structure in Bat Cave, North Carolina, ravaged by flooding from Hurricane Helene. Mario Tama/Getty Images File
According to climate scientists, climate change is driving an increase in both the frequency and intensity of extreme weather events, and government data bear this out. Nonetheless, both people and governments tend to overlook this reality, clinging to outdated notions and failing to prepare for a concerning future, meteorology and disaster experts told The Associated Press.
“With climate change, what was once considered extreme is now the average, and events that were once rare within decades are becoming new extremes,” stated Michael Oppenheimer, a climate scientist at Princeton University. “We are now experiencing phenomena that were virtually unprecedented.”
According to the National Oceanic and Atmospheric Administration, its Climate Extremes Index — which tracks hurricanes, heavy rainfall, droughts, and unusual temperatures — has averaged 58% higher in recent summers than in the 1980s.
Despite the alarming trends, society is failing to respond adequately, Oppenheimer remarked.
“There’s ample evidence that we’re complacent, yet these risks are approaching us like an oncoming freight train, and we are just standing on the tracks, unaware,” he explained.
Shifting Public Perception
While climate change is a paramount issue, experts warn that our responses and tendency to disregard changes may exacerbate the situation.
Marshall Shepherd, a meteorology professor at the University of Georgia and former president of the American Meteorological Society, stated that people’s decisions are often influenced by their experiences during prior extreme weather incidents, even those that did not directly affect them. This induces unwarranted optimism, as they assume that conditions will remain manageable despite increasingly severe storms.
He referred to the flooding events in Texas as a prime example.
An overturned vehicle and fallen trees along the Guadalupe River in Kerrville following a flash flood. Ronaldo Schemidt / AFP - Getty Images
“This area is known as flash flood alley. Flooding is a common occurrence here. … I often hear overly optimistic statements from locals.”
Even those in regions not typically prone to disasters must rethink their perspectives on calamities, advised Kim Klockow McClain, a social scientist focused on extreme weather at the University Corporation for Atmospheric Research, who specializes in disaster warnings and risk communication.
Her advice is straightforward: If you’re accustomed to minor flooding, you should take note of events like those in Texas and recognize that conditions are changing.
Ignoring Reality Won’t Eliminate It
Following devastating storms and wildfires, survivors often believe such events won’t recur. That mindset can be a coping mechanism, yet the reality is that extreme weather occurrences are becoming more frequent and widespread, complicating effective preparedness, according to Susan Cutter, co-director of the Hazards Vulnerability & Resilience Institute at the University of South Carolina.
Lori Peek, director of the Natural Hazards Center at the University of Colorado, indicates that surviving past extreme events can mislead people into thinking they are immune to future disasters. This kind of overconfidence can be hazardous. “Just because I survived fires, floods, hurricanes, or tornadoes does not guarantee that the next incident will mirror the last,” she cautioned.
What is Happening?
As weather patterns grow increasingly extreme, scientists observe that our capacities to adapt are lagging behind.
“Our vulnerability is heightened as our nation’s infrastructure ages, and more individuals are residing in potential danger zones,” Peek noted. “With population growth, more people live in perilous areas, particularly along the coast.”
Homes and buildings decimated by the wildfire in Lahaina, Hawaii, in 2023. Patrick T. Fallon / AFP -Getty Images File
The Trump administration’s funding cuts have threatened critical agencies responsible for climate research, disaster alerts, and responses—including the Federal Emergency Management Agency, the National Oceanic and Atmospheric Administration, and the U.S. Geological Survey—further worsening the situation, according to several specialists.
Experts assert that knowledgeable and skilled personnel have already departed from these bodies, and it may take years to regain that expertise and skill set.
“We are dismantling the capabilities that will be increasingly necessary in the future,” Oppenheimer cautioned.
Peek emphasized the need for nations to anticipate and prepare for worst-case scenarios instead of merely reflecting on past events.
“This is our future,” Peek concluded. “We are clearly entering an era marked by escalating fires, floods, and heat waves.”
Visualize a global map segmented by national borders. How many distinct colors are required to shade each country without overlapping the same hues?
The solution is four. Regardless of the map’s structure, four colors are always adequate. However, demonstrating this required delving deep into mathematical theory: the four-color theorem became the first major theorem proved with computer assistance when, in 1976, mathematicians verified it by analyzing numerous map configurations via software.
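To make the statement concrete, here is a minimal, illustrative Python sketch that checks whether a particular map admits a four-coloring by brute-force backtracking. The country names are invented, and this is nothing like the 1976 proof, which instead reduced the problem to a finite set of unavoidable configurations checked by machine.

```python
def four_color(adjacency, colors=("red", "green", "blue", "yellow")):
    """Backtracking search for a valid four-coloring of a map."""
    countries = list(adjacency)
    assignment = {}

    def backtrack(i):
        if i == len(countries):
            return True
        country = countries[i]
        for color in colors:
            # Legal if no already-colored neighbor uses this color.
            if all(assignment.get(n) != color for n in adjacency[country]):
                assignment[country] = color
                if backtrack(i + 1):
                    return True
                del assignment[country]
        return False

    return assignment if backtrack(0) else None

# Four mutually bordering countries: three colors would fail, four suffice.
print(four_color({"A": "BCD", "B": "ACD", "C": "ABD", "D": "ABC"}))
```

The brute-force search works fine for small maps; the hard part, and the part that required a computer in 1976, was proving that no map anywhere can ever need a fifth color.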
At that time, many mathematicians were skeptical. They questioned whether a crucial proof, reliant on an unidentified machine, could be trusted. This skepticism has led to computer-assisted proofs remaining a niche practice.
However, a shift may be underway. The newest wave of artificial intelligence is challenging this stance, as proponents argue, “AI might revolutionize mathematical methodologies.” Why should we trust flawed human reasoning, which is often riddled with assumptions and shortcuts?
Not all share this perspective, however. The debate regarding AI’s application in mathematics mirrors broader challenges confronting society. When is it appropriate to let machines take the lead? Tech companies are increasingly focused on alleviating tedious tasks, from invoice processing to scheduling. Yet, our attempts to navigate daily life relying solely on AI agents (as detailed in “Flashes of Glow and Frustration: Running my day on an AI agent”) revealed that these systems are not entirely ready.
Entrusting sensitive information, such as credit card details or passwords, to an enigmatic AI provokes similar apprehensions as the doubts surrounding the four-color proof. We may not be coloring maps anymore, but we’re striving to define boundaries in uncharted territories. Will we soon have machine-generated evidence we can trust, or is it merely a digital version of “here be dragons”?
An AI-driven robot has successfully extracted the gallbladder from a dead pig, marking a pivotal achievement in machine-assisted surgery with minimal human involvement.
This sophisticated robot features a dual-layer AI system trained using 17 hours of surgical video, which encompasses 16,000 movements performed by human surgeons. During operation, the first layer of the AI observes the endoscopic video and generates clear verbal instructions like “clip the second duct,” while the second layer translates these directives into precise three-dimensional tool movements.
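As a rough illustration of that dual-layer design, the control flow might be sketched as follows; this is not the actual system, whose two layers are trained neural networks, and every function and name below is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class ToolMotion:
    dx: float            # tool displacement in millimeters (camera frame)
    dy: float
    dz: float
    close_gripper: bool

def high_level_policy(frame) -> str:
    """Layer 1 (placeholder): map the endoscopic view to a verbal step.
    The real layer is a trained model; this stub is purely illustrative."""
    return "clip the second duct"

def low_level_policy(frame, instruction: str) -> ToolMotion:
    """Layer 2 (placeholder): map (view, instruction) to a 3D tool motion."""
    return ToolMotion(dx=0.0, dy=0.2, dz=-1.5, close_gripper=True)

def control_step(frame):
    # One tick of the dual-layer loop: perceive, verbalize, act.
    instruction = high_level_policy(frame)
    motion = low_level_policy(frame, instruction)
    return instruction, motion

print(control_step(frame=None))
```

Splitting the policy this way means the intermediate instruction is human-readable, which is one reason a supervisor can audit, and if necessary interrupt, what the robot intends to do next.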
In total, the gallbladder procedure was broken down into 17 distinct tasks, and the robotic system carried out the full sequence across eight test runs with a flawless success rate.
“While current surgical robotics technology has indeed made certain procedures less invasive, the complication rate hasn’t actually decreased compared to traditional laparoscopic surgery performed by humans,” states Axel Krieger from Johns Hopkins University in Maryland. “This research paves the way for the next generation of robotic systems beneficial to both patients and surgeons.”
“This investigation shines a light on the vast potential of AI and surgical robotics,” adds Danail Stoyanov from University College London. “Remarkable strides in computer vision for surgical footage, alongside accessible robotic platforms for research, will empower us to advance surgical automation.”
Nonetheless, Stoyanov points out that significant challenges remain before the system can be applied in clinical settings.
For instance, although the robot achieved a 100% success rate in completing its tasks, it needed to self-correct an average of six times per procedure — for example, when a gripper initially missed the artery it was reaching for.
“There were numerous instances where self-corrections were necessary, all autonomously executed,” remarks Krieger. “It effectively identifies initial errors and rectifies them.” The robot also requested a human operator to swap one of its surgical instruments for another, indicating that some human intervention was still required.
Ferdinando Rodriguez y Baena from Imperial College London emphasizes the promising future of robotic surgery. “The horizon looks bright—and tantalizingly close,” he asserts. “To ensure the safety of human applications, regulatory measures must also evolve.”
The next phase involves enabling the robot to operate autonomously on living animals, where factors like respiration and bleeding could introduce complexities.
Protein fragments survived in the extreme environment of the Rift Valley, Kenya
Ellen Miller
In Kenya, fossilized teeth from 18-million-year-old mammals have yielded the oldest protein fragments ever discovered, extending the age record for ancient proteins roughly fivefold.
Daniel Green at Harvard, alongside Kenyan scientists, unearthed diverse fossil specimens, including teeth, in Kenya’s Rift Valley. Volcanic activity facilitated the preservation of these samples by encasing them in ash layers, enabling the age dating of the teeth to 18 million years. Nonetheless, it remained uncertain whether the protein in the tooth enamel endured.
The circumstances were not promising—Rift Valley is “one of the hottest places on Earth for the past 5 million years,” Green observes. This extreme environment presents “significant challenges.” Despite this, earlier research has detected tooth enamel proteins, albeit not from such ancient samples. To assess the longevity of protein traces, Green employed a small drill to extract powdered enamel from the teeth.
These samples were sent to Timothy Cleland at the Smithsonian’s Museum Conservation Institute for analysis. He utilized mass spectrometry, which sorts each type of molecule in a sample by differentiating them by mass.
To his surprise, Cleland recovered enough protein fragments to draw significant taxonomic conclusions, identifying the teeth as belonging to ancient relatives of elephants and rhinos, among other findings. Cleland expresses enthusiasm for demonstrating that “even these ancient species can be integrated into the Tree of Life alongside their modern relatives.”
While only a small amount of protein was recovered, the discovery remains monumental, asserts Frido Welker from the University of Copenhagen, Denmark, who calls recovering protein sequences from fossils this ancient a “tremendous breakthrough.”
Sampling teeth, rather than other tissues such as bone, is crucial for uncovering ancient and informative protein fragments like these. “The sequence of enamel proteins varies slightly” between species, notes Cleland.
The dental structure itself may have played a role in preserving proteins for such an extended period. As teeth are “primarily mineral,” those minerals help shield enamel proteins through what Cleland describes as the tooth’s own chemical processes. Furthermore, protein makes up only a small fraction of enamel, roughly 1%, which aids its preservation. “Whatever protein is present, it’s going to persist much longer,” Green asserts.
The endurance of protein fragments in Rift Valley suggests that fossils from other locales may also contain proteins. “We can genuinely begin considering other challenging regions of the planet, where we might not expect significant preservation,” Cleland comments. “Microenvironmental discrepancies may promote protein conservation.”
Beyond studying proteins from these specific periods, researchers aim to explore samples from various epochs. “We’re looking to delve deeper into history,” Cleland mentions. Green adds that analyzing younger fossils could offer a “baseline of expectation” for the number of conserved protein fragments compared to those from ancient specimens.
“We’re only beginning to scratch the surface,” Cleland concludes.
Physical activity is recognized for its role in cancer prevention and in inhibiting the growth of existing tumors. It’s also linked to alterations in gut microbiota. Recent research illustrates how these alterations can empower exercise in the battle against cancer.
Marlies Meisel from the University of Pittsburgh and her team implanted an aggressive form of melanoma in two groups of mice. One group followed a four-week exercise program, while the other remained inactive.
As anticipated, the active mice showed smaller tumors and better survival rates. However, exercise provided no benefit in mice whose gut bacteria had been wiped out with antibiotics, nor in germ-free mice. The findings revealed a significant role for the microorganisms, with the molecules they produce, known as metabolites, playing a crucial part.
Given that the microbiome generates thousands of metabolites, the researchers employed machine learning to analyze potential molecules, ultimately pinpointing a particular bacterial metabolite that surged with exercise. This metabolite enhances the effectiveness of CD8 T cells within the immune system, making it vital in the fight against cancer.
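A hedged sketch of what such a screening step can look like, using synthetic data and scikit-learn rather than the team’s actual pipeline: abundances of many metabolites are compared across exercised and sedentary animals, and candidates are ranked by how strongly they separate the two groups.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_mice, n_metabolites = 40, 500
X = rng.normal(size=(n_mice, n_metabolites))   # metabolite abundances
y = np.repeat([0, 1], n_mice // 2)             # 0 = sedentary, 1 = exercised
X[y == 1, 42] += 2.0                           # synthetic signal: metabolite 42 surges with exercise

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranked = np.argsort(model.feature_importances_)[::-1]
print("top candidate metabolites:", ranked[:5])  # index 42 should lead the list
```

Whatever model is used, the output is only a shortlist; the candidate metabolite still has to be validated experimentally, as the team did before linking it to CD8 T cell function.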
Furthermore, the team studied 19 individuals with advanced melanoma, discovering that those with higher levels of this metabolite exhibited longer survival rates compared to those with lower levels.
“This study underscores the significance of evaluating the metabolites produced by bacteria, rather than merely identifying the bacteria involved,” Meisel emphasizes.
Ken Lau, who studies the influence of the intestinal microenvironment on conditions like colorectal cancer and inflammatory bowel disease at Vanderbilt University in Tennessee, shares excitement for this type of research, as it offers insights into how to leverage specific molecular pathways to enhance the immune response. However, he cautions that further research is necessary. “What occurs when a patient stops exercising? Will the effects diminish or persist in some manner? There is still much to learn,” he states.
Meisel and her team are exploring whether the exercise-induced alterations in gut microbiota may influence other health conditions.
Withdrawal symptoms from antidepressants can include nausea and headaches.
Savushkin/Getty Images
While antidepressant withdrawal symptoms may not be as frequent as presumed with short-term usage, inquiries persist regarding the impact on individuals ceasing the medication after prolonged periods.
Individuals utilizing antidepressants for conditions like depression, anxiety, or phobias might experience withdrawal effects lasting several weeks, such as nausea, headaches, anxiety, and more. Though physicians may caution patients about this potentiality, the frequency of occurrence remains uncertain.
To delve deeper, Sameer Jauhar from Imperial College London and his research team examined 49 randomized controlled trials concerning antidepressant use. They initially focused on a subgroup of studies tracking withdrawal symptoms experienced a week after discontinuation of antidepressants, in comparison to those on placebo or ongoing antidepressant treatment. The findings revealed that individuals who ceased the medication reported, on average, one additional symptom compared to those in the other groups.
In further analysis, the researchers scrutinized another subset of studies that observed the types of withdrawal symptoms faced by participants after stopping antidepressant or placebo tablets. Dizziness emerged as the most prevalent symptom, followed by nausea, tension or irritability.
Specifically, 7.5% of the antidepressant users experienced dizziness, compared to just 1.8% in the placebo cohort. Nausea and tension or irritability were each reported by fewer than 5% of the antidepressant group, and by under 2% of the placebo cohort.
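As a quick worked comparison, the dizziness figures above translate into an absolute risk difference of about 5.7 percentage points; a minimal calculation, assuming the two percentages are directly comparable:

```python
drug, placebo = 0.075, 0.018                 # dizziness rates reported above
risk_difference = drug - placebo             # 0.057
print(round(risk_difference * 1000))         # ~57 extra cases per 1,000 patients
print(round(1 / risk_difference))            # ~18 patients treated per one extra case
```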
These statistics are significantly lower than past projections for withdrawal symptoms. A review from 2019 reported that over half of individuals had experienced symptoms, although that figure stemmed from online surveys that might attract those with more severe reactions, commented Michael Browning from the University of Oxford.
Another study published last year indicated that 31% of participants reported withdrawal symptoms, in contrast to 17% from the placebo group. However, specifics regarding the symptoms experienced were not detailed, mentioned Jauhar.
Susannah Murphy at the University of Oxford believes the recent review tackles these issues effectively. “This is essential for the field as it compiles and synthesizes data from many robust studies with a broader participant base compared to previous ones,” she stated.
Conversely, John Read from the University of East London noted that most trials in the review focused on individuals who took antidepressants for only 8 to 12 weeks, and pointed out that many patients remain on these medications for years. “There’s a notable correlation between the duration of antidepressant use and the likelihood of withdrawal symptoms, thus short-term studies may not adequately reflect actual outcomes,” he explained.
Such critics therefore emphasize the necessity for further research into the implications of long-term use. Mark Horowitz from University College London illustrated the point by saying, “It’s akin to crashing a car into a wall at 5 kilometers per hour and declaring it safe while ignoring that real-world driving speeds can reach 60 kilometers per hour.”
New research indicates that alien astronomers with technology equivalent to our own could detect the radio signals emitted by Earth’s airports.
Radar systems employed to monitor aircraft at major hubs like London’s Heathrow and New York’s JFK emit radio waves powerful enough to be detected up to 200 light years away by an extraterrestrial civilization with our level of technology, according to researchers.
The study, led by University of Manchester doctoral candidate Ramiro Caisse Saide, explored how radio signals from both civilian and military radar operations disperse as they leave Earth, predicting how they would appear from nearby stars.
Preliminary results revealed at the National Astronomy Meeting in Durham, UK, indicate that radar stations at the world’s airports collectively transmit signals totaling a remarkable 2,000 trillion watts.
This intensity is sufficient for a telescope as sensitive as the Green Bank Telescope, among the most sensitive on the planet, to detect the signals from as far as 200 light years away.
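A back-of-envelope check of that claim, treating the combined radar output as a single isotropic transmitter (real radars are directional, so this is only an order-of-magnitude sketch):

```python
import math

P = 2e15                          # total radiated power in watts, per the study
METERS_PER_LY = 9.461e15
d = 200 * METERS_PER_LY           # distance to the hypothetical observer

flux = P / (4 * math.pi * d**2)   # inverse-square spreading
print(f"received flux at 200 light years: {flux:.2e} W/m^2")  # ~4.4e-23 W/m^2
```

Whether such a faint flux is actually detectable depends on the receiver’s bandwidth and integration time; the study’s claim is that a Green Bank-class dish clears that bar.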
More than 1,000 stars lie within that distance of our solar system; the nearest, Proxima Centauri, is just 4.2 light years away.
However, whether alien astronomers can interpret these signals remains uncertain.
As Earth’s rotation brings different airports into view, the signal strength fluctuates in a 24-hour cycle — a pattern that would make clear the signal is not of natural origin.
What does radar from Earth’s airports look like to aliens on planets around the star AU Microscopii, 32 light years away?
A more distinct signal could arise from military radar. While these signals are generally weaker than those from airport facilities, they are more focused and likely to flash in a manner resembling lighthouses, thus appearing unnatural.
Nonetheless, the primary limitation on who can observe our air traffic is not the radar systems’ power but rather the laws of physics. The earliest radar systems made their debut in 1935. Since radio waves travel at the speed of light, even these early, weaker signals only covered a distance of 90 light years through space.
This research also aids those on Earth in their quest for signs of extraterrestrial intelligence, helping to gauge the extent to which civilizations similar to ours can be detected.
“Our findings suggest that radar signals unintentionally generated by any planet with advanced technology and complex aviation systems could serve as a universal indicator of intelligent life,” said Caisse Saide.
Humans are wired to treat machines as social beings
Abdillah Studio/Unsplash
Consider what it feels like to be in love. What images spring to mind? Is it the exhilarating rush of a new romantic interest, or the soothing comfort someone brings to your daily life? For some individuals, love now arrives through a laptop or smartphone, as they eagerly anticipate a message or synthetic voice from their favored AI chatbot.
As advanced platforms increasingly promote interactions with newly launched chatbots—all while encouraging users to talk about them as if they were actual people—many are turning to these sophisticated language-driven technologies for dating, emotional support, and even love. This may raise eyebrows or provoke laughter. Take the recent case highlighted by CBS News, where a man proposed to the ChatGPT companion he had been courting online; the New York Post described what it calls “a peculiar whirlwind romance.” Earlier this year, The New York Times shared the story of a woman who spent hours each day chatting with her ChatGPT “boyfriend,” even experiencing jealousy when the AI discussed other fictional partners.
It’s easy to mock someone openly expressing affection for a chatbot or to label such feelings as indicators of psychological issues. However, similar to how we might be susceptible to cults or scams, we have psychological inclinations that could lead us to adore AI. People have explored affectionate connections in unexpected places throughout history. Our complex feelings about technology have evolved over a much longer period than many realize.
We’ve been forming attachments to bots for 60 years
Consider Eliza, one of the first natural language chatbots, crafted by computer scientist Joseph Weizenbaum in the 1960s. While this primitive technology pales in comparison to ChatGPT — it mostly reflected users’ statements back at them as questions — Weizenbaum noted that some individuals developed quick emotional bonds with the program. “I had not realized that brief encounters with relatively simple programs could induce powerful delusional thinking in quite normal people,” Weizenbaum remarked later.
Given that modern chatbots like ChatGPT are far more engaging and widespread than Eliza, it’s not surprising that some individuals have openly professed romantic feelings or strong connections toward them. Love for AI may currently be rare, but emerging data indicates it exists. Although much of the existing research is limited, studies have shown that people attribute real emotions to AI relationships, some even applying terms like “marriage” to them. Interestingly, many individuals appear to experience genuine loss. When the man who proposed to ChatGPT had to reset the conversation after reaching a word limit, he lamented, “I cried for about 30 minutes at work.”
Recent studies analyzing millions of interactions on OpenAI’s ChatGPT and Anthropic’s Claude have revealed that, while the majority are work-related or mundane, hundreds or even thousands express romantic or affectionate sentiments. In AI services explicitly designed for dating, such as Replika, the trend intensifies, with 60% of paid users acknowledging a romantic aspect in their AI relationships.
Finding love through screens
We should approach the topic of emotional attachments to AI chatbots with empathy, yet this trend shouldn’t be seen as beneficial for society as a whole. The underlying social forces, including isolation, are concerning; in the UK, around 7% — approximately 3 million people — frequently report feelings of loneliness.
Such intricate social issues demand nuanced solutions. It’s not surprising that tech leaders like Meta’s Mark Zuckerberg often view complex social dilemmas as simple problems to be solved, promoting AI companions as a remedy for loneliness.
Moreover, one could argue that Meta’s platforms, such as Facebook and WhatsApp, have contributed to loneliness, thereby fostering reliance on AI-generated relationships in the first place. Indeed, Zuckerberg’s stated goal for Facebook was to help people remain connected with the significant individuals in their lives, which is mediated through chats on WhatsApp, Messenger, and Instagram.
Today, finding love through screens is the norm; studies show that 10% of heterosexual individuals and 24% of LGBTQ+ individuals in the U.S. met their long-term partners online. Given all of this, it is conceivable that someone might fall in love with a chatbot. If the presence on the other side of the screen is an AI rather than a human, do our minds even register the difference?
Research conducted by psychologist Clifford Nass in the 1990s revealed that people instinctively engage with machines in a social manner, even when they know there is no real person on the other side. This points to an innate inability to suppress our social instincts around technology, compelling us to relate to these machines as if they were fellow humans.
Thus, it’s no wonder that individuals are developing attachments to AI chatbots. However, a crucial point remains: longitudinal studies of happiness consistently reveal that personal relationships are the strongest predictors of health and well-being, and so far there is scant evidence that interactions with AI will effectively alleviate loneliness or increase happiness. It’s essential to keep this in mind.
Not all organs are equally important for longevity
Westend61 GmbH/Alamy
In the quest for a long life, it appears that not all organs hold equal significance. Research indicates that maintaining a youthful brain and immune system is crucial, overshadowing even the aging of the heart or lungs.
We already know that different organs age at varying rates, but which organs’ aging most significantly affects lifespan has remained elusive. Hamilton Oh at the Icahn School of Medicine at Mount Sinai, New York, led an inquiry into exactly that.
To explore this, his team assessed the levels of around 3,000 proteins in blood samples from over 44,000 participants aged between 40 and 70 years, all part of the UK Biobank Study.
Leveraging genetic data from earlier studies, the researchers mapped the locations of these proteins in the body, identifying several that were notably concentrated in 11 regions, including the immune system, brain, heart, liver, lungs, muscles, pancreas, kidneys, intestines, and adipose tissue. Elevated levels of these proteins suggest vital roles in the proper functioning of these organs and systems.
The team then employed machine learning models to estimate the ages of participants based on half of the data, developing distinct models for each of the 11 body areas. Generally, these predictions were consistent with the actual ages of the participants, although some models did occasionally overestimate or underestimate, supporting the notion that organs indeed age differently, according to Oh.
Using their trained model, the researchers predicted the organ and immune system ages of the other half of participants who were monitored for an average of 11 years after blood samples were taken.
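In outline, the approach resembles the following minimal sketch, shown here with synthetic data and a simple linear model rather than the study’s actual methods: train a per-organ age predictor on one half of a cohort, then treat the prediction error on the other half as an organ “age gap.”

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_people, n_proteins = 2000, 50            # cohort size, organ-linked proteins
X = rng.normal(size=(n_people, n_proteins))
age = rng.uniform(40, 70, size=n_people)   # chronological ages, as in UK Biobank
X[:, :5] += 0.05 * age[:, None]            # synthetic: a few proteins drift with age

# Train the organ's age model on half the cohort, evaluate on the rest.
X_tr, X_te, age_tr, age_te = train_test_split(X, age, test_size=0.5, random_state=1)
model = Ridge().fit(X_tr, age_tr)

# Positive gap = the organ's proteins look "older" than the person is.
age_gap = model.predict(X_te) - age_te
print("typical age gap (years):", round(float(np.abs(age_gap).mean()), 2))
```

In the study, it is this gap, computed organ by organ, that was then related to mortality during follow-up.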
They discovered that having even one organ showing signs of premature aging or an aging immune system correlated with a 1.5 to 3 times higher risk of death during follow-up, with the stakes increasing alongside the number of aging organs.
Interestingly, exceptions arose for the heart and lungs: having these organs appear considerably younger than expected did not correlate with a lower mortality risk during the study period. However, possessing a youthful brain or immune system was associated with a roughly 40% reduction in death risk, and having both raised the risk reduction to 56%.
“The brain and immune system influence numerous other bodily functions, so it’s expected that their deterioration could significantly impact life expectancy,” remarked Alan Cohen from Columbia University in New York.
Nonetheless, Cohen cautions that protein markers may not entirely encapsulate the aging process. “There may be gaps in our understanding of the exact origins of these proteins. Certain organs may release their proteins into the bloodstream more readily than others, skewing perceptions of their importance,” he notes.
Moreover, further research involving a broader demographic that includes more ethnically and economically varied populations is necessary, as the current study participants were predominantly affluent individuals of European ancestry, according to Richard Siow of King’s College London. Oh and his team are planning additional studies to explore this further.
Even if these findings hold true, concrete methods for curbing the aging of the brain and immune system remain elusive. Oh notes that pinpointing aging markers in these areas could pave the way for drugs that target them.
Artist’s impression of the moa, one of the largest extinct birds
Christopher Cree/Colossal Biosciences
Colossal Biosciences has unveiled its ambitious project to “bring back” the New Zealand moa, one of the most remarkable extinct birds in history, although critics argue the objective may be scientifically unfeasible.
Moa were the only known birds with no wings at all, lacking even the vestigial wings of birds such as emus. Nine species once inhabited New Zealand, including the turkey-sized bush moa (Anomalopteryx didiformis). The two largest, the South Island giant moa (Dinornis robustus) and the North Island giant moa (Dinornis novaezealandiae), stood an imposing 3.6 meters tall and weighed around 230 kilograms.
By the mid-15th century, all moa species are believed to have gone extinct, following the arrival of the Polynesian ancestors of the Māori in New Zealand around 1300.
Colossal has partnered with the Ngāi Tahu Research Centre, an indigenous institution affiliated with the University of Canterbury in New Zealand, along with filmmaker Peter Jackson and the Canterbury Museum. These collaborations are vital as Colossal aims to extract DNA and reconstruct the genomes of all nine moa species.
Similar to Colossal’s other “de-extinction” initiatives, this project involves modifying the DNA of currently existing species. Andrew Pask at the University of Melbourne, a scientific adviser to the project, notes that the moa’s closest living relatives are the South American tinamous, which are considerably smaller.
This suggests the project may need to utilize the Australian emu (Dromaius novaehollandiae) instead. As Pask explains, “Emus have large embryos and eggs, which are crucial for recreating the moa.”
Previously, Colossal announced its so-called “de-extinction” of the dire wolf, an endeavor that faced skepticism from external experts who argue the resulting animal is essentially a modified gray wolf. Pask insists the moa project involves greater genetic manipulation.
“With the moa, we are making a concerted effort to accurately reassemble the species,” he states. “When this animal walks the Earth again, we will have no doubt it is a true moa. It will be an engineered version of the original.”
The specific habitat for these reintroduced animals is still unclear. Mike Stevens from the Ngāi Tahu Research Centre emphasizes that both his organization and the local Māori community must fully grasp the “feasibility and ethical implications” of Colossal’s efforts. “Only after this discussion can we consider how and where the ‘giant moa’ will fit into our world,” he mentions — raising profound ethical and practical questions that need careful consideration before the technology can prove its worth.
Conversely, Philip Seddon from the University of Otago believes that whatever Colossal creates won’t truly be a moa and may exhibit distinctly different traits. He highlights that although tinamous are the moa’s closest living relatives, their evolutionary paths diverged more than 60 million years ago.
“Ultimately, Colossal’s approach utilizes genetic engineering to produce GMOs that resemble an extinct species without genuinely solving contemporary global issues,” he asserts.
Pask vigorously challenges this viewpoint, arguing that insights gained from this de-extinction endeavor are crucial for the preservation of current endangered species.
Jamie Wood from the University of Adelaide believes the project may yield “valuable new perspectives on moa biology and evolution.” However, he cautions that if Colossal employs methodologies similar to those used in the dire wolf project, it could struggle to persuade the public that the resulting creature can be regarded as a true moa.
“While they may possess certain MOA-like characteristics, they are unlikely to behave as the originals did or occupy the same ecological roles.”
Earth’s rotation is inexplicably accelerating, making today one of the shortest days of the year.
Summer days may feel long, but July 9, 2025, will be about 1.3 milliseconds shorter than average.
The planet’s spin fluctuates slightly, but one complete rotation on its axis generally takes 24 hours, or 86,400 seconds. To monitor the variations, the International Earth Rotation and Reference Systems Service (IERS) continuously tracks the length of the day with remarkable precision.
In 2020, the IERS noted that our planet had begun spinning faster, and it has continued this trend since then.
Their data suggests that the shortest days of the year will occur on July 9th, July 22nd, and August 5th, when the moon is at its farthest from the equator.
The moon subtly influences Earth’s rotation through tidal braking, where its gravitational pull slightly distorts our planet.
This phenomenon not only creates tides but also gradually siphons off angular momentum from Earth’s rotation, slowing it down by about 2 ms each century.
This means that during the Triassic period, around 200 million years ago, a day was just under 23 hours long. After another 200 million years, we can expect days to extend to 25 hours.
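The arithmetic behind those figures follows directly from the quoted braking rate; a quick check:

```python
ms_per_century = 2.0                     # tidal braking rate quoted above
centuries = 200_000_000 / 100            # 200 million years
change_hours = ms_per_century * centuries / 1000 / 3600
print(round(change_hours, 2), "hours")   # ~1.11 hours
# 24 - 1.1 gives just under 23 hours in the Triassic;
# 24 + 1.1 gives roughly 25 hours 200 million years from now.
```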
Days were shorter for Brachiosaurus
The IERS occasionally inserts leap seconds to keep high-precision clocks aligned with Earth’s rotation. The most recent leap second was added on December 31, 2016.
During times when the moon is far from the equator, its braking effect on Earth’s rotation is less pronounced, causing those days to be slightly shorter. However, the shortest days measured in recent years have been shorter than anything recorded before 2020.
Several events can alter Earth’s rotation, such as the 2011 9.0 magnitude Japan earthquake, which shortened the day by 1.8 microseconds, but the cause of the current accelerating trend remains unknown.
The current acceleration is unlikely to have any catastrophic consequences for our planet. The time difference is far too small for most people to notice, although if the trend continues, timekeepers may eventually need to remove a second — a so-called negative leap second — potentially by 2029.
Regardless of the cause, this phenomenon is unlikely to be permanent, and our planet will eventually revert to its long-term rotation pattern.
Researchers have devised a technique to assess the biological age of the brain, revealing it to be a key indicator of future health and longevity.
A recent study analyzed blood samples from about 45,000 adults, measuring the levels of more than 3,000 proteins. Many of these proteins correlate with particular organs, including the brain, enabling the estimation of each organ system’s “biological age.”
If an organ’s protein profile significantly deviated from what its owner’s chronological age would predict, the organ was categorized as either “extremely aged” or “extremely youthful.”
Among the various organs assessed, the brain emerged as the most significant predictor of health outcomes, according to the research.
“The brain is the gatekeeper of longevity,” stated Professor Tony Wyss-Coray, a senior author of the research, newly published in Nature Medicine. “An older brain correlates with a higher mortality rate, while a younger brain suggests a longer life expectancy.”
Participants exhibiting a biologically aged brain were found to be 12 times more likely to receive an Alzheimer’s diagnosis within a decade compared to peers with biologically youthful brains.
Additionally, older brains increased the risk of death from any cause by 182% over a 15-year span, whereas youthful brains were linked to a 40% decrease in mortality.
Wyss-Coray emphasized that evaluating the brain and other organs through the lens of biological age marks the dawn of a new preventive medicine era.
“This represents the future of medicine,” he remarked. “Currently, patients visit doctors only when they experience pain, where doctors address what’s malfunctioning. We are transitioning from illness care to wellness care, aiming to intervene before organ-specific diseases arise.”
The team is in the process of commercializing this test, which is anticipated to be available within the next 2-3 years, starting with major organs like the brain, heart, and immune system.
Firefighters drop water on a wildfire near Athens, Greece
Costa Subarutas/Anadolu via Getty Images
The severe heat waves of late June and early July resulted in an estimated 2,300 fatalities across London and 11 other European cities, and climate change roughly tripled that death toll. While assessing the effects of climate change on heat-related deaths typically takes months, scientists have now devised a rapid method for the analysis.
In late June, a series of high-pressure “heat domes” led to extreme temperatures in Western and Central Europe, reaching around 35°C in London and 40°C in Paris, while parts of Spain and Portugal saw temperatures as high as 46°C. The intense heat forced nuclear reactors in Switzerland and France to curtail operations, and outdoor work was prohibited during peak temperatures after worker fatalities were linked to the heat.
Researchers at the World Weather Attribution network used weather data to assess how severe the heatwave would have been without climate change, comparing that with observed conditions. They combined this with published research from the London School of Hygiene & Tropical Medicine relating daily temperatures to increased death rates in European cities. This framework was then applied to the actual temperatures, yielding an estimate of the fatalities attributable to climate change during the heat wave.
Looking at the period from June 23 to July 2, the researchers concluded that 2,300 individuals perished due to the heat in Barcelona, Budapest, Frankfurt, Lisbon, London, Madrid, Milan, Paris, Rome, Sassari, and Zagreb. The analysis indicated that even in a cooler climate without warming, there would have been approximately 700 deaths. However, climate change raised temperatures by as much as 4°C, contributing an additional estimated 1,500 fatalities. Heat remains one of the deadliest forms of extreme weather, often exacerbating existing health conditions and going unrecognized on death certificates.
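In outline, the attribution arithmetic works like the following toy sketch, where the exposure-response curve and all numbers are invented placeholders rather than the study’s values:

```python
def heat_deaths(daily_temps_c, threshold=21.0, deaths_per_degree_day=25.0):
    """Toy exposure-response curve: excess deaths rise linearly above a threshold."""
    return sum(max(t - threshold, 0) * deaths_per_degree_day for t in daily_temps_c)

observed = [33, 35, 36, 38, 37, 34, 33, 32, 31, 30]   # heatwave, June 23 - July 2
counterfactual = [t - 3 for t in observed]            # hypothetical world without warming

attributable = heat_deaths(observed) - heat_deaths(counterfactual)
print(f"toy deaths attributable to climate change: {attributable:.0f}")
```

The real analysis replaces the linear toy curve with city-specific epidemiological curves, and the flat 3°C offset with modeled counterfactual temperatures, but the differencing logic is the same.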
This marks the first study to swiftly quantify climate-related fatalities following a heat wave. Specifically, in London, climate change was responsible for 171 out of 235 heat-related deaths. “For me, [the impact of] climate change feels more tangible,” remarked team member Friederike Otto from Imperial College London. “It is essential for policymakers to take action.”
“Currently, we’re nearing dangerously high temperatures affecting more people,” stated team member Ben Clark of Imperial College London. Notably, 88% of the fatalities were individuals over 65, the most vulnerable demographic.
Experts suggest the study might underestimate the death toll, as it relies on mortality data from cooler past conditions. Kristie Ebi from the University of Washington in Seattle expressed concern over future extreme temperatures, stating, “I am uncertain about what will happen when we reach these extreme levels.”
In response to the rising temperatures, governments have issued more heat wave warnings; however, emergency response plans and infrastructure improvements are still necessary. In Milan, for instance, 499 deaths were reported, exacerbated by high air pollution levels that can worsen as temperatures rise. In Madrid, where around 90% of the heat fatalities were linked to climate change, a lack of green space compounds urban heat effects.
Additionally, many buildings in London suffer from inadequate ventilation. Currently, measures such as providing drinking water at subway stations and halting non-essential vehicle usage during heat waves are being implemented. Otto emphasizes the importance of public awareness around heat risks, stating, “If you believe you are invincible, you’re not.”
Recent research suggests that Earth and the rest of the Milky Way may sit within a vast cosmic void that has been emptying for billions of years.
By analyzing the echoes of sound waves left over from the Big Bang, a group of astronomers has found that the void around us may be more extensive than previously believed.
If validated, this theory could solve one of the major dilemmas in cosmology known as Hubble tension, which highlights the discrepancy in how quickly our universe is expanding based on various measurement methods.
Astronomers have grappled with this issue for quite some time, finding that the expansion rate measured from the distant universe is significantly slower than that determined from observations of local regions.
“The possible resolution to this discrepancy is that our galaxy resides near the center of a large, local void,” stated Dr. Indranil Banik from the University of Portsmouth at the National Astronomy Meeting in Durham.
Such a void empties because the regions surrounding it are densely packed with galaxies, whose gravitational influence gradually pulls nearby matter outward toward the void’s edges.
“Due to the void’s emptiness, the speed of objects receding from us is greater than if the void were absent,” Banik explained. Thus, it may appear that the local universe is expanding at a faster rate than it truly is.
For this to resolve the Hubble tension, the void must have a galactic density approximately 20% lower than the universe’s average and span about 1 billion light years.
Life in the Void
The concept of living within a void is not new, but confirming its existence poses challenges.
For instance, it’s quite difficult to perceive the shape of your environment when you are immersed within it—like trying to analyze your home from inside a room.
Current cosmological theories suggest uniformity across large scales, implying the absence of significant voids within our vicinity.
Galaxies tend to cluster together, as in the Perseus cluster, separated by large voids — yet on the grandest scales everything should appear uniform. Credit: ESA/Euclid/Euclid Consortium/NASA; image processing: J.-C. Cuillandre (CEA Paris-Saclay), G. Anselmi
However, Banik’s team has gathered evidence supporting the existence of a local void by studying the acoustic vibrations known as baryon acoustic oscillations (BAO). These fluctuations result from pressure waves produced during the primordial phase of the Big Bang.
Over billions of years, these oscillations have influenced the arrangement of galaxies in the broader universe. If our galaxy is positioned at the center of a void, it would distort the BAO patterns we observe nearby.
This research, drawing on data collected over the past 20 years, reinforces the idea that we genuinely inhabit a vast void.
The real challenge will be working out how residing in a void affects other cosmological observations — and accepting that our corner of the universe may be emptier, and lonelier, than we ever anticipated.
Astronomers using ESO’s Very Large Telescope (VLT) have unveiled a new image of 3I/ATLAS, only the third interstellar object ever documented.
This VLT/FORS2 image, captured on July 3, 2025, depicts interstellar comet 3I/ATLAS. Image credit: ESO/O. Hainaut.
3I/ATLAS was identified a week ago by the NASA-funded ATLAS survey telescope in Río Hurtado, Chile.
Also designated C/2025 N1 (ATLAS) and initially dubbed A11pl3Z, the comet is approaching from the direction of Sagittarius.
“In contrast to objects within the solar system, its highly eccentric hyperbolic orbit indicates its interstellar origin,” ESO astronomers stated.
Currently, 3I/ATLAS is approximately 4.5 AU (670 million km, or 416 million miles) away from the Sun.
The comet poses no danger to Earth: it will keep a distance of at least 1.6 AU (240 million km, or 150 million miles) from our planet.
Around October 30, 2025, it will make its closest approach to the Sun at a distance of 1.4 AU (210 million km, or 130 million miles).
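As a quick sanity check on these figures, here is a small sketch using the standard conversion factors (1 AU = 149,597,871 km; 1 mile = 1.609344 km):

```python
# Unit check on the distances quoted above, using standard constants.
AU_KM = 149_597_871      # kilometres per astronomical unit
KM_PER_MILE = 1.609344   # kilometres per statute mile

for label, au in [("current distance from Sun", 4.5),
                  ("minimum distance from Earth", 1.6),
                  ("perihelion (closest to Sun)", 1.4)]:
    km = au * AU_KM
    miles = km / KM_PER_MILE
    print(f"{label}: {au} AU = {km / 1e6:.0f} million km "
          f"= {miles / 1e6:.0f} million miles")
# Output: ~673/418, ~239/149 and ~209/130 million km/miles, matching the
# rounded figures quoted in the text.
```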
“In the VLT time-lapse, you can see 3I/ATLAS moving to the right over approximately 13 minutes,” the astronomers said.
“These observations were gathered with the FORS2 instrument on the VLT on the night of July 3, 2025, just two days after the comet’s discovery.”
“At the conclusion of the video, all frames are compiled into a single image.
“However, this record will not stand for long, as the comet is drawing nearer and growing brighter.”
“Currently more than 600 million km from the Sun, 3I/ATLAS is heading towards the inner solar system and will make its closest approach to the Sun in October 2025,” they noted.
“Around that time, 3I/ATLAS will be hidden behind the Sun as seen from Earth, but observations should resume in December 2025.
“Telescopes globally, including the VLT, will persist in monitoring this extraordinary celestial visitor to gather more insights into its structure, composition, and origin.”
Plastic polymers are everywhere in our daily lives, and their durability makes them suitable for numerous uses, yet effective disposal remains a significant problem. Recent studies of various plastivore insects reveal their extraordinary capability to consume and swiftly decompose petroleum-based plastics. Focusing on caterpillars of the greater wax moth (Galleria mellonella), commonly known as wax worms, and on low-density polyethylene, researchers have explored how much plastic the insects consume, the roles of the insects and their microbiota in biodegradation, and the impact of plastic ingestion on larval health.
Polyethylene biodegradation by wax worms. Left: a plastic bag after 12 hours of exposure to approximately 100 wax worms. Right: an enlargement of the area shown in the left image. Image credit: Bombelli et al., doi: 10.1016/j.cub.2017.02.060.
Plastic is essential in contemporary life, but its disposal is extremely challenging due to its resistance to biodegradation.
In 2017, researchers demonstrated that greater wax moth caterpillars can effectively break down polyethylene plastics.
Polyethylene is the most widely produced plastic globally, with an annual production exceeding 100 million tons.
This plastic’s chemical properties make it resistant to decomposition, often taking decades or even centuries to fully break down.
“Around 2,000 wax worms can degrade an entire polyethylene bag within just 24 hours, and we believe that supplementing their diet with nutrients like sugar could significantly decrease the required number of worms,” said Dr. Bryan Cassone, a biologist at Brandon University.
“However, understanding the biological mechanisms and fitness implications linked to plastic biodegradation is crucial for harnessing wax worms for large-scale plastic remediation.”
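To get a feel for the scale involved, a back-of-envelope calculation helps; the 3-gram bag mass below is an assumed figure for a lightweight polyethylene bag, not a number from the study.

```python
# Rough per-worm degradation rate implied by the figures quoted above.
# The bag mass is an assumed value for a lightweight polyethylene bag.

BAG_MASS_G = 3.0   # assumed mass of one polyethylene shopping bag, grams
N_WORMS = 2000     # worms reported to degrade one bag...
DAYS = 1.0         # ...within 24 hours

mg_per_worm_per_day = BAG_MASS_G * 1000 / N_WORMS / DAYS
print(f"~{mg_per_worm_per_day:.1f} mg of polyethylene per worm per day")
# ~1.5 mg per worm per day: tiny individually, which is why strategies to
# cut the number of worms required matter for scaled-up remediation.
```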
Utilizing diverse methods spanning animal physiology, materials science, molecular biology, and genomics, Dr. Cassone and colleagues examined wax worms, their bacterial microbiome, and the potential for large-scale plastic biodegradation, including the effects of a plastic diet on the insects’ health and survival.
“This is akin to us eating steak: if we over-consume fat, the excess is stored in adipose tissue as lipid reserves rather than being used as energy,” Dr. Cassone explained.
“Waxworms have a proclivity for polyethylene, yet this study indicates that such a diet can lead to rapid mortality.”
“They cannot survive for more than a few days on plastic-exclusive diets and undergo substantial mass loss.”
“Nonetheless, we are optimistic that we can devise a co-supplementation strategy that restores their fitness to natural levels.”
Researchers have pinpointed two ways in which wax worms could aid in tackling the ongoing plastic pollution dilemma.
“Firstly, as part of a circular economy, we could mass-rear wax worms on a nutrient-supplemented polyethylene diet to efficiently process large quantities of plastic waste,” Dr. Cassone noted.
“Secondly, we could explore redesigning the plastic biodegradation pathways outside of these insects.”
“A further advantage is that mass-producing wax worms yields a significant surplus of insect biomass, offering additional economic prospects for aquaculture.”
“Our preliminary findings suggest they could be incorporated into a nutrient-rich diet for commercially available food fish.”
A recent study by researchers at the University of Manchester explored Earth’s radar systems as a potential technological signature detectable by extraterrestrial observers. While SETI typically emphasizes intentional transmissions, this study focused on the unintended electromagnetic emissions from civilian and military radar systems. These technologies are hallmarks of an advanced civilization and produce radio emissions that could be identified across interstellar distances. The authors investigated how the global distribution of radar installations shapes the temporal characteristics of Earth’s radio signature as viewed from six nearby star systems, including Barnard’s Star, HD 40307, AU Microscopii, HD 216520, and LHS 475. The results indicate that radar systems represent one of the most detectable unintended technological signatures of an advanced civilization, paving the way for the possible detection of extraterrestrial intelligence.
Ramiro Saide and colleagues examined how Earth’s radio leakage would appear to observers up to 200 light-years away if they possessed a radio telescope similar to ours. Image credit: Gemini AI.
“Our investigation revealed that the airport radar systems which manage air traffic emit a combined peak radio signal of 2×10¹⁵ watts,” said Ramiro Caisse Saide, a PhD student at the University of Manchester.
“To provide context, the nearest potentially habitable exoplanet beyond our solar system is Proxima Centauri b, located four light-years away.”
“Travelling at the speed of light, these signals reach such distances within years, whereas a spacecraft using current technology would take thousands of years to do the same.”
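A crude inverse-square estimate suggests why such leakage could be detectable; the isotropic radiation pattern and the 1 MHz effective bandwidth below are simplifying assumptions of this sketch, not the study’s actual propagation model.

```python
import math

# Illustrative inverse-square estimate of how bright Earth's combined
# airport-radar emission would look from afar. Isotropic radiation and a
# 1 MHz bandwidth are simplifying assumptions, not the study's model.

P_WATTS = 2e15        # combined peak emission quoted above
LY_M = 9.4607e15      # metres per light-year
BANDWIDTH_HZ = 1e6    # assumed effective bandwidth
JY = 1e-26            # 1 jansky in W m^-2 Hz^-1

for d_ly in (4, 200):
    d = d_ly * LY_M
    flux = P_WATTS / (4 * math.pi * d**2)       # W per square metre
    flux_density_jy = flux / BANDWIDTH_HZ / JY  # in janskys
    print(f"{d_ly:>3} light-years: ~{flux_density_jy:.2e} Jy")
# Tens of janskys at 4 light-years and millijanskys at 200 light-years
# under these crude assumptions: faint, but within reach of sensitive
# radio telescopes given long integration times.
```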
Military radar systems, which are more focused and directional, create distinctive emissions, sweeping the sky like lighthouse beams that illuminate specific fields of view.
“To observers at interstellar distances with advanced radio telescopes, these emissions would clearly appear artificial,” remarked Caisse Saide.
“Indeed, these military signals can appear up to 100 times more intense from a particular vantage point in the universe, contingent on the observer’s location.”
“Our findings indicate that radar signals unintentionally produced by any technologically advanced civilization with complex aviation systems could serve as a universal sign of intelligent life.”
This research not only guides the search for extraterrestrial civilizations by pinpointing promising technological signatures but also enhances our understanding of how human technology is perceived from space.
“Insights into how our signals propagate through space offer valuable lessons on safeguarding our radio spectrum for communication and designing future radar systems,” stated Professor Michael Garrett from the University of Manchester.
“The methods we developed for modeling and detecting these faint signals hold promise for applications in astronomy, planetary defense, and assessing the impacts of human technology on the space environment.”
“Thus, our work contributes to scientific endeavors addressing the question, ‘Are we alone?’” Caisse Saide noted.
Preliminary assessments indicate that the cuts to clean energy funding in the bill President Donald Trump signed into law on July 4 could lead to billions of additional tons of CO2 emissions over the next decade. The US, the world’s second-largest emitter after China, is already falling short of its Paris Agreement commitment to halve emissions by 2030, and this slower pace further jeopardizes the nation’s efforts.
“Other nations are reaping the benefits of enhanced investments in clean energy economies, while the US is regressing,” said David Waskow of the World Resources Institute, an environmental advocacy organization, in a recent statement.
The sweeping legislation, dubbed the “One Big Beautiful Bill Act”, encompasses tax cuts and more than $350 billion in new spending on the military and border security.
Republicans in Congress paired the cuts to clean energy funding with significant reductions in healthcare and welfare programs to offset part of the cost. Over the coming years, the law will terminate hundreds of billions of dollars’ worth of tax incentives for low-emission energy sources established by the Inflation Reduction Act under the Biden administration.
Researchers at Princeton University model how policy changes will influence the US energy system and emissions over the coming decade. They found that the law’s passage markedly slows the anticipated decline in US greenhouse gas emissions under Biden-era policies, effectively repealing the Inflation Reduction Act.
US emissions peaked at approximately 6.6 billion tons of CO2 equivalent in 2005 and have since fallen by around 17%. Under the previous policies, they were projected to decline by about 25% by 2030; under the newly implemented law, the expected reduction is a mere 20%.
A more significant disparity arises in 2035, by which time planned clean energy projects would have become far more widespread. Under Biden’s initiatives, the researchers say, emissions were projected to fall by 44% from 2005 levels; under the new legislation, the reduction will instead be around 25%, a gap that adds up to roughly 5 billion tons of extra CO2 over the coming decade.
The slowdown is likely to generate an excess of approximately 2 billion tons of emissions by 2030, relative to prior pledges made under the Paris Agreement. In 2035, US emissions are projected to be around 2.5 billion tons higher than the trajectory needed to achieve net-zero emissions by mid-century.
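These figures can be sanity-checked by applying the quoted percentages to the 6.6-billion-ton 2005 baseline; the sketch below does only that, so it is a back-of-envelope check rather than the Princeton modeling.

```python
# Back-of-envelope check of the annual gaps implied by the percentages
# above, using the ~6.6 billion ton CO2e 2005 peak quoted in the text.

BASELINE_GT = 6.6  # US emissions peak in 2005, billion tons CO2e

scenarios = {
    2030: (0.25, 0.20),  # (cut under prior policies, cut under new law)
    2035: (0.44, 0.25),
}

for year, (old_cut, new_cut) in scenarios.items():
    gap = BASELINE_GT * (old_cut - new_cut)
    print(f"{year}: extra ~{gap:.2f} Gt CO2e per year "
          f"({BASELINE_GT * (1 - new_cut):.1f} vs "
          f"{BASELINE_GT * (1 - old_cut):.1f} Gt)")
# ~0.33 Gt/yr by 2030, rising to ~1.25 Gt/yr by 2035; summed over the
# decade, this is how multi-billion-ton cumulative figures arise.
```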
The law also ends the electric vehicle tax credit later this year, with renewable energy credits for wind and solar following by 2026; the credit for energy-efficiency upgrades will likewise conclude in 2026.
Conversely, tax credits for other low-emission energy sources like nuclear, hydroelectric, and geothermal energy will continue until 2033. The law further preserves support for some innovative technologies favored by the fossil fuel sector, like tax credits for low-emission hydrogen production extending into 2028, and credits for capturing and removing CO2.
Environmental advocates have condemned the law for its emissions impacts and argue that it works against the Trump administration’s own stated aims of reducing energy costs and advancing American manufacturing.
“We urgently require cleaner and more affordable energy, but this legislation will impede the resurgence of American clean energy production and send valuable domestic manufacturing jobs overseas,” remarked Manish Bapna of the Natural Resources Defense Council, a US-based advocacy group, in a statement.
Melanoma is a form of skin cancer that can metastasize
Science Photo Library
After years of research and extensive human trials, only one virus specifically engineered to target cancer has so far gained approval from US and European regulators. A second may soon join it, following promising results against advanced melanoma, a notably aggressive skin cancer.
The genetically altered herpes virus, known as RP1, was injected into the tumors of 140 patients with advanced melanoma who did not respond to conventional treatments. All participants also received a medication called nivolumab, designed to enhance the immune response against the tumors.
In 30% of the treated individuals, tumors shrank, including those that were not directly injected. Notably, in half of these cases, the tumors were completely eradicated.
“Half of the patients who responded experienced a complete response, meaning total disappearance of all tumors,” said Gino Kim In from the University of Southern California. “I am thrilled with these results,” he added, noting that other treatments for patients at this stage often perform poorly and have harsher side effects.
A larger trial involving 400 participants is currently in progress; however, RP1 may receive approval from the US Food and Drug Administration (FDA) for use in conjunction with nivolumab against advanced melanoma before the trial concludes, with a decision anticipated by the end of this month.
For over a century, it has been recognized that viral infections can sometimes aid cancer treatment, though intentionally infecting someone with a “wild” virus poses significant risks. In the 1990s, scientists therefore began genetically modifying viruses to target cancer while leaving healthy cells unharmed.
These engineered viruses function in two main ways: First, they directly invade cancer cells, causing them to rupture and die. Secondly, they stimulate immune responses aimed at all cancer cells present in the body.
For instance, T-VEC, a modified herpes simplex virus, was engineered to release an immune-boosting factor called GM-CSF within infected tumor cells. T-VEC received approval in 2015 in both the US and Europe for treating inoperable melanoma.
Unfortunately, T-VEC’s use is limited because it was only tested and approved for injection into skin tumors; many patients with advanced melanoma have tumors located deeper in the body.
With RP1, the strategy shifted to administering it into deeper tumors. RP1, like T-VEC, is a herpes simplex virus but has undergone various enhancements. It notably aids in fusing tumor cells with adjacent ones, thus boosting viral spread within the tumor and reinforcing the immune response.
Though there have been no direct comparisons between T-VEC and RP1, RP1 demonstrates a greater likelihood of reducing all tumors, rather than just those directly injected. “It indicates a more pronounced systemic effect,” experts state.
Thus, should RP1 gain approval, its application is expected to be far broader than that of T-VEC. Experts believe this could significantly enhance the overall interest in utilizing cancer-targeting viruses. “There seems to be increasing enthusiasm for this approach.”
Incorporating sunlight-reflecting particles into the atmosphere may help mitigate climate change
Alexnako/Shutterstock
Continuing to emit carbon dioxide poses significant threats, including the risk of triggering tipping points that can lead to major disruptions such as the shutdown of critical ocean currents. Current modeling indicates that injecting aerosols into the stratosphere to reflect sunlight could mitigate this risk, though the effectiveness diminishes significantly if it is initiated much later, such as in 2080.
“My conclusion is that if we are genuinely committed to preventing climate change, we must take solar radiation management seriously, including exploring its potential advantages and drawbacks,” said Claudia Wieners of Utrecht University in the Netherlands.
A tipping point marks a change that would be irreversible for centuries, such as the slowing or shutdown of critical ocean currents that redistribute immense amounts of heat and shape the global climate.
One such current is the Atlantic Meridional Overturning Circulation (AMOC), which transfers heat from the tropics to Europe. A collapse of this system could instigate rapid sea level rises in North America, severe temperature decreases in Northern Europe, and significant disruptions to the Asian monsoon.
Stratospheric aerosol injection represents a proposed geoengineering method that involves the dispersal of sun-reflective particles in the upper atmosphere via airplanes, balloons, or rockets.
According to the model employed by Wieners’ team, the AMOC could weaken by over 50% in the coming century under a worst-case emissions scenario. However, using stratospheric aerosol injection to hold global temperatures at around 1.5°C of warming could significantly limit this weakening, Wieners explained at a climate conference in Exeter, UK, last week.
Indeed, in the model the AMOC fared better under this approach than under aggressive emissions reductions without geoengineering. “So, for at least the next 80 years, the effect of stratospheric aerosol injection is greater than that of greenhouse gas mitigation,” Wieners stated.
However, the model indicates that the AMOC would fail to recover if aerosol injection is delayed until 2080 and used to bring global temperatures back down to 1.5°C after an overshoot.
The team also examined the subpolar gyre in the North Atlantic, a circular current linked to the AMOC that flows around regions where cold, salty water sinks. If this sinking halts because the ocean becomes fresher and warmer, the climate of Europe would be significantly affected.
In a worst-case scenario, the model predicts that the sinking will cease and that commencing stratospheric aerosol injection in 2080 would not reactivate the process. However, if injections start now, sinking could be preserved in two of the three crucial regions.
Nevertheless, these findings need validation through further studies examining more realistic emission scenarios, and there are potential risks involved, according to Wieners. “You can really mess it up too,” she cautioned.
Successful geoengineering would also demand sustained global cooperation over centuries. “You might say this is the largest governance challenge humanity has ever faced,” said ethicist Stephen Gardiner of the University of Washington in Seattle during another session at the conference.
For instance, if stratospheric aerosol injection were conducted in only one hemisphere without a global consensus, Wieners warns that it could alter tropical rainfall patterns worldwide.
In a subsequent presentation, Jim Haywood from the University of Exeter discussed another geoengineering method, known as marine cloud brightening, showing that localized interventions could set off global climatic changes.
With the risks now understood, they can be circumvented, said Haywood. “It’s merely a shift in strategy.” Yet many researchers remain skeptical that the risks of geoengineering can truly be managed.
“Solar radiation management sounds entirely manageable. Shouldn’t we refer to it as solar radiation interference?” asked Stefan Rahmstorf of the University of Potsdam in Germany, after Wieners’ presentation.
There is also a concern that geoengineering could be treated as a substitute for emission cuts. “We are not addressing the root causes of climate change,” said Wieners. “It’s merely symptom management; but if the symptoms become bad enough, it may complement a true solution.”
Due to these concerns, some climate scientists oppose even investigating the potential risks and advantages of geoengineering. The topic has become so contentious that participants at at least one meeting opted out of a session focused on it.
Wieners is not the first to assert that geoengineering might need to commence soon to avert tipping points. Last year, two independent teams concluded that solar radiation management could prevent the collapse of the West Antarctic ice sheet, another significant tipping point.
“It stands to reason that delaying increases the risk of irreversible changes,” Wieners told New Scientist following her presentation. “I believe that’s quite clear.”
Antiparasitic drugs gained significant attention during the Covid-19 pandemic, though their applications are unrelated to the virus.
HJBC/Shutterstock
Prior to 2020, few people had heard of the antiparasitic drug ivermectin. Interest surged during the Covid-19 pandemic, however, as unfounded claims emerged about its potential to prevent or treat the viral infection. Popular podcast host Joe Rogan said he used it in 2021 while recovering from Covid-19, and that same year Robert F. Kennedy Jr., a prominent public health figure in the US, petitioned the FDA regarding its use in treating the disease.
Despite numerous studies disproving ivermectin’s efficacy against Covid-19, the buzz around it persists. While criticized as a quack remedy during the pandemic, ivermectin is still a legitimate medicinal drug, with researchers believing it contains potential yet to be fully explored.
What is ivermectin?
Ivermectin is an antiparasitic agent that was developed in 1975 by the pharmaceutical company Merck. It effectively eliminates a wide array of parasites and is FDA-approved for the treatment of two conditions caused by human parasites: onchocerciasis (river blindness) and intestinal strongyloidiasis. Additionally, in some regions, it is used to treat lymphatic filariasis and cutaneous larva migrans.
These parasitic infections are uncommon in high-income nations but pose significant threats to millions in low-income countries. As a result, over 300 million individuals take ivermectin annually, making it one of the most impactful global health interventions to date; its discoverers were awarded a Nobel Prize in 2015.
The FDA has also approved various topical formulations of ivermectin for conditions such as head lice and rosacea. Furthermore, the drug is widely utilized in veterinary medicine to prevent and treat parasitic infections, including heartworms and roundworms. The FDA cautions against the consumption of veterinary formulations by humans, as they differ from those specified for human use.
Can ivermectin treat or prevent COVID-19?
Ivermectin was initially thought to be a promising treatment for Covid-19. Early studies suggested it might aid recovery and prevent viral replication; however, larger studies have indicated otherwise.
For instance, a 2022 study involving over 3,500 Covid-19 patients showed no difference in hospitalization rates between those treated with ivermectin and those given a placebo. Similarly, a 2023 study, involving more than 1,400 adults, found no significant benefit of ivermectin in accelerating recovery compared to placebo.
Can ivermectin treat cancer?
Ivermectin is not approved for cancer treatment and has not undergone thorough clinical trials. However, preliminary studies have indicated that it may have potential as an adjunct cancer therapy.
A decade ago, Peter P. Lee at a lab in Los Angeles discovered that ivermectin could induce cancer cell death through a process known as immunogenic cell death, thus prompting immune cells to recognize and attack cancer cells. Lee and his team searched the National Cancer Institute database to assess the effects of all FDA-approved drugs on various cancer cells, finding that ivermectin notably enhanced signs of immunogenic cell death in several cancer types.
“At that moment, I’d never heard of ivermectin,” Lee remarked. “I had to investigate, and upon learning it was a parasitic drug, I found it rather astonishing.”
In 2021, Lee and associates tested ivermectin in a mouse model of metastatic triple-negative breast cancer, a notably aggressive and challenging condition to treat. They found that 40% of mice treated with a combination of ivermectin and immunotherapy survived beyond 80 days, contrasted with none of the mice given immunotherapy alone surviving past 50 days. Mice solely administered ivermectin did not fare better than untreated counterparts.
“Ivermectin itself isn’t inherently a cancer treatment,” Lee clarified. “But it seems beneficial when used alongside immune-based therapies.”
A clinical trial is currently evaluating the use of ivermectin in conjunction with cancer immunotherapy for metastatic triple-negative breast cancer, with anticipated results expected next year. Other studies also suggest that pancreatic cancer therapies are more effective when combined with ivermectin, and Lee is exploring similar effects on colon cancer cells.
While these findings are encouraging, they do not confirm ivermectin as an effective cancer treatment for humans. “Many therapies that show promise in animal studies do not translate to humans,” Lee pointed out.
What are the side effects of ivermectin?
Ivermectin is generally regarded as safe, but can be toxic when taken in high doses. Possible side effects may include nausea, vomiting, diarrhea, low blood pressure, and dizziness. In severe cases, ivermectin usage might lead to seizures, coma, or even death, especially if combined with other medications like blood thinners.
“Individuals should not take [ivermectin] on their own or without the guidance of a knowledgeable medical professional. I genuinely hope to utilize ivermectin in ways that provide benefits to numerous patients, but its use is more complex than simply self-administering a medication,” he advised.
Ancient humans adapted to deeper forests as they journeyed from Africa, moving away from the savanna.
Lionel Bret/Eurelios/Science Photo Library
This is an excerpt from Our Human Story, our newsletter covering the revolution in archaeology. Subscribe to receive it in your inbox every month.
Our human origins trace back to Africa. While this has not always been clear, it is now widely accepted.
This truth can be understood in two ways. The earliest known species closely related to us evolved in Africa, going back 7 million years. And the oldest known representatives of our own species, Homo sapiens, also originated in Africa.
Here, I will focus on the narrative of modern humans originating in Africa and their subsequent migrations across the globe. The introduction of DNA sequencing technology in the latter half of the 20th century enabled comparisons between different populations. This research demonstrated that African populations exhibit the greatest genetic diversity, while non-Africans show relative genetic similarity (despite visible differences such as skin color).
This genetic pattern is a telling indicator. It suggests that Africa, home to a diverse population, was our birthplace, and that all non-Africans descend from a smaller subset that left this ancestral home to settle elsewhere. Geneticists affirmed this idea as early as 1995, and further evidence has since supported the claim.
However, there is a discrepancy between archaeological evidence and genetic findings.
Genetics indicates that all living non-Africans are descendants of a small group that left Africa around 50,000 years ago. Aside from minor uncertainties about the exact timeline, this conclusion has remained consistent for two decades. Conversely, archaeologists highlight numerous instances of modern humans existing outside Africa long before this timeline.
In Greece, a modern human skull found in the Apidima Caves dates back 210,000 years. The jawbone from Misliya Cave in Israel has been dated to at least 177,000 years. Additionally, there are several debated sites in China that may contain remains of modern humans. “Moreover, there’s an ongoing discussion on the earliest inhabitants of Australia,” says Eleanor Scerri from the Max Planck Institute for Geoanthropology in Germany, with some proposing human presence as early as 65,000 years ago.
What is the explanation for this disparity? Has our extensive genetic data misled us? Or is it true that we all share a common ancestry tied to a significant migration event, while older remains represent populations that did not survive?
Scerri and her team sought to understand this conundrum.
African Environment
The researchers debated the habitats of modern humans in Africa. “Did they simply migrate across diverse African grasslands, or were they adapting to vastly different environments?” asks Scerri.
To address this question, they needed extensive data.
“We began by analyzing all archaeological sites in Africa dating back 120,000 to 14,000 years ago,” explains Emily Yuko Hallett from Loyola University in Chicago. The team constructed a database identifying the climate at various locations and times.
A significant shift was observed around 70,000 years ago. “Simply examining the data, without complicated modeling, shows this climatic change,” notes Andrea Manica at the University of Cambridge. The range of temperatures and rainfall under which people lived expanded notably, as they ventured into deeper forests and drier deserts.
However, mere observation is insufficient; the archaeological record is inherently incomplete and often biased.
“In certain regions, no archaeological sites exist,” remarks Michela Leonardi at the Natural History Museum in London. That absence might reflect not a lack of human occupancy but a lack of preservation. “In more recent periods, more data is available simply because preservation is easier,” she adds.
Leonardi devised a statistical modeling technique to determine if an animal shifted its environmental range. Could humans have transitioned from grasslands to diverse habitats, such as tropical rainforests? The team initially thought this modeling would take two weeks, but it took five and a half years.
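To give a flavor of what such a test involves, here is a toy sketch run on synthetic data; it is not Leonardi’s method or the team’s database, just an illustration of comparing the breadth of the climate envelope of occupied sites before and after 70,000 years ago.

```python
import numpy as np

# Toy illustration (synthetic data, not the team's actual method) of
# testing for niche expansion: compare the climate envelope of occupied
# sites before and after 70,000 years ago.

rng = np.random.default_rng(42)

# Synthetic (temperature degC, annual rainfall mm) at dated sites.
early = np.column_stack([rng.normal(24, 2, 200),
                         rng.normal(900, 150, 200)])
late = np.column_stack([rng.normal(24, 5, 200),
                        rng.normal(900, 400, 200)])

def envelope_breadth(sites):
    """Width of the 5th-95th percentile climate envelope per variable."""
    lo, hi = np.percentile(sites, [5, 95], axis=0)
    return hi - lo

for name, sites in [("before 70 ka", early), ("after 70 ka", late)]:
    t_span, r_span = envelope_breadth(sites)
    print(f"{name}: temperature span ~{t_span:.1f} degC, "
          f"rainfall span ~{r_span:.0f} mm")
# A wider envelope after 70 ka, as the real database showed, signals
# occupation of a broader range of environments. The hard part, which
# took the team years, is modeling out preservation and sampling biases.
```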
Ultimately, the statistics affirmed their initial observation: around 70,000 years ago, modern humans began occupying a much broader range of environments. The findings were published on June 18.
Jack of All Trades
“At 70,000 years ago, our species appears to have transformed into the ultimate generalist,” states Manica. From this period onwards, modern humans adapted to a variety of complex habitats.
This could be misinterpreted: the team is not saying that earlier H. sapiens populations were incapable of adaptation. In fact, studies of extinct human species show that adaptability had been increasing over time.
“Humans were already inhabiting environments vastly different from those of earlier periods,” observes Scerri. “We’ve found evidence of habitation in mangrove forests, rainforests, desert edges, and highlands like those in Ethiopia.”
It appears that this adaptability is what allowed Homo sapiens to thrive during environmental changes in Africa, while other species like Paranthropus did not; they remained too rigid in their lifestyle to adapt.
What likely happened in our species 70,000 years ago is that this existing adaptability became much more pronounced.
Some of this understanding only becomes clear when considering the diverse habitats humans occupied. “One might think of deserts and rainforests in rigid terms, but there are actually numerous variations,” explains Scerri. “There are lowland rainforests, montane forests, marshes, and periodically flooded woodlands.” The same diversity applies even within desert environments.
Before, H. sapiens “did not exploit the full range of potential habitats,” states Scerri. “But around 70,000 years ago, we see the beginning of this expansion into more types of forests and rainforests.”
This narrative intrigued me, as I had been contemplating an opposite idea.
Great Quarantine
Last week, I wrote a piece about the extinction of local human groups: some H. sapiens populations vanished without leaving a trace in modern genomes. After departing from Africa, they struggled in harsh, unfamiliar environments and eventually died out; among them were the first modern humans to reach Europe. These lost groups fascinate me. Why did they fail, while groups that entered Europe thousands of years later found such success?
The discovery that African groups expanded their environmental niche 70,000 years ago provides a partial explanation. If the later migrations drew on more adaptable populations, those people may have been better equipped to face the unfamiliar environments of northern Europe, and later of Southeast Asia, Australia, and the Americas, where their descendants would eventually journey.
A crucial point: this does not suggest that all populations 70,000 years ago thrived. “Not all humans instantly turned into successful populations,” Scerri explains. “Many of these groups disappeared, both inside and outside of Africa.”
Moreover, as with any significant discovery, this study introduces as many questions as it resolves. Specifically: what triggered modern humans to become more adaptable around 70,000 years ago?
Manica notes that skeletal morphology supports this idea. Older fossils classified as H. sapiens exhibit only some of the traits we typically associate with modern humans. “Starting around 70,000 years ago, we see many of these characteristics emerging together as a package,” he asserts.
Manica posits that moving into new environments may have facilitated increased interaction between previously isolated populations. For instance, if two groups were separated by desert, they wouldn’t encounter or exchange ideas or genetic material until they learned to adapt to desert conditions.
“There may also be positive feedback,” suggests Manica. “With increased connectivity comes greater flexibility… breaking down barriers and fostering further interaction.”
To conclude: in the story about these lost populations, I noted that one of the greatest challenges facing human groups was isolation. Without neighbors, a small group can be driven to extinction by minor setbacks. If Manica is correct, the opposite trend unfolded in Africa: populations expanded and became increasingly connected, fuelling a surge of creativity that allowed our species to spread across the globe.
In this light, the success of the last migration out of Africa may come down to the need for community. Without others, we are vulnerable and at risk of failing; the notion of preparing to ride out an apocalypse alone may be fundamentally flawed.