A Brief Psychotherapy Course Can Alleviate Back Pain for Three Years

Most treatments for back pain provide temporary relief

Amenic181/ShutterStock

A brief course of a specific type of psychotherapy has proven to be three times more effective in alleviating chronic low back pain than conventional treatments, even after several years.

Cognitive functional therapy (CFT) offers individuals a customized program designed to help them understand and manage pain through movement and lifestyle changes. In a 2023 study, researchers reported significant chronic back pain relief lasting at least a year after just eight sessions.

Recent findings revealed that these sessions continue to provide relief even three years later. CFT leads to three times the improvement in pain and associated disability when compared to the conventional care options patients typically receive, such as pain medications, physical therapy, and massage treatments.

“Our findings suggest that for patients with severe impairments, back pain management can yield long-lasting benefits,” notes Jan Hartvigsen from the University of Southern Denmark.

Back pain is among the leading causes of disability worldwide, and existing treatments often provide only mild, temporary relief. In the 2023 trial, Hartvigsen and his team enrolled 492 participants with chronic low back pain, defined as pain rated at least 4 on a 0-10 scale.

One-third of the participants continued with their usual care regimen. The other two-thirds paused standard care to attend seven CFT sessions over 12 weeks, followed by a booster session at week 26.

During these sessions, specially trained physical therapists explored each participant’s thoughts about their posture, pain, emotions, and lifestyle. Their goal was to help participants view their pain in a new light, focusing on changing unhelpful movement and protective habits and on promoting healthier diets, rest, stress management, and exercise.

“Individuals living with chronic pain often fear using their bodies,” explains Hartvigsen. “It’s not a mental issue; they require support from someone who can build a strong therapeutic bond with them, as their behaviors, beliefs, and nervous systems are very flexible and conditioned to these pain-related behaviors.”

Half of the participants in the CFT group also received biofeedback, a sensor-based approach that enables real-time monitoring of movement patterns to retrain posture and motion.

After one year, pain intensity and disability levels, measured by the Roland Morris Disability Questionnaire, showed substantial improvements—approximately three to four times greater in the CFT group than in those receiving traditional care. Biofeedback enhanced the effectiveness of CFT marginally.

In a follow-up three years later, the Hartvigsen research team gathered updated evaluations from 312 participants evenly split between treatment groups.

The results indicated that those who underwent CFT experienced nearly three times greater improvement in both pain and disability when contrasted with the standard care group. Furthermore, about three times more individuals in the CFT group recorded lower disability ratings, indicating pain did not severely hinder their functionality.

However, after the first year all participants were free to pursue additional care, which was not monitored.


Source: www.newscientist.com

Transforming Retired Coal Plants into Green Energy Sources

Abandoned coal power plant at the former Indiana Army Ammunition Plant

American Explorer/Shutterstock

Numerous decommissioned coal-fired power plants could become reliable backup or emergency energy sources for the grid without burning fossil fuels. Instead, they could draw on thermal energy stored in soil.

The idea involves piling up a large mound of soil near the coal facility and embedding industrial heaters within it. During periods of low electricity demand, these heaters convert inexpensive electricity into heat, storing it in the soil at around 600°C. When electricity demand peaks, the heat can be drawn back out of the soil via pipes carrying a heated fluid.

At the plant, this heat turns water into steam, which spins the existing turbine blades; a generator linked to the turbines converts the motion into electricity. “Rather than burning coal to heat water for steam, we harness heat from the energy stored within the soil,” explains Ken Caldeira from Stanford University in California.
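To get a sense of the scale involved, here is a rough back-of-the-envelope sketch in Python of how much heat a mound of this kind could hold. The mound volume, soil density, specific heat capacity, discharge temperature, and steam-cycle efficiency below are illustrative assumptions, not figures from the researchers.

```python
# Rough estimate of the thermal energy stored in a heated soil mound.
# All inputs are illustrative assumptions, except the 600 C storage
# temperature quoted in the article.

soil_volume_m3 = 100_000        # assumed mound volume (roughly a 58 m cube)
soil_density = 1_600            # kg/m^3, typical dry soil (assumption)
specific_heat = 800             # J/(kg*K), typical silica-rich soil (assumption)
temp_hot = 600                  # deg C, storage temperature from the article
temp_cold = 100                 # deg C, assumed temperature after discharge
steam_cycle_efficiency = 0.35   # assumed heat-to-electricity conversion

stored_heat_j = soil_volume_m3 * soil_density * specific_heat * (temp_hot - temp_cold)
electricity_j = stored_heat_j * steam_cycle_efficiency

# 1 MWh = 3.6e9 J
print(f"Stored heat: {stored_heat_j / 3.6e9:,.0f} MWh thermal")
print(f"Recoverable electricity: {electricity_j / 3.6e9:,.0f} MWh")
```

Even with these rough numbers, a single large mound holds heat on the order of thousands of megawatt-hours, which helps explain why soil is attractive for long-duration storage.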

This type of energy storage is crucial in supporting renewable energy sources like wind and solar, which often generate power intermittently. Soil offers a more affordable, abundant, and accessible resource for long-term energy storage compared to alternatives like lithium batteries and hydrogen fuels.

“The most exciting aspect is the low cost of energy capacity, especially since it is significantly cheaper than other energy technologies,” states Alicia Wongel at Stanford University.

Nonetheless, this approach has its challenges. “In such systems, minimizing plumbing and electrical costs is crucial, yet can be difficult,” notes Andrew Maxson from the Electric Power Research Institute, a non-profit research organization based in California.

Most soil consists of naturally heat-resistant materials like silicon dioxide and aluminum oxide, which makes it “very resilient to heat,” says Austin Vernon from Standard Thermals in Oklahoma. His startup aims to commercialize this “thermal” technology, especially for repurposing retired coal power plants in conjunction with nearby solar and wind energy sources.

There are many retired coal facilities across the United States: close to 300 coal-fired power plants were shut down between 2010 and 2019, after cheaper natural gas and renewable energy began to outcompete coal in the late 2000s, and an additional 50 gigawatts of coal capacity is expected to reach retirement age by 2030.

Christian Phong from the Rocky Mountain Institute, a research organization in Colorado, views the idea of repurposing defunct coal plants positively. “This provides an opportunity for local communities to engage in the clean energy transition, generating jobs and additional tax revenue while navigating the shift away from coal,” he remarks.


Source: www.newscientist.com

Fossilized Teeth Uncover How Extinct Carnivorous Mammals Adapted to Global Warming 56 Million Years Ago

Around 56 million years ago, during a period of significant global warming known as the Paleocene-Eocene Thermal Maximum (PETM), the mesonychid mammal Dissacus praenuntius exhibited a remarkable dietary change: it began to consume more bones.



Dissacus praenuntius. Image credit: DiBgd / CC BY 4.0.

“We are observing a similar trend: rising carbon dioxide levels, increasing temperatures, and the destruction of ecosystems,” said Andrew Schwartz, a doctoral student at Rutgers University in New Brunswick, New Jersey.

In their study, Schwartz and his team analyzed small pits and marks left on fossilized teeth using a method known as dental microwear texture analysis. The research focused on the extinct mammal Dissacus praenuntius, a member of the family Mesonychidae.

This ancient omnivore weighed between 12 and 20 kg, comparable in size to jackals and coyotes.

Common in the early Cenozoic forests, it likely had a diverse diet that included meat, fruits, and insects.

“They resembled wolves with large heads,” Schwartz remarked.

“Their teeth were similar to those of hyenas, and they had small hooves on their toes.”

“Before this phase of warming, Dissacus praenuntius mainly consumed tough meat, akin to a modern cheetah’s diet.”

“However, during and after this ancient warming period, their teeth showed wear patterns consistent with crushing hard substances like bones.”

“Our findings indicate that their dental microwear is similar to that of lions and hyenas.”

“This suggests they were consuming more brittle food rather than their usual smaller prey, which became scarce.”

This shift in diet occurred alongside a slight decrease in body size, likely a result of food shortages.

“While earlier theories attributed body size reduction solely to rising temperatures, this latest research indicates that food scarcity was a significant factor,” Schwartz explained.

“The rapid global warming of this time lasted around 200,000 years, but the changes it caused were swift and dramatic.”

“Studying periods like this can offer valuable lessons for understanding current and future climatic changes.”

“Examining how animals have adapted and how ecosystems responded can reveal much about what might happen next.”

“The research underscores the importance of dietary flexibility; species that can consume a variety of foods are more likely to endure environmental pressures.”

“In the short term, excelling in a specific area can be beneficial,” Schwartz added.

“However, in the long run, generalists—animals that are adaptable across various niches—are more likely to survive environmental changes.”

This understanding can assist modern conservation biologists in identifying vulnerable species today.

Species with specialized diets, like pandas, may struggle as their habitats diminish, while more adaptable species, such as jackals and raccoons, might thrive.

“We’re already starting to see these trends,” Schwartz noted.

“Previous research has shown that African jackals have begun to consume more bones and insects over time, likely due to habitat loss and climate stress.”

The study also indicated that rapid climate change, reminiscent of historical events, could lead to significant shifts in ecosystems, influencing prey availability and predator behaviors.

This suggests that contemporary climate change could similarly disrupt food webs, forcing species to adapt or face heightened extinction risk.

“Nonetheless, Dissacus praenuntius was a robust and adaptable species that thrived for about 15 million years before eventually going extinct,” Schwartz said.

Scientists believe this extinction was driven by environmental changes and competition with other species.

The study was published in June 2025 in the journal Palaeogeography, Palaeoclimatology, Palaeoecology.

____

Andrew Schwartz et al. 2025. Dietary Changes in Mesonychids During the Paleocene-Eocene Thermal Maximum: The Case of Dissacus praenuntius. Palaeogeography, Palaeoclimatology, Palaeoecology 675: 113089; doi: 10.1016/j.palaeo.2025.113089

Source: www.sci.news

Study Suggests Giant Megalosauroids and Allosauroids Had Weak Bites

Unlike the tyrannosaurid Tyrannosaurus rex, whose skull was built to deliver formidable bite forces, other massive carnivorous dinosaurs had much weaker bites and specialized instead in slashing and tearing flesh, according to a study by paleontologists at the University of Bristol.

Tyrannosaurus rex holotype specimen at the Carnegie Museum of Natural History in Pittsburgh, USA. Image credit: Scott Robert Anselmo / CC BY-SA 3.0.

Dr. Andre Lowe, a paleontologist at the University of Bristol, noted:

“Tyrannosaurs developed skulls that were robust and capable of grinding, while other species exhibited relatively weaker but more specialized skull structures, indicating diverse feeding strategies despite their large size.”

“In essence, there wasn’t a singular ‘best’ skull design for being a predatory giant; a variety of designs functioned effectively.”

Dr. Lowe and his colleague, Dr. Emily Rayfield, sought to understand how giant body size affected skull biomechanics and feeding strategies.

It was already known that giant predatory dinosaurs evolved in distinct regions of the world at different times, with a range of skull shapes, even as they reached similar sizes.

These observations prompted questions about whether the skulls were functionally similar underneath or if significant differences existed in predatory behaviors.

To explore the connection between body size and skull biomechanics, the researchers employed 3D techniques, including CT scans and surface scans, to analyze skull mechanics, assess feeding performance, and measure bite strength across 18 species of theropods, a category of carnivorous dinosaurs ranging from small to gigantic.

While they anticipated some variations among species, the analysis astounded them as it revealed distinct biomechanical differences.

“For instance, the Tyrannosaurus rex skull, designed for high bite force, ultimately compromised on stress resistance,” Dr. Lowe explained.

“Conversely, other large species like Giganotosaurus showed much lower skull stress, indicating a relatively gentle bite.”

“This insight led us to consider how multiple evolutionary paths could exist for life as a massive, carnivorous organism.”

Surprisingly, skull stress did not exhibit a consistent increase with size; some smaller species experienced higher stress levels than certain larger counterparts due to greater muscle mass and bite force.

The findings demonstrate that being a predatory giant does not always equate to having a bone-crushing bite.

In contrast to Tyrannosaurus rex, other dinosaurs, such as Spinosaurus and Allosaurus, evolved into giants while maintaining weaker bites better suited to slashing and shredding flesh.

“I often liken Allosaurus to modern Komodo dragons in terms of feeding behavior,” Dr. Lowe commented.

“On the other hand, the larger tyrannosaurs had skulls optimized for high bite force, akin to modern crocodiles that crush their prey.”

“This biomechanical variability suggests that dinosaur ecosystems could have supported a broader spectrum of ecology among giant carnivores than previously thought, indicating reduced competition and increased specialization.”

This study will be featured in the journal Current Biology this week.

____

Andre J. Lowe & Emily J. Rayfield. 2025. The carnivorous dinosaur lineage employs a variety of skull performances in huge sizes. Current Biology 35 (15): 3664-3673; doi: 10.1016/j.cub.2025.06.051

Source: www.sci.news

Centuries-Old Equations Forecast Flow—Until They Fail

The Navier-Stokes equations provide predictions for fluid flow

Liudmila Chernetska/Getty Images

This is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or mathematician to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time here.

The Navier-Stokes equations have been used to model fluid dynamics for roughly 200 years, yet I still find them perplexing. That may seem a strange admission, given their importance in building rockets, developing medications and addressing climate change. But bear with me: it pays to think like a mathematician.

The equations are effective. If they weren’t, we wouldn’t rely on them across such diverse applications. However, achieving results doesn’t guarantee comprehending them.

This situation parallels many machine learning algorithms. We can set them up, write the training code and observe the outputs, yet once we hit ‘go’ we cannot easily follow every step they take to optimize their outcomes. That is why they are often called “black boxes”: we see what goes in and what comes out, but not exactly how one becomes the other.

The same uncertainty looms over the Navier-Stokes equations. We understand the processes behind fluid dynamics better than we understand many machine learning methods, thanks in part to outstanding computational fluid dynamics solvers, yet these equations can still yield baffling results. Identifying why this occurs is one of the most significant open problems in mathematics: it is among the seven Millennium Prize Problems, the most challenging unresolved questions in the field. That makes deciphering the Navier-Stokes anomaly a million-dollar endeavor.

To grasp the challenge, let’s delve into the Navier-Stokes equations, particularly the version used to model “incompressible Newtonian fluids”. Think of water: unlike air, it resists compression. (A more general version exists, but I will focus on this variant, as it is the one I spent four years on for my doctoral thesis.)

These equations may seem daunting, but they stem from two well-established principles: conservation of mass and Newton’s second law. The first equation describes how a parcel of fluid can move and change shape without mass being created or destroyed.

The second equation is a more elaborate version of Newton’s famous f = ma, applied to fluid parcels with density (ρ). It states that the rate of change of the fluid’s momentum (left side) equals the force applied to it (right side). Simply put, the left side describes mass times acceleration; the right side accounts for pressure (p), viscosity (μ), and any applied forces (f).
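For readers who want to see them written out, this is the standard textbook form of the incompressible Newtonian Navier-Stokes equations described above (a conventional statement of the equations, since the article’s own typeset versions are not reproduced here):

```latex
\begin{align}
  \nabla \cdot \mathbf{u} &= 0, \\
  \rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
    &= -\nabla p + \mu \nabla^{2} \mathbf{u} + \mathbf{f},
\end{align}
```

where u is the fluid velocity, ρ the density, p the pressure, μ the viscosity and f the applied force per unit volume. The first line is the conservation-of-mass condition; the second is Newton’s second law applied to a parcel of fluid.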

So far, so good. These equations derive from solid universal laws and function admirably—until they don’t.

2D liquid flows at right angles

Numberphile

Consider a setup where a 2D fluid flows around a right angle. As the fluid approaches the corner, it is compelled to pivot along the channel. You could replicate this experiment in a laboratory setting, and many do around the globe. The fluid smoothly adapts its path, and life as we know it persists.

But what happens when you apply the Navier-Stokes equations to this scenario? These equations model fluid behavior and reveal how velocity, pressure, density, and related attributes progress over time. Yet, upon inputting this setup, the calculations suggest an infinite angular velocity. This isn’t just excessively large; it’s beyond comprehension—endless.

Model of 2D fluids’ flow at right angles using the Navier-Stokes equation

Keaton Burns, Dedalus

What’s happening? This result is absurd. I have conducted this experiment and observed that nothing unusual occurred. So, why did the equations fail? This is precisely where mathematicians get intrigued.

When I visit schools to talk about university applications, students invariably ask about the admissions process at institutions like Oxford or Cambridge (I conduct selection interviews for both). I tell them what I look for in a strong applicant, emphasizing the importance of “thinking like a mathematician”. And mathematicians are fascinated by breaking equations for a reason.

A model that works in 99.99 per cent of cases, producing meaningful, usable results that tackle real-world problems, is remarkably useful. Despite their occasional failure, the Navier-Stokes equations remain indispensable for engineers, physicists, chemists, and biologists, helping them solve intricate problems.

Designing a quicker Formula 1 car requires harnessing airflow dynamics. Developing a fast-acting drug requires understanding blood flow. Predicting carbon dioxide’s effect on climate demands insight into atmospheric-oceanic interactions. Each of these scenarios involves fluid dynamics, making the Navier-Stokes equations critical across these varied applications.

However, describing such a multitude of complex scenarios, each with its own dynamics, requires elaborate equations, and that complexity explains our limited understanding. Indeed, the Navier-Stokes equations are one of the Millennium Prize Problems, and the Clay Mathematics Institute emphasizes that deeper insight into them is fundamental to resolving the million-dollar question.

“Waves follow our boat as it meanders across the lake, and turbulent air currents follow our flight in a modern jet. Mathematicians and physicists believe that an explanation for, and the prediction of, both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations.”

How can we enhance our comprehension of equations? By experimenting until they break, something I often suggest to high school students. The cracks represent your gateway. Continue probing until the facade shatters, revealing the hidden treasures beneath.

Consider the historical example of solving quadratic equations: finding the values of x that satisfy ax² + bx + c = 0. Many will recognize this from their GCSE studies and know that the quadratic formula typically yields two roots.

The formula usually works perfectly, producing two solutions when you substitute values for a, b and c. However, under certain conditions it appears to break: when b² − 4ac < 0, the square root seems not to exist. We have found circumstances where the equation fails.
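As a reminder, the formula in question is the standard quadratic formula (textbook material, not reproduced from the article’s own figures):

```latex
\[
  x = \frac{-b \pm \sqrt{\,b^{2} - 4ac\,}}{2a},
  \qquad \text{with real solutions only when } b^{2} - 4ac \ge 0.
\]
```

When b² − 4ac < 0, the formula asks for the square root of a negative number, which is exactly the breakdown discussed next.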

But how is this possible? Mathematicians from the 16th and 17th centuries proposed utilizing instances where quadratic equations seemed faulty to define “imaginary numbers,” stemming from negative square roots. This insight catalyzed the emergence of complex numbers and the rich mathematical frameworks that followed.

In essence, we often learn more from failures than from successes. For the Navier-Stokes equations, the rare malfunctions occur in cases like the right-angled flow above, where the model predicts infinite velocity. Similar blow-ups arise when modelling vortex reconnection or the splitting of soap films: real phenomena, replicable in labs, for which the Navier-Stokes equations predict that some quantity becomes infinite.

Such apparent failures could point to deeper truths about our mathematical models, but the debate remains open. They might reflect insufficient resolution in numerical simulations, or faulty assumptions about the behavior of individual fluid molecules.

Conversely, these breakdowns may enlighten aspects of the Navier-Stokes equation’s inherent structure, bringing us a step closer to unlocking their mysteries.

Tom Crawford is a mathematician at the University of Oxford and a speaker at this year’s New Scientist Live.


Source: www.newscientist.com

How can you effectively boost your cognitive reserve?

How can I maintain my brain health for an extended period?

Tom Wang / Alamy

As we age, some cognitive lapses may appear unavoidable. However, in recent years, it’s become evident that age does not uniformly affect everyone’s brain. Even individuals with plaque buildup associated with Alzheimer’s disease can display sharp cognitive abilities, while others may experience considerable decline from relatively minor damage.

What distinguishes these individuals? The key factor is cognitive reserve, which provides a protective buffer against brain ageing and allows the brain to adapt to damage. This buffer is profoundly influenced by lifestyle choices, behaviors, and, perhaps, patterns of thinking.

With an improved understanding of cognitive reserves, scientists are increasingly exploring methods to enhance them. There are indeed ways to fortify our neural defenses, particularly during specific life stages.

The concept of cognitive reserve was first introduced by Yaakov Stern at Columbia University in New York, after observations that people with more education or more mentally demanding jobs were less likely to develop dementia. Over the years it has become clear that how we use our brains can help explain why similar degrees of degeneration lead to very different outcomes, influenced by numerous lifestyle factors.

This phenomenon is generally referred to as “cognitive reserve,” which can be categorized into three types. “Brain reserve” refers simply to the physical size of the brain; a larger brain may be more resilient to cognitive decline. “Cognitive reserve” denotes the dynamic capability of our brains to adapt in the face of degeneration—akin to taking alternate routes when the primary road is obstructed. Lastly, “brain maintenance” describes the brain’s proactive measures to safeguard itself against diseases.

The encouraging news is that, aside from education, many lifestyle factors influencing these essential defenses against cognitive decline have been identified. “We now appreciate cognitive reserves as dynamic attributes that evolve throughout our lives,” states Alvaro Pascual-Leone from Harvard Medical School.

One significant factor is bilingualism. Research by Ellen Bialystok at York University, who first identified the correlation between speaking a second language and enhanced cognitive reserve, indicates that bilingual individuals can delay the onset of dementia by up to four years. The mental agility required for switching languages seems to grant greater neural flexibility, allowing bilingual individuals to maintain cognitive function despite increased brain atrophy. Additionally, a recent study found that bilingualism supports the maintenance of the hippocampus, a brain region integral to memory processing.

Musical training is another impactful activity. Research released in July shows that elderly individuals who received music training displayed superior ability to discern speech in noisy environments compared to non-musicians. Brain imaging revealed that, unlike non-musicians, they did not need to engage additional neural networks to perform the task.

If you play informally, research indicates there may be a threshold effect. While occasional play does offer modest cognitive benefits, significant improvements arise from practicing for at least an hour nearly every day.

Physical exercise is often cited as beneficial, although the evidence is mixed. One study analyzing 454 post-mortem brains revealed that the most physically active individuals retained better cognitive function despite comparable levels of Alzheimer’s-related brain damage. This was true even when controlling for cognitive decline impairing motor abilities. Exercise enhances cerebral blood flow and increases protective brain chemicals, yet further investigation is necessary.

Is it ever too late to enhance cognitive reserves?

For years, experts believed that cognitive reserve was largely established during childhood—and there is some truth to this theory. “Without early stimulation, certain neural pathways may not develop fully. If not utilized later, these pathways can diminish over time,” explains Rhonda R. Voskuhl at UCLA.

However, recent findings demonstrate that cognitive reserves continue to develop throughout our lives. Middle age might present a particularly critical period for enhancement. Research indicates that those who remain mentally and physically active in their 40s and 50s—through reading, socializing, playing card games, learning new instruments, etc.—exhibit improved recognition abilities later in life. Importantly, these advantages are independent of childhood education or later activities. Thus, midlife offers unique opportunities for bolstering cognitive reserves.

And there’s no reason to stop—taking piano lessons later in life can protect against neurodegeneration. Even if you’re beginning to experience the decline you’re aiming to evade, opportunities to build reserves still exist, according to Pascual-Leone. “Individuals experiencing mild early cognitive decline due to Alzheimer’s can still strengthen their cognitive reserve, helping to mitigate or suspend the risk of dementia,” he states. “It is never too late.”

Finally, while it’s easy to focus on physical activities that enhance cognitive reserves, emerging research suggests that psychological traits may also play a significant role.

For instance, having a sense of purpose in life correlates with better cognitive outcomes: individuals with a stronger sense of purpose maintain superior cognitive functioning despite similar levels of Alzheimer’s-related brain damage.

Similarly, a strong sense of coherence, the belief that life is comprehensible and manageable, can further enhance resilience against brain damage. Although the mechanism remains unclear, several studies show that people with high coherence exhibit reduced brain activation when completing identical tasks, hinting at greater neural efficiency compared with those with lower coherence.

The takeaway is that while you cannot alter the brain you were born with or the education you received early in life, it’s never too late to influence how it ages. It may not always be straightforward. “What challenges the brain is beneficial to the brain,” says Bialystok. However, engaging in social activities, staying physically active, learning a new language, playing an instrument, and finding purpose in life appear to be incredibly impactful.



Source: www.newscientist.com

Astronomers Uncover a Rare Red Supergiant Star

The newly identified Stephenson 2 DFK 52, an extraordinary red supergiant, is situated within the expansive stellar cluster RSGC2.



This image showcases the red supergiant star Stephenson 2 DFK 52 and its surroundings. Image credit: ALMA (ESO / NAOJ / NRAO) / Siebert et al.

RSGC2 is a cluster containing at least 26 red supergiants, located near the base of the Milky Way’s Scutum-Crux spiral arm, approximately 5,800 parsecs (18,917 light-years) away.
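As a quick sanity check on the distance quoted here, the standard conversion of 1 parsec ≈ 3.2616 light-years reproduces the figure in the text; the snippet below is illustrative arithmetic, not part of the study.

```python
# Convert the quoted cluster distance from parsecs to light-years.
PARSEC_IN_LIGHT_YEARS = 3.2616   # standard astronomical conversion factor

distance_pc = 5_800              # distance to RSGC2 quoted in the article
distance_ly = distance_pc * PARSEC_IN_LIGHT_YEARS

print(f"{distance_pc:,} pc = {distance_ly:,.0f} light-years")
# Prints 18,917 light-years, matching the figure above
```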

Also referred to as Stephenson 2, the cluster is an active site of recent star formation in the region where the spiral arm meets the galactic bulge.

A team of astronomers led by Mark Siebert from Chalmers University of Technology observed the RSGC2 star using the Atacama Large Millimeter/submillimeter Array (ALMA).

“What we see in this image of Stephenson 2 DFK 52 is a massive red supergiant star shedding clouds of gas and dust as it approaches the end of its life,” they explained.

“Such nebulae are typically found around dying massive stars; however, this particular cloud presents an intriguing mystery for astronomers.”

“This cloud of ejected material is the most expansive discovered around a giant star, spanning an impressive 1.4 light-years.”

“Stephenson 2 DFK 52 is quite similar to Betelgeuse, another renowned red supergiant, so we anticipated observing a comparable cloud surrounding it.”

“If Stephenson 2 DFK 52 were as close to us as Betelgeuse, the surrounding cloud would appear about one-third the size of the full Moon.”

Recent observations from ALMA have enabled astronomers to quantify the mass of material enveloping the star and analyze its velocity.

“Regions moving towards us appear in blue, while those receding are represented in red,” they stated.

“The data suggests that the star experienced a significant mass loss event about 4,000 years ago, followed by a slow-down in its current mass loss rate.”

The team estimates that Stephenson 2 DFK 52 has a mass between 10-15 solar masses and has already lost 5-10% of its mass.

“The rapid expulsion of such materials within a brief time frame poses a mystery,” the researchers commented.

“Could an unusual interaction with a companion star be responsible? Why does the cloud exhibit such a complex shape?”

“Understanding why Stephenson 2 DFK 52 has expelled so much material could offer insights into its eventual fate.”

The team’s paper is set to be published in the journal Astronomy and Astrophysics.

____

Mark A. Siebert et al. 2025. Discovery of the extraordinary red supergiant Stephenson 2 DFK 52 within the expansive stellar cluster RSGC2. A&A, in press; arXiv: 2507.11609

Source: www.sci.news

Hubble Discovers Dusty Clouds in the Tarantula Nebula

The stunning new image from the NASA/ESA Hubble Space Telescope reveals intriguing details of the Tarantula Nebula, a dynamic region of star formation located in the Large Magellanic Cloud.

This Hubble image showcases part of the Tarantula Nebula, located about 163,000 light-years away in the constellation of Dorado. The colorful image is a composite of exposures captured by Hubble’s Wide Field Camera 3 (WFC3) at ultraviolet, optical, and near-infrared wavelengths, based on data collected through four different filters. Colors were assigned by applying a different hue to each monochromatic image produced by an individual filter. Image credit: NASA / ESA / Hubble / C. Murray.

The Tarantula Nebula is situated roughly 163,000 light-years away in the southern constellation of Dorado.

Also known as NGC 2070 or 30 Doradus, this nebula is part of the Large Magellanic Cloud, one of our closest galactic neighbors.

The nebula’s brilliant glow was first observed in 1751 by French astronomer Nicolas Louis de Lacaille.

At its core lie some of the most massive stars known, some reaching up to 200 solar masses, making this region ideal for studying how gas clouds collapse under gravity to give rise to new stars.

“The Tarantula Nebula is the largest and brightest star-forming region not only within the Large Magellanic Cloud but also among the entire group of nearby galaxies that includes the Milky Way,” astronomers associated with Hubble stated.

“Within the nebula are some of the most massive stars discovered, some of which are approximately 200 times the mass of our Sun.”

“The scene depicted here is located far from the nebula’s center, where the superstar cluster known as R136 resides, but quite close to a rare type of star called a Wolf-Rayet star.”

“A Wolf-Rayet star is a massive star that has shed its outer hydrogen layers; it is extremely hot, bright, and generates a dense, powerful wind,” they elaborated.

The Tarantula Nebula is frequently observed by Hubble, and its multi-wavelength capabilities play a crucial role in capturing the intricate details of the nebula’s dusty cloud formations.

“The data used to produce this image come from an observational program known as Scylla, named after the multi-headed sea monster of the Greek myth of Odysseus (Ulysses),” the astronomers noted.

“The Scylla program was developed to complement another Hubble observational initiative called ULLYSES (Ultraviolet Legacy Library of Young Stars as Essential Standards).”

“While ULLYSES focuses on massive young stars in the Magellanic Clouds, Scylla explores the gas and dust structures surrounding them.”

Source: www.sci.news

Paleontologists Unveil a New Species of Plesiosaur

Paleontologists have uncovered a remarkable new genus and species of early plesiosauroid plesiosaur from a nearly complete skeleton discovered in the Jurassic Posidonia Shale of Holzmaden, Germany.

Life reconstruction of Plesionectes longicollum. Image credit: Peter Nicolaus.

The newly identified species, Plesionectes longicollum, thrived in the early Jurassic seas approximately 183 million years ago.

This marine reptile reached lengths of about 3.2 m, with a body length of 1.25 m and a tail measuring 81 cm.

The skeleton, complete with fossilized soft tissue remnants, was excavated in 1978 from a Posidonia Shale quarry in Holzmaden, Germany, and its distinct anatomical features are now fully recognized through thorough scientific examination.

“The specimen has been part of our collection for decades, yet prior studies never fully explored its unique anatomy,” remarked Dr. Sven Sachs, paleontologist at Naturkunde-Museum Bielefeld.

“Our in-depth analysis uncovered a rare combination of skeletal traits that clearly distinguishes it from all previously recognized plesiosaurs.”

Skeleton of Plesionectes longicollum. Scale bar – 30 cm. Image credit: S. Sachs & D. Madzia, doi: 10.7717/peerj.19665.

Plesionectes longicollum is particularly significant as it represents the oldest known plesiosaur from the Holzmaden area.

“This discovery adds another piece to the evolutionary puzzle of marine ecosystems during a pivotal period in Earth’s history,” stated Dr. Daniel Madzia, a paleontologist at the Polish Academy of Sciences.

“The early Toarcian epoch, when this creature existed, was marked by substantial environmental changes, including major marine anoxic events that impacted life in oceans globally.”

This finding illustrates that the Posidonia Shale, well known for its remarkably preserved fossils, harbors an even greater diversity of marine reptiles than previously acknowledged.

“The Posidonia Shale of Holzmaden has already yielded five other plesiosaur species, encompassing representatives of three major plesiosaur lineages,” the authors noted.

“This new addition provides one of the most vital insights into Jurassic marine life, enhancing our understanding of this era.”

The team’s results were published online in the journal PeerJ.

____

S. Sachs & D. Madzia. 2025. An unusual early plesiosauroid from the Lower Jurassic Posidonia Shale of Holzmaden, Germany. PeerJ 13: e19665; doi: 10.7717/peerj.19665

Source: www.sci.news

Webb Observations Reveal Two Stars Shape the Irregular Structure of NGC 6072

Astronomers captured a new high-resolution image of the planetary nebula NGC 6072 using two instruments on board the NASA/ESA/CSA James Webb Space Telescope.

This Webb/NIRCam image depicts NGC 6072, a planetary nebula located about 4,048 light-years away in the constellation of Scorpius. Image credit: NASA / ESA / CSA / STScI.

NGC 6072 is situated approximately 1,241 parsecs (4,048 light-years) away in the southern constellation of Scorpius.

Also known by designations such as ESO 389-15, HEN 2-148, and IRAS 16097-3606, this nebula has a dynamic age of about 10,000 years.

It was first discovered by British astronomer John Herschel on June 7, 1837.

“Since their discovery in the 1700s, astronomers have learned that planetary nebulae, the expanding shells of luminous gases expelled by dying stars, can take on various shapes and forms,” noted Webb astronomers.

“While most planetary nebulae are circular, elliptical, or bipolar, the new Webb image of NGC 6072 reveals a more complex structure.”

Images captured by Webb’s NIRCam (Near-Infrared Camera) suggest that NGC 6072 has a multipolar configuration.

“This indicates there are multiple oval lobes being ejected from the center in various directions,” the astronomers explained.

“These outflows compress the surrounding gas into a disk-like structure.”

“This suggests the presence of at least two stars at the center of this nebula.”

“In particular, a companion star appears to be interacting with an aging star, drawing in some of its outer gas and dust layers.”

The central area of the nebula, lit by its hot stars, glows in the light blue hue characteristic of near-infrared light.

Regions of gas and dust appear dark orange, broken by pockets and voids that appear dark blue.

This material likely persists where dense clumps of molecular gas shield themselves from the intense radiation emitted by the central star.

There may also be a temporal aspect; for thousands of years, rapid winds from the main star could have been blowing away the surrounding material as it loses mass.

This Webb/MIRI image highlights the planetary nebula NGC 6072. Image credit: NASA / ESA / CSA / STScI.

The longer wavelengths captured by Webb’s MIRI (Mid-Infrared Instrument) emphasize the dust, unveiling a star that astronomers believe resides at the center of the nebula.

“The star appears as a small pink dot in the image,” remarked the researchers.

“The mid-infrared wavelengths also reveal a concentric ring expanding outward from the central region.”

“This might indicate the presence of a secondary star at the heart of the nebula, obscured from direct observation.”

“This secondary star orbits the primary star, creating rings of material that spiral outward as the original star sheds mass over time.”

“The red regions captured by NIRCam and the blue areas highlighted by MIRI trace cool molecular gas (likely molecular hydrogen), while the central region traces hot, ionized gas.”

Source: www.sci.news

Oldest Known Sauropodomorph Dinosaur in East Asia Excavated in China

Wudingloong wui lived around 200 million years ago, during the Early Jurassic epoch, in what is now Yunnan Province, China.



Reconstructed skeleton and representative bones of Wudingloong wui. Individual scale bars – 5 cm. Reconstructed skeleton scale bar – 50 cm. Image credit: Wang et al., doi: 10.1038/s41598-025-12185-2.

Wudingloong wui was a medium-sized non-sauropodan member of Sauropodomorpha, a highly successful dinosaur clade found nearly worldwide, from Antarctica to Greenland.

“Chinese non-sauropodan sauropodomorphs are primarily known from the Lufeng Basin and the Lower Jurassic Lufeng Formation in Yunnan Province, and include species such as Lufengosaurus, Yunnanosaurus, Jingshanosaurus, Xingxiulong, and Yizhousaurus,” said Jamin Wang, a paleontologist at the Chinese Geological Museum, and colleagues.

“The recent discovery of Qianlong from the Lower Jurassic of neighboring Guizhou Province further expands our understanding of non-sauropodan sauropodomorphs in China.”

“The discovery of Wudingloong wui provides additional evidence that the sauropodomorph community of southwestern China is the most taxonomically diverse and morphologically varied in the world, featuring a range of species from early massospondylids to non-sauropodan sauropodiforms.”

Fossilized remains of Wudingloong wui were collected from the Yubacun Layer in Wande Town, Yunnan Province, China.

“The specimen includes a partial skeleton comprising the skull, lower jaw, atlas, axis, and the third cervical vertebra.”

“Fully developed skull elements and closed neurocentral sutures suggest that the specimen is likely a mature individual.”

Wudingloong wui is the earliest-diverging and geologically oldest sauropodomorph dinosaur discovered in East Asia.

“The new species falls within Sauropodomorpha, branching off before Massospondylidae and Sauropodiformes, and thus contributes valuable information about the sauropodomorph community of southwestern China,” the researchers stated.

“Thus, the Early Jurassic sauropodomorph community of southwestern China may have comprised four distinct associations: relatively small early-branching species, the medium-sized massospondylid Lufengosaurus, early sauropodiform forms, and medium-sized, presumably quadrupedal massopodans resembling Late Triassic to Early Jurassic assemblages such as those of the Elliot Formation of South Africa and Zimbabwe.”

“Close phylogenetic ties between Wudingloong and Plateosauravus from the Late Triassic Elliot Formation of South Africa, as well as Ruehleia from the Late Triassic of Germany, indicate that the early dispersal of sauropodomorphs into East Asia occurred at least by the Late Triassic Rhaetian (206-201 million years ago) or around the Triassic-Jurassic boundary (201 million years ago).”

“To substantiate this hypothesis, further samples and additional analyses are required.”

“Nonetheless, the discovery of Wudingloong raises questions about the distribution of non-sauropodan sauropodomorphs in East Asia and its relationship to the Triassic-Jurassic extinction event.”

The team’s paper is published in the journal Scientific Reports.

____

Y.-M. Wang et al. 2025. A new Early Jurassic dinosaur represents the earliest and oldest sauropodomorph in East Asia. Sci Rep 15: 26749; doi: 10.1038/s41598-025-12185-2

Source: www.sci.news

A Blanket of Wildfire Smoke Triggers Air Quality Alerts for Millions

On Monday, air quality warnings were issued for millions across the upper Midwest and northeastern regions as smoke from wildfires in Canada moved into these areas.

Areas expected to experience hazy skies include Minnesota, Wisconsin, Michigan, northern Indiana, Pennsylvania, New York, New Jersey, Connecticut, Massachusetts, Vermont, Rhode Island, New Hampshire, Delaware, and Maine, according to the National Weather Service.

In Canada, hundreds of wildfires remain uncontrolled, including 81 in Saskatchewan, 159 in Manitoba, and 61 in Ontario. Data from the Canadian Interagency Forest Fire Centre indicate that more than 16.5 million acres have burned this year, putting the country on track for a record-breaking wildfire season.

High-pressure systems over the Midwest are trapping smoke, contributing to air quality issues that may last for several days, according to the Michigan Department of Environment, Great Lakes, and Energy.

The Air Quality Index on Monday across 14 Midwest and Northeastern states indicated conditions ranging from “moderate” to “unhealthy” for the general population.

Wildfire smoke is particularly hazardous as it contains fine particles measuring less than 2.5 micrometers in diameter, which is about 4% the width of an average human hair. This type of pollution can penetrate deeply into the lungs, exacerbating asthma, lung cancer, and other chronic respiratory conditions.
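To put that size comparison in perspective, here is a small illustrative calculation; the 70-micrometer hair width is a commonly cited average and an assumption here, not a figure from the article.

```python
# Compare a PM2.5 particle with the width of an average human hair.
pm25_diameter_um = 2.5   # fine-particle diameter cited in the article (micrometers)
hair_width_um = 70.0     # assumed average human hair width (micrometers)

ratio = pm25_diameter_um / hair_width_um
print(f"A PM2.5 particle is roughly {ratio:.0%} the width of a human hair")
# Prints roughly 4%, consistent with the comparison above
```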

High levels of air pollution can lead to inflammation and weaken the immune system. Infants, children, the elderly, and pregnant women are especially at risk during poor air quality conditions.

Research indicates that climate change contributes to the frequency and intensity of wildfires. Elevated temperatures can desiccate vegetation, elevating the likelihood of wildfires igniting and spreading quickly.

Cities experiencing poor air quality on Monday included Milwaukee, Detroit, Buffalo, Albany (New York), Boston, and New York City. Multiple alerts remain in effect until Tuesday, according to the National Weather Service.

In the western regions, several wildfires are causing additional air quality concerns. Over 65,000 acres have burned in California’s Los Padres National Forest, where high temperatures and dry conditions are fueling the growth of wildfires.

In Colorado, the Air Quality Index also displayed “moderate” readings on Monday.

“If the smoke becomes thick in your area, we advise you to remain indoors,” stated the Colorado Department of Public Health and Environment. This advice applies especially to people with heart or respiratory conditions, young children, and the elderly. If smoke levels are moderate to heavy, consider reducing outdoor activities.

Source: www.nbcnews.com

Deep Microorganisms Capable of Harnessing Energy from Earthquakes


Microorganisms may derive energy from surprisingly confined environments


Fractured rocks from earthquakes could reveal a variety of chemical energy sources for the microorganisms thriving deep beneath the surface, and similar mechanisms may feed microorganisms on other planets.

“This opens up an entirely new metabolic possibility,” says Kurt Konhauser, from the University of Alberta, Canada.

All life forms on Earth rely on flowing electrons to sustain themselves. On the planet’s surface, plants harness sunlight to create carbon-based sugars that are consumed by animals, including humans. This initiates a flow of electrons from the carbon to the oxygen we breathe. The chemical gradient formed by these carbon electron donors and oxygen electron acceptors, known as redox pairs, generates energy.

Underground, microbes also depend on redox pairs, but deep ecosystems have no access to sunlight, so the familiar carbon-oxygen pairing is unavailable. “Challenges remain in identifying these underground [chemical gradients]. Where do they originate?” Konhauser asks.

Hydrogen gas, generated by the interaction of water and rock, serves as a primary electron source for these microbes, much like carbon sugars do on the surface. This hydrogen arises from the breakdown of water molecules, which can occur when radioactive rocks react with water or iron-rich formations. During earthquakes, when silicate rocks are fragmented, they expose reactive surfaces that can split water, producing considerable amounts of hydrogen.

However, to utilize that hydrogen, microorganisms require electron acceptors to complete the redox pair; hydrogen on its own is not enough. “Having the food is great, but without a fork, you can’t eat it,” remarks Barbara Sherwood Lollar from the University of Toronto, Canada.

Konhauser, Sherwood Lollar, and their research team used rock-crushing machines to simulate the reactions that generate hydrogen gas in geological settings and that could go on to form a complete redox pair. They crushed quartz crystals, mimicking the stresses in various types of faults, and mixed in the water present in most rocks along with different forms of iron and rock.

The crushed quartz reacted with water to generate significant quantities of hydrogen, both in stable molecular forms and more reactive species. The team’s findings revealed many of these hydrogen radicals react with iron-rich liquids, creating numerous compounds capable of either donating or accepting enough electrons to establish different redox pairs.

“Numerous rocks can be harnessed for energy,” Konhauser pointed out. “These reactions mediate diverse chemical processes, suggesting various microorganisms can thrive.” Secondary reactions involving nitrogen or sulfur could yield even broader energy sources.

“I was astonished by the quantities,” said Magdalena Osburn from Northwestern University, Illinois. “It produces immense quantities of hydrogen, and it also initiates fascinating auxiliary chemistry.”

Researchers estimate that earthquakes generate far less hydrogen than other water-rock interactions within the Earth’s crust. However, their insights imply that active faults may serve as local hotspots for microbial diversity and activity, Sherwood Lollar explained.

Importantly, a full earthquake isn’t a prerequisite. Similar reactions can take place as rocks fracture in seismically quiet settings, such as stable continental interiors or geologically dead planets like Mars. “Even within these massive rocks, you can observe pressure redistributions and shifts,” she noted.

“It’s truly exciting to explore sources I was recently unfamiliar with,” stated Karen Lloyd from the University of Southern California. The variety of usable chemicals produced in actual fault lines is likely even more diverse. “This likely occurs under varying pressures, temperatures, and across vast spatial scales, involving a broader range of minerals,” she said.

Energy from infrequent events like earthquakes may also illuminate the lifestyles of what Lloyd refers to as aeonophiles—deep subterranean microorganisms thought to have existed for extensive time periods. “If we can endure 10,000 years, we may experience a magnitude 9 earthquake that yields a tremendous energy surge,” Lloyd added.

This research is part of a growing trend over the last two decades that broadens our understanding of where and how organisms can endure underground, states Sherwood Lollar. “The deep rocks of continents have revealed much about the habitability of our planet,” she concluded.



Source: www.newscientist.com

Skulls of Massive Carnivorous Dinosaurs Reveal Which Had a “Bone-Crushing” Bite

Illustration of Tyrannosaurus rex

Roger Harris/Getty Images/Science Photo Library

An examination of colossal dinosaur skulls reveals that some species preferred to shred their prey, while others delivered bone-crushing bites.

Andre Lowe and Emily Rayfield from the University of Bristol, UK, studied the skulls of 18 Mesozoic theropod species. This varied group, which includes T. rex, Giganotosaurus, and Spinosaurus, walked on two legs and was characterized by large heads and razor-sharp teeth.

Nevertheless, despite their similarities, these dinosaurs’ feeding behavior cannot be generalized. Eric Snively from Oklahoma State University notes that Giganotosaurus, with its “thin sawtooth teeth” reminiscent of a cross between a great white shark and a Komodo dragon, was built for tearing away large chunks of flesh from its prey. In contrast, the semi-aquatic Spinosaurus had a unique anatomy likened to a heron supported by a dachshund’s body, equipped with teeth similar to those of crocodiles.

Using 3D scans of the skulls’ surfaces, the researchers explored the bite mechanics of these dinosaurs with an engineering method used to model stress in structures such as bridges. By reconstructing each dinosaur’s skull musculature through comparison with modern relatives such as birds and crocodiles, they found that Giganotosaurus and Spinosaurus had significantly weaker bites than the more recent Tyrannosaurus, which used a robust, shorter skull to exert substantial bone-crushing force. “Ultimately, Tyrannosaurus put even more emphasis on the skull than we anticipated, indicating harder biting,” Snively remarked.

“The feeding strategies of these apex predators are more intricate than previously thought,” states Fion Waisum Ma from the Beipiao Palace Museum in China. “T. rex existed during the late Cretaceous period, a time when competition for prey was intense,” she adds.


Source: www.newscientist.com

You Can Lose Weight on a Highly Processed Food Diet, but Not as Much

Cereal bars and protein bars can either be store-bought or homemade, often containing ultra-processed components.

Drong/Shutterstock

Research suggests that while it’s possible to shed weight consuming highly processed foods, the results may not be as significant as when they are eliminated from the diet.

Foods are categorized as ultra-processed when they include ingredients such as high-fructose corn syrup or additives meant to enhance flavor and presentation, such as flavorings and preservatives.

Numerous studies have connected the consumption of ultra-processed foods to adverse health effects, including cardiovascular issues, type 2 diabetes, and weight gain. However, it’s debated whether the unhealthy aspect is solely due to certain ingredients or if the processing itself is inherently damaging.

To explore this in relation to weight loss, Samuel Dicken at University College London and his team conducted a randomized crossover trial, assigning 55 overweight or obese individuals to either an ultra-processed or a minimally processed diet.

“People often think of pizza and chips, yet the study incorporated meals from the UK Eatwell Guide, featuring protein sources like beans, fish, and meat, while encouraging a balanced diet with at least five portions of fruits and vegetables. The meals were matched in terms of fats, sugars, salt, and carbohydrates,” explained Dicken.

Participants were provided with the meals, making this the first study to assess these diets under real-world conditions rather than in a clinical setting. The ultra-processed options included lower-fat and lower-salt items such as breakfast cereals, protein bars, chicken sandwiches, and ready-made lasagna. “These are the types of foods that carry health claims in supermarkets,” says Dicken.

Meanwhile, the minimally processed meals encompassed homemade options such as overnight oats, chicken salad, freshly baked bread, and spaghetti bolognese. Both groups were provided around 4,000 calories daily, with the instruction to eat to their satisfaction. Participants switched between the diets after eight weeks, taking a four-week break before transitioning again.

Although the study’s primary aim focused on the health effects of balanced diets prepared in various ways rather than directly targeting weight loss, both diets resulted in weight reductions. The minimally processed diet led to a 2% weight loss, while those on the ultra-processed diet saw a 1% decrease.

“We observed greater weight loss from the minimally processed diets, as well as increased fat loss and a notable reduction in cravings,” stated Dicken.

Further evaluations revealed that minimally processed diets contributed to lower body fat volumes and improved blood markers. Interestingly, participants on the ultra-processed diet exhibited decreased levels of low-density lipoprotein (LDL), known as “bad” cholesterol.

However, Ciarán Forde from Wageningen University in the Netherlands pointed out that ultra-processed meals are typically more calorie-dense compared to minimally processed alternatives. “Fundamental questions remain regarding which specific treatments or ingredients drive the observed outcomes,” he noted.

Forde also emphasized that the weight loss observed might not be applicable to the general population since participants started as overweight or obese and transitioned to healthier eating habits.


Source: www.newscientist.com

Could We Send a Spacecraft to Intercept the Interstellar Object 3I/ATLAS?

NASA’s Juno spacecraft may be tasked with intercepting interstellar objects

NASA/JPL-Caltech

Interstellar objects passing through our solar system make a brief journey around the sun before heading back into deep space. Astronomers can capture images of comet 3I/ATLAS as it traverses the solar system, but is there any possibility of intercepting the object itself?

Researchers around the world are investigating several strategies, including repurposing European Space Agency (ESA) missions and rerouting existing NASA spacecraft. However, the task is complicated by the comet’s speed of around 60 kilometers per second and the limited preparation time available.

One notable proposal comes from Avi Loeb at Harvard University, who has previously suggested that the interstellar object ‘Oumuamua could be an alien spacecraft and has made a similar claim about 3I/ATLAS. Loeb and his team have published a paper, not yet peer-reviewed, arguing that NASA’s Juno spacecraft could adjust its orbit around Jupiter to rendezvous with 3I/ATLAS on March 14th next year.

Nonetheless, this idea faces challenges. Mark Burchell from the University of Kent emphasizes the aging spacecraft’s limitations. Launched in 2011, Juno was initially slated to end its mission by plunging into Jupiter’s atmosphere in 2021, a finale that has been pushed back to September this year. The spacecraft has already experienced two technical issues this year, both resolved by engineers.

“The current orbit allows for closer views of Jupiter and passes by Io [Jupiter’s moon] in 2023 and 2024, exposing the spacecraft to significant radiation, so the performance anomalies that necessitated restarts are unsurprising,” Burchell explains. “If the proposed maneuvers are successful and the instruments function properly, there might be valuable data to acquire.”

In a post on X, Jason Wright from Penn State has also voiced skepticism regarding this concept, highlighting that the spacecraft has limited fuel and systematic engine issues.

Another potential avenue to observe 3I/ATLAS up close is the ESA’s Jupiter Icy Moons Explorer (Juice). Luca Conversi from ESA says the agency is considering the possibility. “We acknowledge this valuable opportunity and are currently assessing the technical feasibility. However, we can’t divulge too much at this stage,” Conversi states.

Although Juice is closer to 3I/ATLAS than Earth is, it cannot easily alter its course towards the comet. “I’m uncertain if redirecting it to a comet is practical. Astrodynamics is far more complex than depicted in science fiction films, and altering a spacecraft’s trajectory is quite challenging,” comments Conversi.

Several spacecraft currently orbiting Mars, including Mars Reconnaissance Orbiter and Mars Odyssey, are nearing the end of their operational lifespans. Research by Atsuhiro Yaginuma at Michigan State University and colleagues suggests that redirecting one of them could have advantages, though it is unclear whether these spacecraft have enough fuel for such a journey.

The ESA is also developing a mission designed to improve the chances of reaching interstellar objects in the future. The Comet Interceptor spacecraft, scheduled for launch in 2029, will wait at a gravitationally stable point near Earth until a comet or interstellar object worth targeting is discovered. Such missions are unusual, as scientists will not know in advance what the target will be or when it will appear.

Colin Snodgrass at the University of Edinburgh, deputy lead of Comet Interceptor, explains that the mission would “need a bit of extra maneuverability” to intercept fast-moving objects like 3I/ATLAS. For these swift visitors, he suggests a mission with a stripped-down payload. “If the goal is simply speed, minimize non-essential equipment and prioritize fuel mass,” he advises.

Another future concept involves launching small satellites into large orbits, one each month. “This would distribute them across Earth’s orbit,” Snodgrass explains. “At any time, one of them could come back past Earth and use its gravity to slingshot towards interesting locations.”

Sky surveys such as the Legacy Survey of Space and Time could soon improve our understanding of how often these objects enter our solar system and give earlier warning of their arrival. “When they are moving this quickly, timing makes a significant difference. Having warnings well ahead of perihelion, rather than at short notice, will significantly change how we can respond,” Snodgrass remarks.

Topic:

Source: www.newscientist.com

Hidden Superpowers of Hibernating Animals Might Be Within Human DNA

Recent research by scientists at the University of Utah sheds light on the genetics behind hibernation, potentially paving the way for treatments that could reverse neurodegeneration and diabetes.

Investigating how hibernation evolved in species such as bats, ground squirrels, and lemurs can help unveil the mysteries of their extraordinary resilience. Image credit: Chrissy Richards.

A gene cluster known as the fat mass and obesity-associated (FTO) locus is crucial to understanding hibernation capabilities. Interestingly, these genes are also present in humans.

“What stands out in this region is that it represents the most significant genetic risk factor for obesity in humans,” states Professor Chris Gregg of the University of Utah, lead author of both studies.

“Hibernators seem to leverage genes in the FTO locus uniquely.”

Professor Gregg and his team discovered hibernator-specific DNA regions near the FTO locus that regulate the expression of nearby genes, modulating their activity.

They hypothesize that hibernators can accumulate weight prior to entering winter by adjusting the expression of adjacent genes, particularly those at or near the FTO locus, utilizing fat reserves gradually for winter energy needs.

Moreover, regulatory regions linked to hibernation outside the FTO locus appear to play a significant role in fine-tuning metabolism.

When the research team mutated these hibernation factor-specific regions in mice, they observed variations in body weight and metabolism.

Some mutations accelerated or inhibited weight gain under specific dietary conditions, while others affected the mice’s ability to restore body temperature post-hibernation or regulate their overall metabolic rate.

Interestingly, the hibernator-specific DNA regions identified by researchers are not genes themselves.

Instead, this region comprises a DNA sequence that interacts with nearby genes, modulating their expression like conductors guiding an orchestra to adjust volume levels.

“This indicates that mutating a single hibernator-specific region can influence a broad array of effects well beyond the FTO locus,” notes Dr. Susan Steinwand of the University of Utah, first author of the first study.

“Targeting a small, inconspicuous DNA region can alter the activity of hundreds of genes, which is quite unexpected.”

Gaining insight into the metabolic flexibility of hibernators may enhance the treatment of human metabolic disorders like type 2 diabetes.

“If we can manipulate more genes related to hibernation, we may find a way to overcome type 2 diabetes, similar to how hibernators transition back to normal metabolic states,” says Dr. Elliot Ferris of the University of Utah, first author of the second study.

Locating genetic regions associated with hibernation poses a challenge akin to extracting needles from a vast haystack of DNA.

To pinpoint relevant areas, scientists employed various whole-genome technologies to investigate which regions correlate with hibernation.

They then sought overlaps among the outcomes of each method.

First, they searched for DNA sequences that are shared across most mammals but have recently changed in hibernators.

“If a region has remained relatively unchanged among species for over 100 million years, but shows significant alterations in hibernating mammals, that signals features critical for hibernation,” remarked Dr. Ferris.

To probe the biological mechanisms of hibernation, the researchers fasted mice, which produces metabolic changes similar to those seen in hibernation, and identified genes whose expression shifted during fasting.

Subsequently, they identified genes that serve as central regulators or hubs for these fasting-induced gene expressions.

Numerous recently altered DNA regions in hibernators appear to interact with these central hub genes.

Consequently, the researchers predict that the evolution of hibernation necessitates specific modulations in hub gene regulation.

These regulatory mechanisms constitute a potential candidate list of DNA elements for future investigation.

Most alterations related to hibernation factors in the genome seem to disrupt the function of specific DNA rather than impart new capabilities.

This implies that hibernation may have shed constraints, allowing for great flexibility in metabolic control.

In essence, the human metabolic regulator is constrained to a narrow energy expenditure range, whereas, for hibernators, this restriction may not exist.

Hibernators can not only reverse neurodegeneration-like damage but also avoid muscle atrophy, stay healthy through dramatic weight fluctuations, and show signs of slower aging and greater longevity.

The researchers suggest that humans may already carry the genetic blueprint for these hibernator superpowers, if we can learn to flip the right metabolic switches.

“Many individuals may already have the genetic structure in place,” stated Dr. Steinwand.

“We must identify the control switches for these hibernation traits.”

“Mastering this process could enable researchers to bestow similar resilience upon humans.”

“Understanding these hibernation-associated genomic mechanisms provides an opportunity to potentially intervene and devise strategies for tackling age-related diseases,” remarks Professor Gregg.

“If such mechanisms are embedded within our existing genome, we could learn from hibernation to enhance our health.”

The findings are published in two papers in the journal Science.

____

Susan Steinwand et al. 2025. Conserved non-coding cis elements associated with hibernation regulate metabolism and behavioral adaptation in mice. Science 389 (6759): 501-507; doi: 10.1126/science.adp4701

Elliot Ferris et al. 2025. Genome convergence in hibernating mammals reveals the genetics of metabolic regulation of the hypothalamus. Science 389 (6759): 494-500; doi: 10.1126/science.adp4025

Source: www.sci.news

Understanding Frost Formation on Mars – Sciworthy

Picture a winter morning where everything glistens in white. Morning frost is a testament to Earth’s water cycle, forming as water vapor freezes out of air chilled overnight. A similar phenomenon occurs on Mars, some 63 million miles (around 102 million kilometers) away, giving scientists a unique opportunity to understand how water behaves on the Red Planet.

A group of researchers led by Dr. Valantinas at the University of Bern has uncovered evidence that morning frost may indeed exist on Mars. They identified this potential frost in bowl-shaped depressions known as calderas at the summits of the Tharsis volcanoes. Among these volcanoes, Olympus Mons stands out: at 21 km (approximately 13 miles) high, more than double the height of Mount Everest, it is the tallest volcano in the solar system.

Earlier studies estimated that around 1 trillion kilograms (approximately 2.2 trillion pounds) of water vapor cycles through Mars’ atmosphere annually between its northern and southern hemispheres. The massive Tharsis volcanoes disrupt this flow because of their great elevation, creating pockets of lower pressure and wind speed known as microclimates. The Valantinas team concentrated on this region because the microclimate above the volcanoes provides ideal conditions for frost development, increasing the likelihood of water vapor condensing to form frost.

To search for potential frost, the team analyzed thousands of images captured by the Colour and Stereo Surface Imaging System (CaSSIS) aboard the European Space Agency’s Trace Gas Orbiter, a satellite orbiting Mars. They noted that a bright bluish tint in an area might indicate frost. By focusing on images with these cooler tones, they set out to gather more evidence supporting the presence of frost.

To accomplish this, the team used an instrument that identifies the composition of materials from the wavelengths of light they reflect, known as a spectrometer. A spectrometer aboard the Trace Gas Orbiter, named NOMAD, yielded ice readings concurrent with the CaSSIS images. By combining CaSSIS imagery with NOMAD spectrometer data and additional high-resolution stereo camera images, the researchers pinpointed frost in 13 distinct locations on Mars’ volcanoes.

The Valantinas team anticipated that the observations would reveal frost, but they still needed to identify its type. Mars has a carbon dioxide atmosphere, which means carbon dioxide frost can naturally appear on the planet’s surface. To differentiate between carbon dioxide and water frost, the researchers analyzed surface temperatures on Mars.

They noted that the temperature at which carbon dioxide frost forms on Mars is around -130°C (-200°F), resulting in the conversion of solid carbon dioxide to gas as temperatures rise. Conversely, water frost appears at about -90°C (-140°F). Using a general circulation model, the team estimated that the average surface temperature in the areas where frost was discovered is roughly -110°C (-170°F), a temperature too warm for carbon dioxide frost but sufficiently cool for water frost.
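As a reading aid, the temperature argument above can be boiled down to a toy comparison. This is only a sketch: the thresholds are the rounded figures quoted in this article, and the real identification relied on spectrometer data and a general circulation model, not a single cutoff.

```python
def likely_frost_type(surface_temp_c: float) -> str:
    """Toy classifier using the approximate thresholds quoted in the article.

    CO2 frost requires roughly -130 C or colder; water frost can form and
    persist up to roughly -90 C. These cutoffs are simplifications.
    """
    if surface_temp_c <= -130:
        return "cold enough for carbon dioxide frost (and water frost)"
    elif surface_temp_c <= -90:
        return "too warm for CO2 frost, but cold enough for water frost"
    else:
        return "too warm for either kind of frost"


# The modelled average surface temperature where frost was seen: about -110 C.
print(likely_frost_type(-110))
# -> too warm for CO2 frost, but cold enough for water frost
```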

Observations revealed frost deposits along the floors and edges of the volcanic calderas, while bright, warm areas inside the caldera lacked these deposits. The team also observed that some frost partially rested on dust-like particles on the ground, which cool down more at night and warm gradually in the morning, providing an ideal surface for frost. Additionally, frost was only evident during the early mornings on Mars, likely due to the daily warming cycle of the planet’s surface, similar to Earth.

The Valantinas team used imaging and chemical measurements at Mars to track the exchange of water between the planet’s surface and atmosphere. They recommend that future researchers continue to monitor CaSSIS images of these regions to deepen understanding of how morning frosts develop on Mars.

For alternative perspectives on this article, please see summary by Paige Lebman, a University of Delaware student.



Source: sciworthy.com

Five Years Later: How Has the Sci-Fi Cult Classic Devs Aged?

Forest (Nick Offerman) is the CEO of quantum computing firm Amaya

Album/Alamy

Devs
Alex Garland
FX on Hulu, Disney+

March 2020 was an awkward time for many. This might explain why Devs, an eight-part sci-fi series by Alex Garland, premiered during a global lockdown and struggled to find a wide audience; I, too, unfortunately missed it.

There are various reasons I decided to catch up on it now: Garland’s work had lingered in my mind after I enjoyed 28 Days Later, and the darkly captivating world of Devs felt like a welcome escape from the heatwave. Mainly, though, I was curious about how it had aged five years after its debut.

In Devs, Lily Chan (Sonoya Mizuno) works as an engineer at Amaya, a quantum computing firm based in San Francisco. Each day, she works alongside her boyfriend and colleague Sergei (Karl Glusman), who is part of Amaya’s AI division. After being invited to join the secretive Devs program, Sergei disappears almost immediately, leaving Lily convinced that Amaya and the enigmatic Devs project played a role in his vanishing.

Everything in Devs feels cold yet beautiful. The score and sound design are haunting, punctuated by jolts of static and dialogue. The performances reflect this chill, particularly Mizuno’s compelling portrayal of Lily, while Alison Pill shines as Katie, a scientist at Amaya. The company’s campus is an ethereal setting of glass and polished concrete, enveloped by pine trees, illuminated by glowing halos and overlooked by a towering statue of a young girl.

Yet the stunning Devs compound overshadows everything else; it feels like stepping into a Byzantine mosaic rendered secular and three-dimensional. This meticulously organized sanctuary for clandestine research is swathed in lavish gold and suspended inside a Faraday cage that shields it from electromagnetic interference.

The nature of that research prompts profound questions about human impulses and risks redefining humanity itself; Forest insists the project is tied to everything that matters. The series boldly explores how incredible technological advances might arise, or be stunted, because of the personal philosophies of privileged figures like him.

Watching Devs at its best feels akin to being enveloped in a soothing sound bath, its slow reverberations drawing you in. At its least inspired, it can seem self-indulgent. Still, it offers an intellectual experience, tackling fascinating concepts such as the multiverse. However, Lily’s quest to uncover the truth about Sergei gets sidelined in favor of Amaya’s overarching mysteries, and the series occasionally spirals into self-importance.

In one of life’s quirks (light spoilers ahead), the show’s most insightful theme might be the desire to revisit the past and what we gain or lose along the way. Such reflections prove more compelling than lofty visions of our technological future. I’m glad I finally watched Devs five years after its release; despite some indulgent tendencies, it left me with plenty to appreciate. Even if Forest and his counterparts don’t find full success, Devs still resonates deeply with me.

I also recommend…

Ex Machina
Alex Garland

In Garland’s directorial debut, programmer Caleb (Domhnall Gleeson) is tasked by his boss with evaluating whether Ava, an artificial intelligence, possesses true sentience. The film delivers a chilling psychological exploration.

Never Let Me Go
Mark Romanek

This adaptation of Kazuo Ishiguro’s novel depicts a very unusual boarding school through a haunting lens; it’s flawed yet captivating and definitely worth a watch.

Bethan Ackerley is a sub-editor at New Scientist, with a passion for science fiction, sitcoms, and the eerie. Follow her on Twitter at @inkerley


Topics:

Source: www.newscientist.com

Boost Your Mathematical Creativity with This String Art Game

“Like any other mathematical concept, this idea is open to exploration.”

Peter Rowlett

As a child, Mary Everest Boole discovered some cards with evenly spaced holes punched along their edges. By stretching threads from the holes on one edge to those on another, she created a family of straight lines whose envelope traced a graceful, symmetrical curve, an exercise that fostered her intuition for formal geometry.

A few years later, in 1864, she found herself a widow with five children. Despite the academic establishment’s disregard for women’s contributions, she persevered as a librarian and math tutor in London.

Boole believed that engaging children with mathematical objects, like her curve stitching activities, could deepen their understanding. She connected mathematical imagination and creativity in various ways, using fables and history to elucidate logic and algebra.

Now you can explore by creating a “string art” image inspired by her work. Begin with a pair of horizontal and vertical axes, each 10 cm long and marked with numbers 1-10 spaced 1 cm apart. Create a straight line from point 1 on the horizontal axis to point 10 on the vertical axis. Continue connecting points 2 to 9, 3 to 8, and so forth. While all lines are straight, the intersections will form curves.

If you have used drawing software, you may have adjusted a path’s shape by dragging the control handles attached to its endpoints. These are Bézier curves, which are crucial in computer-aided design, and the curve that emerges from Boole’s stitching is an early example: the envelope of the straight lines is a quadratic Bézier curve whose control points are fixed to the ends of the axes and to the corner where they meet.
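If you would rather experiment on screen before picking up needle and thread, here is a minimal sketch of the construction, assuming Python with NumPy and Matplotlib installed. It draws the ten chords described above and overlays the quadratic Bézier curve that their envelope traces; the point count and spacing simply mirror the instructions in this article.

```python
import matplotlib.pyplot as plt
import numpy as np

n = 10  # ten numbered points on each 10 cm axis, spaced 1 cm apart

fig, ax = plt.subplots()

# Connect point i on the horizontal axis to point (n + 1 - i) on the vertical
# axis, i.e. 1 to 10, 2 to 9, 3 to 8, and so on.
for i in range(1, n + 1):
    ax.plot([i, 0], [0, n + 1 - i], color="tab:blue", linewidth=0.8)

# The envelope of these straight lines is a quadratic Bezier curve whose end
# points lie on the axes at (n + 1, 0) and (0, n + 1) and whose middle control
# point is the corner where the axes meet (here, the origin).
t = np.linspace(0, 1, 200)
p0, p1, p2 = np.array([n + 1, 0]), np.array([0, 0]), np.array([0, n + 1])
curve = ((1 - t) ** 2)[:, None] * p0 + (2 * t * (1 - t))[:, None] * p1 + (t ** 2)[:, None] * p2
ax.plot(curve[:, 0], curve[:, 1], color="tab:red", linestyle="--")

ax.set_aspect("equal")
plt.show()
```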

With practice, you should be able to draw the lines without numbering the points; experiment with different colors as well. Boole recommended this as a stitching exercise rather than a drawing, and you can do it with thread too: simply substitute holes for the dots.

Like other mathematical concepts, this idea invites exploration. For instance, alter the axes to meet at varying angles, or examine what occurs when the distances between dots differ, such as 1 cm for one line and 2 cm for another.

Consider drawing a circle or another shape, distributing dots evenly around it, then connecting them systematically. For example, with ten dots, connect each one to the dot a fixed number of places further around, always working in the same direction. You can even recreate the boat-like image shown above (center, right). What else can you create?

For more creative projects, visit newscientist.com/maker

Topic:

Source: www.newscientist.com

Universal Detectors Identify AI Deepfake Videos with Unprecedented Accuracy

Deepfake video showcasing Australian Prime Minister Anthony Albanese on a smartphone

Australian Associated Press/Alamy

A universal deepfake detector has demonstrated record accuracy in identifying videos that have been altered or entirely generated by AI. The technology could help flag non-consensual adult content, deepfake scams, and misleading political videos created with AI.

The rise of accessible deepfake creation tools powered by inexpensive AI has led to rampant online distribution of synthetic videos. Numerous instances involve non-consensual depictions of women, including celebrities and students. Deepfakes are also used to sway political elections and to run financial scams targeting everyday consumers and corporate leaders.

Nevertheless, most AI models designed to spot synthetic videos focus primarily on faces, so they excel at identifying one specific type of deepfake, in which a person’s face is swapped into existing footage. Such models need a video with a manipulated face, says Rohit Kundu at the University of California, Riverside; they struggle with background alterations or entirely synthetic videos. “Our approach tackles that particular issue, considering the entire video could be synthetically produced.”

Kundu and his team have developed a universal detector that uses AI to analyze both facial features and background elements throughout a video, picking up subtle spatial and temporal inconsistencies in deepfake content. It can spot, for example, unnatural lighting on people inserted into face-swapped videos, as well as discrepancies in the background details of fully AI-generated videos. The detector can even recognize manipulation in synthetic videos with no human faces at all, and it flags realistic scenes rendered in video games such as Grand Theft Auto V, even though no AI generation is involved.

“Most traditional methods focus on AI-generated facial videos, such as face swaps and lip-synced content,” says Siwei Lyu at the University at Buffalo in New York. “This new method is broader in its applications.”

The universal detector reached accuracies of 95% to 99% on four sets of test videos featuring manipulated faces, surpassing all previously published methods for detecting this type of deepfake. In evaluations of fully synthetic videos, it also yielded more precise results than any other detector assessed to date. The researchers presented their findings at the 2025 IEEE Conference on Computer Vision and Pattern Recognition in Nashville, Tennessee, on June 15th.

Several researchers from Google also contributed to the development of these new detectors. Though Google has not responded to inquiries regarding whether this detection method would be beneficial for identifying deepfakes on platforms like YouTube, the company is among those advocating for watermarking tools that help label AI-generated content.

The universal detector still has room for improvement. For instance, it cannot yet spot deepfakes used during live video conference calls, a tactic some scammers are now employing.

“How can you tell if the individual on the other end is genuine or a deepfake-generated video, even with network factors like bandwidth affecting the transmission?” asks Amit Roy-Chowdhury from the University of California Riverside. “This is a different area we’re exploring in our lab.”

Topics:

Source: www.newscientist.com

Lancet Highlights Plastic Crisis Ahead of Global Plastics Treaty

A recent report in a prominent medical journal finds that the worldwide “plastic crisis” costs governments and taxpayers a staggering $1.5 trillion annually.

By 2060, plastic production is projected to triple, with less than 10% of it being recycled, and approximately 8,000 megatons of plastic already contaminate the planet, according to the review published on Sunday by The Lancet.

This issue inflicts damage at every phase, from fossil fuel extraction and production to human consumption and eventual environmental disposal, according to the British publication.

“Plastics pose a significant, escalating, and often overlooked threat to both human and environmental health,” the report warns. “They contribute to illness and mortality from infancy to old age, exacerbating climate change, pollution, and biodiversity loss.”

The report also notes that these adverse effects “disproportionately impact low-income and vulnerable populations.”

In June, boaters collected recyclable plastic from the heavily polluted Citarum River in Bandung, West Java, Indonesia.
Timur Matahari/AFP via Getty Images

This is the latest alarming message from experts about the widespread dangers posed by plastics, which the journal calls “the material for our age.” After years of warnings about their presence in oceans and rivers, microplastics have now been found in humans, including in breast milk and brain tissue.

Sunday’s announcement initiated a new monitoring system called the “Lancet Countdown on Health and Plastics.”

It was launched alongside talks in Geneva, Switzerland, where representatives from 175 countries are seeking to establish the first global treaty on plastics.

Activists are hopeful that the discussions taking place from Tuesday through August 14th will set key objectives for reducing plastic production. Some nations, including China, Russia, Iran, and Saudi Arabia, have previously resisted these initiatives and advocated for increased plastic recycling.

According to the Lancet, major petrochemical companies are “key players” in the escalating production of plastics as they shift their focus towards plastics in light of dwindling fossil energy demand.

Plastics, including those used in food and beverage containers and packaging, contain up to 16,000 different chemicals, which “enter the human body through ingestion, inhalation, and dermal absorption,” the study states.

Pregnant women, infants, and young children are “especially vulnerable,” facing risks such as miscarriage, physical deformities, cognitive impairment, and diabetes. In adults, the risks include cardiovascular disease, stroke, and cancer.

“Given the substantial gaps in our understanding of plastic chemicals, it is likely that the health threats they pose are undervalued, and the disease burden resulting from them is currently underestimated,” the report adds.

The Lancet cited a study that estimated the global financial burden of these illnesses to be $1.5 trillion.

“It is now evident that the world cannot escape the plastic pollution crisis,” stated the Lancet. “Addressing this crisis requires continuous research, involving science-backed interventions: legislation, policy, monitoring, enforcement, incentives, and innovation.”

Source: www.nbcnews.com

Why Food Noise, Not Willpower, Holds the Secret to Weight Loss

Among the countless enigmas of science, the one currently occupying me is the toffee tucked away in my kitchen cupboard. It has me completely captivated, as if some unseen current were sweeping me towards it.

The pressing question is: how? How do chocolate bars diminish my willpower when I thought I was a seasoned adult who should know better?

The answer may lie in the concept of “food noise”: the pervasive, intrusive thoughts about food that can disrupt our relationship with eating.

In the case of my beloved toffee crunch, these fleeting thoughts are mere distractions that I inevitably give in to within an hour.

Food noise can be a serious issue

For some individuals, food noise is a substantial concern, explains Daisuke Hayashi, a doctoral researcher at Penn State University.

He notes that when the volume increases, food noise becomes a source of “a constant obsession that undermines an individual’s well-being and complicates healthy choices.”

Although research on food noise is sparse, in 2023, Hayashi and his colleagues published a paper aiming to connect established knowledge about food cues and anecdotal insights into food noise. Their team is actively pursuing further research.

“I believe that research on food noise is at a point where asking the right questions and seeking empirical data is more crucial than making presumptions,” Hayashi says.

“From our preliminary findings, I can tell you that most social media accounts describe food noise as a source of distress, with people indicating they would rather avoid it.”


Individuals respond variably to food cues

Hayashi says food noise is characterized by an ongoing, heightened reaction to food cues, ranging from social media adverts to the aroma of someone else’s meal, as well as to the internal signals from appetite-regulating hormones.

These cues exist to prompt us to eat, but some individuals respond to them far more strongly than others.

“A combination of personal attributes such as genetics, lifestyle, and stress can lead to heightened sensitivity to food noise, particularly as we are frequently subjected to strong external food signals.”

An intriguing study has emerged from research on GLP-1 agonists, a category of weight loss medications including Ozempic.

“Anecdotal evidence suggests that many individuals use the term ‘food noise’ to describe obsessive behaviors regarding food prior to starting such treatments,” says Hayashi.

“My hypothesis is that one of the impacts of these medications, which might clarify their efficacy in obesity treatment, is that they diminish the responsiveness to food cues and lessen susceptibility to food noise.”

Further research will address this question and determine how widely food noise affects people.

Foods high in sugar, fat, and salt, like chocolate digestives, are engineered to hit the “bliss point” that maximizes palatability and triggers strong dopamine responses in the brain – Credit: Peter Dazeley via Getty

What actions can we take regarding food noise now?

If you’re searching for approaches to manage intrusive thoughts about food (or intense cravings for forbidden toffee), Hayashi suggests consulting a nutritionist. They can assist in examining your dietary habits and devising strategies to enhance resistance to food cues.

These strategies may encompass mindful eating practices or ensuring sufficient meal consumption to avoid the discomfort of hunger at mealtimes.

Crucially, avoid falling into the trap of guilt surrounding the notion that struggling with food represents personal failure.

“We exist in a paradoxical society where cultural messages incessantly promote thinness and muscularity, while simultaneously compelling us to engage with external food cues that lead to poor dietary choices,” he explains.

“This creates an ideal scenario for suffering from food noise, compounded by a social structure that makes access to healthier options more challenging than opting for convenient, highly processed foods lacking in nutritional value.”

About our experts

Daisuke Hayashi is a doctoral researcher at Penn State University in the United States. His work has been featured in Nutrients, Journal of Human Nutrition and Dietetics, and Surgery for Obesity and Related Diseases.


Source: www.sciencefocus.com

Essential Information for Those Taking Statins to Manage Cholesterol Levels

Statin usage is on the rise: the National Institute for Health and Care Excellence (NICE) reported that, as of October 2024, around 5.3 million people in the UK had been prescribed statins or other cholesterol-lowering medications in the previous year.

This figure has nearly tripled since 2015/2016, now reflecting almost 10% of the nation’s population. Likewise, statin usage is also increasing worldwide.

Doctors prescribe statins primarily to prevent heart disease, the leading cause of death globally. These medications lower low-density lipoprotein (LDL) cholesterol, the “bad” cholesterol that contributes to artery clogging, by slowing the liver’s production of cholesterol and helping it clear LDL from the blood.

While statins effectively prevent heart attacks and strokes, there are still questions individuals have before commencing treatment.

Consider inquiries like: If my cholesterol is high, should I take statins? Could I improve my condition through diet and exercise first? What side effects might I experience when I start taking statins?

The answers to the first two questions are ultimately the same: the decision rests with you.

How to Determine if Statins are Right for You

The choice to begin statin therapy should be made alongside a healthcare provider, considering not just cholesterol levels, but the overall risk of heart disease.

This involves evaluating other risk factors such as blood pressure, family history, and even geographic location.

As Julie Ward, a senior cardiac nurse at the British Heart Foundation, explains, your physician will use all available information to calculate your individual cardiovascular risk score.

“Once we have that cardiovascular risk score, we can initiate a discussion on measures to reduce that risk,” Ward states. “It’s about doctors or pharmacists communicating: ‘This is your cardiovascular risk. We recommend starting you on a statin.’”

The initial conversation may focus on lifestyle modifications, such as healthier eating habits and smoking cessation. You can assess your risk with an online calculator like this one. A higher 10-year risk score indicates a greater likelihood of needing to discuss statin therapy.

After a few months, you may visit the calculator again to see if your risk has changed, and perhaps determine that your risk is low enough to pursue dietary changes and exercise instead. But what if the recommendation for statins remains strong?

Statins are Safe and Effective

It’s natural to feel apprehensive about starting a potentially lifelong medication.

However, scientific research may provide reassurance; ample evidence supports the effectiveness of statins in preventing heart disease, says Professor James Shepherd, a health data scientist at the University of Oxford.

“Statins are arguably the most studied medication in medical history,” he points out. “Numerous clinical trials have examined their effects.”

Additionally, researchers have compiled the results of numerous trials to bolster the evidence surrounding statins.

Cholesterol accumulates in arteries, obstructing blood flow – Image credit: Getty Images

For instance, in 2015, researchers from Cochrane, a distinguished medical review publisher, synthesized data from nearly 39,000 individuals who participated in 296 trials assessing Atorvastatin, the most commonly prescribed statin.

Their comprehensive review revealed that taking atorvastatin for up to 12 weeks reduced LDL cholesterol by 37-52%, depending on the dosage.

What’s the impact of statins on the risk of death from heart attacks and other cardiovascular issues? The answer largely depends on individual circumstances.

A recent review from early 2025 indicated a risk reduction ranging from 20% to 62%, with the higher percentages seen in high-risk groups. That is a substantial benefit for a medication costing less than £2 (around $2.50) for 28 tablets.

However, it’s essential to understand how to interpret numbers for your own decision-making.

In large-scale trials, efficacy is often expressed in relative terms, indicating the difference compared to those not taking statins.

As Shepherd emphasizes, “For real-world treatment decisions, the absolute risk is what matters most.”

For example, if a statin reduces the risk of a heart attack by 20%, a patient with a 1% absolute risk (or cardiovascular risk score) sees their risk drop from 1% to 0.8%.

In contrast, those with risk scores above 10% can realize significantly greater benefits.
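To make the difference between relative and absolute risk concrete, here is a short illustrative calculation in Python. It is only a sketch of the arithmetic quoted above (a 20% relative reduction applied to 1% and 10% baseline risk scores), not a clinical tool or an official risk calculator.

```python
def absolute_risk_after_statin(baseline_risk: float, relative_reduction: float = 0.20) -> float:
    """Apply a relative risk reduction to a baseline 10-year cardiovascular risk.

    Both the input and the output are fractions, e.g. 0.01 for 1%.
    The 20% default mirrors the example quoted in the article.
    """
    return baseline_risk * (1 - relative_reduction)


# The worked example above: a 1% baseline risk falls to 0.8%,
# an absolute reduction of only 0.2 percentage points...
low = absolute_risk_after_statin(0.01)
# ...whereas a 10% baseline risk falls to 8%, an absolute reduction of 2 points.
high = absolute_risk_after_statin(0.10)

print(f"1% baseline  -> {low:.1%}  (absolute reduction {0.01 - low:.1%})")
print(f"10% baseline -> {high:.1%} (absolute reduction {0.10 - high:.1%})")
```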


Side Effects Vary

While we know statins are effective, what about the negative aspects related to side effects?

“Previous reporting has skewed public perception,” reveals Ward. “Yet, research demonstrates that side effects are minimal, and statins are well-tolerated by most patients.”

This is supported by findings from Shepherd and his team, detailed in a 2021 study that reviewed side effects across 62 trials involving over 120,000 participants, revealing only “a small number” reported issues.

Approximately 15 out of every 10,000 individuals experienced muscle pain and related symptoms, while liver, kidney, and eye abnormalities were even less prevalent.

When patients discontinue statin use, it is often linked to side effects. So, what’s behind this?

A different 2021 study suggests that many perceived side effects stem from the act of taking a pill rather than from the statin itself, a phenomenon known as the nocebo effect.

In that research, 60 participants received monthly supplies of pills, some months a statin and some months a placebo, without knowing which was which. Over a year, participants reported more symptoms in the months when they were taking any pill than in the months when they took nothing, and 90% of those who experienced side effects while on statins also reported symptoms while on the placebo.

Adjusting Dosage or Medication

If you encounter side effects, it’s essential to communicate these with your doctor instead of just enduring them.

For instance, atorvastatin can be prescribed in doses ranging from 10 to 80 mg per day. Side effects are often dose-dependent; hence, 80 mg is more likely to induce issues than 10 mg, though a lower long-term dose is usually possible.

“If someone has high cholesterol, they may start at 80 mg,” explains Ward. “If they’re managing well in a few months, we might lower it to 40 mg, and potentially down to 20 or 10 mg later on, transitioning to a maintenance dose.”

Alternatively, switching medications can also be effective. In the UK, five different statins are available, all functioning similarly, though atorvastatin is often regarded as the most effective.

“If someone previously took a higher dose of a different statin, a doctor could prescribe atorvastatin at a lower dosage that could achieve similar cholesterol-lowering effects with fewer side effects,” Shepherd adds.

In rare cases, taking statins may lead to more serious issues affecting the liver and kidneys, which is why regular blood tests are crucial for monitoring.

Individuals with diabetes might be concerned regarding findings suggesting that statins can elevate blood sugar levels.

Nevertheless, the cholesterol-lowering benefits are believed to outweigh the minimal increases in blood glucose.

In conclusion, taking statins is a personal choice. If you have concerns, consider discussing them with a cardiac nurse at the British Heart Foundation or explore resources on cholesterol at Heart UK.

About Our Experts

Julie Ward is a senior heart nurse at the British Heart Foundation.

Professor James Shepherd is a health data scientist at the University of Oxford, focusing on cardiovascular disease prevention. His work has been published in journals including BMC Medical Research Methodology and BMJ Open.


Source: www.sciencefocus.com

Tired, Hungry, and Clumsy? It Might Be Time to Revamp Your Sleep Routine!

Lack of sleep is a widespread issue, often leaving you in a bad mood the following day and feeling somewhat clumsy.

The NHS recommends that adults aim for 7-9 hours of sleep each night to feel refreshed and alert. Persistent sleep deprivation can result in severe health issues such as high blood pressure, depression, and obesity. Even just one or two nights of poor sleep can significantly impact your mood and performance.

What occurs in your brain while you sleep? And why does losing just a few hours of your usual sleep have such a detrimental effect?

Is your brain “awake” while you sleep?


While you sleep, your brain conducts several crucial processes to help reset your body’s organs and systems.

It eliminates toxins and metabolic waste through the glymphatic system and organizes long-term memories in the neocortex.

Neural connections are reinforced, and activity in the amygdala and prefrontal cortex aids in regulating emotional responses for the following day. REM sleep is vital for problem-solving and emotional processing, while hormonal regulation during sleep promotes stress recovery and appetite balance.

Consequences of sleep deprivation


In our fast-paced world, achieving sufficient sleep can be challenging. With constant demands on our time, even short-term fatigue can set the stage for a tough day, making it important to understand the significance of sleep.

Common symptoms of sleep deprivation include:

• Impaired cognition and reduced concentration
• Decreased emotional resilience
• Weakened immune response
• Impaired exercise adaptation
• Increased appetite due to hormonal imbalances
• Elevated cortisol levels
• Disruption of insulin sensitivity

While it’s advisable for adults to target 7-9 hours of sleep each night, how can you ensure you get enough rest to stay alert and healthy?

Tips for Improved Sleep Quality


Silentnight has dedicated 80 years to exploring the science of quality sleep. In partnership with the University of Central Lancashire, the sleep brand gathers sleep biomechanics data to develop products tailored to different types of sleeper.

We reached out to Silentnight for suggestions on fostering healthy sleep habits.

Maintain a Consistent Routine

Melatonin is a hormone that regulates your body’s circadian rhythm, particularly the sleep/wake cycle. It signals that it’s time for sleep, prompting a drop in body temperature and reduced alertness. Consistency is key in maintaining melatonin levels, so keep your schedule regular.

Establish a Relaxing Pre-Sleep Ritual

Cortisol levels naturally decrease at night, which is essential for sleep since high levels can disrupt melatonin production. Engage in calming activities—baths, reading, or listening to soothing music—but avoid blue light from screens as it can hinder melatonin release.

Keep your Sleep Environment Cool

The ideal room temperature for sleep is typically between 15.5°C and 21°C. Even slightly exceeding this range can negatively affect the quality and duration of your sleep.

Choose the Right Mattress

Silentnight states, “Pressure points and overheating can disrupt sleep.” They offer a range of mattresses with varying spring systems and materials to accommodate different sleeping styles.

Discover more about Silentnight products and find a mattress tailored to your sleeping needs here.


Source: www.sciencefocus.com

The Lethal Fungus Linked to Tutankhamun’s “Curse” May Now Hold Life-Saving Potential

The fungus long linked to the death of the archaeologist who uncovered King Tutankhamun’s tomb may now have a role in saving lives. Researchers have used the toxic fungus Aspergillus flavus, often associated with the so-called “Pharaoh’s Curse”, to develop potent new compounds capable of killing cancer cells.

A study published in Nature Chemical Biology revealed that the fungus produces previously unknown molecules, which the research team isolated and tested against human leukemia cells.

Two of the compounds, known as asperigimycins, exhibited strong anti-cancer activity. After modification, one variant was as effective at killing the cancer cells as two FDA-approved drugs.

“We know that fungi have significant potential to generate bioactive molecules,” stated senior author Professor Sherry Gao in an interview with BBC Science Focus. “However, only a small fraction of these possible molecules has been discovered.”

A. flavus carries a grim legacy. Following the opening of King Tut’s tomb in the 1920s, a wave of fatalities fueled the myth of the Pharaoh’s curse. Subsequent investigations indicated that spores of A. flavus, sealed within the tomb for millennia, could have triggered deadly pulmonary infections.

A similar incident occurred in the 1970s, where 10 out of 12 scientists who entered the tomb of a Polish king died shortly after exposure to the fungus.

Samples of Aspergillus flavus cultured in the Gao lab. – Credit: Veracielbo

Now, the same lethal fungus may catalyze a medical advance. The research team discovered that A. flavus produces a class of molecules called RiPPs, short for ribosomally synthesized and post-translationally modified peptides.

These molecules are known for their intricate structure and significant biological effects, yet few have been identified from fungi.

The team isolated four peptides featuring a distinctive ring-shaped structure. When tested on cancer cells, two were particularly effective against leukemia. A third, artificially modified with a fatty molecule known as a “lipid chain”, exhibited effects similar to conventional chemotherapeutics like cytarabine and daunorubicin.

“After modification, the compounds were better at entering the cell,” Gao explained. “I believe that once inside, there is a mechanism to inhibit cell division.”

Gao noted that further research is needed to understand how RiPPs target cancer cells and why they are effective against leukemia but not the other cancer types tested.

According to Gao, the team aims to develop a platform to identify more potentially beneficial products derived from fungi.

“Nature has gifted us this incredible pharmacy,” Gao remarked in a statement. “It is up to us to uncover that secret.”


About our experts

Xue (Sherry) Gao is a Presidential Penn Compact Associate Professor at the University of Pennsylvania. Her laboratory focuses on developing highly specific and effective genome-editing tools for diverse applications in disease treatment and diagnosis, and on the discovery of new small-molecule drugs.

Source: www.sciencefocus.com

Transform Your Body in a 4-Day Work Week: Here’s How!

If you find yourself at your desk, feeling a bit fatigued and pondering where the weekend went, the thought of a four-day workweek might sound incredibly appealing. Just think about all you could accomplish with an extra day! You could finally tackle those odd tasks, enjoy some fresh air, or simply catch up on sleep.

This notion has circulated for years, but now the evidence is mounting. By trimming the workweek by just one day, you can reduce stress, enhance sleep quality, boost physical activity, and even improve productivity.

This concept is shaping a global movement toward rethinking the modern workweek, backed by trials occurring in Europe, North America, and other regions.

A recent study conducted by researchers from Boston College and University College Dublin tracked approximately 3,000 employees across 141 organizations in six English-speaking countries. For six months, these participants worked up to eight hours less per week, without any reduction in pay.

The results, published in Nature Human Behaviour, were quite impressive. Employees reported better mental and physical health, fewer sleep disturbances, and lower fatigue levels. Most companies found sufficient value in the results to continue the new arrangements after the trial.

“We are observing global trends where workers experience burnout, extended hours, and minimal time for personal and family matters, not just in high-income nations but across many low- and middle-income countries,” noted Wen Fan, the study’s author and an associate professor of sociology at Boston College, in an interview with BBC Science Focus.

“A four-day workweek offers a potential avenue for employees to rethink and restructure their work arrangements for better benefit.”

Hard Data

While many studies rely on employee surveys, recent research in Germany led by Professor Julia Backmann aimed to gather more objective data. Her team monitored stress, activity levels, and sleep using Garmin fitness trackers worn both by participants working a four-day week and by a control group keeping full-time schedules.

The findings revealed that those in the four-day workweek group experienced significantly lower stress levels, as indicated by heart rate variability.

“The four-day workweek group showed significantly less stress on most days,” Backmann told BBC Science Focus. “Interestingly, even on weekends, they did not reach the stress levels of the control group.”

Interestingly, Saturday turned out to be the most stressful day, likely due to errands and family responsibilities, while Sunday was the least stressful. Participants also walked more, exercised more, and gained an extra 38 minutes of sleep per week.

“They are more active, engaging in more sports. Their stress levels are lower, and they’re sleeping a bit more during the week,” Backmann noted.

According to Backmann, the early indicators point in a favorable direction; however, the data on sleep quality is still being analyzed.

Crucially, these physiological findings aligned with the participants’ self-reported data. This is significant given long-standing concerns about bias in the self-reported data used in other studies of the four-day workweek. “This is typically the main criticism,” Backmann stated. “But now we have objective data that supports these self-reported outcomes.”

As part of the same study, the researchers also gathered hair samples to analyze cortisol levels, a hormone linked to chronic stress. The results are pending, but Backmann hopes they will be available later this year. If consistent with the other findings, they could provide further independent evidence of the health benefits of a four-day workweek.

The trial included 41 organizations across Germany, spanning from IT firms to healthcare providers. Not every employee transitioned to a four-day schedule within each company, as some departments within large corporations maintained full-time hours. Most who switched reduced their work hours without extending their workdays. Reports indicated that monthly overtime also decreased.

Fortunately, for any CEOs reading this, no significant revenue changes were noted during the four-day workweek, and both employee productivity and work intensity improved.

Importantly, this model gained widespread popularity: 73% of organizations expressed plans to continue with the four-day workweek in some capacity, and 82% of workers hoped to maintain it.

Fitness trackers were used to capture hard data on how a four-day workweek can improve your health – Credit: Getty Images

The Future of Work

So, is the Monday-to-Friday grind truly sustainable? According to Cal Newport, an MIT-trained computer science professor at Georgetown University and author of Deep Work, it’s not that straightforward. He agrees that a shortened week may offer some relief, but he believes it doesn’t tackle the root of the problem. “One of the key contributors to burnout among knowledge workers is overload,” he told BBC Science Focus. “Individuals juggle numerous projects, tasks, and obligations simultaneously.”

In other words, the issue is not just how long we work, but the expectations attached to that work. “Transitioning to a four-day week only indirectly addresses this issue,” he asserted. “There’s anecdotal evidence that the new constraints can help people feel comfortable saying ‘no’, which might lessen workloads somewhat, but the most effective approach is to manage workloads directly.”

Backmann’s team is now planning a follow-up to probe the four-day workweek concept further, exploring whether employees simply compress their tasks into four days or genuinely reduce their total working hours.

Overall, the outlook for a four-day workweek is positive. Studies around the globe are converging on similar conclusions. Hopefully, a shorter workweek can enhance health and well-being without compromising performance.

However, as Newport emphasizes, the hours we work may matter less than the expectations we set. If a four-day week becomes a reality, it may require reevaluating our workloads rather than just adjusting our calendars.


About Our Experts

Wen Fan is an associate professor in the sociology department at Boston College, USA. Her research has appeared in journals including Nature Human Behaviour, Social Forces, Work and Occupations, and Advances in Life Course Research.

Julia Backmann holds the chair for the transformation of work at the University of Münster in Germany. Before this role, she served as an assistant professor at University College Dublin and LMU Munich. The recipient of several international awards, she focuses on the impact of social and technological change on (collaborative) work, leadership, and innovation.

Cal Newport is an MIT-trained computer science professor who teaches at Georgetown University in the United States. He writes extensively about technology, work, and the pursuit of depth in an increasingly distracting world. His eight books include Slow Productivity, A World Without Email, Digital Minimalism, and Deep Work.

Source: www.sciencefocus.com

Introducing the Smart Pill: Enabling Doctors to Examine and Treat Your Intestines Internally.

Emerging technology could let doctors use engineered gut microbes to diagnose and treat disease from inside the gut, a recent study suggests.

Researchers genetically altered bacteria so that they emit light signals in response to signs of disease, then used a smartphone app and a swallowable capsule to communicate with them.

If proven safe and effective in humans, this treatment could address several illnesses that are currently challenging to manage.

This method encompassed three key elements: bacteria, technology, and pigs. Under the guidance of senior author Hanzi Wang from Tianjin University in China, scientists modified E. coli bacteria to react to specific chemical and optical stimuli.

They created swallowable capsules controlled via Bluetooth that communicate with these photoresponsive bacteria, targeting pigs afflicted with colitis, a type of inflammatory bowel disease that results in intestinal swelling.

In the experiment, the scientists introduced the engineered E. coli, along with the capsules, into the inflamed intestines of the pigs.

Nitrates, which the body produces during intestinal inflammation, serve as indicators of active colitis. When the modified E. coli come into contact with nitrates, they illuminate.

The smart capsules detect this optical signal and, via Bluetooth, alert the researchers that the bacteria have encountered nitrates, a sign of inflammation.

Through a smartphone app, researchers can command the capsule to start emitting light signals, prompting the E. coli to release anti-inflammatory antibodies to combat colitis.

This innovative approach enables scientists to effectively communicate with the bacteria, ensuring targeted treatment delivery.
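To make the sequence of steps easier to follow, here is a purely hypothetical sketch of the sense, report, and actuate loop in Python. The class names, thresholds, and interfaces are illustrative stand-ins invented for this summary; the team’s actual capsule firmware and smartphone app are not described here in enough detail to reproduce.

```python
import time


class CapsuleSim:
    """Illustrative stand-in for the swallowable capsule's sensor and light source."""

    def read_bacterial_glow(self) -> float:
        # In the real system this would sample a photodetector picking up
        # luminescence from the nitrate-sensing E. coli; here it is a dummy value.
        return 0.7

    def emit_light_signal(self) -> None:
        # In the real system this light pulse prompts the engineered bacteria
        # to release anti-inflammatory antibodies.
        print("Capsule: emitting light to trigger antibody release")


def monitor(capsule: CapsuleSim, glow_threshold: float = 0.5, cycles: int = 3) -> None:
    """Loop implied by the article: detect glow, report over Bluetooth, actuate on command."""
    for _ in range(cycles):
        glow = capsule.read_bacterial_glow()
        if glow > glow_threshold:
            # Stand-in for the Bluetooth alert sent to the smartphone app.
            print(f"App: inflammation signal detected (glow = {glow:.2f})")
            # Stand-in for a clinician approving treatment in the app.
            capsule.emit_light_signal()
        time.sleep(1)  # the real sampling interval would be much longer


monitor(CapsuleSim())
```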

Three pigs in the study had colitis, a type of inflammatory bowel disease with few treatment options currently available – Credit: Connect Images via Getty

“This represents a remarkable technological advancement,” stated Dr. Lindsey Edwards, a senior lecturer in Microbiology at King’s College London, as reported by BBC Science Focus. Dr. Edwards was not involved in the research.

“Methods like this enable precise, real-time interactions with gut bacteria and have the potential to revolutionize treatment,” she added.

“There is an urgent need for new tools that allow us to harness the full potential of our microbiota to enhance health and better understand and manage microbial infections.”

At present, there is no cure for colitis, and treatment options are scarce. Dr. Edwards believes that methods like this could in future “open new pathways” for treating not only inflammatory bowel disease but also other gut-related conditions, including type 2 diabetes, heart disease, and chronic fatigue.

However, Dr. Alexandre Almeida, from the Department of Veterinary Medicine at Cambridge University and not part of this research, warns that this possibility is still distant.

“This is still a preliminary proof-of-concept study,” he noted. “The technology has only been tested in animals and specifically for detecting certain conditions.”

“Before human applications, we must evaluate the safety of this technology and address significant questions, such as how these engineered microorganisms influence the natural balance of other gut bacteria.”

Dr. Nicholas Ilott, a senior researcher at the Oxford Microbiome Research Center who did not participate in the study, stated that the technology is “incredibly exciting” and could prove to be “very valuable” in future medical treatments.

Read more:

About our experts

Dr. Lindsey Edwards is a senior lecturer in microbiology at King’s College London, UK. Her research focuses on mucosal barrier immunology, host-microbe interactions, and the priming of adaptive immune responses, along with intestinal and liver diseases.

Dr. Alexandre Almeida is a Principal Investigator and MRC Career Development Fellow at the University of Cambridge, UK, specializing in bioinformatics and genomic approaches for biological discoveries related to human health.

Dr. Nicholas Ilott is a senior researcher specializing in bioinformatics at the Microbiome Research Centre, Nuffield Department of Orthopaedic Surgery, Oxford University, UK, concentrating on host-microbe interactions in chronic liver and inflammatory bowel diseases.

Source: www.sciencefocus.com

Is the Bee Crisis Really a Hoax?

In 1998, as I began my journey into the world of bees, it didn’t take long for me to develop a passion for them. However, I quickly observed that most people’s understanding was limited to simple facts like “bees make honey” and “they live in hives.”

While beeswax and queen bees received occasional mention, the general enthusiasm for these remarkable insects was mostly grounded in superficial knowledge and cultural associations.

Fast forward a decade, and I noticed a shift. The importance of pollination began to gain recognition, and honeybees were suddenly seen as crucial to food production.

Then, in 2007, disaster struck. Reports of a mysterious and dramatic decline in bee populations, particularly in the United States, started making headlines globally.

Colony Collapse Disorder (CCD) became a sensational topic, capturing media attention and sparking fears of a world devoid of bees. This concern even made its way into the long-running BBC series Doctor Who, showcasing just how dire the situation appeared.

Here we are, two decades later, and once again, headlines shout about the plight of bees. “Millions of bees are dying—so why does it matter?” asked the UK’s Independent, reporting that U.S. beekeepers lost 60-70% of their colonies this year and 55% last year.

Top beekeepers now warn of a “death spiral,” according to The Guardian, and funding cuts from the Trump administration have only heightened concerns.

However, much of the panic surrounding this issue is unfounded. Leading insect experts agree that the situation is often exaggerated and misinterpreted.

Colony Collapse

To grasp the current challenges, we must revisit the mid-2000s and CCD.

During this period, beekeepers noticed that a large proportion of the worker bees had disappeared from their hives, leaving behind the queen, eggs, larvae, and only a few bees to tend to them. While CCD predominantly captured American media attention, similar losses were reported in Europe, Africa, and Asia.

The root causes of CCD remain uncertain but are likely a combination of disease, habitat loss, pesticide usage, and intensive management practices by beekeepers—all contributing factors.

It’s important to note that significant losses are not a new phenomenon. Beekeepers have documented similar events in the past, attributing them to various ailments and conditions.

Lavender is an excellent source of pollen and nectar for honeybees.

Unlike CCD, the recent issues affecting bees are less enigmatic. Early research suggests that many bee deaths are due to viruses transmitted by Varroa mites, which infest bees.

While these mites are known to cause harm and illness, they can generally be managed with pesticides. However, what appears to have happened is that these mites have developed resistance to the chemicals typically used against them.

This scenario might sound all too familiar. The development of resistance is almost an inevitable outcome across various fields, be it antibiotic treatment for bacteria, cancer therapies, or pest control in agriculture.

With the application of certain pesticides, genetic variability among pests means that some individuals may eventually withstand those chemicals better than others. Once these resistant individuals survive and breed, their offspring inherit this resistance.

A Nest Box as a Harvest

Pesticide and herbicide resistance is a recurring problem in modern agriculture, and it is central to understanding both chemical usage and the issues facing bees.

Globally, the majority of honeybees reside in hives, where they exist in semi-natural conditions that allow for efficient honey harvesting.

In the UK, beekeeping tends to be a hobby, but worldwide, commercial beekeeping operations manage thousands, if not tens of thousands, of hives.

Commercial beekeeping is often a highly technical and intensive agricultural practice, encompassing artificial insemination, requeening, feeding, migration to nectar sources, artificial wintering conditions, and disease management. While wild colonies exist, contemporary bees are primarily farmed species.

Bee Needs

While headlines may proclaim a crisis in bee populations, the data suggests otherwise. According to the United Nations Food and Agriculture Organization, as of 2023, the global population of honeybee colonies has increased by 45% since 1990, despite CCD. Another study indicated an 85% increase since 1960.

It seems likely that the global bee population is not decreasing as dramatically as some narratives suggest. Beekeepers can often recover colony numbers, mitigating the impact of poor harvests.

The cultural significance of honeybees makes them one of the few admired insects. People care about them, and stories of their decline resonate emotionally. In response to alarming headlines, many ask, “What can I do to help?”

For some, the natural conclusion is, “I’ll become a beekeeper!” However, as noted by renowned bee expert Professor Dave Goulson, if you hear about declining songbird numbers, would you consider becoming a chicken farmer?

Such declines cannot be solved by novice beekeepers. In fact, if they manage to keep bees successfully (which is harder than it looks), they may inadvertently outcompete wild bee species and potentially transmit diseases to them. Their efforts could unintentionally harm the very bees they seek to protect.

Hence, bees are not the issue at hand. Like other livestock, they face health challenges, but they do not require our intervention.

That said, the recent media focus on CCD has had a rippling effect, creating a narrative around the decline of other pollinators.

Solitary bees, wasps, hornets, and butterflies are beginning to garner attention as people recognize that these insects also play a role in pollination.

Other pollinators like butterflies are declining in the UK and the US.

As awareness spreads, these stories intersect with the broader issue of declining insect populations. In the UK, 42% of pollinator species have decreased in abundance since the 1980s. Some species are faring better, but overall, the trends for pollinators remain downward.

What can you do to support these wild pollinators? If you have gardens or land—whether it’s your own or a work patch—you can transform it into a refuge for insects.

Planting nectar and pollen sources is one of the most effective actions you can take. Numerous species, such as fruit trees and lavender, can serve this purpose. A comprehensive list of nectar plants can be found online through resources like the Wildlife Trust and the Royal Horticultural Society.

Additionally, resist the urge to prune excessively, minimize pesticide use, and ensure some areas remain untouched. Bug hotels are beneficial, but leaving dead trees and natural debris in your garden can offer shelter and potential nesting sites.

Creating a pond is another excellent idea. Adding some sticks alongside it ensures thirsty insects can safely drink on warm days.

While bees are capturing all the attention, they may not be the primary beneficiaries of our concern. If your aim is to support bees, consider becoming an advocate for all insects, rather than just taking up beekeeping.

Read more:

Source: www.sciencefocus.com

Tonight’s Meteor Shower: A Guide to Enjoying the Spectacular Perseid Meteor Shower of 2025

The Perseid meteor shower is set to be one of the most prominent displays of 2025, providing a fantastic opportunity to gaze at the night sky.

The Perseids are famous for their high meteor rates, producing up to 100 meteors per hour under ideal conditions.

Moreover, if you wake up early to witness the meteor shower in the pre-dawn hours, you may catch another astronomical sight. On August 13th, Jupiter and Venus, the two brightest objects in the night sky after the moon, will make their closest approach of the year.

This guide has everything you need to enjoy the 2025 Perseid meteor shower to the fullest.

When will the Perseid meteor shower occur in 2025?

The Perseids will be active from July 17th to August 24th, peaking on the evening of August 12th.

This period will see the highest number of meteors, but if clouds or timing prevent you from witnessing the peak, you can still enjoy a good show between August 9th and 15th.

The best viewing times for the Perseids are from midnight until about an hour before dawn. However, even in the late evening, you might still spot a few meteors.

The zenithal hourly rate (ZHR) for the Perseids is estimated at 100-150 meteors per hour, but that doesn’t guarantee a large number of visible shooting stars.

“The ZHR represents the expected rate under ideal conditions, which are seldom met,” explains Pete Lawrence, an astronomer and presenter on BBC’s The Sky at Night.

“Consequently, the actual number of visible meteors, or the visual hourly rate, is often lower. Nevertheless, a high ZHR indicates that good activity is possible.”
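As a rough guide, meteor observers commonly estimate the rate they will actually see by correcting the ZHR for sky brightness and the radiant’s height above the horizon. A minimal sketch of that standard correction, using illustrative numbers rather than figures from this article:

$$\text{HR} \;\approx\; \text{ZHR} \times \frac{\sin(h_R)}{r^{\,6.5 - LM}}$$

Here $h_R$ is the altitude of the radiant, $LM$ is the faintest stellar magnitude visible at your site, and $r$ is the shower’s population index (a value of roughly 2.2 is often quoted for the Perseids). For example, with a ZHR of 100, the radiant 40 degrees up and a suburban limiting magnitude of 5.5, this gives about 100 × 0.64 / 2.2 ≈ 29 meteors per hour, which is why real-world counts fall well short of the headline ZHR.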

Where in the sky should you look for the Perseid meteor shower?

Meteors can appear anywhere in the sky, so your best bet is to find a clear area with as wide a view as possible.

If you trace the trails of Perseid meteors backwards, you’ll notice they all appear to originate from a single point, known as the radiant, in the constellation Perseus.

It’s advisable not to look directly at the Radiant; instead, gaze away from it to catch meteors with their long tails.

Finding Perseus is worthwhile as the constellation rises just as the sun sets and remains visible throughout the night in the northern sky.

The easiest way to locate it is to look for the W-shaped constellation Cassiopeia, which consists of prominent stars positioned higher in the sky; Perseus lies just below it.

What is the ideal location for observing meteor showers?

The prime spot to observe the 2025 Perseid meteor shower is a dark area with an unobstructed view of the sky.

Light pollution can wash out dim meteors, so it’s best to escape the urban sprawl and find a truly dark site. Ensure the location is safe and secure.

If you can’t get far, don’t fret; simply find a sheltered spot free from direct lighting. This could be your backyard or a local park where you can block out harsh streetlights.

Whenever possible, escape to a Dark Sky Site – Credit: Getty Images

How can I best view the Meteor Shower?

The optimal way to experience the meteor shower is to lie back and take in as much sky as possible.

Avoid using telescopes or binoculars as they limit your view; it’s best to watch with your own eyes.

Once you’re settled, allow your eyes to adjust to the darkness. This process takes about 30 minutes, although you’ll start noticing changes before that.

Be cautious — a single bright light can ruin your night vision, so ensure security lights are off and switch your phone to red light mode.

Does the moon affect visibility?

One uncontrollable form of light pollution is the moon.

The moon will be bright in the nights around the August 12th peak of the 2025 Perseid meteor shower. On the peak night, it will be about 88% illuminated and prominent throughout the night.

If possible, position yourself so that buildings or trees block the moon’s glare.

The moon rises in the east and ascends higher into the sky as the night progresses.

Top tips for enjoying the Perseid Meteor Shower

  • Choose a dark location. Whether it’s a designated dark sky area or a secluded part of your backyard, find a spot far from artificial light while enjoying unobstructed views of the sky.
  • Use red light on your phone. Red lights help preserve your night vision. Some phones can be set to red light mode, while others may need an app.
  • Dress warmly. Even in August, sitting still can get chilly at night. Layers will help you accommodate changing temperatures.
  • Make yourself comfortable. Staring at the sky can strain your neck. A sun lounger could support your head. Alternatively, lying on the ground with a blanket can provide cushioning and warmth.
  • Give your eyes time to adjust to the darkness. This takes about 20-30 minutes; the longer you wait, the more meteors you’ll likely see.

What triggers the Perseid meteor shower?

“A meteor shower occurs when Earth passes through sparse dust particles scattered along a comet’s orbit,” notes Lawrence.

In the case of the Perseids, that comet is 109P/Swift-Tuttle, which completes an orbit of the sun every 133 years and last passed through the inner solar system in 1992.

“The density of dust is greatest in the center of the stream and thins out in the outer regions,” adds Lawrence.

The dust grains, about the size of sand particles, travel through Earth’s atmosphere at an astonishing speed of approximately 215,000 km/h (130,000 mph).

This rapid motion causes the air to heat up to extreme temperatures, resulting in brilliant streaks of light across the sky.

The peaks of meteor showers occur when Earth traverses the densest parts of the dust stream.

“Earth will start to intersect with the broad dust stream of 109P/Swift-Tuttle around July 14th and continue through September 1st,” says Lawrence.

Read more:

Source: www.sciencefocus.com

Newly Discovered Giant Stick Insect Species in Australia

Australian entomologists have unveiled a remarkable new species in the stick insect genus Acrophylla, identified from two female specimens and their eggs.



Holotype of Acrophylla alta in its natural habitat. Image credit: Ross M. Coupland.

Originally described in 1835, Acrophylla is a genus of stick insects belonging to the tribe Phasmatini.

Species in this genus inhabit Australia and nearby regions, including New Guinea, Tasmania, and Lord Howe Island.

The newly described Acrophylla species is found in the highlands of the Wet Tropics bioregion in Queensland, Australia.

“Key locations include Mount Lewis National Park, the Evelyn Tableland (likely encompassing Malaan National Park), Topaz, Upper Barron, Mount Hypipamee, and Danbulla,” stated Professor Angus Emmott from James Cook University and his colleague Ross Coupland.

The new species, named Acrophylla alta, can reach lengths of up to 40 cm (16 inches) and weighs approximately 44 g.

Typically light brown in color, this species is exceptionally camouflaged despite its large size.

“Although there are long stick insects in this region, they tend to have relatively light bodies,” explained Professor Emmott.

“As far as we know, this is Australia’s heaviest insect.”

The eggs of Acrophylla alta were also crucial in distinguishing it as a new species.

“Every stick insect species has distinct egg characteristics,” noted Professor Emmott.

“Their surface textures and sculpturing vary. Shapes can differ as well.”

“Even the caps of the eggs are uniquely identifiable.”

Researchers speculate that Acrophylla alta may not have been discovered earlier because its habitat is so inaccessible.

“Their environment could explain their large body size,” Professor Emmott added.

“It is a cool, damp habitat.”

“A larger body may help them endure colder temperatures, a characteristic that could have evolved over millions of years.”

The identification of such a large new insect species highlights the critical need to conserve the remaining biologically diverse habitats and ecosystems, which may still hold undescribed species such as stick insects.

The discovery of Acrophylla alta is documented in a study published in the journal Zootaxa.

____

Ross M. Coupland and Angus J. Emmott. 2025. A new giant species of Acrophylla Gray, 1835 (Phasmida: Phasmatidae) from the highlands of the Wet Tropics, Queensland, Australia. Zootaxa 5647(4): 371-383; doi: 10.11646/zootaxa.5647.4.4

Source: www.sci.news

Unique Fossil of a Juvenile Pleurosaur Unearthed in Germany

Rhynchocephalians are the sister group to the squamates (lizards, snakes, and worm lizards) and are represented today only by the tuatara (Sphenodon punctatus). Rhynchocephalian fossils from the Late Jurassic Solnhofen Archipelago have been known for nearly two centuries, and the number of specimens and species continues to grow, yet their growth and development remain poorly understood. Pleurosaurus, a well-documented marine rhynchocephalian genus that lived about 150 million years ago, is known from more than 15 specimens (several of them unpublished), but no clearly juvenile individuals had been identified among them.

Pleurosaurus was a remarkable, long-bodied swimming rhynchocephalian that lived around 150 million years ago, during the Late Jurassic, in what is now Germany. Image credit: Roberto Ochoa.

“The genus Pleurosaurus is the most common rhynchocephalian found in the Late Jurassic deposits of Canjuers and Cerin, France, as well as in the Solnhofen Archipelago, Germany,” stated Dr. Victor Beccari from the SNSB-Bayerische Staatssammlung für Paläontologie und Geologie and the Ludwig-Maximilians-Universität München, along with his colleagues.

“This genus is characterized by an elongated triangular skull, a reshaped anterior jaw, an absence of a low anterior flange in the front part of the teeth, and reduced forelimbs.”

“Currently, there are two species within this genus: Pleurosaurus goldfussi and Pleurosaurus ginsburgi.”

“The two species are distinguished by the number of presacral vertebrae (50 and 57, respectively), the proportions of the skull relative to the limbs, and the more developed pelvis of Pleurosaurus goldfussi.”

“Extensive research has been undertaken; however, among the more than 15 published specimens of Pleurosaurus, no clear juveniles had been recorded until now.”



Pleurosaurus cf. P. ginsburgi: (a) photograph under standard light; (b) photograph under UV light; (c) interpretive drawing of the specimen. Image credit: Beccari et al., doi: 10.1002/ar.25545.

In a recent study, researchers described a juvenile specimen of Pleurosaurus.

The fossils come from the Mörnsheim Formation near Mühlheim, close to Solnhofen, Bavaria, Germany.

“This fossil is especially intriguing as it distinctly exhibits characteristics typical of young animals,” the researchers commented.

“Its teeth are small, show no signs of wear, its bones remain underdeveloped, and the vertebrae are still forming.”

“This small size, along with other features, makes it the first clearly identified juvenile Pleurosaurus. These specimens fill crucial gaps in our understanding of the growth and development of these extinct reptiles.”

The discovery of a juvenile Pleurosaurus also has significant implications for the classification of another genus, Acrosaurus.

“Historically, some paleontologists have posited that Acrosaurus might represent a juvenile form of Plerosaurus, but until now, there was no substantial evidence to support this theory,” the researchers noted.

“These new fossils exhibit numerous similarities to previously identified Acrosaurus, suggesting that it is not a separate genus, but rather a hatchling form of Plerosaurus.”

“For years, I have sought to understand how these animals grew and developed, but I had never encountered such a young, well-preserved specimen,” remarked Dr. Andrea Villa from the Institut Català de Paleontologia Miquel Crusafont.

The team’s paper was published in the March 2025 issue of The Anatomical Record.

____

Victor Beccari et al. 2025. A Juvenile Pleurosaurid (Rhynchocephalia) from the Tithonian of the Mörnsheim Formation, Germany. The Anatomical Record 308(3): 844-867; doi: 10.1002/ar.25545

Source: www.sci.news

Is It Possible to Capture Quantum Creepiness Without Entanglement?

Light particles seem to display quantum peculiarities even without entanglement

Wladimir Bulgar/Science Photo Library

Particles that appear to be unentangled have passed the renowned Bell test of quantum correlations. The experiment offers fresh insight into the peculiarities of the quantum realm.

Nearly sixty years ago, physicist John Stewart Bell devised a method to determine whether our universe is better explained by quantum mechanics or by traditional, classical theories. The pivotal distinction lies in quantum theory’s allowance for non-locality, effects that persist across vast distances. Remarkably, every experimental implementation of Bell’s test to date supports the idea that our physical reality is non-local, indicating that we reside in a quantum world.

However, these experiments primarily focused on particles that are closely associated via quantum entanglement. Now, Xiao-Song Ma from Nanjing University in China, along with his team, claims they conducted the Bell Test without relying on entanglement. “Our research may offer a novel viewpoint on non-local correlations,” he states.

The experiment commenced with four specialized crystals, each generating two light particles, or photons, when exposed to a laser. These photons possess various properties measurable by researchers, such as polarization and phase, which describe their behavior as electromagnetic waves. The researchers guided the photons through an intricate arrangement of optical devices, including crystals and lenses, prior to detection.

A standard Bell test involves two fictional experimenters, Alice and Bob, measuring the properties of correlated particles. By combining their observations in Bell’s inequality, Alice and Bob can determine whether the particles are linked in a non-local manner.
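The article does not say which form of the inequality the team used, but the version most commonly applied in such experiments is the CHSH inequality, sketched here for orientation:

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,$$

where $E(a,b)$ is the measured correlation between Alice’s result with setting $a$ and Bob’s result with setting $b$. Any theory based only on local effects keeps $|S|$ at or below 2, whereas quantum mechanics permits values up to $2\sqrt{2} \approx 2.83$; measuring $|S| > 2$ is therefore taken as the signature of non-local correlations.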

In the new experiment, the roles of Alice and Bob were played by sets of optical devices and detectors, rather than by observers sharing entangled photons. In fact, the researchers included devices in the setup specifically to prevent the photons from becoming entangled in properties such as frequency. Nonetheless, when Alice and Bob’s measurements were analyzed with the inequality, the results indicated a stronger correlation among the photons than local effects alone could explain.

Mario Krenn at the Max Planck Institute for the Science of Light in Germany suggests that this might be linked to another peculiar property of photons: it is impossible to identify which crystal each photon was “born” in and which path it took, making the photons indistinguishable. Previously, Krenn and colleagues used this property, termed “entanglement by path identity,” to entangle photons. In this case, however, the photons’ indistinguishability appears to be the only quantum peculiarity at play.

The team has yet to formulate a definitive theory explaining how entanglement-like outcomes can appear in a Bell test without entanglement being employed, but Ma proposes that indistinguishability may be the underlying condition shared by several quantum phenomena. If so, even setups that lack entanglement might contain the fundamental ingredient needed to create non-local correlations.

Krenn and Ma express hope that fellow physicists will propose alternative explanations and hunt for experimental loopholes in the new Bell test. This would mirror the history of the standard Bell test, where nearly five decades elapsed between the initial experiments and versions rigorous enough to rule out all alternative explanations.

One contentious aspect may be the “post-selection” technique utilized by the team. Stefano Paesani at the University of Copenhagen in Denmark argues that this raises questions about whether unentangled photons can be convincingly recognized as non-local within Bell’s tests. After the selection process, he contends that the experiments resemble more traditional scenarios where entanglement exists.

Jeff Lundeen at the University of Ottawa, Canada, says that while the setup makes for a clever experiment on light, it “holds no profound significance concerning the nature of the universe or reality.”

In this arrangement, Alice and Bob are not fully independent observers, which could generate correlations that researchers might misinterpret as non-local effects. Lundeen maintains that the new experiment doesn’t completely eliminate the possibility that Alice and Bob were, in effect, colluding. “Thus, this experiment doesn’t quite carry the same weight as the renowned violation of Bell’s inequality,” he states.

“This represents one of the elegant extensions of a landmark finding from the ‘Glorious Age’ of the 1990s,” notes Aephraim Steinberg at the University of Toronto, Canada. Nevertheless, in his assessment, traces of entanglement remain in the new experiment—not at the photon level, but rather within the quantum field.

Looking forward, the team aims to refine the apparatus to address some of these criticisms. For instance, by generating more photons from each crystal, the researchers could avoid relying on post-selection. “Our collaborative group has already pinpointed several critical potential shortcomings, which we are eager to tackle in the future,” states Ma.

topic:

Source: www.newscientist.com

This Summer’s Relentless Heat and Suffocating Humidity Have Taken a Toll

Sweltering, sticky, and unyielding: that has been the reality across much of the country this summer, with more than a dozen states recording elevated humidity levels in July.

Preliminary data compiled by Oregon State University indicates that, across the Lower 48 states, the most significant humidity last month was in the Midwest, along the East Coast, and in parts of the Mid-Atlantic.

While hot and humid weather is typical in summer, the combined “feels-like” heat index values have soared into triple digits for extended periods in states like Ohio, Illinois, Kentucky, Tennessee, and Florida last month.

Cities including Pittsburgh; Roanoke, Virginia; and Washington, D.C., all recorded their most humid July on record, according to data managed by the Iowa Environmental Mesonet, which tracks precipitation, soil temperature, and other environmental conditions. New York City and Raleigh, North Carolina, also faced severe humidity, while Detroit and Cincinnati saw their third most humid July on record.

In Paducah, Kentucky, the extreme heat and humidity from July 16th to 30th shattered many records for the city.

“We have reached the end of Paducah’s longest sustained high-humidity event in the last 75 years,” the local National Weather Service office said in a post on X on Thursday, noting that the number of hours at “oppressive humidity levels” was more than 300% of the normal for July.

As climate change progresses, days with high humidity are expected to become more frequent. A warmer atmosphere holds more moisture, leading to increased humidity levels which present significant risks to health and public safety.

Elevated heat index values raise the risk of heat-related illnesses and fatalities, especially among vulnerable populations such as children, the elderly, and those with pre-existing health conditions. A 2022 study from nonprofit Climate Central shows that a mixture of high heat and humidity can hinder the body’s ability to cool itself through sweating.

“In various regions across the country and the globe, dangerous heat is often coupled with high humidity,” the analysis noted.

Moreover, a warmer atmosphere can lead to more intense storms, which can unleash large amounts of rain and result in hazardous flash floods.

So far this year, over 3,000 flash flood warnings have been issued, as reported by Iowa State University data.

Tragic flooding last month claimed at least 120 lives in the Hill Country area of central Texas, while multiple storms in New Mexico caused repeated flooding throughout July. At the end of the month, a severe storm hit New York City and nearby Tri-state areas, creating chaos during evening commutes.

Source: www.nbcnews.com

DNA Analysis Uncovers the True Cause Behind the Demise of Napoleon’s Army in 1812

Napoleon’s retreat from Russia in 1812, painted by Ary Scheffer

Iandagnall Computing / Alamy Stock Photo

During Napoleon’s retreat from Russia in 1812, nearly half of his remaining troops fell victim to disease, starvation, and freezing temperatures. Advanced DNA analysis is now shedding light on the pathogens involved in this tragic demise.

In the summer of 1812, Napoleon amassed an army of 600,000 to invade Russia, but was compelled to withdraw from Moscow, which had been stripped of resources, and retreat toward the Polish border as winter set in. From October to December 1812, around 300,000 French soldiers perished from famine, exposure, and illness.

Survivor accounts from that era indicate that typhus and trench fever were leading causes of death and suffering among the troops, a conclusion that genetic testing appeared to confirm nearly two decades ago.

Recently, Nicholas Rascovan and his team at the Pasteur Institute in Paris analyzed DNA extracted from the teeth of 13 soldiers interred in Vilnius, Lithuania.

The research team identified the presence of Salmonella enterica, which causes paratyphoid fever, and Borrelia recurrentis, a louse-borne pathogen that causes relapsing fever.

Unlike earlier studies that relied on methods to amplify specific DNA sequences, Rascovan and his colleagues utilized advanced metagenomic techniques to detect genetic material from pathogens in the samples, allowing for a more extensive analysis.

“Considering our findings, it is plausible that the deaths of these soldiers were due to a combination of factors, including fatigue, cold, paratyphoid fever, and louse-borne relapsing fever,” Rascovan and his team noted in their as-yet unpublished report. The team declined to comment further on the story.

While not always lethal, louse-borne relapsing fever can considerably debilitate individuals who are already in a weakened state, according to the researchers.

Sally Wasef from the Queensland University of Technology in Australia notes that historical accounts of symptoms may correspond to multiple infectious diseases beyond those identified in the recent study.

Only traces of microbial DNA could be isolated from the ancient remains, according to Wasef. “In my opinion, this implies that the conclusions drawn are more suggestive than definitive.”

Rascovan and his colleagues also acknowledge the necessity of examining a greater number of soldiers who perished during 1812.

The research underscores the potential of novel methodologies to identify possible infectious agents in historical populations, Wasef explains. She advocates for applying these techniques to study diseases in populations post-contact in regions like the US or Australia.

“Such research holds great promise for uncovering the impact of disease on historical population declines, particularly when written records are sparse or biased,” states Wasef.

topic:

  • Archaeology/
  • Infectious diseases

Source: www.newscientist.com

The Method We Use to Train AIs Increases Their Likelihood of Producing Nonsense

Certain AI training techniques may lead to dishonest models

Cravetiger/Getty Images

Researchers suggest that prevalent methods for training artificial intelligence models may increase their propensity to give deceptive answers, in what they describe as “the first systematic assessment of machine bullshit.”

It is widely acknowledged that large language models (LLMs) often produce misinformation, or “hallucinate.” According to Jaime Fernández Fisac at Princeton University, his team defines “bullshit” as “discourse designed to manipulate an audience’s beliefs while disregarding the importance of actual truth.”

“Our analysis indicates that the problem of bullshit in large language models is quite severe and pervasive,” remarks Fisac.

The researchers categorized bullshit into five types: empty rhetoric, such as “this red car combines style, charm, and adventure that captivates everyone”; weasel words, meaning ambiguous statements like “research suggests that in some cases, uncertainties may enhance outcomes”; paltering, the use of truthful statements to create a false impression; unverified claims; and sycophancy.

They evaluated three datasets comprising thousands of AI-generated responses to various prompts from models including GPT-4, Gemini, and Llama. One dataset was designed specifically to test for bullshit when AIs are asked for guidance or recommendations, while the others focused on online shopping and political topics.

Fisac and his colleagues first used LLMs to judge whether the responses fell into one of the five categories, and then verified that the AI’s classifications matched those made by humans.

The team found that the most serious disregard for truth stemmed from a training method called reinforcement learning from human feedback, which aims to make the machine more helpful by giving it immediate human feedback on its responses.
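For context, reinforcement learning from human feedback is usually implemented by training a reward model on human preference judgements and then optimizing the language model against it; a simplified form of the standard objective (not spelled out in the article) is:

$$\max_{\pi_\theta}\; \mathbb{E}_{x \sim D,\; y \sim \pi_\theta(\cdot\mid x)}\big[r_\phi(x, y)\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi_\theta(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)$$

Here $\pi_\theta$ is the model being trained, $\pi_{\mathrm{ref}}$ is the original pre-trained model, and $r_\phi$ is a reward model fitted to predict which responses human raters approve of. Nothing in this objective rewards truthfulness directly, only predicted approval, which is the gap the Princeton team argues bullshit slips through.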

However, Fisac cautions that this approach is problematic, because helpfulness “sometimes conflicts with honesty,” leading models to prioritize immediate human approval and perceived usefulness over truthfulness.

“Who wants to sit through a lengthy, subtle rebuttal, or hear bad news, rather than something that seems evidently true?” Fisac asks. “By attempting to adhere to our standards of good behavior, the model learns to undervalue the truth in favor of a confident, articulate response that secures our approval.”

The study found that reinforcement learning from human feedback markedly increased bullshit behavior: empty rhetoric rose by nearly 40%, weasel words increased substantially, and unverified claims rose by more than half.

Increased bullshitting is especially harmful, team member Kaiqu Liang points out, because it leads users to make poorer decisions. In cases where it was uncertain whether a product had the features a user wanted, deceptive claims surged from 5% to three-quarters of responses following training on human feedback.

Another significant issue is that bullshit is prevalent in political discourse, as AI models “tend to employ vague and ambiguous language to avoid making definitive statements.”

The researchers also found that AIs are more likely to behave this way when faced with conflicts of interest, for example when a system serves multiple stakeholders, such as both a company and its customers.

To address this issue, the researchers propose switching to “hindsight feedback.” Instead of asking for an immediate rating of an output, the system would first generate a plausible simulation of the outcome of acting on that output, which is then presented to a human evaluator for assessment.

“Ultimately, we hope that by gaining a deeper understanding of the subtle but systematic ways AI may seek to mislead us, we can better inform future initiatives aimed at creating genuinely truthful AI systems,” concludes Fisac.

Daniel Tigard at the University of San Diego, who was not involved in the study, is skeptical of describing LLM output in these terms. He argues that just because LLMs produce bullshit, it does not mean they intend to deceive: as they currently stand, AI systems do not set out to deceive us and have no interest in doing so.

“The primary concern is that this framing seems to contradict sensible recommendations about how we should interact with such technology,” says Tigard. “Labeling it as bullshit risks anthropomorphizing these systems.”

Topics:

Source: www.newscientist.com

What is Required to Rebuild Economics with Nature at its Core?

Shrimp Harvesting on a Farm in Southeastern Vietnam

Quang Ngoc Nguyen/Alamy

About Natural Capital
Partha Dasgupta (Witness Books (UK, out now); Mariner Books (US, January 20, 2026))

How should the environmental damage associated with production figure into costs? What does that mean for a nation’s economy? Can we put a value on a healthy living environment and the biodiversity that surrounds us?

In 2021, Partha Dasgupta, emeritus professor of economics at Cambridge University, authored a comprehensive 610-page report addressing these inquiries for the UK government. His latest work, About Natural Capital: The Value of the World Around Us, aims to broaden its accessibility.

Your view of Dasgupta’s success may hinge on your appetite for an analytical exploration of economic concepts interspersed with engaging narratives. His core thesis is that GDP is a fundamentally inadequate measure of economic success. Historical advancements in living standards have stemmed primarily from human innovations; as Dasgupta notes, “entrepreneurs have prioritized labor and capital-saving devices over nature-saving devices.”

This is particularly evident with the latest advancements in artificial intelligence, a hallmark of humanity’s quest for “labor and capital savings.” High-tech billionaires behind AI tout extraordinary productivity gains, yet the substantial water consumption for the cooling of associated data centers is often overlooked.

Dasgupta notes in his original report that from 1992 to 2014, per capita human capital (encompassing our health, education, and skills) rose by about 13% globally, while per capita natural capital plummeted by nearly 40%. To remedy this disparity, he champions the widespread adoption of a metric for “global wealth per person” that incorporates nature.
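The wealth measure he has in mind is usually described as inclusive wealth, the shadow-priced sum of a nation’s capital stocks; a schematic version (the notation here is mine, not the book’s) is:

$$W = \sum_i p_i K_i,$$

where the $K_i$ are stocks of produced, human, and natural capital and the $p_i$ are their accounting (shadow) prices, so that a fall in natural capital drags measured wealth down even when GDP is rising.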

Dasgupta expands on this with the example of shrimp farms in Vietnam and Bangladesh, showing how these operations erode the “natural capital” of those nations, effects that go unaccounted for in the retail price of shrimp. Establishing shrimp farms typically requires the destruction of mangroves and salt marshes, reducing carbon storage.

Notably, around 30% of the diet for these shrimp consists of soybeans cultivated in plantations that replace tropical forests. Dasgupta references a case study suggesting that if true environmental costs were factored in, shrimp export prices might rise by 15-20%. Essentially, affluent nations purchasing shrimp may be receiving an unfair bargain.

While I do not profess expertise in economics, I am generally apprehensive about pursuing economic gains at the expense of significant environmental degradation. So, what are the actionable steps we can take? In a concise chapter, Dasgupta proposes a method to value nature adequately. This could involve collecting fees from shipping companies navigating global waters, with proceeds allocated towards job creation to alleviate pressures on ecosystems worldwide.

These concepts resonate intuitively for me, but I find myself seeking more detailed explanations. Dasgupta alludes to the challenges of achieving collective agreement and the lack of enthusiasm surrounding global shipping fees. This is an area where I wished he presented a more impassioned argument. While his ideas are captivating, they lack the urgency many readers might desire.

About Natural Capital provokes a reevaluation of economic perspectives, though I yearn for a more emotive approach. Perhaps this expectation is excessive for such a publication, yet I remain concerned that crucial messages may not resonate with a broader audience.

Jason Arunn Murugesu is a writer based in Newcastle upon Tyne, UK

New Scientist Book Club

Are you an avid reader? Join a warm community of fellow book enthusiasts. Every six weeks, we delve into exciting new titles, offering members exclusive access to excerpts, author articles, and video interviews.

Topic:

Source: www.newscientist.com

An Enchanting Artistic Representation of Marine Life Through the Ages

Strawberry squid, color lithograph

Smithsonian Library, Washington, DC

The world’s oceans, covering more than two-thirds of Earth’s surface, are the cradle of life, showcasing an astonishing variety of creatures with diverse shapes, colors, and evolutionary traits.

‘Pilchard (Argentina carolina)’, hand-colored engraving by Mark Catesby

National Agricultural Library, Beltsville, Maryland

Marine biologist Helen Scales’ latest book, Ocean Art: From the Coast to the Deep, takes readers through 140 stunning photographs and illustrations of underwater vistas and their diverse inhabitants.

Yashima Gakutei, three crabs on the edge of the water

The Met Museum

The realm of art mirrors the diversity of marine life, and Scales expertly intertwines insights about the artists with the wonders of oceanic life, blending marine biology with art history.

Siphonophore (Forskalia tholoides), illustration

Library, Woods Hole, MA

“It’s captivating to view the ocean through the perspectives of artists and craftsmen,” Scales noted. “They brilliantly convey the essence of life beneath the surface.”

Mycenaean stirrup vessel featuring an octopus, circa 1200 to 1100 BC

The Met Museum

Throughout history, cultures have shown a deep fascination with marine life. The featured artworks include an 1851 lithograph of the strawberry squid (Histioteuthis heteropsis), Catesby’s hand-colored engraving of the pilchard (Argentina carolina) from 1743, an 1830s Japanese woodblock print showing crabs, an 1888 illustration of a siphonophore (Forskalia tholoides), and a Mycenaean jar depicting an octopus from around 1200 to 1100 BC. Additional ceramic artifacts include a lobster-shaped vessel from Peru and crabs depicted in Nazca ceramic bowls from the 2nd to 4th centuries.

L: (Peru) “Lobster-shaped Stirrup Vessel”, R: Crab Ceramic Bowl

Left; Walters Art Museum. Right; The Met Museum

Ocean Art is scheduled for release in the UK on August 1st and in the US on September 26th.

Topics:

Source: www.newscientist.com

Ozempic: A Potential Key to Reversing Your Biological Age

Growing evidence of Ozempic’s extensive health benefits

David J. Phillip / Associated Press / Alamy Stock Photo

Ozempic, a medication for type 2 diabetes, has been linked to a deceleration in aging, with credible evidence emerging to support this claim.

Drugs like Ozempic and Wegovy, both of which contain semaglutide, have been increasingly recognized for their impact on obesity and are being researched for various conditions, including cardiovascular diseases, addiction, and dementia.

Previously, scientists speculated on their potential to slow biological aging, based primarily on animal studies and observational human data. However, recent clinical trial results offer direct evidence, according to Varun Dwaraka from TruDiagnostic, a diagnostics company based in Lexington, Kentucky.

To evaluate a drug’s impact on biological aging, researchers utilize epigenetic clocks, which highlight patterns of DNA methylation—a chemical modification that influences gene activity. These patterns evolve with age and can be adjusted by lifestyle factors, including diet. Essentially, an individual’s biological age might differ from their chronological age.
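Many widely used epigenetic clocks are, at their core, regression models: a weighted sum of methylation levels at selected CpG sites is mapped to an age estimate. A minimal sketch of that idea (the sites, weights, and values below are made up for illustration, not taken from any real clock):

```python
import numpy as np

def predict_biological_age(methylation, weights, intercept):
    """Toy epigenetic clock: a linear model over CpG methylation fractions.

    methylation: array of beta values (0..1), one per CpG site used by the clock.
    weights, intercept: coefficients a real clock would learn from training data.
    """
    return intercept + float(np.dot(weights, methylation))

# Illustrative numbers only: three hypothetical CpG sites and coefficients.
weights = np.array([12.0, -8.5, 20.3])
intercept = 35.0
sample = np.array([0.62, 0.41, 0.18])   # measured methylation fractions for one person
print(predict_biological_age(sample, weights, intercept))   # ~42.6 "biological years"
```

A drug that shifts methylation at these sites back toward a younger profile would lower the predicted age, which is the kind of change the trial described here was measuring.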

Dwaraka and his team examined 108 epigenetic clocks in individuals with HIV-associated lipohypertrophy, a condition leading to excess fat accumulation and accelerated cellular aging. In a randomized controlled trial, one group received Ozempic weekly for 32 weeks, while the control group received a placebo.


Using blood samples collected pre- and post-trial, the researchers determined the biological ages of 84 participants. “By the study’s conclusion, individuals administered semaglutide were, on average, biologically 3.1 years younger,” states Dwaraka. The placebo group showed no noteworthy changes. “Semaglutide not only decelerates aging but may also reverse it in certain participants,” he adds.

The research revealed that various organs and systems, particularly the heart and kidneys, exhibited slowed biological aging, with the most significant influences noticeable in the inflammatory system and brain.

Dwaraka attributes this phenomenon to semaglutide’s role in fat distribution and metabolic health. Excess fat surrounding organs can release pro-aging molecules that modify the DNA methylation of crucial age-related genes. Semaglutide effectively curtails low-grade inflammation, which is another contributor to epigenetic aging.

While the findings come from individuals with HIV-associated lipohypertrophy, many of the biological pathways affected by semaglutide are not unique to HIV. “Thus, similar effects on epigenetic aging may be expected in other populations,” asserts Dwaraka.

It’s not surprising that such drugs can decelerate aging, says Randy Seeley at the University of Michigan School of Medicine, as they alleviate metabolic stress on various cells and reduce inflammation, both key drivers of aging across different cell types. However, he suspects that much of the benefit arises from semaglutide improving overall health rather than from direct cellular effects.

It remains to be seen if semaglutide should be taken to maintain biological youth. “It’s premature to widely recommend it as an anti-aging therapy,” Dwaraka cautions. Nonetheless, he believes this study will accelerate ongoing efforts to repurpose existing medications for age-related challenges, expediting approval processes while mitigating the risk of unforeseen side effects. “Semaglutide could become a leading candidate in this arena,” concludes Dwaraka.

Topics:

Source: www.newscientist.com

Cameras Mimicking Human Vision Could Enhance Astronomical Discoveries

Sirius binary star system captured with a neuromorphic camera

Satyapreet Singh, Chetan Singh Thakur, Nirupam Roy, Indian Institute of Science, India

Neuromorphic cameras, designed to emulate human vision, offer significant benefits for astronomers by enabling the capture of both bright and dim celestial objects in a single frame and by tracking swift-moving objects without motion blur.

Unlike conventional digital cameras, which sample a grid of pixels many times per second and record data for every pixel each time, neuromorphic cameras, also called event cameras, work quite differently. Each pixel is activated only when the brightness at that location changes. If the brightness remains constant, no new data is saved, resembling how the human eye processes visual information.

This innovative approach presents various benefits. By recording only changing pixels, less data is generated while maintaining a much higher frame rate. Furthermore, these cameras measure light on a logarithmic scale, enabling the detection of fainter objects next to brighter ones that may saturate conventional camera images.
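To make the contrast with a frame-based sensor concrete, here is a minimal simulation sketch of the event-camera principle described above; it is illustrative only (the threshold and the frame-stack input are assumptions, not details from the study), but it shows how per-pixel changes in log brightness become sparse events.

```python
import numpy as np

def generate_events(frames, threshold=0.2):
    """Simulate an event (neuromorphic) camera from a stack of intensity frames.

    frames: array of shape (time, height, width) holding positive pixel intensities.
    Returns a list of (t, y, x, polarity) events, emitted only where the
    log brightness at a pixel has changed by more than `threshold`.
    """
    log_frames = np.log(frames + 1e-6)      # event pixels respond to logarithmic brightness
    reference = log_frames[0].copy()        # last log level that triggered an event, per pixel
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - reference
        ys, xs = np.where(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1      # brighter (+1) or darker (-1)
            events.append((t, y, x, polarity))
            reference[y, x] = log_frames[t, y, x]       # only changed pixels update their reference
    return events
```

In this toy model, a bright static star stops producing events after the first comparison, while a faint object drifting across the field leaves a sparse trail of events, which is the behaviour that lets such sensors record bright and dim sources together and follow fast movers without blur.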

To investigate the potential of this technology for astronomy, Chetan Singh Thakur and his team at the Indian Institute of Science in Bengaluru mounted a neuromorphic camera on a 1.3-meter telescope at the Aliyabatta Observatory in Uttarakhand, India.

They successfully captured meteoroids traveling between the Earth and the Moon and also obtained images of the Sirius binary system, which includes Sirius A, the brightest star in the night sky, and Sirius B.

Sirius A is approximately 10,000 times brighter than Sirius B, making it challenging to capture both in a single image using traditional sensors, as noted by Mark Norris from the University of Central Lancashire, UK, who was not part of the study.

According to Singh Thakur, neuromorphic cameras excel at tracking fast-moving objects thanks to their high frame rates. “For high-speed objects, you can capture their movement without blur, unlike conventional cameras,” he explains.

Telescopes typically use multiple sensors that can be swapped as needed. Norris points out that a neuromorphic camera could serve as an additional tool for scenarios where very bright and very faint objects need to be observed at the same time, or for fast-moving targets like the recently identified interstellar object 3I/ATLAS.

Traditionally, to follow fast-moving objects, astronomers have had to pan the telescope. Neuromorphic cameras, however, can track the movement of such objects precisely while preserving background detail and resolving their locations.

“Do you want to know the brightness of an object or its location? In quantum mechanics, you can’t ascertain both at the same instant,” Norris states. “This technology offers a potential method to achieve both simultaneously.”

While neuromorphic cameras provide unique advantages, they are unlikely to replace other sensors in every application. Their resolution is typically lower than that of the charge-coupled devices (CCDs) commonly used in digital cameras, and they achieve an efficiency of about 78% compared with 95% for CCDs. This disparity makes traditional sensors more effective at capturing dim objects near their detection limits.

Topic:

Source: www.newscientist.com