Revolutionary Experiment Uncovers Major Unexpected Issues in Cloning Technology

Mice cloning study

Limited Lifespan of Cloned Mice

Xinhua/Zhou Qi/Imago/Alamy

Cloning involves creating genetically identical copies, yet extensive research over the last 20 years reveals unexpected complexities. Clones often accumulate additional mutations, and if the cloning process is repeated, these mutations can reach lethal levels. This discovery presents important implications for cloning in agriculture, conservation, and even medical applications involving humans.

The core issue lies in the numerous mutations within clones. Adult somatic cells may accumulate more mutations than gametes (egg or sperm cells). Researchers such as Teruhiko Wakayama from the University of Yamanashi in Japan suggest that the cloning process may also contribute to these mutations. “While we once believed clones were identical to their originals, the accumulated mutations present significant challenges,” Wakayama states. “Our goal is to confirm that these mutations do not lead to complications.”

Historically, cloning mammals was deemed implausible because cellular differentiation adds chemical tags that regulate gene activity. The birth of Dolly the sheep in July 1996 demonstrated that transferring the nucleus of an adult cell into an enucleated egg could effectively reprogram the genome, enabling the cell to develop. Shortly after, in October 1997, Wakayama created the first cloned mouse, Cumulina.

To evaluate the efficacy of his team’s cloning technique, Wakayama initiated cloning experiments in 2005. “Similar to how a reproduced painting loses detail, we aimed to assess the quality of the clones against the original,” he explains.


By 2013, Wakayama’s team had generated over 500 mice from a single donor across 25 cloning generations, reporting that “each cloned mouse exhibited no physical anomalies and maintained normal lifespan and health.” However, this success has not been replicated in other species: cloned dogs continue to face health complications, and no primate has yet been cloned from adult cells. Wakayama initially believed repeated cloning of mice could continue indefinitely, yet by the 58th generation, not one clone survived.

To uncover the reasons behind this decline, the research team sequenced the genomes of ten different mice from various generations. They found an average of over 70 mutations per clonal generation, three times higher than in the naturally bred control group. Notably, after the 27th generation, significant mutations began to accumulate, even leading to the loss of the entire X chromosome.

This issue may stem from evolutionary mechanisms that protect gametes from mutations while allowing adult somatic cells to accrue more mutations. Recent studies suggest mutations accumulate eight times faster in blood cells compared to sperm. Thus, if the original cloned adult cell harbored numerous mutations, so too would the resulting clones.

Wakayama also posits that the nuclear transfer process may induce additional mutations. “It’s plausible that physical shock during nuclear transfer can damage the DNA,” he remarks. “If we can devise gentler nuclear transfer techniques, we might lower the mutation rate in cloned embryos—but we’re still seeking solutions.”

Shoukhrat Mitalipov, a professor at Oregon Health and Science University, remains skeptical. “The mutation rate evident in cloned subjects probably reflects the genomic nature of donor cells rather than being an inherent consequence of nuclear transfer,” he states.

While human cloning is prohibited in many regions, researchers like Mitalipov are exploring nuclear transfer’s potential for generating tissues and organs that are compatible for treatments, as well as for creating sperm and egg cells for infertility therapies. Wakayama’s findings highlight the necessity of thorough donor cell screening to prevent deleterious mutations. “Evaluating donor cell populations for harmful mutations is vital; if needed, gene editing could correct identified issues.”

If the cloning process itself induces mutations, however, that presents additional challenges. Even so, these findings do not mean cloning entails insurmountable risks: the mutation rate per generation remains relatively low, and safety screening can be conducted after cloning. They do, though, underscore the complexities inherent in cloning technology.

Source: www.newscientist.com

Breakthrough in Mammal Brain Preservation: A Major Step Towards Resuscitation After Death

Brain Preservation Technique

Will we someday preserve our thoughts, emotions, and perceptions?

Thumbnail/Science Photo Library

Recent breakthroughs in brain preservation have enabled an entire mammalian brain to be stored successfully. The technique will soon be offered to terminally ill patients, with the aim of capturing the neural data needed to one day reconstruct the essence of the individual.

According to Boris Lobel of Nectome, a San Francisco-based company pioneering memory preservation, patients will need to donate their brains and bodies for scientific research. “Our vision is to preserve their bodies and brains indefinitely, with the hope that one day we can decode the information stored in their brains,” he stated.

Timing is critical for preserving the delicate structure of the brain; just minutes without blood flow can lead to irreversible damage as enzymes destroy neurons and cells begin self-digestion.

Typically, cryonics aims to preserve bodies at subzero temperatures post-mortem, allowing for the possibility of revival if future treatments are developed. However, rapid action is essential, as brain deterioration begins almost immediately following natural death.

To mitigate these challenges, Lobel and his team have created a physician-assisted protocol that allows terminally ill individuals to choose the timing of their passing. This ensures immediate intervention, enhancing the likelihood of maintaining the brain’s condition close to its living state.


Lobel’s team tested the procedure on pigs, whose brains and cardiovascular systems resemble those of humans. The procedure involves inserting a cannula into the heart shortly after cardiac arrest, flushing out the blood, and introducing a preservation solution. This solution contains aldehyde chemicals that form molecular cross-links, effectively locking cellular structures in place.

A cryoprotectant is later introduced to replace water within the tissue, preventing ice crystal formation that could harm cells upon cooling. The treated brains are then cooled to approximately -32°C, allowing cryoprotectants to achieve a glass-like state for indefinite preservation.

To evaluate the technique’s success, researchers analyzed samples from the brain’s outer layer under a microscope. Initial trials commencing 18 minutes post-mortem indicated significant cellular damage, but when the delay was shortened to under 14 minutes, the tissue displayed excellent preservation of neurons and synapses.

Theoretically, Lobel suggested, this protocol could aid in “reconstructing the three-dimensional map of neural connections,” known as the connectome, potentially illuminating how the brain generates thoughts, emotions, and cognitive functions. So far, scientists have mapped only a fraction of the mouse brain’s connectome, an effort that took seven years to complete.

Despite advancements in cryonics and computational technology, true “resuscitation” remains unfeasible. “Our method is akin to embalming, preserving the brain’s structural integrity without restoring biological viability,” explains João Pedro de Magalhães from the University of Birmingham. He further asserts that even a perfect mental replica would exist as a distinct entity.

Nonetheless, Lobel’s team is hopeful about the future, positing that human consciousness could eventually be recreated digitally or biologically. “We are open to various resurrection strategies, as we believe we can preserve all necessary information for this,” Lobel asserts.

Nectome plans to invite terminally ill patients to Oregon, allowing them to spend time with family before undergoing the new preservation protocols. “They receive medications prescribed by an independent physician before we initiate the surgery,” Lobel notes.

This groundbreaking research raises profound philosophical questions about our understanding of death. “Declaring death based solely on the absence of blood circulation oversimplifies the complexities involved,” remarks Brian Wowk of 21st Century Medicine. “The ability to preserve the brain’s intricate structure and molecular makeup after circulation ceases raises essential questions about the nature of life and death.”

Topics:

Source: www.newscientist.com

Lawsuit Targets Trump Administration’s Plan to Dismantle Major Climate Research Institute in America

The University Corporation for Atmospheric Research (UCAR), which manages the largest federal climate research center in the U.S., has filed a lawsuit against the Trump administration’s attempts to dismantle the National Center for Atmospheric Research (NCAR).

The legal action disputes the administration’s decision to dismantle NCAR, alleging a “systematic campaign of punishment and coercion” against Colorado amid ongoing tensions between President Donald Trump and Governor Jared Polis.

The complaint, filed by UCAR, a leading nonprofit in climate science and weather modeling based in Boulder, Colorado, follows the Trump administration’s December announcement of plans to dismantle the research center.

The lawsuit claims that “UCAR and NCAR are collateral damage” in this broader conflict.

The disagreement between Trump and Polis arises from concerns regarding mail-in voting in Colorado and the prosecution of a county clerk convicted of tampering with election equipment during the 2020 presidential election. According to the complaint, Trump pressured Polis to release the clerk while banning mail-in voting.

Filed in U.S. District Court in Colorado, the lawsuit details a purported “retaliatory campaign” targeting NCAR by multiple federal agencies, including the National Science Foundation (NSF) and the National Oceanic and Atmospheric Administration (NOAA).

So far, none of the federal agencies named in the lawsuit has commented on it; the NSF said only that it does not comment on ongoing litigation.

Additionally, Colorado is pursuing legal actions related to the alleged campaign of retribution against the state.

The lawsuit contends that the Trump administration’s decision to relocate the U.S. Space Command, cut $109 million in transportation funding, and impose new requirements on the Supplemental Nutrition Assistance Program (SNAP) is part of a punitive strategy against Colorado.

So far, a district judge has ruled on only one matter in the case, concerning SNAP. The administration argued that fraud in Colorado was widespread enough to necessitate a pilot program; the judge sided with the state, issuing a preliminary injunction and laying out the reasoning in a court order.

UCAR’s complaint shares similar allegations against the federal government, claiming that a “gag order” was issued to silence NCAR employees regarding the reorganization. It also points to the termination of a multimillion-dollar climate adaptation research contract and new unlawful reporting requirements imposed on NCAR and UCAR. Furthermore, the complaint details attempts to remove the center’s supercomputing facility from UCAR’s administration.

The complaint states, “The agency’s ultimate goal is the complete destruction of NCAR,” referencing a January NSF announcement about restructuring the agency while seeking public proposals for new uses for NCAR’s Boulder campus, including various public or private uses.

The complaint argues that these recent federal actions contravene the Administrative Procedure Act and asks the court to halt specific measures, such as the relocation of NCAR’s supercomputing facility and the cancellation of NOAA grants.

UCAR and NCAR collectively employ around 1,400 scientists, engineers, and support personnel focusing on key areas like hurricane forecasting, wildfire monitoring, weather predictions, and space weather research. NCAR hosts advanced supercomputers essential for complex climate modeling tasks.

In a statement on their website, UCAR emphasized that the actions taken by the federal agencies pose significant threats to national security, public safety, and economic stability and jeopardize the U.S.’s leadership role in climate and weather forecasting.

UCAR has stated that it will refrain from further comments until the lawsuit is resolved.

Source: www.nbcnews.com

Utah Launches Major Great Salt Lake Rescue Project in Preparation for 2034 Olympics

Long-term drought has significantly contributed to the Great Salt Lake’s decline, but approximately 75% of the issue stems from human activities. According to research published in 2022, excessive water consumption by humans has taken a toll over the decades.

In 2022, state officials took decisive action to address the crisis. Lawmakers allocated $40 million to establish a water trust aimed at enhancing both water quality and quantity. Additionally, alterations to Utah’s water law now designate it as a “beneficial use” for farmers to redirect their allotted water into lakes, incentivizing donations and water transfers. Previously, any unused water rights could be lost.

State authorities also modified the causeway dividing the lake’s northern and southern sections, enabling control over the flow of water and salt. Fortunately, this winter brought roughly double the normal snowfall to the mountains, which played a key role in the lake’s recovery.

Kevin Perry, an atmospheric scientist at the University of Utah specializing in the Great Salt Lake and its toxic dust, noted that these combined factors significantly lowered the lake’s salinity, effectively “saving it.”

According to Perry, “That huge snowpack buried and diluted all the salt in the southern part of the lake.”

The ecosystem is showing signs of recovery; “The seeds are back,” Perry remarked.

Baxter added, “This year’s flies were just tough.”

These changes were enough to temporarily avert a crisis, at least for now.

Joel Ferry, director of the Utah Department of Natural Resources, expressed relief, stating, “We dodged an environmental nuclear bomb. We put away the red button.”

However, water levels have yet to return to a healthy state, and the lake cannot count on a repeat of this year’s exceptional snowfall.

Source: www.nbcnews.com

Scientists Uncover 90 Million-Year-Old Dinosaur ‘Rosetta Stone’ in Major Paleontological Discovery

A groundbreaking discovery of a 90-million-year-old fossil in Argentina is reshaping our understanding of the evolutionary history of a unique group of bird-like dinosaurs. This find helps settle a longstanding debate regarding their distribution across the ancient world.

The fossils, described in Nature, belong to Alnashetri cerropoliciensis, a member of the alvarezsaurid family. This small dinosaur is characterized by its tiny teeth and stout arms, which end in a prominent single thumb claw.

While most well-preserved alvarezsaurid fossils have been discovered in Asia, the group’s presence in South America raises intriguing questions, given the vast ocean that separated these continents.

A nearly complete skeleton uncovered at the La Buitrera fossil site in northern Patagonia has provided remarkable evidence regarding this species. This region was also home to primitive snakes and small saber-toothed mammals.

“Creating a nearly complete, articulated animal from a fragmented skeleton is akin to discovering the Rosetta Stone of paleontology,” stated Peter Makovicky, a professor at the University of Minnesota and the study’s first author.

Unlike their later relatives, Alnashetri had longer arms and larger teeth. This suggests that alvarezsaurids shrank in body size before evolving the characteristically small forelimbs and teeth suited to a diet of ants and termites.

“Our study suggests that alvarezsaurids form a compact group of dinosaurs, with species ranging in size from crows to humans,” Makovicky told BBC Science Focus. “Body size appears to fluctuate within this limited range without a clear trend.”

Peter Makovicky excavates fossilized bones at Patagonia’s La Buitrera fossil site – Photo credit: Minyoung Son, University of Minnesota

This discovery also addresses an intercontinental mystery. A detailed anatomical study of Alnashetri led Makovicky and his team to examine fossil collections globally. “We found other alvarezsaurids hiding in plain sight,” he noted.

“These species, from the Jurassic of North America and the Early Cretaceous of Europe, enhance our understanding of the group’s widespread presence prior to the major rift between the Northern and Southern Hemisphere landmasses.”

Approximately 200 million years ago, all of Earth’s continents formed a single supercontinent named Pangea. This landmass gradually fragmented over tens of millions of years, evolving into its current configuration while transporting its fauna along with it.

The research team is preparing additional specimens from the same site, though Professor Makovicky has remained tight-lipped about their specifics. “The new specimen confirms some of our findings regarding size and specialization,” he disclosed. “Beyond that, we have nothing further to share at present.”

Read more:


Source: www.sciencefocus.com

Unraveling Cosmic Mysteries: How a Unique Black Hole Could Solve Three Major Questions

Deborah Ferguson (UT Austin), Bhavesh Khamesra (Georgia Tech), Karan Jani (Vanderbilt University)/LIGO

The universe is expanding at an accelerating rate, leaving scientists perplexed about the source of this mysterious phenomenon known as dark energy, which comprises approximately 68% of the universe. Understanding dark energy is a critical challenge for astrophysics today.

Interestingly, some astrophysicists propose a link between black holes and dark energy. Supermassive black holes exert an incredible gravitational pull, drawing in matter, yet the underlying question remains: how can they contribute to the expansion of the universe?

The theory suggests that when matter falls into black holes, it transforms into a type of radiation that exerts pressure on the surrounding space, leading to an expansive force. Although these effects are minuscule individually, the sheer number of black holes could result in a significant cumulative impact, pushing galaxies away from each other.

Initially regarded as a fringe theory, this idea has gained traction among cosmologists who believe it could help elucidate several cosmic mysteries. “It’s controversial, but it’s gaining acceptance,” stated Kevin Croker, a cosmologist at Arizona State University.

According to Niayesh Afshordi, a cosmologist at the University of Waterloo, black holes could be pivotal in understanding dark energy, given their complexity and the unusual nature of their singularities.

Understanding Black Hole Singularity

At the center of each black hole lies a singularity, where gravity compresses matter to infinite density, a realm of physics not yet fully understood. Gregory Tarlé, a cosmologist at the University of Michigan, suggests that black holes may avoid forming singularities altogether by converting collapsing material into dark energy.

Tarlé elaborates that this process is reminiscent of the early universe, where radiant energy transformed into matter. In a black hole, the reverse could occur, maintaining gravitational stability.

“Understanding how a single dust particle converts to radiation is complex,” explains Massimiliano Rinaldi, a physicist at the University of Trento, Italy. Yet, this conceptual transition may not be as far-fetched as it sounds.

This article is part of a special issue on the crisis in cosmology.
Check the complete package here

Traditionally, it was believed that black holes only influenced their immediate surroundings. However, as Croker points out, “It’s not just localized effects; the cumulative impact of numerous black holes can significantly alter cosmic dynamics.”

Even a large influx of matter into a single black hole may not propel universal expansion, but if black holes throughout the universe collectively absorb matter, their combined effect could accelerate cosmic expansion.

Evidence for Cosmologically Coupled Black Holes

The first substantial evidence for cosmologically coupled black holes emerged in 2023, when observations suggested that supermassive black holes grow in step with the expansion of the universe. According to Croker, far from being inert, black holes actively participate in cosmic dynamics at the largest scales, with dark energy appearing in tandem with their formation.

Critics argue that the precise behavior of these cosmologically coupled black holes remains unknown. Rinaldi stresses the lack of exact mathematical models, which complicates the understanding of their merger behavior. As research progresses and new data emerge, however, hope for breakthroughs remains.

The evolution of this theory from fringe to mainstream reflects growing acceptance among cosmologists, especially in light of puzzling results from the Dark Energy Spectroscopic Instrument (DESI) in Arizona.

DESI Insights

DESI is mapping millions of galaxies across the universe, providing insights into cosmic expansion over time. Recent findings indicated that dark energy could be weakening, challenging established cosmological models that assume its constancy. “Seeing such data was surprising,” remarked Tarlé; “dark energy appears to vary over cosmic epochs.”

If dark energy originates from cosmologically coupled black holes, the DESI observations could reconcile several cosmic enigmas, aligning black hole formation trends with dark energy dynamics.

The interplay of dark matter and dark energy forms the framework of the universe.

Volker Springel/Max Planck Institute for Astrophysics/Scientific Photo Library

The Hubble tension, the discrepancy between expansion rates derived from different cosmological measurements, underscores the need for clarity. Integrating cosmologically coupled black holes into current models could bridge the gap between conflicting measurements of cosmic expansion.

While numerous theories have attempted to address the discrepancies surrounding dark energy, many rely on speculative elements beyond conventional physics. The concept of cosmologically coupled black holes, by contrast, remains a relatively conservative yet promising pathway to resolving these mysteries.

Recent investigations by Tarlé, Croker, and colleagues have assembled what they call a “three-legged chair” of evidence supporting the hypothesis, linking particle physics observations to the behavior of cosmic expansion.

Neutrinos, often dubbed “ghost particles” for their elusive nature and near-negligible mass, complicate this model. Yet if ordinary matter inside black holes can transform into dark energy, the universe’s overall mass budget would shift, opening pathways for new discoveries.

Is this evidence sufficient to elevate the notion of cosmologically coupled black holes from speculation to mainstream scientific theory? Croker believes so: “We now possess three key pieces of evidence to lend credence to our hypothesis.”

Encouragingly, interest in this area is growing, evidenced by increased collaboration between physicists and cosmologists and a widening recognition that cosmologically coupled black holes may matter to the accelerating-universe scenario.

As ongoing observations from DESI and other large-scale cosmic surveys yield fresh data, uncovering links between black holes and cosmic expansion remains a dynamic area of study. Niayesh Afshordi aptly characterizes the inquiry as a detective story, with more researchers joining the pursuit of understanding the enigmatic role black holes may play in the accelerating expansion of our universe.

Topics:

Source: www.newscientist.com

NASA’s Artemis Moon Exploration Program: Major Reforms and Enhancements Unveiled

NASA’s Space Launch System

NASA’s Space Launch System Faces Challenges

Credit: NASA/Cory Houston

NASA is re-evaluating its Artemis moon exploration program. During a press conference on February 27, NASA Administrator Jared Isaacman revealed significant adjustments to the plans for sending humans to the moon for the first time since the Apollo program concluded in 1972.

The upcoming Artemis II mission has hit two significant setbacks in testing. The Space Launch System (SLS) rocket developed fuel leaks, necessitating a rollback from the launch pad for thorough analysis and repairs. The SLS last flew in 2022.

Artemis II aims to orbit astronauts around the moon in preparation for a crewed landing in the Artemis III mission, though that goal has now shifted. Artemis III will focus on testing the Orion crew capsule’s docking capabilities with the lander in lunar orbit, along with evaluating the spacesuit for eventual moon landings.

Despite these seemingly negative developments, NASA has laid out plans to increase launch frequency. The revised approach aims for Artemis IV and potentially Artemis V to achieve lunar landings by 2028.

“The entire series of Artemis flights should represent a gradual build-up of capability, with each step advancing our readiness for landing missions,” stated NASA official Amit Kshatriya in a recent statement. “Each phase should be substantial enough for progress, yet measured to avoid unnecessary risks based on our experiences thus far.”

Initially, there were plans to upgrade the SLS rocket’s upper stage for future endeavors. However, Isaacman highlighted a shift towards a “standardized” version, minimizing significant changes for every few missions. “We don’t aim for each rocket to be a work of art,” he said in the press briefing.


These changes denote a shift in the Artemis program’s philosophy, prioritizing thorough testing for every component of the rocket and mission strategy. This approach aims to facilitate swift, small steps rather than large leaps every few years, with Isaacman expressing optimism about reducing the delays that have historically burdened the Artemis program, ultimately promoting a safer and more efficient lunar exploration initiative.

Topics:

Source: www.newscientist.com

How a Major Collision with Titan Could Have Formed Saturn’s Rings

Discover Saturn’s Largest Moon, Titan: A Stunning View from the Cassini-Huygens Spacecraft

Photo Credit: ZUMA Press, Inc./Alamy

The origin story of Saturn and its spectacular rings may have been influenced by its largest moon, Titan. Approximately 400 million years ago, a collision involving an early proto-Titan and a smaller celestial body may have set off a chain reaction, resulting in the creation of Saturn’s iconic rings while altering the planet’s wobble and the orbits of its moons.

Saturn’s system is rife with enigmas. The rings are surprisingly young, the planet’s wobble does not line up with Neptune’s gravitational influence as simulations predicted, and its moon Iapetus has a strangely tilted orbit. Titan itself stands out for its sparse cratering and eccentric orbit.

The collision that formed the Titan we observe today could elucidate many of these mysteries. “This creates a grand unified theory that addresses all the primary issues,” said Matija Ćuk, the leader of the research team. “We had separate hypotheses for each problem, and this could be the way they interconnect in one narrative that we can test.”

The theory begins with a hypothetical moon named Chrysalis on the outer edge of Saturn’s system. Proposed in 2022 to explain how Saturn’s wobble fell out of step with Neptune, Chrysalis was thought to have been drawn toward Saturn and torn apart, forming the rings while destabilizing Saturn’s wobble and the orbit of Iapetus. Further simulations, however, indicated that the most probable fate for Chrysalis was a collision with Titan.

This presents a complication, Ćuk explains: “If Chrysalis collided with Titan, it couldn’t have been transformed into rings.” He and his team therefore analyzed the ramifications of an impact with Titan. Their findings indicated that such a collision around 400 million years ago could have erased Titan’s craters, transformed its originally circular orbit into an elliptical one, and produced a cascade of debris. The smaller moon Hyperion may have formed from this debris, which would explain why it appears significantly younger than Saturn’s other moons.

Over time, Titan’s orbital changes could have destabilized the smaller inner moons, causing them to collide and grind into the tiny particles now making up Saturn’s rings. “It all starts with Titan, leading to subsequent calamities in the inner system,” Ćuk states.

“If a collision early in Titan’s history can unravel so many mysteries of the Saturn system, it underscores Titan’s significance in our understanding of Saturn as a whole,” adds Sarah Hörst of Johns Hopkins University. “I value the elegance of resolving multiple Saturnian issues simultaneously.”

We are nearing the opportunity to gather evidence to confirm or refute this theory. NASA’s Dragonfly mission, set to launch in 2028 and arrive at Titan by 2034, will conduct comprehensive surface analyses, potentially revealing whether Titan once collided with Chrysalis. Should the hypothesis hold, the peculiarities of Saturn may finally be explained.

Topics:


Source: www.newscientist.com

Impending Major Earthquakes: A Guide to Nepal and Northern India’s Seismic Risks

Core Samples from Nepal’s Lake Reveal Random Patterns of Historical Earthquakes

Zakaria Ghazoui-Schaus, BAS

While some experts argue that northern India and western Nepal are overdue for a significant earthquake, recent studies indicate this notion may be a myth. The historical record shows that major earthquakes have struck the region at random intervals for thousands of years.

Officials and the media frequently label densely populated fault-adjacent cities, such as Istanbul, Seattle, and Tokyo, as “overdue” for a major earthquake. The last significant earthquake on the central Himalayan fault segment in India and Nepal was recorded in 1505, and some researchers suggest that earthquakes in the area recur approximately every 500 years, implying that a major quake could be on the horizon.

However, the new findings reveal that at least 50 earthquakes of magnitude 6.5 or higher have struck the region over the past 6,000 years, including eight since 1505. Notably, these earthquakes followed no regular pattern, occurring at random intervals instead.

“It is essential to shift our focus from debating the periodicity of earthquakes in the Himalayas to acknowledging that they occur randomly, and assess the risks accordingly,” emphasizes Zakaria Ghazoui-Schaus of the British Antarctic Survey, who participated in the research.

The relentless collision of the Indian and Eurasian tectonic plates forming the Himalayas contributes to one of Earth’s largest seismic zones. This extensive 2,400-kilometer fault has generated powerful earthquakes, including the catastrophic 7.8 magnitude earthquake in 2015 that tragically claimed nearly 9,000 lives in and around Kathmandu.

Despite this, limited evidence of seismic activity has been found in the central fault section just west of Kathmandu, sparking concerns that pressure in this “seismic gap” could lead to a devastating magnitude 8 or 9 earthquake.

Ghazoui-Schaus suggests that this perception stems from a “knowledge gap” rather than tectonic inactivity. Traditional methods for locating earthquake evidence in the Himalayas often involve digging trenches to find surface cracks, which might detect major quakes but overlook smaller “shadow earthquakes” that did not cause surface damage.

Former British Geological Survey seismologist Roger Masson states, “Traditional paleoseismology only yields sparse records of the largest earthquakes, while historical catalogs generally suffice for earthquakes up to magnitude 4.” This bias leads to inflated estimates of long “occurrence intervals,” or “recurrence periods,” which represent the average time between earthquakes of a certain magnitude in an area.

To enhance the seismic record of the central Himalayas, Ghazoui-Schaus and his team visited Rara Lake in western Nepal in 2013, collecting a 4-meter sediment core using a rubber boat.

Research Team Prepares Equipment for Sediment Core Sampling at Rara Lake in Nepal

Zakaria Ghazoui-Schaus, BAS

The researchers analyzed sediment cores containing turbidites, layers of fine sediment deposited on top of coarser material by underwater landslides triggered by earthquakes. Their analysis identified 50 earthquakes of magnitude 6.5 or greater over the past 6,000 years, each dated by its depth in the core; these events likely released energy that relieved tension on the fault, says Ghazoui-Schaus.

Statistical evaluations indicated that while earthquakes often occur in swarms, these swarms are random. This finding aligns with seismologists’ expectations based on contemporary records, marking one of the first confirmations through paleoseismological evidence.

“If I were building a house in western Nepal, I would certainly build it more robustly,” notes Ghazoui-Schaus. Masson adds that despite the random occurrence of earthquakes, calculating the average interval between them remains valuable for anticipating seismic activity that could threaten vulnerable structures like bridges and dams.

“When planning for the next century, it’s crucial to estimate how many earthquakes of specific magnitudes may occur. Being prepared ensures we can withstand quakes whenever they strike, regardless of whether it’s next year or a decade from now,” he states succinctly.
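Masson's point, that average intervals stay useful even when individual quakes strike at random, can be illustrated with a small back-of-the-envelope sketch. It assumes the study's 50 magnitude-6.5+ events in 6,000 years follow a memoryless Poisson process; that is an illustrative assumption for this sketch, not the paper's own statistical model.

```python
import math

# Figures from the study: 50 quakes of M6.5+ in ~6,000 years.
events = 50
years = 6000
rate = events / years  # average of 1 large quake per ~120 years

def prob_at_least_one(rate_per_year, horizon_years):
    """P(>=1 event within the horizon) under a memoryless Poisson process."""
    return 1 - math.exp(-rate_per_year * horizon_years)

# Over a 100-year planning window, the chance of at least one large quake
# is substantial even though the timing of any single event is random.
p_century = prob_at_least_one(rate, 100)
print(f"average interval: {years / events:.0f} years")
print(f"P(>=1 M6.5+ quake in 100 years): {p_century:.2f}")
```

The memoryless property is what makes “overdue” misleading: the probability over the next century is the same whether the last event was 10 or 500 years ago, yet the long-run rate still tells planners how robustly to build.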

Source: www.newscientist.com

How Major AI Models Can Promote Hazardous Scientific Experiments: Risks and Implications

Scientific Laboratories: A Potential Hazard

PeopleImages/Shutterstock

Researchers caution that the implementation of AI models in scientific laboratories poses risks, potentially leading to dangerous experiments that could result in fires or explosions. While these models offer a convincing semblance of understanding, they might lack essential safety protocols. Recent testing on 19 advanced AI models revealed that all of them are capable of making critical errors.

Although severe accidents in academic laboratories are uncommon, they are not unheard of. Chemist Karen Wetterhahn died in 1997 after dimethylmercury penetrated her protective gloves. In 2016, a researcher suffered severe injuries from an explosion, and in 2014 another scientist was partially blinded.

AI models are increasingly being utilized across various industries, including research institutions, for experiment and procedure design. Specialized AI tools have demonstrated success in various scientific sectors, such as biology, meteorology, and mathematics. However, general-purpose models often generate inaccurate responses due to gaps in their data access. While this may be manageable in casual applications like travel planning or cooking, it poses life-threatening risks when devising chemical experiments.

To assess these risks, Zhang Xiangliang, a professor at the University of Notre Dame, developed LabSafety Bench, a testing mechanism that evaluates whether an AI model can recognize potential dangers and adverse outcomes. This includes 765 multiple-choice questions and 404 scenario-based illustrations that highlight safety concerns.

In multiple-choice assessments, some AI models, like Vicuna, scored barely above random guessing, while GPT-4o achieved 86.55% accuracy and DeepSeek-R1 reached 84.49%. In image-based evaluations, models like InstructBlip-7B demonstrated less than 30% accuracy. Across all 19 state-of-the-art large language models (LLMs) and vision-language models evaluated, none surpassed 70% overall accuracy.

Although Zhang expresses optimism about the future of AI in scientific applications, particularly in “self-driving laboratories” where robots operate autonomously, he underscores that these models are not yet equipped to plan experiments effectively. “Currently? In the lab? I don’t think so. These models are primarily trained for general tasks, such as email drafting or paper summarization, excelling in those areas but lacking expertise in laboratory safety,” he states.

An OpenAI representative commented, “We welcome research aimed at making AI safe and reliable in scientific settings, particularly where safety is a concern.” They noted that the tests had not included the company’s latest models. “GPT-5.2 is the most advanced scientific model to date, offering enhanced reasoning, planning, and error detection capabilities to support researchers better while ensuring that human oversight remains paramount for safety-critical decisions.”

Requests for comments from Google, DeepSeek, Meta, Mistral, and Anthropic went unanswered.

Alan Tucker from Brunel University in London asserts that while AI models may prove incredibly useful for aiding human experiment design, their deployment must be approached cautiously. He emphasizes, “It’s evident that new generations of LLMs are being utilized inappropriately because of misplaced trust. Evidence suggests that people may be relying too heavily on AI to perform critical tasks without adequate oversight.”

Craig Malik, a professor at UCLA, described his recent experience testing an AI model’s response to a hypothetical sulfuric acid spill. The correct procedure, rinsing with plenty of water, was the very step the model repeatedly warned against, instead offering misplaced advice about potential heat buildup. He noted, however, that the model’s responses had improved in recent months.

Malik stressed the necessity of fostering robust safety practices among new students due to their inexperience. Yet he remains more optimistic than some peers about the role AI could play in experimental design, stating, “Are they worse than humans? While it’s valid to critique these large-scale models, it’s important to realize they haven’t been tested against a representative human cohort. Some individuals are very cautious, while others are not. It’s conceivable that these models could outperform a percentage of novice graduates or even experienced researchers. Moreover, these models are continuously evolving, indicating that the findings from this paper may be outdated within months.”

Source: www.newscientist.com

Newly Discovered Songbird Species in Bolivia: A Major Ornithological Breakthrough

Deep within Bolivia’s seasonally flooded savannah, a small olive-green songbird has eluded scientific classification for decades. After 60 years of misidentification, ornithologists have finally confirmed that this bird is not merely a regional variant within the genus Hylophilus. It represents a completely new species. This discovery adds to South America’s rich avian diversity and underscores the vast unknowns still present within even well-studied bird families.

The newly identified species belongs to the genus Hylophilus, part of the family Vireonidae, which includes vireos, greenlets, shrike-babblers, and peppershrikes.

With the scientific name Hylophilus moxensis (common name: Beni Greenlet), this bird thrives in the wet scrublands of Bolivia’s Beni Savannah, an ecologically unique area also known as Llanos de Moxos.

The species was first noted by ornithologists in 1960 but was initially thought to be an isolated population of two similar species found in Brazil: the Rufous-crowned Greenlet (Hylophilus poicilotis) and the Gray-eyed Greenlet (Hylophilus amaurocephalus).

“Morphological differences among many Hylophilus Greenlet species are subtle. Most display shades of green, gray, yellow, and brown,” explains Dr. Paul Van Els, an ornithologist at the National Museum of History in La Paz, Bolivia. He and his colleagues detailed their findings in a recent paper.

“For certain species, iris color is one of the most effective traits to differentiate them from similarly appearing relatives.”

By analyzing one mitochondrial and three nuclear genes, the research team clarified the uncertainty surrounding this population.

Results revealed that the Beni population is distinct from both known species, having diverged from the lineage of Hylophilus poicilotis and Hylophilus amaurocephalus approximately 6.6 million years ago.

In contrast, the latter two species separated from one another about 3.5 million years ago.

Van Els and his team also conducted comprehensive analyses of facial plumage, eye color, and vocalizations.

Research indicates that the Hylophilus moxensis can be uniquely identified by the absence of black or brown markings behind the ears, a trait consistently found in closely related species, along with uniformly dark brown eyes and a distinctive vocal pattern.

In vocal studies, researchers observed that this species’ calls feature “V-shaped notes,” and their vocalizations include overtones reminiscent of female Hylophilus amaurocephalus calls—a unique combination not shared with either comparative species.

The discovery of Hylophilus moxensis contributes to a growing list of endemic species found in the Beni savannah.

While scientists currently do not regard this species as threatened with extinction, they caution that extensive agricultural burning poses significant threats to the region’s biodiversity.

“Recognizing Hylophilus moxensis should enhance conservation priorities in this area,” the authors noted.

“Rampant agricultural burning poses a serious risk to the region’s biodiversity.”

“Though we cannot accurately estimate the population size of Hylophilus moxensis, we do not currently consider it at risk of extinction, as there remains extensive suitable habitat.”

“However, the relatively low number of sightings might indicate issues beyond mere observer rarity, potentially reflecting a truly localized population.”

The team’s paper was published online on January 1, 2026, in the journal Bird Systematics.

_____

Paul Van Els et al. 2026. A new species of greenlet from Bolivia: Hylophilus moxensis (Vireonidae). Bird Systematics, 3(3):17-37

Source: www.sci.news

Discovering a Triple System of Active Galactic Nuclei 1.2 Billion Light-Years Away: A Major Astronomical Breakthrough

A rare triple-merger galaxy system, known as J1218/1219+1035, hosts three actively feeding, radio-bright supermassive black holes, a team of American astronomers reports.



Artist’s impression of J1218/1219+1035, a rare trio of merging galaxies, featuring three radio-bright supermassive black holes actively feeding, with jets illuminating the surrounding gas. Image credit: NSF/AUI/NRAO/P. Vosteen.

The J1218/1219+1035 system is located approximately 1.2 billion light-years from Earth.

This unique galaxy system contains three interacting galaxies, each harboring supermassive black holes at their centers that are actively accreting material and shining brightly in radio frequencies.

Dr. Emma Schwartzman, a research scientist at the US Naval Research Laboratory, states: “Triple active galaxies like J1218/1219+1035 are incredibly rare, and observing them during a merger allows us a front-row seat to the growth of supermassive galaxies and their black holes.”

“Our observations confirmed that all three black holes in J1218/1219+1035 are emitting bright radio emission and actively firing jets. This confirms they are active galactic nuclei (AGN) and provides insight into the life cycle of supermassive black holes.”

Schwartzman and colleagues utilized NSF’s Very Large Array (VLA) and Very Long Baseline Array (VLBA) to study J1218/1219+1035.

The findings confirmed that each galaxy hosts a compact synchrotron-emitting radio core, indicating that all three harbor AGNs powered by growing black holes.

This discovery makes J1218/1219+1035 the first confirmed triple radio AGN and only the third known triple AGN system in the nearby universe.

“The three galaxies within J1218/1219+1035, located about 22,000 to 97,000 light-years apart, are in the process of merging, resulting in a dynamically connected group with tidal signatures indicative of their interactions,” the astronomers noted.

“Such triple systems are crucial in the context of hierarchical galactic evolution, wherein large galaxies like the Milky Way grow through successive collisions and mergers with smaller galaxies, yet they are seldom observed.”

“By capturing three actively feeding black holes within the same merging group, our new observations create an excellent laboratory for testing how galactic encounters funnel gas into centers and stimulate black hole growth.”

J1218/1219+1035 was initially flagged as an anomalous system through mid-infrared data from NASA’s Wide-field Infrared Survey Explorer (WISE), which suggested the presence of at least two obscured AGNs within the interacting galaxies.

Optical spectroscopy confirmed one AGN in a core while revealing complex signatures in another, although the nature of the third galaxy remained uncertain due to the possibility of emissions from star formation.

“Only through new ultra-sharp radio imaging with VLA at frequencies of 3, 10, and 15 GHz did we uncover compact radio cores aligned with all three optical galaxies, confirming that each hosts an AGN bright in radio emissions and likely fueling small-scale jets and outflows,” the researchers explained.

“The radio spectra of the three cores exhibited traits consistent with non-thermal synchrotron radiation from the AGNs, featuring two sources with typical steep spectra and a third with an even steeper spectrum potentially indicative of unresolved jet activity.”

Source: www.sci.news

A Major Volcanic Eruption Could Have Triggered the Black Death

A recent study suggests that volcanic eruptions from several years prior may have contributed to the devastating impact of the Black Death on medieval Europe’s population.

The researchers discovered that a period of abnormally cold summers in the mid-1340s, potentially linked to one significant volcanic eruption or several smaller ones, led to severe famines throughout the Mediterranean.

They argue that this chain reaction ultimately caused disease-carrying fleas to arrive at European ports, resulting in mortality rates of up to 60 percent.

“This is something I’ve wanted to understand for a long time,” stated Professor Ulf Büntgen, a paleoclimatologist from the Department of Geography at the University of Cambridge. “What were the origins and transmission factors of the Black Death, and how extraordinary were they?”

“Why did this event occur in this specific region, at this precise moment in European history? That is a fascinating question, yet one that requires collective insights to answer.”

Professor Ulf Büntgen takes tree-ring samples in the Pyrenees – Credit: Ulf Büntgen

Büntgen said that tree rings and ice cores, ancient ice layers that preserve chemical traces of historic volcanic eruptions, provided clues indicating that volcanic activity contributed to the extreme climatic conditions.

“If a particular year experiences unusual cold, heat, dryness, or wetness, we aim to uncover the reasons behind it,” Büntgen remarked to BBC Science Focus.

“Volcanoes emit substantial amounts of sulfur into the upper atmosphere, so we collaborate with ice core experts to gain insights into past eruptions.

“This can lead to subsequent cold summers, a phenomenon known as post-eruption cooling.”

This close-up image of tree rings shows the “blue rings” of 1345 and 1346, during the cold and wet summers – Credit: Ulf Büntgen

It was left to climate historian Dr. Martin Bauch from the Leibniz Institute for the History and Culture of Eastern Europe in Germany to correlate this climate data with historical events.

He found that the harsh cold resulted in significant famine across the Mediterranean, and the responses of the Italian republics of Venice, Genoa, and Pisa eventually facilitated the plague’s arrival in Europe.

“For over a century, these influential Italian city-states established extensive trade networks throughout the Mediterranean and Black Seas, employing an effective system to stave off starvation,” Bauch explained. “However, this ultimately contributed to even greater disasters.”

The fleas carrying the plague bacterium Y. pestis likely reached Mediterranean ports aboard these grain ships, transferring to rats, cats, and humans, and quickly propagating the disease across Europe, decimating its population.

The study concluded that volcanic activity initiated a sequence of events culminating in the plague throughout medieval Europe.

Büntgen noted that this narrative continues to resonate in today’s world, over seven centuries later.

“While the coincidental convergence of factors leading to the Black Death may be rare, the probability of zoonotic disease outbreaks and pandemics amidst climate change is likely to escalate in our interconnected world,” he explained.

“This is particularly crucial in light of our recent experiences with COVID-19.”

Source: www.sciencefocus.com

Coral Reefs Triggered Major Global Warming Events in Earth’s History

Corals construct their skeletons from calcium carbonate, releasing carbon dioxide as a byproduct.

Reinhard Dirscherl/Alamy

For the last 250 million years, coral reef systems have been crucial to the Earth’s climate, but perhaps not in the manner you might assume.

Coral reefs generate excess carbon dioxide because the formation of calcium carbonate, which constitutes coral skeletons, involves the release of greenhouse gases.

Certain plankton species utilize calcium carbonate to form their shells, and when these organisms perish, the mineral becomes buried on the ocean floor. In ecosystems dominated by coral, calcium and carbonate ions that typically nourish deep-sea plankton are rendered inaccessible.
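The calcification chemistry underlying this trade-off can be written out explicitly. Building a carbonate skeleton from dissolved calcium and bicarbonate releases one molecule of CO2 per unit of carbonate precipitated; this is the standard textbook reaction, stated here for clarity rather than taken from the study itself:

```latex
% Calcification: corals precipitate CaCO3 from seawater, releasing CO2
\mathrm{Ca^{2+} + 2\,HCO_3^- \longrightarrow CaCO_3 + CO_2 + H_2O}
```

The same reaction runs in plankton shell formation, but because those shells sink and are buried, the carbon they carry is removed from the surface ocean rather than cycled back, which is why the balance between reefs and plankton matters for atmospheric CO2.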

Tristan Salles and his team at the University of Sydney conducted a modeling study on the interactions among shallow corals and deep-sea plankton over the last 250 million years, incorporating reconstructions of plate tectonics, climate simulations, and variations in sediment contribution to the ocean.

They determined that tectonic activity and geographic features foster periods with extensive shallow continental shelves, which provide optimal conditions for reef-building corals, thereby disrupting the coral-plankton dynamics.

As the area covered by coral reefs diminishes, calcium and alkalinity accumulate in the ocean, enhancing plankton productivity and increasing the burial of carbonate in the deep ocean. This shift lowers CO2 concentrations and cools the climate.

The study revealed three significant disruptions in the carbon cycle over the past 250 million years. During these events, in the Mid-Triassic, Mid-Jurassic, and Late Cretaceous, extensive coral reefs locked up vast amounts of calcium and carbonate ions in their skeletons, releasing CO2 and driving notable ocean temperature increases.

Once the balance between shallow-sea corals and deep-sea plankton is disrupted, realignment can require hundreds of thousands to millions of years, noted Salles.

“Even if the system recovers from a significant crisis, achieving equilibrium will be a prolonged process, significantly extending beyond human timelines,” Salles elaborated.

On a brighter note, Salles observes that corals are adept at taking up excess nutrients to fuel reef building, even when plankton growth surges.

Currently, human-induced carbon dioxide emissions are driving unprecedented global warming and ocean acidification, endangering both corals and plankton, according to Salles. While the outcomes remain uncertain, the potential impact on ecosystems could be catastrophic.

“The feedback mechanisms we modeled span deep time and may not be relevant today. The current rate of change is too rapid for carbonate platform feedbacks to maintain similar significance.”

Alexander Skiles from the Australian National University in Canberra remarks that this research illustrates a “profoundly interconnected feedback cycle between ecosystems and climate.”

He suggested that while species are presumed to evolve and adapt to the climatic conditions dictated by “immutable physical and chemical processes,” it is increasingly evident that certain species are actively shaping the climate itself, leading to co-evolutionary feedback loops.

“Beyond corals, ancient microbial colonies like stromatolites have significantly influenced atmospheric carbon regulation,” Skiles pointed out.

“It is well recognized that carbon dioxide is warming the climate at an alarming rate. Corals have contributed to this dynamic over vast stretches of geological time, which may help explain fluctuations between warmer and cooler periods.”

Source: www.newscientist.com

How Major Tech Firms Are Cultivating Media Ecosystems to ‘Shape the Online Narrative’

The introduction to tech mogul Alex Karp’s interview on Sourcery, a YouTube show from the digital finance platform Brex, features a montage of him waving the American flag to a remix of AC/DC’s “Thunderstruck.” While strolling through the company’s offices, Karp deflected questions about Palantir’s contentious ties with ICE, focusing instead on the company’s strengths while playfully brandishing a sword and describing how he re-buried the remains of his childhood dog Rosita near his current residence.

“It’s really lovely,” comments host Molly O’Shea as she engages with Karp.

For those wanting insights from key figures in the tech sector, platforms like Sourcery provide a refuge for an industry that’s increasingly cautious, if not openly antagonistic, towards critical media. Some new media initiatives are driven by the companies themselves, while others occupy niches favored by the tech billionaire cohort. In recent months, prominent figures like Mark Zuckerberg, Elon Musk, Sam Altman, and Satya Nadella have participated in lengthy, friendly interviews, with companies like Palantir and Andreessen Horowitz launching their own media ventures this year.

A significant portion of Americans harbor distrust towards big tech and believe artificial intelligence is detrimental to society. Silicon Valley is crafting its own alternative media landscape, where CEOs, founders, and investors take center stage. What began as a handful of enthusiastic podcasters has evolved into a comprehensive ecosystem of publications and shows, supported by some of the leading entities in tech.

Pro-tech influencers, such as podcast host Lex Fridman, have historically fostered close ties with figures like Elon Musk, yet some companies this year opted to eliminate intermediaries entirely. In September, venture capital firm Andreessen Horowitz introduced the a16z blog on Substack, where general partner Katherine Boyle has highlighted her longstanding friendship with JD Vance. The firm’s podcast has surged to over 220,000 subscribers on YouTube and featured OpenAI CEO Sam Altman last month; Andreessen Horowitz is a leading investor in OpenAI.

“What if the future of media is shaped not by algorithms or traditional bodies, but by independent voices directly interacting with audiences?” the company posited in its Substack announcement. It previously invested $50 million in the digital media company BuzzFeed with a similar ambition; BuzzFeed’s stock ultimately fell to penny-stock levels.

The a16z Substack also revealed this month its new eight-week media fellowship aimed at “operators, creators, and storytellers shaping the future of media.” This initiative involves collaboration with a16z’s new media team, characterized as a collective of “online legends” aiming to furnish founders with the clout, flair, branding, expertise, and momentum essential for winning the online narrative.

In parallel to a16z’s media endeavors, Palantir launched a digital and print journal named Republic earlier this year, emulating the format of academic journals and think tank publications like Foreign Affairs. The journal is financially backed by the nonprofit Palantir Foundation for Defense Policy and International Affairs, headed by Karp, who reportedly devotes just 0.01 hours a week to the role, according to its 2023 tax return.

“Too many individuals who shouldn’t have a voice are amplified, while those who ought to be heard are sidelined,” remarked Republic, which boasts an editorial team comprised of high-ranking Palantir executives.

Among the articles featured in Republic is a piece criticizing U.S. copyright restrictions for hindering AI leadership, alongside another by two Palantir employees reiterating Karp’s affirmation that Silicon Valley’s collaboration with the military benefits society at large.

Republic joins a burgeoning roster of pro-tech outlets like Arena Magazine, launched late last year by Austin-based venture capitalist Max Meyer. Arena’s motto nods to “The New Needs Friends” line from Disney’s Ratatouille.

“Arena avoids covering ‘The News.’ Instead, we spotlight The New,” reads the editor’s letter in the inaugural issue. “Our mission is to uplift those incrementally, or at times rapidly, bringing the future into the present.”

This sentiment echoes that of founders who have taken issue with publications like Wired and TechCrunch for their overly critical perspectives on the industry.

“Historically, magazines that covered this sector have become excessively negative. We plan to counter that by adopting a bold and optimistic viewpoint,” Meyer stated during an appearance on Joe Lonsdale’s podcast.

Certain facets of emerging media in the tech realm weren’t established as formal corporate media extensions but rather emerged organically, even while sharing a similarly positive tone. The TBPN video podcast, which interprets the intricacies of the tech world as high-stakes spectacles akin to the NFL Draft, has gained swift influence since its inception last year. Its self-aware yet protective atmosphere has drawn notable fans and guests, including Meta CEO Mark Zuckerberg, who conducted an in-person interview to promote Meta’s smart glasses.

Another podcaster, 24-year-old Dwarkesh Patel, has built a mini-media empire in recent years with extensive collaborative discussions featuring tech leaders and AI researchers. Earlier this month, Patel interviewed Microsoft CEO Satya Nadella and toured one of the company’s newest data facilities.

Elon Musk has been a pioneer of this approach to pro-tech media engagement. Since his acquisition of Twitter in 2022, the platform has restricted links to major news outlets and set up auto-responses with poop emojis for reporter inquiries. Musk conducts few interviews with mainstream media yet engages in lengthy discussions with friendly hosts like Lex Fridman and Joe Rogan, facing minimal challenge to his viewpoints.

Musk’s inclination to cultivate a media bubble around himself illustrates how such content can foster a disconnect from reality and promote alternative facts. His long-standing criticism of Wikipedia spurred him to create Grokipedia, an AI-generated encyclopedia that produces blatant falsehoods and results aligned with his far-right perspective. Concurrently, Musk’s chatbot Grok has frequently echoed his opinions, even going to absurd lengths to flatter him, such as asserting last week that Musk is healthier than LeBron James and could defeat Mike Tyson in a boxing match.

The emergence of new technology-centric media is part of a broader transformation in how celebrities portray themselves and the access they grant journalists. The tech industry has a historical aversion to media scrutiny, a trend amplified by scandals like the Facebook Files, which unveiled internal documents and potential harms. Journalist Karen Hao exemplified the tech sector’s sensitivity to negative press, noting in her 2025 book “Empire of AI” that OpenAI refrained from engaging with her for three years after a critical article she wrote in 2019.

The strategy of tech firms establishing their own friendly and resonant media mirrors the entertainment sector’s approach from several years back. Press tours for film and album promotions have historically been tightly controlled, though actors and musicians now increasingly favor lighter formats such as “Hot Ones.” Political figures are adopting a similar playbook, which grants them access to fresh audiences and a more secure environment for self-promotion, as showcased by Donald Trump’s 2024 campaign appearing with podcasters like Theo Von, and California Governor Gavin Newsom launching his own political podcast this year.

While much of this emerging media does not aim to unveil misconduct or confront the powerful, it still holds certain merits. The content produced by the tech sector often reflects the self-image of its elite and the world they aspire to create, within an industry characterized by minimal government oversight and fewer probing inquiries into operational practices. Even the simplest of questions offer insights into the minds of individuals who primarily inhabit secured boardrooms and gated environments.

“If you were a cupcake, what kind would you be?” O’Shea asked Karp at one point.

“I prefer not to be a cupcake, as I don’t want to be consumed,” Karp replied. “I resist being a cupcake.”



Source: www.theguardian.com

Four Major Stages of Brain Development from Birth to Age 90

The wiring of our neurons evolves over the decades

Alexa Mousley, University of Cambridge

Our brain’s functionality isn’t static throughout our lives. We know that our capacity for learning and the risk of cognitive decline fluctuate from infancy to our 90s. Recently, scientists may have uncovered a possible reason for this change. The wiring of our brains seems to experience four key turning points at ages 9, 32, 66, and 83.

Previous studies indicate that our bodies undergo three rapid aging cycles around the ages of 40, 60, and 80. However, the complexity of the brain complicates our understanding.

The brain consists of distinct regions that communicate through white matter tracts. These tracts are wire-like structures formed by long, slender projections known as axons, which extend from neurons. These connections significantly influence cognitive functions, including memory. However, it was unclear whether this wiring changes substantially over a person’s lifetime. “No one has combined multiple metrics to characterize stages of brain wiring,” says Alexa Mousley at the University of Cambridge.

In an effort to bridge this knowledge gap, Mousley and her team examined MRI scans of roughly 3,800 people from the UK and US, primarily white, ranging from newborns to 90-year-olds. The scans had previously been gathered as part of various brain-imaging initiatives, most of which excluded people with neurodegenerative diseases or mental health conditions.

The researchers discovered that brain wiring typically progresses through five significant stages over the first 90 years of life, separated by four primary turning points.
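The five stages and four turning points reported above can be sketched as a simple age lookup. A minimal sketch in Python; the breakpoint ages come from the study as described here, but the function and stage labels are purely illustrative:

```python
import bisect

# Turning points reported in the study: ages 9, 32, 66 and 83
TURNING_POINTS = [9, 32, 66, 83]
STAGES = [
    "childhood (birth-9): connections grow longer and less efficient",
    "adolescence/young adulthood (9-32): wiring becomes more efficient",
    "adulthood (32-66): slower change, gradually declining efficiency",
    "early ageing (66-83): within-region links outlast between-region links",
    "late ageing (83-90): weaker links, greater reliance on hub regions",
]

def wiring_stage(age: float) -> str:
    """Return the brain-wiring stage a given age falls into."""
    return STAGES[bisect.bisect_right(TURNING_POINTS, age)]
```

For example, `wiring_stage(40)` lands in the long adulthood phase described below.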

In the initial stage, from birth to age nine, the white matter tracts between brain areas seem to become longer, more intricate, and less efficient. “It takes time for information to travel between regions,” explains Mousley.

This may be due to the abundance of connections in our brains as young children. As we age and gain experience, we gradually prune unused connections. Mousley notes that at this stage the brain prioritizes making broad connections, beneficial for activities like piano practice, at the expense of efficiency.

However, during the second stage, from ages 9 to 32, this trend appears to reverse, potentially driven by the onset of puberty and the hormonal shifts that shape brain development. “Suddenly, your brain’s connections become more efficient. Connections become shorter, allowing information to travel more swiftly,” says Mousley. This could enhance skills such as planning and decision-making, along with cognitive abilities like working memory.

The third stage, which spans from age 32 to 66, is the longest phase. “During this stage, the brain continues to change, albeit at a slower rate,” Mousley explains. Specifically, she notes that connections between regions tend to become less efficient over time. “It’s unclear what exactly triggers this change; however, the 30s often involve significant lifestyle alterations, like starting a family, which may play a role,” she adds. This inefficiency might also stem from general physical wear and tear, as noted by Katia Rubia from King’s College London.

From ages 66 to 83, the connections between neurons in the same brain area tend to remain more stable than those between different regions. “This is noteworthy, especially as the risk of developing conditions like dementia increases during this period,” Mousley remarks.

In the final stage, from ages 83 to 90, connections between brain regions weaken and rely more frequently on “hubs” that link multiple areas. “This indicates that there are fewer resources available to maintain connections at this age, leading the brain to depend on specific areas to serve as hubs,” Mousley explains.

Understanding these alterations in the brain could provide insights into why mental health issues arise, typically before the age of 25, and why individuals over 65 are particularly vulnerable to dementia, she states.

“It’s vital to comprehend the normal stages of structural changes in the brain throughout the human lifespan, so future research can explore deviations that occur in mental health and neurodegenerative disorders,” Rubia notes. “Grasping the causes of these deviations can assist us in pinpointing treatment strategies. For instance, we might examine which environmental factors or chemicals are responsible for these differences and discover methods to counteract them through treatments, policies, and medications.”

Nevertheless, Rubia emphasizes the need for further research to determine whether these findings apply to a more ethnically and geographically diverse population.


Source: www.newscientist.com

British Firms Poised to Seize a Major Share of the AI Chip Market

The UK holds a unique and advantageous position to contribute significantly in the new era of artificial intelligence, provided it seizes the chance to establish the production of millions of computer chips, an area that is often misunderstood.

AI technology demands a vast quantity of chips, and a collaborative national initiative could fulfill up to 5% of the global requirement.

Our legacy in chip design is unparalleled, beginning with the first general-purpose electronic computer, the initial electronic memory, and the first parallel computing system. Presently, Arm, based in Cambridge, is a prominent player that designs over 90% of the chips found in smartphones and tablets worldwide.

Given this background, it is certainly plausible that British companies can capture a notable share of the AI chip market. A target of 5% is both conservative and achievable. Our distinguished universities, a flourishing foundational AI company like DeepMind, and a strong innovation ecosystem equip the UK with the tangible resources necessary to compete.

The potential gains are tremendous. The global market for AI chips is expected to soar to $700 billion (£620 billion) annually by 2033, surpassing the entire current semiconductor market. Achieving that 5% share would translate to an influx of $35 billion in new revenue and the creation of thousands of high-paying jobs.

AI is set to transform not only the economy but also societal structures and security. Unfortunately, many do not grasp where its true value and strategic influence lie.

In this contemporary gold rush, real wealth goes not only to those mining digital gold but also to those who supply the tools. I witnessed this firsthand at Intel in California from 1997 to 2006, where founders Gordon Moore and Andy Grove laid the groundwork for the first technology revolution, much as Nvidia is doing today on an even larger scale.

UK engineers, intellects, businesses, and investors excel in this domain. However, government collaboration is crucial.

While consumers are captivated by the generative marvels of OpenAI, the true market winner is Nvidia, the entity that provides the advanced chips facilitating such achievements. OpenAI’s estimated value stands at merely 1/10th that of Nvidia. AMD, a semiconductor design company, holds a distant second place, while emerging firms like Cerebras and Tenstorrent strive for a share of the market.

All AI models and applications, ranging from autonomous robots to real-time translation services, depend heavily on advancements in chip technology. Chips are the new oil of the digital economy, dictating the speed and efficiency with which future applications can be developed. Currently, the only major players in the AI field seeing true profitability are chip manufacturers.

Concerns have arisen that China may commoditize AI chips similar to its approach with solar technology, leading to dramatic price fluctuations and undercutting existing companies. The situation is more complex. U.S. export controls will restrict China’s access to advanced chip manufacturing technology for the next decade, significantly curtailing its capacity to dominate the high-end AI chip arena. This reality positions the U.S. as a key player and creates a substantial opportunity for its closest ally, the UK, which excels in chip design.

The UK has already birthed several companies in this sector, such as Fractile, Flux, and Oriole. However, we lack the necessary scale to capitalize on the opportunity. Instead of competing with Nvidia in data center computing, we should focus on specialized applications that usher in innovation, like robotics, factory automation, medical devices, and autonomous vehicles.

These domains offer ample opportunities for inventive architectures and new competition.


Too frequently, Britain’s industrial strategy is impeded by national insecurity and a lack of confidence. This must change. Primarily, governments must advocate decisively for our intent to excel in AI chips.

Secondly, we should aim to double our chip design workforce from the current 12,000 within a decade and encourage more talented individuals to pursue electrical engineering and computer science through generous scholarships. A target of 1,500 new students each year is achievable. Universities must offer relevant courses, and governments need to enhance financial support.

Thirdly, the UK should fully utilize its investment instruments: the Sovereign AI Fund, the British Business Bank, the National Wealth Fund, and the Ministry of Defence’s initiatives to ‘buy British’.

Fourthly, the UK-US strategic partnership must serve as a foundation for greater collaboration with leading US chip manufacturers and facilitate access to their state-of-the-art sub-3 nanometer manufacturing technologies. Collaborating with our U.S. partners to develop a robust supply chain and innovation pipeline is essential.

If the UK commits fully, the emerging age of AI could be characterized not only by code but also by silicon, leaving a distinctly British legacy.

Source: www.theguardian.com

Amazon Sees Biggest Cloud Growth Since 2022 Following Major Outage

Amazon has reported its financial results for the first time since its cloud computing unit suffered a significant failure that disrupted services ranging from smart beds to banks.

Despite the global outage, Amazon Web Services (AWS) continues to thrive, reporting 20% year-over-year revenue growth for the quarter. Wall Street analysts had forecast AWS net revenue of $32.42 billion for the third quarter; Amazon reported $33 billion.

“AWS is growing at a rate not seen since 2022,” CEO Andy Jassy mentioned in a statement during the earnings call.


Following the third-quarter earnings report that exceeded analysts’ forecasts, the company’s stock surged by approximately 9% in after-hours trading.

The earnings announcement underscored Amazon’s ambition to compete more effectively with corporations that have successfully capitalized on the AI boom. Amazon’s stock performance has trailed behind some major tech competitors, and its e-commerce operations are particularly vulnerable to the far-reaching and unpredictable tariff policies of the Trump administration compared to companies driven by software.

Valued at roughly $2.4 trillion, Amazon significantly outperformed Wall Street’s expectations, largely due to the expansion of its cloud computing services. Analysts had anticipated earnings of $1.58 per share on net sales of $177.82 billion, whereas Amazon announced sales of $180.17 billion and earnings of $1.95 per share.

AWS is facing mounting rivalry from alternative providers like Google Cloud and Microsoft Azure, the latter of which has established a partnership with OpenAI and reported robust growth in its cloud segment, boosting its stock prices.

Nevertheless, AWS remains a crucial component of the modern Internet, and the extent of its influence was inadvertently highlighted earlier this month when a glitch in its cloud services rendered websites, apps, cutting-edge products, and critical communication systems, including electronic health records, inoperable. The outage affected millions and lasted several hours, revealing how integral Amazon’s services are to everyday life.

During the earnings call, Amazon executives promoted the integration of AI tools like shopping assistant Rufus into its services. They also discussed Zoox’s plans to expand its robotaxi business, with self-driving service trials scheduled to commence in Washington, D.C., later this year.

Earlier this week, Amazon announced plans to cut 14,000 jobs at its headquarters, with more layoffs anticipated across the organization. This decision was publicly communicated through a blog post titled “Staying Agile and Continuing to Strengthen Our Organization,” which cited advancements in AI as a key reason, stating that the company aims to “function like the world’s largest startup.”

“We must remember that the world is rapidly evolving,” the Amazon post noted. “This generation of AI represents the most transformative technology since the Internet, allowing businesses to innovate unprecedentedly faster.”


Jassy indicated in a blog post earlier this year that the company’s investments in AI would lead to a “reduction in personnel for some roles currently held.”

However, during a conference call with investors, Jassy clarified that the significant layoffs were not driven by AI, asserting that they stemmed from “culture” and that the company is focusing on a more flexible, startup-like approach.

“The announcement we made a few days ago wasn’t purely financial and hasn’t been so far—it’s not primarily AI-driven either. It’s fundamentally about our culture,” Jassy stated.



Source: www.theguardian.com

Microsoft Posts Strong Earnings Despite Major Azure Outage

On Wednesday, Microsoft addressed worries regarding excessive spending on AI, showcasing increased profits despite interruptions in its Azure cloud services and 365 office software. This strong earnings report follows a deal with OpenAI that raised the tech leader’s valuation to over $4 trillion.

Following disruptions to both the Xbox and Investor Relations pages, Microsoft issued a statement, noting, “We are actively resolving an issue affecting Azure Front Door, impacting the availability of certain services.”

Despite the service interruption, the company’s financial outlook remained robust. Microsoft reported first-quarter earnings of $3.72 per share, surpassing analysts’ expectations of $3.68, with revenue reaching $77.7 billion against an estimate of $75.5 billion, as per Bloomberg consensus.

This marks an increase from $3.30 per share and $65.6 billion in sales during the same period last year.

Microsoft’s closely watched Azure cloud division grew approximately 40%, exceeding forecasts. Operating income rose 24% to $38 billion, also surpassing expectations, with net income reported at $27.7 billion.

“Our global cloud and AI factory, together with Copilots across high-value sectors, is driving widespread adoption and tangible impact,” stated Satya Nadella, Microsoft chairman and CEO.

“This is why we are continuously enhancing our investments in AI, in both capital and talent, to seize significant future opportunities.”

The company revealed spending a remarkable $34.9 billion on AI initiatives during the quarter, a 74% increase from the previous year.

Microsoft’s earnings report arrives as investors are responding positively to modifications in its contract with OpenAI. This shift will transition the once nonprofit AI organization into a for-profit entity, further integrating Microsoft with the company.

Under the amended agreement, Microsoft will hold roughly 27% of OpenAI Group PBC, a stake worth approximately $135 billion, while OpenAI’s nonprofit arm will hold about $130 billion in stock of the for-profit enterprise.

The earnings report offers Wall Street an updated perspective on the company’s growth in AI and cloud services. Nvidia recently became the first company to surpass a $5 trillion market capitalization, coinciding with favorable signs for a U.S.-China trade agreement. Earlier this week, the overall U.S. stock market achieved record levels, spurred by substantial investments in AI.

Microsoft’s earnings hit the headlines as the week unfolds with reports from the Magnificent Seven, the group of the world’s most valuable publicly traded companies that includes Meta and Alphabet, Google’s parent company.

Amid growing apprehension that AI-related investment is inflating a market bubble reminiscent of the overinvestment of the late 1990s, some observers note that bubbles tend not to be apparent until they burst.


On the earnings call, Microsoft CFO Amy Hood attempted to ease concerns regarding a potential AI investment bubble, stating that the company’s rapid expansion of AI capabilities (up 80% this year alongside a plan to double its data center size in two years) is to fulfill already booked demand.

“The necessity for ongoing infrastructure development is extremely high, driven by business already booked, not new business,” Hood explained, noting that the company had been experiencing capacity shortages for several quarters.

“I hoped to catch up, but it didn’t happen,” Hood remarked. “Demand is escalating, and usage is growing quickly. When demand signs are visible and you know you’re lagging, spending is essential. But we’re investing with assurance based on our usage patterns and reservations, and we feel positive about that.”

Nonetheless, she cautioned that Microsoft is likely to remain “capacity constrained.”

According to Reuters, the collective valuation of AI and cloud computing firms is projected to hit $20 trillion, with the overall market return reaching 18%, or around $3.3 trillion, by 2025. Investors typically look for signs that AI capital expenditures meet expectations as the market continues to hit new highs.

Major tech firms like Microsoft, Alphabet, Meta, and Amazon are anticipated to invest hundreds of billions in capital next year, primarily directed at developing data centers and infrastructure for artificial intelligence. While investors might be unfazed by a lack of robust revenue growth, they may find reassurance in indicators of strong AI adoption. The Dow Jones Industrial Average reached a notable milestone of 47,943 points on Wednesday morning.

“As five of the Magnificent Seven report this week, the market is eager for affirmation that all these AI capital investments are being made and that they are ensuring observable revenue and profit from AI,” commented Scott Wren, senior global market strategist at Wells Fargo Investment Institute in St. Louis, Missouri, to Reuters this week.

Elements of the AI economic surge might stem from cost-saving measures. Microsoft announced approximately 9,000 job reductions at the start of summer, while Amazon is reportedly considering cutting up to 30,000 corporate positions, or 10% of its white-collar workforce, to mitigate overhiring during peak pandemic demand.

As AI technology adoption increases, business leaders are increasingly tasked with justifying human hires, including roles in human resources and other executive positions that entail additional costs like health insurance and pensions, particularly when positions could be executed by AI. Consequently, human resources departments are likely to be among the initial areas downsized as AI continues to grow.

Source: www.theguardian.com

Hurricane Melissa Signals a Concerning New Norm for Major Hurricanes

Hurricane Melissa, which has recently impacted both Jamaica and Cuba, has become emblematic of the increasing frequency and intensity of major storms in a warming world.

Historically rare devastating storms characterized by extreme winds and heavy rainfall are now becoming more frequent, a trend accelerated by climate change. This shift is revealing intriguing patterns in the behavior and timing of these formidable hurricanes.

Before making landfall in Jamaica as a powerful Category 5 storm, Melissa, like other hurricanes of the past decade, gained exceptional strength over warmer waters. Its rapid intensification made it the dominant force of the current Atlantic season, tying it as the strongest landfalling hurricane on record in the Atlantic.

After impacting Jamaica, the storm slowed, drawing out its rainfall, another indication of how climate change influences hurricane behavior. Notably, Melissa came late in the season: hurricane activity typically peaks in early September, but this year it persisted into the fall, when ocean temperatures remain elevated.

Experts suggest that these patterns signify a new normal for hurricanes, with Melissa emblematic of the change.

“This storm differs significantly from those observed in previous decades,” stated Shel Winkley, a meteorologist affiliated with the Climate Central research group.

This is a critical change that meteorologists and officials in hurricane-prone areas are vigilantly observing.

Intensified all at once

One of the most striking features of Melissa is its extraordinary rate of intensification. In a mere 18 hours, it escalated from a tropical storm to a Category 4 on Sunday, achieving Category 5 status early Monday morning.

Climate change is heightening the likelihood of such “rapid intensification,” defined by the National Hurricane Center as an increase in wind speeds of 35 miles per hour or more within a 24-hour timeframe.
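That NHC threshold, a rise of 35 mph or more within 24 hours, lends itself to a small check. A minimal sketch, assuming an hourly series of sustained-wind readings; the function name and sample data are illustrative, not from the NHC:

```python
def rapidly_intensified(winds_mph, threshold=35, window_hours=24):
    """Return True if any 24-hour window shows a wind-speed increase
    of at least `threshold` mph (the NHC definition of rapid
    intensification). `winds_mph` is a list of hourly readings."""
    for i, start in enumerate(winds_mph):
        # Compare each reading with the peak over the following 24 hours
        window = winds_mph[i + 1 : i + 1 + window_hours]
        if window and max(window) - start >= threshold:
            return True
    return False

# Melissa went from a tropical storm (~70 mph) to Category 4
# (~130 mph) in roughly 18 hours, well past the threshold.
```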

In Melissa’s case, Winkley noted that notably warm sea surface temperatures in the Caribbean, coupled with elevated atmospheric moisture, triggered “extremely rapid intensification.”

“We’ve become adept at predicting significant increases in hurricane intensity, but Melissa surpassed even our most optimistic forecasts regarding wind speeds,” he explained.

Winkley added that the storm traversed Caribbean waters 2.5 degrees Fahrenheit above average, conditions that climate change made up to 700 times more likely.

“While 2.5 degrees Fahrenheit might seem minor, such small variations can noticeably impact storm behavior,” Winkley stated.

A number of recent hurricanes have exhibited rapid intensification. For instance, Hurricane Milton’s wind speeds surged by 90 miles per hour in roughly 25 hours, and Hurricane Ian in 2022 experienced rapid strengthening prior to making landfall in Florida. Similar patterns were observed in Hurricanes Idalia in 2023, Ida in 2021, and Harvey in 2017.

Fewer hurricanes, but greater impact

Over the past 35 years, the annual incidence of hurricanes and tropical cyclones has decreased.

“Our research indicates that the number of hurricanes, including typhoons, around the globe has significantly dropped since 1990,” remarked Phil Klotzbach, a hurricane researcher at Colorado State University.

However, this overall decline is largely attributed to a reduction in Pacific cyclone activity, Klotzbach noted. In contrast, Atlantic hurricane activity has seen an increase primarily due to a long-term La Niña effect, which tends to weaken the upper-level winds that inhibit hurricane formation.

“If you enjoy hurricanes, La Niña is beneficial for the Atlantic,” Klotzbach said.

Hurricane Melissa on October 27, 2025. NOAA

If a hurricane forms, it is increasingly likely to develop into a significant storm due to rising ocean temperatures.

“We’ve observed a rise in the frequency of hurricanes reaching categories 4 and 5,” Klotzbach noted.

Melissa was the third Category 5 hurricane to form this year, the first time in two decades that three or more such hurricanes have occurred in a single season.

Zachary Handlos, an atmospheric scientist at the Georgia Institute of Technology, explained that warmer oceans will likely contribute to increased hurricane activity moving forward; however, atmospheric changes may alter upper-level winds, potentially hindering some storms. “It’s not a straightforward answer,” he added.

The ongoing evolution of these trends is a subject of active research and scientific inquiry.

Hurricane season gets longer

Experts also note that this season’s strongest hurricane struck just days before Halloween.

“At this point, we are quite late in the season, and typically things should be easing,” remarked Derrick Herndon, a researcher at the University of Wisconsin’s Tropical Cyclone Research Group.

While the Caribbean has always been known for powerful late-season hurricanes, Klotzbach indicated that the likelihood is increasing. He recently submitted a peer-reviewed study examining how the timing of hurricane seasons is shifting.

Workers, community members, and business owners clean up debris after Hurricane Helene on September 30, 2024, in Marshall, North Carolina. Jabin Botsford/The Washington Post via Getty Images

Klotzbach noted that the pattern of fall hurricanes is influenced by a long-term swing toward a La Niña pattern, likely a result of both climate change and natural variability.

La Niña diminishes upper-altitude winds while Caribbean waters remain warm, facilitating storm formation into late October and early November. “The odds are stacked for a powerful hurricane,” he said.

Hurricane Melissa further complicated matters with warmer-than-usual ocean waters off Jamaica’s southern coastline.

“If we anticipate a particularly strong Atlantic hurricane, it is likely to develop in this region,” Herndon stated.

In previous years, such storms would generally churn up cooler water from the depths, limiting their growth. But with ocean heat surging both at the surface and down to depths of 60 meters, Melissa was able to tap into extra heat and energy, according to Andy Hazelton, a hurricane modeler and associate scientist at the University of Miami’s Cooperative Institute for Marine and Atmospheric Studies.

Storms are stalling

Research indicates that hurricanes are more prone to stalling just before or after making landfall, resulting in significant rainfall. This conclusion has been supported by a study published last year. Other research suggests that the overall forward speed of storms has decreased, but this remains a topic of debate.

Residents of Guanímar, a coastal town in Cuba southwest of Havana, navigate flooded streets after Hurricane Helene in 2024. Yamil Lage/AFP via Getty Images

Following this pattern, Hurricane Melissa gained strength before stalling offshore from Jamaica. On Tuesday morning, the day of its initial landfall, the storm was traveling at a mere 2 miles per hour. Forecasters anticipated up to 30 inches of rain in some areas of Jamaica, surpassing one-third of the yearly average.

The scientific community remains divided regarding why certain storms slow down, though some hypothesize that climate change may be weakening atmospheric circulation patterns.

Hurricane Harvey in 2017 vividly illustrated the consequences of such stalls, as the storm lingered over Houston, leading to rainfall of nearly 5 feet in some locations. This phenomenon is especially concerning as a warmer atmosphere can retain and release more moisture.

“For every degree Fahrenheit that the environment warms, the atmosphere can contain 4% additional moisture,” Winkley stated. “Rising ocean temperatures amplify not only the strength of hurricanes but also enable greater evaporation, resulting in more moisture available for these storms to absorb and then release.”
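Winkley’s rule of thumb can be turned into a rough worked example. Assuming the roughly 4% per degree Fahrenheit compounds per degree, and applying it to the 2.5 degree Fahrenheit Caribbean anomaly cited earlier (the function itself is illustrative):

```python
# ~4% more water vapour per 1 degree F of warming, as quoted above
RATE_PER_DEG_F = 0.04

def extra_moisture(delta_f: float) -> float:
    """Fractional increase in atmospheric moisture capacity for a
    warming of `delta_f` degrees Fahrenheit, compounding per degree."""
    return (1 + RATE_PER_DEG_F) ** delta_f - 1
```

Under those assumptions, a 2.5 degree Fahrenheit anomaly corresponds to roughly 10% more moisture available to a storm.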

Source: www.nbcnews.com

Major Revelation: Amazon Web Services Outage Highlights UK Government’s £1.7 Billion Reliance on Tech Giant

Amazon’s CEO, Andy Jassy, wore a broad smile as he met Keir Starmer in the gardens of Downing Street this past June to announce a £40bn investment in the UK. Starmer was equally enthusiastic, remarking: “This transaction demonstrates that our transformation strategy to attract investment, stimulate growth, and enhance people’s financial well-being is succeeding.”

However, just four months later, the company suffered a massive global outage on Monday that halted thousands of businesses and underscored how widely they depend on Amazon Web Services (AWS), the cloud computing platform also used by the British government.

Data gathered for the Guardian indicates that the UK government is increasingly dependent on the services of U.S. tech giants. These companies have come under fire from trade unions and politicians for their working conditions in logistics and online retail.

Since 2016, AWS has secured 189 contracts with the UK government valued at £1.7bn and has billed approximately £1.4bn over that period, according to data from the public procurement intelligence firm Tussell.

The research group reported: “Currently, 35 public sector authorities utilize AWS services across 41 contracts totaling £1.1bn. The primary ministries involved include the Home Office, DWP, HMRC, the Ministry of Justice, the Cabinet Office, and Defra.”

Screenshot of the out-of-service HMRC website on Monday, October 20th. Photo: HMRC.gov.uk/PA

Tim Wright, a technology partner at the law firm Fladgate, noted that the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) have consistently warned about the risks of concentrating cloud services for regulated enterprises.

“Recent efforts by the Treasury, the PRA, and the FCA to impose direct oversight on ‘critical third parties’ aim to mitigate the risk of outages like the one AWS suffered,” he said. “However, until we see substantial diversification and the establishment of sovereign clouds, the UK government’s approach contradicts the resilience principles that regulators advocate.”

The House of Commons Treasury committee has written to the economic secretary to the Treasury, Lucy Rigby, to ask why Amazon was not classified as a “critical third party” within the UK financial services sector, a designation that would have subjected the tech giant to regulatory scrutiny.


Committee Chair Meg Hillier noted that Amazon recently informed the committee that its financial services clients rely on AWS for “resilience” and that AWS offers “layers of protection.”

This week’s outage impacted over 2,000 businesses around the globe, leading to 8.1 million reports of issues, with 1.9 million in the U.S., 1 million in the UK, and 418,000 in Australia, according to internet outage tracker Downdetector.

Only HMRC confirmed it was affected by the outage, stating customers were “experiencing difficulties accessing our online services” and recommended they call back later due to busy phone lines.

While many websites restored their services after a few hours, some continued to experience problems throughout the day. By Monday evening, Amazon announced that all cloud services had “returned to normal operations.”

Trade unions have long questioned whether Amazon should be excluded from government contracts because of its reputation for subpar working conditions in its large warehouses.

Andy Prendergast, national secretary of the GMB union, said: “Amazon has a dismal record on fair treatment of workers. Shocking conditions in their warehouses have resulted in emergency ambulance call-outs, with employees claiming they are treated like robots and forced to work to exhaustion, all on poverty wages that drove them to strike for six months.”

“In this context, wasting nearly £2 billion of public funds is deplorable.”

AWS has not provided a comment. A spokesperson from Amazon’s fulfillment centers stated that the “vast majority” of ambulance calls at their facilities are not “work-related.”

Source: www.theguardian.com

Major Direct Action on Actor Image Use in AI Content Poses Fairness Concerns

The performing arts union Equity has issued a warning of significant direct action against tech and entertainment firms regarding the unauthorized use of its members’ likenesses, images, and voices in AI-generated content.

This alert arises as more members express concerns over copyright violations and the inappropriate use of personal data within AI materials.

General Secretary Paul W. Fleming stated that the union intends to organize mass data requests, compelling companies to reveal whether they have utilized members’ data for AI-generated content without obtaining proper consent.

Recently, the union declared its support for a Scottish actor who alleges that her likeness contributed to the creation of Tilly Norwood, an “AI actor” criticized by the film industry.

Bryony Monroe, 28, from East Renfrewshire, believes her image was used to create a digital character by the AI “talent studio” Xicoia, though Xicoia has denied her claims.

Most complaints received by Equity relate to AI-generated voice replicas.

Mr. Fleming mentioned that the union is already assisting members in making subject access requests against producers and tech firms that fail to provide satisfactory explanations about the sources of data used for AI content creation.

He noted, “Companies are beginning to engage in very aggressive discussions about compensation and usage. The industry must exercise caution, as this is far from over.”

“AI companies must recognize that we will be submitting access requests en masse. They have a legal obligation to respond. If a member reasonably suspects their data is being utilized without permission, we aim to uncover that.”

Fleming expressed hope that this strategy will pressure tech companies and producers resisting transparency to reach an agreement on performers’ rights.

“Our goal is to leverage individual rights to stop technology companies and producers from stonewalling on collective rights,” Fleming explained.

He emphasized that with 50,000 members, a significant number of requests for access would complicate matters for companies unwilling to negotiate.

Under data protection laws, individuals have the right to request all information held about them by an organization, which typically responds within a month.

“This isn’t a perfect solution,” Fleming added. “It’s no simple task, since they might source data from elsewhere. Many of these companies are behaving recklessly and unethically.”

Ms. Monroe believes that Norwood not only mimics her image but also her mannerisms.

Monroe remarked: “I have a distinct way of moving my head while acting. I recognized it in the closing seconds of Tilly’s showreel, where she mirrored exactly that. Others observed: ‘That’s your mannerism. That’s your acting style.’”

Liam Budd, director of recorded media industries at Equity UK, confirmed that the union takes Ms. Monroe’s concerns seriously. Particle 6, the AI production company behind Xicoia, said it is collaborating with unions to address any concerns raised.

A spokesperson from Particle 6 stated: “Bryony Monroe’s likeness, image, voice, and personal data were not utilized in any way to create Tilly Norwood.”

“Tilly was developed entirely from original creative designs. We do not, and will not, use performers’ likenesses without their explicit consent and proper compensation.”

Budd refrained from commenting on Monroe’s allegations but said, “Our members increasingly report specific infringements concerning their image or voice being used without consent to produce content that resembles them.”

“This practice is particularly prevalent in audio, as creating a digital audio replica requires less effort.”

However, Budd acknowledged that Norwood presents a new challenge for the industry, as “we have yet to encounter a fully synthetic actor before.”

Equity UK has been negotiating with Pact (the Producers Alliance for Cinema and Television), the UK production industry body, over AI, copyright, and data protection for more than a year.

Fleming mentioned, “Executives are not questioning where their data originates. They privately concede that employing AI ethically is nearly impossible, as they are collecting and training on data with dubious provenance.”

“Yet, we frequently discover that it is being utilized entirely outside established copyright and data protection frameworks.”

Max Rumney, deputy chief executive of Pact, highlighted that its members must adopt AI technology in production or risk falling behind companies without collective agreements that ensure fair compensation for actors, writers, and other creators.

However, he noted a lack of transparency from tech firms regarding the content and data used for training the foundational models of AI tools like image generators.

“The foundation models were trained on our members’ films and programming without their consent,” Rumney stated.

“Our members favor genuine human creativity in their films and shows, valuing this aspect as the hallmark of British productions, making them unique and innovative.”

Source: www.theguardian.com

Just 1% of the Global Population Follows Healthy and Sustainable Eating Habits, Major Report Reveals


Recent global assessments of the food system reveal that fewer than 1% of individuals consume diets beneficial to both the planet and human health.

Nevertheless, adopting a healthier dietary approach could prevent up to 15 million premature deaths annually and could decrease global greenhouse gas emissions by as much as 20%.

The findings come from the 2025 report of the EAT-Lancet Commission, which consolidates insights from nutritionists, climate experts, economists, physicians, social scientists, and agricultural scholars from over 35 countries.

The research team evaluated the effects of current food systems on human health and the environment, concluding that food production poses risks to five crucial Earth systems that are essential for human survival.

These five critical threats include climate change, land degradation, water scarcity, nitrogen and phosphorus pollution, and human-induced contaminants like pesticides and microplastics.

However, transforming the food system to ensure healthy diets for everyone could restore these systems to a safe state and enhance human well-being.

“If everyone adopted a healthy diet, by 2050 nearly 10 billion people could be fed on 7% less land than is currently used,” stated Dr. Fabrice Declerck, EAT’s chief science officer, in an interview with BBC Science Focus. “That has never happened in the history of food production: using fewer resources to feed more people.”

Justice was a significant aspect of the report, emphasizing the need for equitable wages for food workers and fairer access to food resources – Credit: Anuchasiribisanwan via Getty

Scientists have estimated that 6.9 billion individuals consume excessive amounts of food, particularly meat, dairy, sugar, and ultra-processed items, while 3.7 billion struggle to find access to nutritious food.

As a result, the report advocates adherence to a planetary health diet (PHD), which emphasizes fruits, vegetables, nuts, legumes, and whole grains.

In the PHD, half of your plate should consist of vegetables, fruits, and nuts, while 30% should be dedicated to whole grains. The remaining portion should be a protein source, with a focus on legumes such as beans and lentils.

Meat, fish, and dairy are optional within the PHD framework, with set limits, but the diet allows flexibility: one can stay within the guidelines while eating up to 200g of beef a week, for instance.
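The plate split and the beef guideline above can be expressed as a small sanity check. This is an illustrative sketch using only the figures quoted in the article; the variable and function names are my own, not the report’s.

```python
# Plate split for the planetary health diet as summarized above
# (assumption: treating the article's percentages as exact shares).
PLATE = {
    "vegetables_fruits_nuts": 0.50,  # half the plate
    "whole_grains": 0.30,
    "protein_sources": 0.20,         # the remainder, ideally legumes
}

# Upper bound on weekly beef intake mentioned in the article.
WEEKLY_BEEF_LIMIT_G = 200

def within_beef_guideline(grams_per_week: float) -> bool:
    """Return True if a weekly beef intake stays within the 200g guideline."""
    return grams_per_week <= WEEKLY_BEEF_LIMIT_G

# The shares should cover the whole plate.
assert abs(sum(PLATE.values()) - 1.0) < 1e-9
```

The point of the check is simply that the three categories account for the entire plate, with protein as the balancing share.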

Declerck notes that the diet is adaptable to individual tastes, encouraging people to incorporate their cultural preferences.

“In fact, I believe traditional diets often more accurately reflect health,” he mentioned.

The planetary health diet aims to enhance human health while also benefiting the environment, the report states – Credit: Carl Hendon

Currently, only 1% of people meet the report’s dietary recommendations. Declerck said scientists cannot yet pinpoint exactly where these individuals live, given the wide variation between countries.

“But these individuals reside in societies where they can access healthy diets and earn a livable wage,” he added.

Declerck further remarked that the best examples of healthy eating are often found in middle-income countries, particularly within the Mediterranean basin, the Indian subcontinent, and Southeast Asia.

For middle-income nations, the challenge lies in avoiding a shift toward a Western diet while maintaining cultural dietary traditions.

Amidst concerns regarding the climate crisis, Declerck stated that the report presents a “surprising” opportunity to enhance both human health and environmental well-being simultaneously.

“We encourage individuals to consume a wider variety of foods, celebrate their own cultural contributions, explore diverse culinary traditions, and enjoy the richness of food diversity,” he asserted. “This is beneficial not only for your personal health but also contributes significantly to the health of our planet as a whole,” Declerck concluded.

The report’s co-author Professor Johan Rockström, co-chair of the commission and director of the Potsdam Institute for Climate Impact Research, stated: “The evidence is irrefutable. Transforming the food system is not only feasible, it is crucial for ensuring a safe, fair, and sustainable future for all.”

Justice formed another key component of the report, highlighting the fact that the wealthiest 30% of the population accounts for over 70% of food-related environmental impacts.

“Those of us who overconsume are, in effect, blocking others’ right to a safe environment, and must take action,” the report emphasized.

The findings call for immediate measures to reform the global food system for the benefit of human health, justice, and environmental sustainability.


Source: www.sciencefocus.com

How “Beauty Factory” Addresses Two Major Cosmological Mysteries

“B-mesons assist us in unraveling significant cosmic queries. Why is there a predominance of matter over antimatter?”

sakkmesterke/alamy

Did you know that in the realm of physics, there are facilities dubbed beauty factories? This term doesn’t refer to aesthetics; rather, it describes an experiment where electrons collide with their antimatter equivalents, positrons, to create B-mesons.

B-mesons are constructed from quarks, the building blocks of normal matter. Typically, everyday matter comprises up-quarks and down-quarks, while B-mesons are made up of beauty quarks combined with up, down, charm, or strange quarks.

This unique configuration results in B-mesons having a fleeting existence, seemingly detached from common life. However, their significance lies in the potential answers they hold regarding universal enigmas, such as the imbalance of matter versus antimatter.

We understand that all particles have corresponding antiparticles. Yet when we observe the universe, we see a predominance of particles, like electrons, overshadowing their antiparticle counterparts, such as positrons, which are identical to electrons except for their reversed charge.

Mesons are particularly intriguing as they inhabit the space between the prevalent matter and antimatter realms. This positions them as potential keys to unlocking the mystery of the disparity between the two. Grasping this could clarify why the universe holds such a favorable balance of matter when encounters between matter and antimatter typically result in annihilation. The formation of B factories arises from the desire to decode this cosmic puzzle.

The complexity deepens when considering mesons and their own antiparticles. Neutral B-mesons, which carry no electric charge, oscillate back and forth, transforming between meson and antimeson. In essence, a neutral B-meson exemplifies a spontaneously non-binary state.

These neutral B-mesons are pivotal in addressing the asymmetry of matter and antimatter. Their non-binary characteristics are anticipated within the standard model of particle physics, which catalogs known particles. However, we must determine whether these oscillatory states are evenly distributed. Are collisions more likely to yield a meson or its antiparticle? Disparities in these oscillations may shed light on the core asymmetries of matter and antimatter.
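The oscillation described above can be sketched with the standard two-state mixing formula. This is a minimal illustration, assuming no CP violation; `delta_m` (the mass splitting) and `gamma` (the decay rate) below are placeholder values chosen for readability, not measured B-meson parameters.

```python
import math

def mixed_probability(t: float, delta_m: float, gamma: float) -> float:
    """Probability that a particle born as a neutral meson is observed as
    its antiparticle at proper time t: P = exp(-gamma*t) * sin^2(delta_m*t/2)."""
    return math.exp(-gamma * t) * math.sin(delta_m * t / 2.0) ** 2

def unmixed_probability(t: float, delta_m: float, gamma: float) -> float:
    """Probability it is still observed as the original meson at time t."""
    return math.exp(-gamma * t) * math.cos(delta_m * t / 2.0) ** 2

# The two outcomes always sum to the survival probability exp(-gamma*t):
t, delta_m, gamma = 1.5, 0.5, 1.0
total = mixed_probability(t, delta_m, gamma) + unmixed_probability(t, delta_m, gamma)
assert abs(total - math.exp(-gamma * t)) < 1e-12
```

Asymmetries between this mixing rate and its antiparticle counterpart are exactly the kind of imbalance the B factories are built to hunt for.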


B factories could illuminate the nature of an elusive component: dark matter, which remains unseen in laboratories.

In 2010, researchers from the DZero collaboration at Fermilab identified a roughly 1% deviation, although subsequent studies haven’t corroborated the result. The exploration of such discrepancies continues to intrigue, particularly as variances also emerge in other oscillation measurements.

B factories may also expand our comprehension of dark matter, an entity detected only through its gravitational effects on visible matter. Approximately 85% of the universe’s mass seems to consist of this invisible material, which the standard model has yet to account for.

Crafting a theory to explain dark matter requires postulating new particles or forces, some of which might interact only subtly with known particles, complicating detection. These interactions often hinge on mediators, entities that facilitate the connection between the dark and visible sectors. The mediators themselves may never be directly observable, but we can hope to spot their decay products, such as electron-positron pairs, serving as indicators. This is where B factories play a crucial role: they are engineered to analyze the outcomes of electron-positron collisions.

Beyond the collider physics itself, the longevity of the data these experiments produce is particularly captivating. For instance, the BABAR experiment at the SLAC National Accelerator Laboratory shut down in 2008, yet researchers continue to sift through its data, training the next generation of physicists in the process.

In 2022, Brian Shuve and his undergraduate team at Harvey Mudd College near Los Angeles revisited ideas using the nearly two-decade-old BABAR data. They proposed that hypothetical particles called axions may function as mediators between visible and dark matter. Long-time readers may recognize that axion research is a focal point of my own work.

So, do these hypotheses about our universe’s mechanics hold water? That inquiry aligns with our quest to understand the matter-antimatter asymmetry.

What I’m reading

I’ve just finished a tragic memoir by Wasim, a Gazan physics student, bearing witness to the hellfire of genocide.

What I’m watching

I’m finally watching The Wire after years of avoidance.

What I’m working on

I am reexamining cosmological perturbation theory.

Chanda Prescod-Weinstein is an associate professor of physics and astronomy at the University of New Hampshire. She is the author of The Disordered Cosmos and a forthcoming book on particles, space-time, poetry, and dreams of the universe.

Source: www.newscientist.com

Why a Major Saudi-Led Contract Matters for EA, Regardless of Your Gaming Involvement

When Microsoft revealed plans to acquire Activision Blizzard in 2022 for over $68 billion, the industry was stunned. The latest significant shift in the sector has a similar feel: EA, the prominent publisher behind iconic sports titles like Madden and EA Sports FC (formerly FIFA), has agreed to a private acquisition dubbed “the largest leveraged buyout in history”. The $55 billion deal is backed by a trio of investors that, on paper, resembles a final-boss lineup.

Introducing Player 1: Saudi Arabia’s sovereign wealth fund. The Saudi state has invested heavily in gaming for years, and the fund is chaired by Crown Prince Mohammed bin Salman, known for his record on human rights and his links to the assassination of journalist Jamal Khashoggi. Player 2: Affinity Partners, an investment firm led by Jared Kushner, son-in-law of the current US president. Player 3: Silver Lake, a private equity firm that owns a significant stake in game-engine developer Unity. Stephen Totilo of Game File notes that the Affinity Partners logo looks like a mirror image of the logo of the evil corporation in the Assassin’s Creed series. It all feels almost surreal.

You might wonder why Saudi Arabia has invested so extensively in gaming. The fund has heavily financed esports, launching the Esports World Cup in Riyadh; it acquired the maker of Monopoly Go and has purchased shares in numerous gaming companies, including the developer of Pokémon Go and Nintendo. (Game File provides a thorough overview of Saudi capital in the gaming industry.) The motivation parallels its funding of sports, media and, lately, comedy: it is a strategy of reputation-laundering, or in this instance games-washing, that trades on the cultural clout of video games.

Mohammed bin Salman. Photo: Royal Saudi Court/Reuters

Regarding Affinity Partners and Silver Lake: There’s potential for profit. EA reported over $2 billion in profit last fiscal year, primarily from sports franchises. EA also owns The Sims and Battlefield, two franchises that could yield significant returns. Previously, EA was a more diversified publisher, with a rich portfolio including Dragon Age and Titanfall. However, under current CEO Andrew Wilson, their focus has shifted mainly to the most lucrative sports franchises.

Critics of the acquisition often highlight Saudi Arabia’s involvement. Thousands of developers at EA and millions of its players feel unsettled (especially since The Sims has a significant LGBTQ+ following). Opinions among business journalists and analysts vary; Kotaku’s Ethan Gach spoke with several of them in this article. One notable observation, from NYU’s Joost van Dreunen, is that at the deal’s center is a financial logic concerned less with returns than with power, fame, and the implications of Saudi Arabia’s role in American entertainment.

Business analysts pointed out that the acquisition loads EA with a staggering $20 billion in debt, as reported by Bloomberg. Questions arise about how EA’s new owners intend to service that debt. Will there be more layoffs or budget cuts? Will they squeeze more revenue from lucrative features like Ultimate Team mode in EA Sports FC? Or might they abandon the flagging mobile gaming sector? For both players and EA employees, a return to business as usual seems uncertain.

Electronic Arts is not the industry’s favorite publisher, and it doesn’t have the best reputation. However, it’s vital to remember the thousands of dedicated employees behind the scenes. Despite EA’s business practices that may frustrate gamers, it’s essential to consider the talents and projects of these people across the gaming industry. Even without oppressive ownership, such private equity takeovers often harm both employee morale and the industry’s overall health. Fans of FIFA, for instance, might reflect on the financial struggles of clubs like Manchester United post-acquisition, plagued by immense debt.

Nonetheless, one individual relishing this deal is CEO Andrew Wilson. “This moment embodies your creativity, innovation, and passion. Everything we’ve accomplished, and everything ahead, is for you,” he proclaimed in a public statement. “Our values and commitment to players and fans globally remain unchanged. We will maintain operational excellence and rigor, enabling team creativity, accelerating innovation, and pursuing transformative opportunities to secure EA’s leadership in the future of entertainment.”

Interestingly, Wilson holds tens of millions of dollars’ worth of EA shares, which will pay out handsomely should the acquisition go through. Doesn’t that warm even the most cynical of hearts?

What to play

Ghost of Yotei. Photo: Sony/Sucker Punch Productions

It’s remarkable that two stunning, big-budget historical fiction games set in Japan have launched within just six months of each other. Yet here we are.

Ghost of Yotei releases tomorrow, featuring a female warrior on a quest for revenge across some of the most breathtaking landscapes ever created in gaming, terrain that Assassin’s Creed Shadows also covered this year. I enjoyed Shadows; its beauty and performance are undeniable. However, I find Yotei even more enthralling. It is far more engaging without relying on maps or magical visions to locate objectives, instead compelling players to follow the sounds of birds, foxes and people in need. Minimalist mechanics, such as lighting campfires or painting sumi-e art, delight. Combat feels exhilarating, with an old-school, Soulcalibur-like vibe to its duels. I’m pleasantly surprised by how much I appreciate this game, considering Ghost of Tsushima felt less fresh five years ago; it helps that our protagonist, Atsu, is far less tormented than Tsushima’s Jin.

Available on: PlayStation 5
Estimated playtime:
Over 30 hours


What to read

Promotional image from Bully. Photo: Rockstar Games
  • IGN interview with Rockstar co-founder Dan Houser, who recently appeared at LA Comic Con. Famously elusive, he shared that his favorite of the studio’s titles is Red Dead Redemption 2, and expressed regret over never following through on a sequel to the boarding-school satire Bully.

  • Insomniac Games’ Wolverine has finally revealed its gameplay trailer. Given how well Spider-Man was adapted, I’m optimistic about this release slated for next year. As a fan of Housemarque’s thrilling sci-fi title Returnal, I eagerly watched the follow-up footage for Saros, set for a March 2026 launch.

  • The ROG Xbox Ally is a poorly named yet highly anticipated Xbox-compatible handheld, priced at £500/£800 for its two models. Microsoft has confirmed the pricing, placing it in direct competition with the Steam Deck.

  • In Nintendo news, Nintendo of America president Doug Bowser has announced his impending retirement. He will be succeeded by Devon Pritchard, who has served at Nintendo for 19 years. (Rumor has it she may change her name to Devonganon.) Moreover, a pop-up Nintendo store, in the style of the company’s Japanese department-store outlets, is set to open in London later this month, with fans gaining access in March.

What to click

Question block

Astro Playroom. Photo: Sony

This week, reader Kevin asks:

“At age 68, I’ve developed an interest in gaming. I purchased a PS5 Pro and am currently waiting for its arrival. Could you provide a guide on how to use the controller?”

Welcome to the world of gaming, Kevin! It’s fantastic to hear someone is taking the plunge into gaming, especially if it involves pressing buttons!

For mastering the PS5 controller, I highly recommend Astro’s Playroom, a delightful and engaging experience featuring small robots living inside your PlayStation. This short yet enjoyable game serves as an excellent tutorial for the unique functions of the PS5 controller, and it even helped my two sons get to grips with more complex controls. If you find it enjoyable, be sure to check out the full-length sequel, Astro Bot, which won Game of the Year in 2024.

If you have a question for Question Block, or any comments about the newsletter, simply reply or reach out to us at pushingbuttons@theguardian.com.

Source: www.theguardian.com

The 6100-Qubit Device: A Major Leap Towards Quantum Computing Advancement

Quantum computers can be developed using arrays of atoms

Alamy Stock Vector

A device boasting over 6000 qubits sets a new record and represents the first phase of building the largest quantum computer ever.

At present there is no universally accepted design for quantum computers, but researchers agree these machines will need at least tens of thousands of qubits to be truly useful. The previous record holder was a quantum computer using 1180 qubits; now Hannah Manetsch at the California Institute of Technology and her colleagues have built a 6100-qubit system.

The qubits are made from neutral cesium atoms chilled to near absolute zero, held in place by laser beams and arranged neatly on a grid. According to Manetsch, the team has fine-tuned the properties of these qubits to make them suitable for calculations, although they have yet to carry any out.

For instance, they modify the laser’s frequency and power to help the fragile qubits maintain their quantum state, thus ensuring the grid’s stability for more precise calculations and extended runtimes of the quantum machine. The research team also assessed how efficiently the lasers could shift qubits around within the array, as noted by Ellie Bataille at the California Institute of Technology.

“This is a remarkable demonstration of the straightforward scaling potential that neutral atoms present,” remarks Ben Bloom of Atom Computing, which also builds quantum computers from neutral atoms.

Mark Saffman at the University of Wisconsin-Madison emphasizes that the new experiments are vital, providing proof that neutral-atom quantum computers can reach significant sizes. However, further experimental validation is needed before these setups can be considered fully fledged quantum computers.

Research teams are currently investigating optimal methods for enabling qubits to perform calculations while employing error-reduction strategies, mentions Kon Leung at the California Institute of Technology. Ultimately, they envision scaling their systems to 1 million qubits over the next decade, he states.


Source: www.newscientist.com

Even Major Brands May Struggle to Save America’s Most Iconic Gaming Events | Games

Since its founding in 1988, the annual Game Developers Conference (GDC) has taken place in California each year. It started modestly as a cozy gathering in the living room of Atari designer Chris Crawford, hosting just 27 attendees. By the mid-90s the event had outgrown Chris’s home and expanded to over 4,000 participants, and in 2005 it found a permanent venue at the Moscone Center in San Francisco. Nowadays nearly 30,000 game development professionals attend annually, and the online GDC Vault is a valuable resource, offering insights into the history of game development and practical tips across gaming disciplines.

However, GDC has faced challenges in recent years. Rising costs have become a significant barrier for developers, with conference passes exceeding $1,500, and expenses for travel and accommodation in one of the world’s most expensive cities can quickly escalate to between $5,000 and $10,000—even for small hotel rooms.

Additionally, following Trump’s re-election, many members of the global game development community have expressed reluctance to visit the United States. The mood at the conference has been dampened by the loss of funding throughout the gaming industry, alongside the pressures of AI developments and ongoing layoffs. If securing funding for games is this challenging, why would professionals spend thousands on travel just to take meetings?

As Jon Ingold, founder of UK Studio Inkle, remarks, “GDC, as an industry networking event, currently lacks financial viability and job opportunities. The United States feels like an inhospitable environment.”

This may be one reason behind the event’s recent rebranding. It was announced on Monday that the Game Developers Conference will now be known as the Festival of Gaming: GDC, promising in a vision presentation a “week of opportunity” connected to a comprehensive B2B games ecosystem. The key takeaway appears to be that passes will be more accessible, with events hosted not only at the Moscone Center but across the city.

California calling … the rebranded GDC games festival.

Unfortunately, this rebranding has not addressed long-standing worries among developers—that the conference is not accessible enough, and that San Francisco (or the U.S. at large) is an unsuitable venue for global gaming events. “Despite clear evidence from the COVID era that GDC could have integrated digital access, the exorbitant ticket prices reflect [organizer] Informa’s focus on profits rather than accessibility,” says independent game developer Rami Ismail, who has advocated for the global developer community on GDC’s issues.

Even when a visa is obtained, safety concerns regarding firearms, crime, and healthcare expenses linger. Furthermore, the Trump administration’s right-leaning populism has rendered the U.S. unwelcoming for many.

This concern is valid. Visitors to the US have faced deportation and even detention since Trump’s re-election. Many choose to carry burner phones and scrub their social media profiles, and numerous European developers and journalists, myself included, feel hesitant about traveling to the US under the current administration. For individuals from Arab or South American countries, these fears are intensified.

A consensus seems to be forming within the global game development workforce: the U.S. no longer serves as a crucial industry hub. While San Francisco remains home to top companies and studios, many feel the city has lost its creative spirit, hollowed out by the relentless pursuit of Silicon Valley’s interests.

There are viable alternatives. Canadian tax incentives make it an attractive destination for game development, and the current government is welcoming to foreigners. From Brighton in the UK to Game Connect in Australia, various regions host local developer gatherings. Events like Gamescom in Cologne, along with an increasing number of developer-centric events around the world, underscore this shifting landscape. While the GDC organizers cannot control U.S. policy, maintaining relevance as a professional game nexus will require more than a rebrand.

What to Play

Uncomfortable and funny … Consume Me. Photo: Jenny Jiao Hsia

The video game landscape continues to expand with exciting choices. Hades II, a visually stunning and challenging action game developed by Supergiant, is out now, while the horror reboot Silent Hill f has garnered positive feedback from many critics.

I am currently immersed in Consume Me, an entertaining and occasionally uncomfortable game by developer Jenny Jiao Hsia about navigating high school amid the pervasive diet culture of the 2000s. It features quirky mini-games in which players must focus in class and manage walking the dog, all while fending off awkward conversations about weight with their parents.

This topic can be triggering for many, as it evokes painful memories of the 2000s’ beauty standards for women. If you’ve ever wrestled with disordered eating (or know someone who has), finding enjoyment might be challenging. However, the game addresses sensitive issues with humor, empathy, and plenty of satirical jabs, making the discomfort worth exploring.

Available on: PC
Estimated playtime:
5 hours


What to Read

A Stardew Valley-style sim … Palfarm, from the makers of Palworld. Photo: Pocketpair
  • The developers behind last year’s hit Palworld, who remain entangled in a legal dispute with The Pokémon Company over similarities between their creature-collecting games, have announced a new title: Palfarm, which fuses the series’ adorable creatures with Stardew Valley-style farming.

  • I’ve thoroughly enjoyed video game memoirs from TV comedy writer Mike Drucker. His latest release, Good Game, No Rematches, is now available in the UK. It offers a fascinating perspective on growing up with Nintendo across the Atlantic during the NES era, detailing how a young gamer turned into a game writer.

  • For years, our game correspondent Keith Stuart has tackled the question: Why do some people choose to invert the controls? His 2020 article on the subject prompted scientists to delve into the matter, and they have finally determined that it relates to how our brains perceive 3D space.

Question Block

Words of wisdom…The Legend of Zelda. Photo: Nintendo

Following up on last week’s discussion about video game dialogue, reader William asks:

“I believe there are quotes from various video games that serve as life advice. Two of my favorite quotes are: ‘When the time comes, just act’ (Wolf O’Donnell, Star Fox Assault) and ‘Anyone who is stubborn enough can survive. Anger is an anesthetic hell’ (Zaeed Massani, Mass Effect 2). What video game wisdom resonates with you?”

This may be a contentious viewpoint, but I often find that video game quotes are profound by coincidence. The most memorable lines frequently emerge from translation quirks and voice-acting oddities ("I used to be an adventurer like you… then I took an arrow in the knee," "All your base are belong to us," "Jill, the master of unlocking"). They stick with us not necessarily for their deep meaning, but for their absurdity.

That said, the phrase "It's dangerous to go alone" from the original Legend of Zelda somehow strikes me as genuinely supportive, while "The right man in the wrong place can make all the difference in the world" (the G-Man, Half-Life 2) also comes to mind.

I invite readers to share: Are there any video game quotes that genuinely carry significance for you?

If you have a lingering question or want to include your favorite game quotes in the newsletter, please reply to this or email us at buttons@theguardian.com.

Source: www.theguardian.com

The Importance of Breakfast Timing for Longevity, According to Major Studies

As individuals age, eating breakfast later in the day may be linked to a higher risk of early death, particularly for those in poor health, according to recent research involving around 3,000 adults.

After tracking participants for an average of 22 years, scientists observed that those who usually ate breakfast later in the morning had a slightly lower survival rate in the years that followed compared with those who ate earlier.

Study participants typically consumed breakfast around 8:20 am, but those who waited until after 9 am were more prone to issues like depression, fatigue, or oral health problems.

“These findings provide new insight into the saying ‘breakfast is the most important meal of the day,’ especially for seniors,” stated the authors, including Dr. Hassan Dashti, a nutrition scientist at Massachusetts General Hospital.

“Our research implies that the timing of meals, particularly breakfast for older adults, can be a simple marker for assessing overall health.”

“Moreover, promoting a regular dietary schedule among older adults could be part of a larger strategy to enhance healthy aging and longevity.”

Participants were observed for over 20 years, during which they reported their health status, meal times, and occasionally provided blood samples.

Over time, researchers noticed that people were shifting their breakfast and dinner times later in the day, thereby shortening their overall eating window.

Since this study was observational, it does not definitively prove that delaying breakfast leads to health issues or early mortality; rather, it hints at a potential correlation.

Furthermore, researchers have determined that individuals genetically predisposed to “night owl” behavior are likely to rise and sleep later, consequently eating their meals later as well.

Individuals who practice intermittent fasting often eat breakfast later in the day, allowing their bodies longer periods without food – Credit: via Getty

The authors emphasized the significance of their findings, especially considering the rising trend of intermittent fasting.

"Shifts toward later meal timing, particularly a delayed breakfast, are connected to health challenges and an increased risk of death among older adults," Dashti concluded.


Source: www.sciencefocus.com

"Great Migration" Involves Far Fewer Wildebeest Than Expected

Serengeti wildebeest migrations may involve fewer animals than previously believed

Nicholas Tinnelli/Alamy

The “great migration” in East Africa is often estimated to consist of around 1.3 million wildebeest. However, a recent AI analysis of satellite images reveals that fewer than 600,000 animals make this yearly journey across the Serengeti Mara landscape.

The migration also sweeps up zebras and antelopes, which travel between feeding and breeding grounds in Kenya and Tanzania while evading predators such as lions, crocodiles, and hyenas.

Counting the migrating animals is challenging. It has traditionally been done with crewed aerial surveys, which cover only limited areas, so statistical models must then extrapolate animal density across larger regions.

Satellite surveys sidestep these problems: a single image can cover a vast area, minimizing the chances of double-counting and reducing the need for extrapolation. Manually counting wildebeest across such expanses is impractical, but AI can automate the task. "AI automation enhances count consistency and accuracy," says Isla Duporge at the University of Oxford.

In the new study, Duporge and her team trained two deep learning models (U-Net and YOLOv8) to identify wildebeest using a dataset of 70,417 manually labelled examples. They then applied the models to high-resolution satellite images covering more than 4000 square kilometres, captured on 6 August 2022 and 28 August 2023.

The two AI models returned comparable results: 324,202 and 337,926 wildebeest in 2022, and 502,917 and 533,137 in 2023. The disparity between the 2022 and 2023 counts likely reflects the surveys being conducted at different points in August, when the herds occupy different parts of the landscape. "What's encouraging is that deep learning models with differing methodologies have produced consistent findings," notes Duporge.
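The counting approach described above rests on a simple property of satellite surveys: a huge scene is cut into tiles, animals are detected per tile, and detections from overlapping tiles are merged so no animal is counted twice. The sketch below illustrates only that merging step, in plain Python; it is not the researchers' released code, and the function name and the simple distance-based deduplication rule are assumptions for demonstration.

```python
def merge_tile_detections(tiles, min_sep=2.0):
    """Merge per-tile detections into a single global count.

    tiles: list of (x_offset, y_offset, detections), where each detection
           is an (x, y) centre in tile-local pixel coordinates.
    min_sep: detections whose global centres fall closer than this are
             assumed to be the same animal seen in two overlapping tiles.
    """
    merged = []  # global (x, y) centres accepted so far
    for x_off, y_off, dets in tiles:
        for x, y in dets:
            gx, gy = x + x_off, y + y_off  # tile-local -> scene coordinates
            # Drop the detection if an already accepted one is too close
            # (i.e. it came from the overlap strip between two tiles).
            if any((gx - mx) ** 2 + (gy - my) ** 2 < min_sep ** 2
                   for mx, my in merged):
                continue
            merged.append((gx, gy))
    return len(merged)


# Two 100-px-wide tiles overlapping by 20 px: the animal at scene
# position (90, 50) appears in both tiles but is counted once.
tiles = [
    (0, 0, [(10.0, 10.0), (90.0, 50.0)]),   # tile A at x-offset 0
    (80, 0, [(10.0, 50.0), (30.0, 20.0)]),  # tile B at x-offset 80
]
print(merge_tile_detections(tiles))  # -> 3
```

In a real pipeline the per-tile detections would come from the detection models themselves, and the merge threshold would be tuned to the image resolution and typical animal size.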

The earlier estimate of 1.3 million derives from aerial surveys conducted in the 1970s and has remained largely unchanged since. "Even allowing for errors in our count, we estimate the true population size to be around 800,000 at most," Duporge remarked. "We believe the aerial estimates are inflated, and our count likely reflects a slight underestimation. Some animals may be hidden under trees or outside the survey area, but it's quite surprising that the count doesn't exceed 533,137."

A lower count doesn't necessarily mean the wildebeest population is declining; the animals may simply have adjusted their migratory routes. Nevertheless, wildebeest face serious threats, such as habitat loss and fragmentation driven by agricultural expansion, and accurate population estimates are crucial for effective conservation strategies.

The researchers had previously trained AI models to identify elephants using satellite data, marking the first instance of such a method for conducting individual mammal censuses across large, dispersed populations. The team is now working on a similar approach for detecting and counting African rhinoceroses.

“We should shift towards satellite and AI methods for assessing wildlife populations, particularly for species that inhabit large and diverse landscapes,” suggests Duporge.

The researcher’s model code is now accessible at https://github.com/sat-wildlife/wildebeest


Source: www.newscientist.com

How Google Avoided a Breakup – and Why It Can Thank OpenAI

Greetings and welcome to TechScape. I'm your host, Blake Montgomery, currently listening to the audiobook of Don DeLillo's White Noise.

In today's tech news, artificial intelligence finds itself in the courtroom spotlight, as Google's pivotal antitrust ruling lands in the same week as a landmark AI settlement with book authors.

How Did OpenAI Help Google Skirt the Chrome Sale?

Google has evaded a major crisis thanks to its largest competitors. A judge recently declined to force the sale of Chrome, the world's most popular web browser, allowing the tech giant to keep it.

Judge Amit Mehta, who concluded in 2024 that Google had maintained an illegal monopoly in internet search, ruled last week that the US government's demand that Google sell Chrome was not necessary. While the company can no longer strike exclusive distribution deals for its search engine, it retains the ability to pay for distribution under certain conditions, including sharing search data with competitors. An appeal is likely, but Sundar Pichai can breathe a little easier for now.

Many critics deemed this decision a light penalty, often referring to it as merely a “wrist slap.” This phrase echoed through numerous responses I received after the ruling was announced.

The leniency in the ruling stems from the emergence of real competition against Google, underscoring the significance of this case. While United States v. Google targets search specifically, its implications ripple into the developing realm of generative artificial intelligence.

“The rise of generative AI has altered the trajectory of this case,” remarked Mehta. “The remedies now focus on fostering competition among search engines and ensuring that Google’s advantages in search do not translate into the generative AI sector.”

Mehta noted that previous years saw little investment and innovation in internet searches, allowing Google to dominate unchecked. Today, various generative AI companies are securing substantial investments to introduce products that challenge conventional internet search advantages. Mehta particularly commended OpenAI and ChatGPT, mentioning them numerous times in his ruling.

“These firms are now better positioned, both financially and technologically, to compete with Google than traditional search entities have been for decades,” he stated. “There’s a hope that if a groundbreaking product surfaces, Google cannot simply overshadow its competitors.” This suggests a prudent approach before imposing serious disadvantages on Google in an increasingly competitive landscape.

For nearly two decades, since the iPhone's launch, Google has served as the default search engine for Safari. Competition in generative AI is now mirrored in Apple's dealings with both Google and OpenAI: in June 2024, Apple announced a collaboration with OpenAI for iPhone features, yet by August 2025 Apple was reportedly in talks with Google about using Gemini to overhaul Siri, according to Bloomberg. May the best bot triumph.

Back in April, I speculated that OpenAI might emerge as a potential buyer for Chrome, predicting that ChatGPT’s creators would benefit from Google’s vulnerabilities. Later that month, OpenAI executives confirmed their intentions to pursue exactly that.

It's almost poetic that OpenAI's success has inadvertently saved Google. The startup owes a debt of gratitude in return: a research paper by Google scholars laid the groundwork for ChatGPT back in 2017.

With Google valued at $2.84 trillion and OpenAI worth around $500 billion, the matchup reads like a classic underdog story. But OpenAI is not merely Google's biggest competition. In December 2022, Google's management acknowledged the threat posed by ChatGPT, labeling it a "code red" for its profitable search business; Pichai even redirected many Google employees to focus on AI projects.

Unlike Goliath, who underestimated his challenger, Google recognized that the launch of ChatGPT—the moment generative AI entered mainstream consciousness—redefined the competitive landscape. The threat was indeed substantial.

While Google races to catch up in the AI arena, this David retains the first-mover advantage: ChatGPT has become synonymous with generative AI, perhaps with AI in general. Google remains a formidable player, however, engaging billions of people daily through AI features in its search engine.

Thanks to Mehta's ruling, Google narrowly averted disaster and keeps Chrome in its portfolio. But challenges loom: later this year the tech giant faces another antitrust hearing concerning its advertising business, which is essential to its financial success. Google controls both the tools for buying and selling online ads and the exchange that connects them.

In the same week as Mehta's verdict, coincidentally, the European Union fined Google approximately 3 billion euros for abusing its dominant position in advertising technology, and threatened to break up its adtech division.




Landmark Settlement Aims to Secure Authors Cash from AI

On July 25, 2023, Dario Amodei, CEO of Anthropic, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law in Washington, DC. Photo: Valerie Press/Bloomberg via Getty Images

Recently, Anthropic, creator of the Claude chatbot, agreed to a $1.5 billion payout to a group of authors, settling allegations that it used millions of pirated books to train its AI. The settlement is hailed as the largest copyright recovery in history. While Anthropic did not admit fault, it agreed to pay about $3,000 for each of roughly 500,000 books, totalling $1.5 billion.

The company acknowledged training on roughly 7 million books downloaded from unauthorized sources in 2021. As copyright threats mounted, it later purchased and scanned physical copies of these works, lamentably destroying the originals in the process.

For creative professionals alarmed by AI's existential threats, the settlement is a hard-won victory against unauthorized use that threatens their livelihoods. British writers have raised alarms about AI being trained on their work and are demanding accountability from tech giants like Meta. Government hostility toward those firms appears unlikely, however, given Meta's CEO's close ties to the current US president.

The aftermath of Anthropic’s settlement has already had ripple effects, with authors filing lawsuits against Apple for allegedly using similar training methods.

Nonetheless, the outcome isn't an unqualified triumph for writers. The settlement covered only the pirated copies: Judge William Alsup had already ruled that training on lawfully acquired books qualifies as fair use, likening it to "any reader aspiring to be a writer." That ruling suggests AI companies secured a stronger legal position than many initially believed.

Read More: Anthropic did not infringe copyright when training AI on books without permission, court rules.

Moving forward, Meta appears to be the next prime litigation target for authors, given its similar practices to Anthropic in training models using unauthorized databases. While Meta emerged relatively unscathed in its recent copyright dispute, the Anthropic settlement could prompt Meta’s legal team to expedite resolving pending lawsuits.

Other key AI players remain relatively unencumbered. While OpenAI and Microsoft face accusations of unauthorized use of the Books3 dataset, no evidence as substantial as that against Anthropic and Meta has been established.

This legal scrutiny extends across media, with Warner Bros. Discovery and Disney recently filing lawsuits against AI image generator Midjourney.


Source: www.theguardian.com

Doctors Create AI Stethoscope Capable of Identifying Major Heart Conditions in Just 15 Seconds

Doctors have created an AI-powered stethoscope that can identify three cardiac conditions in just 15 seconds.

The classic stethoscope, which was invented in 1816, has been crucial for listening to internal body sounds and has remained a vital tool in medical practice for over two hundred years.

Researchers have now developed a sophisticated AI-enhanced version that can diagnose heart failure, heart valve disease, and irregular heart rhythms.

Developed by researchers at Imperial College London and Imperial College Healthcare NHS Trust, this innovative stethoscope can detect minute variations in heartbeat and blood flow that are beyond the capacity of human ears, while simultaneously performing quick ECG readings.


Details of the advance, which could improve early diagnosis of these conditions, were shared with thousands of doctors at the European Society of Cardiology congress in Madrid, the world's largest cardiac conference.

Timely diagnosis is crucial for heart failure, heart valve disease, and irregular heart rhythms, enabling patients to access life-saving medications before their condition worsens.

A study involving around 12,000 patients from UK GP practices tested individuals exhibiting symptoms such as shortness of breath and fatigue.

Those who were evaluated using the new technology were twice as likely to receive a diagnosis of heart failure compared to similar patients who were not subjected to this method.

Patients were three times more likely to be diagnosed with atrial fibrillation—an irregular heart rhythm that heightens the stroke risk—and nearly twice as likely to be identified with heart valve disease, characterized by malfunctioning heart valves.


The AI-led stethoscope identifies subtle differences in heartbeat and blood flow that are imperceptible to the human ear while recording ECG. Photo: Eko Health

Dr. Patrik Bächtiger from Imperial College London remarked:

“It’s amazing to utilize a smart stethoscope for a quick 15-second assessment, allowing AI to promptly provide results indicating whether a patient has heart failure, atrial fibrillation, or heart valve disease.”

Manufactured by Eko Health in California, the device resembles a credit card in size. It is placed on a patient’s chest to record electrical signals from the heart while a microphone picks up the sound of blood circulation.

This data is transmitted to the cloud—an encrypted online storage space—where AI algorithms analyze the information to uncover subtle heart issues that may be overlooked by humans.

Results indicating whether a patient should be flagged for any of the three conditions will be sent back to a smartphone.

While breakthroughs like these can carry risks of misdiagnosis, researchers stress that AI stethoscopes should only be employed for patients presenting heart-related symptoms, not for routine screening in healthy individuals.

However, accelerating the diagnosis process can ultimately save lives and reduce healthcare costs.

Dr. Mihir Kelshiker, also from Imperial College, stated:

“This test demonstrates that AI-enabled stethoscopes can make a significant difference, providing GPs with a rapid and straightforward method to detect issues early, ensuring patients receive timely treatment.”

"Early diagnosis allows individuals to access the treatment they need to enhance their longevity," emphasized Dr. Sonya Babu-Narayan, clinical director of the British Heart Foundation, which sponsored the research alongside the National Institute for Health and Care Research (NIHR).

Professor Mike Lewis, Director of the Innovation Science Department at NIHR, remarked, “This tool represents a transformative advance for patients, delivering innovation right into the hands of GPs. AI stethoscopes empower local practitioners to identify problems sooner, diagnose patients within their communities, and address leading health threats.”

Source: www.theguardian.com

97% of Autistic Adults Over 60 Remain Undiagnosed – With Major Health Implications

A major new review indicates that elderly individuals are significantly less likely to receive an autism diagnosis.

The review estimates that around 89% of autistic individuals aged between 40 and 59 have never been diagnosed. This figure rises to 97% for those over 60.

The analysis compiled studies on how autism affects people later in life. It found that older autistic individuals face a high prevalence of both physical and mental health conditions, are less likely to be employed, and generally report poorer health.

Seniors on the autism spectrum encounter difficulties in accessing healthcare and building strong relationships, both of which are closely tied to health outcomes.

While autism is thought to affect roughly 1 in 100 people, the recorded diagnoses drastically drop for individuals over the age of 40.

This review highlighted U.S. data showing that autistic individuals experience higher rates of nearly all physical ailments compared to their non-autistic counterparts, including cardiovascular issues, immune disorders, and gastrointestinal problems. Furthermore, over half of older individuals with autism reported having at least one psychiatric issue, such as anxiety or depression.

“People with pronounced autistic traits, despite lacking a formal diagnosis, experience similar challenges,” stated Dr. Gavin Stewart, who led the King’s College London review.

“Being autistic yet undiagnosed can carry significant implications. Access to necessary support systems becomes limited for many undiagnosed autistic individuals, preventing them from addressing mental health concerns,” he explained to BBC Science Focus.

This lack of support complicates the ability of individuals with autism to navigate medical systems. Characteristics such as diverse communication styles, sensory sensitivities, and specific daily needs can make interactions with modern healthcare environments challenging.

For instance, autistic individuals may struggle to convey their symptoms to a non-autistic physician, particularly when overwhelmed by the sensory input of a noisy, brightly lit waiting area.

Heightened sensory sensitivity means that some autistic individuals find busy, noisy environments challenging.

“Many autistic individuals express that it’s challenging to exist in a world that doesn’t accommodate their needs,” Stewart noted.

The challenge of forming relationships also contributes to greater social isolation among people with autism, leaving them without necessary support networks as they age.

“While many autistic individuals are socially motivated and cultivate fulfilling relationships, societal expectations can create obstacles that lead to their alienation,” Stewart added.

This study aims to highlight the lack of research on adults with autism, noting that a mere 0.4% of studies have focused on the condition in older populations.

“Rates of underdiagnosis are alarmingly high among older adults. Much of our research systematically overlooks a significant portion of the autistic population, resulting in a knowledge gap regarding how autistic individuals age and a deficiency in relevant policies and services,” Stewart commented.

“This oversight stems from the fact that many older autistic individuals today were likely missed due to the narrow diagnostic criteria used in their youth.”

Most autism diagnoses occur in childhood, yet the condition has only been recognized in diagnostic manuals since the 1960s.

“Since then, the criteria have shifted from a rare condition defined by narrow standards to a broader, more inclusive framework,” remarked Stewart.

Moreover, older autistic individuals are more prone to misdiagnosis; a 2019 study found that one-quarter of adults with autism were initially diagnosed with mental health conditions such as anxiety or personality disorders before their autism was recognized.

Increasing awareness among educators, healthcare providers, and the general public has led to more individuals identifying autism symptoms in both children and adults.

About our experts

Dr. Gavin Stewart is a postdoctoral researcher at King’s College London, co-leading research with Professor Francesca Happé at the Respect Lab, focusing on autism across the lifespan.


Source: www.sciencefocus.com

Major Hits, Board Games, and the Mundane: Why Parents Are Embracing the 1990s Again

Reflecting on childhood in the 1990s stirs up feelings of nostalgia. We roamed far and wide without supervision, rode our bikes, built dens, and swam in streams. Post-school hours were spent on crafts and board games; while the internet existed, my parents encouraged me to use the landline phone. Media was tangible (cassettes, CDs, VHS tapes) and often enjoyed together as a family. The memory of going to the video store to select a movie still thrills me.

These feelings are common, especially once you have a child of your own, and social media algorithms tap into the nostalgia. Three years after the birth of my son, and having started a parenting column for the Guardian, I noticed my growing interest in "parenting in the 90s". The phenomenon seems to have gained traction this year, with former 90s kids pondering how to raise their own children. It appears that significant technological advances have come with valuable losses. But is it feasible to reclaim what was lost? And how has parenting changed since then?

"Absolutely, it's a total pause," states Justin Fromm, a father and content creator based in Las Vegas, who has built a remarkably faithful replica video store in one of the rooms of his house for his daughters. "The whole family would pile into the car, head to the video store, and wander the aisles, deciding what to watch," he reminisces about his childhood. "It was exhilarating and filled with possibilities. Scrolling online doesn't compare." There was something special about physically going somewhere to choose a movie together, and the anticipation of finally watching it; it felt like a true event. "Everyone remembers the ritual of choosing a movie together in that blue-and-yellow store, the carpet, the excitement."

Although not everyone has Justin's space or budget, the motivations behind his choices resonate widely. "As a parent, I consistently shield my kids from content I don't deem healthy for their minds," he explains. "We lean towards older films and shows, primarily because of their gentler pacing. They are not frenetically cut or overstimulating." His daughters adore classics like Harriet the Spy and Dennis the Menace, with George of the Jungle the current favorite. Likewise, I found myself gravitating toward 90s media thanks to my son, with The Many Adventures of Winnie the Pooh capturing my attention. The contrast between the narrative pace of 1997's Teletubbies and modern programming is striking.

Justin’s acclaimed video room represents a conscious effort to define screen time, aligning with the 90s parenting ethos. “Back then, people criticized TV for damaging brains, but it had its place in the living room,” he notes. “Now, media pursues us relentlessly, everywhere. In my household, media consumption happens at specific times and places.”



Composite: Getty Images

As concerns mount over the impact of screen time, alongside campaigns advocating smartphone-free childhoods, it's understandable that many of us are reflecting on the era just before everything changed. Some parents, and even some schools, have introduced landlines for children, while a parent group in South Portland in the US lets kids call one another, forming a "retro bubble" against screens. Browsing 90s-parenting reels on Instagram (ironic, I know), I stumbled upon a video of adults and children in a backyard water battle, organized via landline, naturally, their smartphones left in bowls atop a high cupboard. Back when we weren't glued to screens, our summers were often spent in sprawling neighborhood water fights, with mothers joining the action when they appeared with buckets or garden hoses.

Jess Russell strongly values the importance of play. A former primary-school teacher and special-needs coordinator, Jess stays at home with her two children, aged one and three, and promotes learning through play on her Instagram account @playideasforlittles. "I grew up in a rural setting, always outdoors. My mother was a stay-at-home parent, and we did lots of arts and crafts," she shares, striving to replicate that experience for her children. They spend ample time playing in the garden, play board games like Hungry Hungry Hippos, and watch TV as a family.

Part of Jess’s motivation for her current lifestyle stems from her disillusionment with educational directions that steer away from play and towards outcome-oriented systems. She feels fortunate to be at home with her children, a choice more attainable in the 90s when single incomes could usually support housing expenses. I share similar sentiments about working part-time. The nostalgia for the 90s reflects the struggles modern parents face trying to balance work and family time, all while fostering a playful environment.

It boils down to more than just screens; it encompasses connection, family moments, and shared time. "Parenting in the 90s exemplified 'slow' parenting," Jess observes: days weren't packed with scheduled activities. Boredom, emphasizes Melanie Murphy, a mother of two from Dublin who posts on Instagram as "your nostalgic millennial mom's friend", is vital. "Those extended, unstructured periods were a surprise gift. I want that for my children. I don't want an overly scheduled life for them."




When her two- and four-year-olds experience boredom, their imaginations are activated, Melanie recounts. "They construct forts, turn floors into lava, and convert tables into dragon nests. We don old clothes and delve into dirt in search of bugs. They prepare meals and tidy the house… We groove to music and have dance parties. We watch my childhood DVDs on the projector. Sure, it's chaotic compared with structured activities and adult-led plans, but as long as the kids are content, I'm fine with it."

Certainly, kids from the 90s would chuckle at this. One humorous video highlights the absurdities of 90s childhood, featuring a kid trailing after his mother in a changing room and giving himself a haircut in the kitchen. When I ask a friend whether her parenting style resembles that of the 90s, or if she knows someone whose does, she ponders: "Hmm, like sleep training, lots of TV, and burnt pancakes?"

She's spot on. Yet each advocate of 90s parenting I encounter acknowledges the pull of rose-tinted glasses. I ask Melanie what elements of the 90s she'd prefer to leave behind. "Physical discipline. You were taught to 'toughen up' emotionally rather than to process feelings," she says, pointing to practices like harsh sleep training and the "naughty step". The negatives included "secondhand smoke everywhere, mental health neglect… The gender stereotypes were overwhelming, alongside diet culture and ultra-processed food norms." Her mother counted calories with Weight Watchers, and even back then Melanie found herself rebelling with junk food. Best left in the past. She also emphasizes that not every family enjoyed movie nights together; for some children, media consumption went unsupervised, exposing them to highly inappropriate or traumatic content.


At times, the carefree approach of 90s parenting could veer toward negligence. Yet I cherish how "free-range" my childhood was, and Justin shares the sentiment. "My parents weren't always aware of my whereabouts, and that was okay," he recounts. "I got hurt sometimes; kids occasionally break bones, and they learn not to repeat whatever caused the break." (Indeed, I broke a bone or two myself.)




"That type of risk-taking is crucial; it's how kids learn to assess situations. We've sanitized childhood so much lately that we need to reclaim those lessons," he remarks. He believes there has been an overcorrection, and that people "hunger for something freer, something resembling non-fear-driven parenting."

In essence, we are in pursuit of balance. “We are more informed now—about emotions, neurodiversity, health, and nutrition—which undeniably has its merits,” adds Melanie. “It’s not a time machine I yearn for, but a beautiful fusion of the relaxed spirit of 90s parenting blended with today’s emotional intelligence.”

I thought it would be insightful to converse with someone who actually parented in the 90s, so Jess connected me with her mother, Lynn. “I savored those moments with my children, cherishing every hour spent outdoors or with friends,” she reminisces about the long days. She emphasizes that stores closed on Sundays, a simple joy allowing family time. “Many parents today yearn for that simplicity in family bonding.”

“We lived in a modest two-bedroom home and managed just fine… Now, it seems people must meet a certain living standard. That pressure is something I truly regret for them.” When I ask Lynn if there’s anything she admires about today’s parenting, she struggles to pinpoint anything specific. “I genuinely respect the balance modern moms seem to achieve. I never had that. It was undeniably straightforward.”

When I first encountered the concept of 90s parenting, I was hesitant: it felt somewhat sentimental, likely fuelled by millennial nostalgia on social media. Yet if that’s true, why am I writing this? Is it sadness stemming from the fatigue of juggling work and parenting (especially with my son’s struggles to sleep)? Even reminiscing about the 1996 classic Space Jam evokes deep emotions tied to selecting a VHS at the local store. I can’t shake this longing for simpler times, which perhaps indicates a need for more fun in my life. I’m thinking it might be time for a water fight. Who’s ready?

Rhiannon Lucy Cosslett’s book on raising a baby is published by September Publishing (£18.99). To support the Guardian, order your copy at guardianbookshop.com. Delivery charges may apply.


Source: www.theguardian.com

Study: Neanderthal-Inherited Genetic Mutations Decrease Major Muscle Enzyme Activity

An AMPD1 variant inherited from Neanderthals reduces the enzyme’s activity by 25% in lab-produced proteins and by up to 80% in the muscles of genetically modified mice. The variant is present in all sequenced Neanderthals but absent in other species. It entered the modern human gene pool through interbreeding approximately 50,000 years ago and is carried by up to 8% of today’s Europeans.

Macak et al. Research indicates that genetic variants inherited from Neanderthals impair an enzyme essential to muscle performance. Image credit: Holger Neumann/Neanderthal Museum.

The enzyme AMPD1 is crucial for muscle energy production and overall muscle function.

A decrease in its activity due to genetic mutations is the leading cause of metabolic myopathy in Europeans, with a prevalence of 9-14%.

In a recent study led by Dr. Dominik Macak from the Max Planck Institute for Evolutionary Anthropology, researchers compared ancient Neanderthal DNA with modern human genomes.

They discovered that all Neanderthals have specific AMPD1 variants absent in other species.

Enzymes produced in the lab with this variant exhibited a 25% decrease in AMPD1 activity.

In genetically modified mice, this reduction in muscle tissue activity reached 80%, negatively affecting enzyme performance.

Moreover, the study shows that modern humans acquired this variant from Neanderthals who lived in Europe and Western Asia before interacting with modern humans around 50,000 years ago.

Today, the genomes of non-African individuals are composed of approximately 1-2% Neanderthal DNA.

The Neanderthal AMPD1 variant is found in 2-8% of Europeans today, indicating that it has persisted in the gene pool.

“Interestingly, most individuals with these variants do not face serious health concerns,” noted Dr. Macak.

“However, the enzyme does seem to significantly influence athletic performance.”

Analysis of over 1,000 elite athletes across diverse sports showed that those with non-functional AMPD1 are less likely to reach the highest athletic levels.

“Having defective AMPD1 enzymes decreases the chances of achieving elite athletic ability by half,” Dr. Macak said.

While AMPD1 activity appears to have moderate significance in contemporary Western societies, it becomes crucial under extreme physical conditions, such as those faced by athletes.

Researchers highlight the need for studying genetic variation within physiological and evolutionary contexts to grasp biological implications.

“Cultural and technological advancements in both modern humans and Neanderthals may have lessened the necessity for extreme muscle performance,” explains Dr. Hugo Zeberg, a researcher at the Max Planck Institute for Evolutionary Anthropology and Karolinska Institute.

“Understanding how current gene variants influence human physiology can yield valuable insights into health, performance, and genetic diversity.”

The findings were published in the journal Nature Communications on July 10, 2025.

____

D. Macak et al. 2025. Muscle AMPD1 exhibited reduced deaminase activity in Neanderthals compared to modern humans. Nat Commun 16, 6371; doi:10.1038/s41467-025-61605-4

Source: www.sci.news

Potential for Major Earthquakes Beneath North America, Study Indicates

Hidden structural weaknesses beneath the Yukon, Canada, may be primed to trigger an earthquake of at least magnitude 7.5, according to the latest study.

The Tintina Fault, stretching from northeastern British Columbia to central Alaska, has been silently accumulating strain for over 12,000 years. A new investigation of the fault, previously considered relatively harmless, indicates that it remains very much active.

Regrettably, scientists are unable to predict when the next major quake will strike.

“Our findings indicate that the fault is active and continues to build strain,” said Dr. Theron Finley, the lead author of the study published in Geophysical Research Letters, in a statement to BBC Science Focus. “I expect it will eventually rupture again.”

The Tintina Fault is classified as a “right-lateral strike-slip fault,” where two blocks of the Earth’s crust slide horizontally past each other. If one side moves to the right during an earthquake, it’s identified as right-lateral.

Over geological time, one side of the fault has shifted approximately 430 km (270 mi), mostly during the Eocene epoch, roughly 56 to 33.9 million years ago.
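
As a rough sanity check (my own back-of-envelope arithmetic, not a figure from the study), the quoted displacement spread over the Eocene, which textbooks place at roughly 56 to 33.9 million years ago, implies an average long-term slip rate of around 2 cm per year:

```python
# Back-of-envelope average slip rate for the Tintina Fault.
# Inputs: ~430 km of total displacement, accumulated mostly over the
# Eocene epoch (~56 to ~33.9 million years ago). Illustrative only.
displacement_km = 430
eocene_start_myr, eocene_end_myr = 56.0, 33.9
duration_yr = (eocene_start_myr - eocene_end_myr) * 1e6  # ~22.1 million years

# Convert km to mm, then divide by the duration in years.
slip_rate_mm_per_yr = displacement_km * 1e6 / duration_yr
print(f"Average slip rate: {slip_rate_mm_per_yr:.1f} mm/yr")  # ~19.5 mm/yr
```

Real slip is episodic rather than steady, which is precisely why a fault can stay quiet for 12,000 years while strain builds.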

The Tintina Fault extends 1,000 km (600 mi) from northeastern British Columbia to Alaska. – Credit: National Park Service

While minor earthquakes occasionally occur in the region, the Tintina Fault has generally been considered dormant.

“There have been small earthquakes in the 3-4 magnitude range detected along or near the Tintina Fault,” Finley noted. “However, nothing has strongly indicated that a larger rupture is likely.”

This perspective changed when Finley and his team revisited the fault with advanced technology. By integrating satellite surface models with drone-mounted Light Detection and Ranging (LiDAR) data, researchers uncovered hidden seismic activity within the dense Yukon forests.

The landscape revealed fault scarps: long, narrow ridges and steps formed where past quakes displaced material at the surface. These features can span dozens or even hundreds of kilometers, but are typically only a few meters tall and wide.

“In the case of the Tintina fault, these features appear as a series of intriguing mounds,” Finley stated.

By dating these surface formations, researchers determined that the fault has ruptured multiple times over the last 2.6 million years, though no significant earthquakes have occurred in the past 12,000 years.

Fortunately, the region is sparsely populated. However, if the fault does rupture, Finley cautioned that major landslides, infrastructure damage, and impacts on nearby communities would be highly probable.

“We want to emphasize that we don’t have a precise sense of how imminent an earthquake is,” he noted. “Our observations indicate it has been a long time since the last significant quake, but there’s no way to know if one is more likely in the near or distant future.”

Finley says the fault has now been confirmed as active, and the next step is to better estimate how frequently large earthquakes occur along it. That could provide a more reliable timeline, even though scientists cannot forecast exactly when the next rupture will happen.

“Earthquakes don’t necessarily occur at regular intervals, but recurrence estimates can give us a clearer understanding of how often to expect significant events,” Finley explained. “Regardless, when the Tintina fault finally releases, it won’t be inconsequential.”

Read more:

About our experts

Theron Finley is a geologist at the Yukon Geological Survey. He recently obtained a doctorate from the University of Victoria in Canada and has conducted research on active faults in Western Canada, utilizing remote sensing, structural geology, and paleoseismology.

Source: www.sciencefocus.com

Top Bananza! Donkey Kong’s Anticipated Comeback is a Major Smash

It’s hard to picture Donkey Kong without thinking of Nintendo. The iconic, tie-wearing ape, whose barrel-throwing arcade debut kicked off a gaming revolution and helped pull Nintendo back from the brink of bankruptcy, has a firm footing in gaming history. Yet his platformer adventures have been absent for several console generations. Enter Donkey Kong Bananza, marking DK’s first solo journey in over a decade.

Mario has soared through the cosmos and cleverly defeated enemies with a whimsical hat, but DK’s thrilling return taps into primal fury. Utilizing voxel technology similar to that seen in Minecraft, DK’s Switch 2 adventure swaps thoughtful Lego-style construction for joyful chaos, enabling players to obliterate vibrant environments.

Players can smash through walls, floors, and ceilings, dig down to hidden treasures, and create new paths of destruction. It’s a refreshing, chaotic spin on the traditionally structured Nintendo platformers.

“Bananza kicked off when my boss, Onomura, approached our team about crafting a 3D Donkey Kong game,” recalls Kenta Motokura, producer of Donkey Kong Bananza and director of Super Mario Odyssey. He describes it as a full-circle moment, tied to his early experiences playing Donkey Kong on plastic bongos. “When Donkey Kong transitioned to 3D, I began my journey in developing 3D games,” he reflects. “With Onomura’s direction on Donkey Kong Jungle Beat, I gleaned insights about embracing challenges and truly understanding Donkey Kong.”

The question became where Nintendo would take its beloved ape mascot next, given that Donkey Kong’s last major 3D venture was on the Nintendo 64. The team soon focused on DK’s massive, furry hands, gathering wisdom from Mario veterans Shigeru Miyamoto and Takashi Tezuka. “Miyamoto, who created the original game and oversaw subsequent DK titles at Rare, emphasized showcasing Donkey Kong’s power through actions like handclaps.” They tested the voxel technology initially employed in Super Mario Odyssey and believed merging it with Donkey Kong’s destructiveness would create a perfect synergy.




King Kong…DK is back on top. Photo: Nintendo

Motokura and the Super Mario Odyssey team brought vast 3D platforming experience, but game director Takahashi, whose background is primarily in open-world RPGs, faced unique pressure in resurrecting Donkey Kong.

However, even with a talented platform team, the challenge of voxel-based destruction was a first for Nintendo’s Tokyo crew. “There was no blueprint for a game where everything can be destroyed,” explains Takahashi. “We encountered numerous challenges, striving to keep levels enjoyable without disruptions.”

Thankfully, they avoided blind spots with the aid of All-Star Play Testers. “I had Miyamoto check the games periodically,” Motokura shares. “Instead of progressing, he’d get engrossed in smashing one spot over and over. It was great to see; it showed player engagement.”

While many Nintendo enthusiasts associate Mario and Donkey Kong with legendary figures like Miyamoto and Tezuka, a new wave of developers is preparing to carry their legacy forward. “Established developers such as Miyamoto and Tezuka are open to collaborating with younger minds. This exchange of ideas is invaluable,” Motokura highlights. “Up-and-coming talents will continue to nurture Nintendo’s development legacy.”

“Joining this team was a joy, and I embraced the challenge with enthusiasm,” Takahashi reflects. “Nintendo encourages exploration of new, bold concepts. In Bananza, we had the freedom to discover our own shortcuts… leading to an entirely new gaming experience compared to Odyssey.”

What to Play




Time to shred… Tony Hawk’s Pro Skater 3+4. Photo: Iron Galaxy

As a millennial, I find myself reliving nostalgic gaming memories through Tony Hawk’s Pro Skater 3+4. Though it lacks some classic tracks and offers a stripped-down version of the original’s sandbox mode, the thrill of performing tricks across Rio, London, Canada, and Alcatraz is incredibly satisfying. And while the soundtrack drops some early-2000s artists like Papa Roach, additions such as Denzel Curry and Turnstile do a commendable job of bridging the gap.

This time, I opted to play on Nintendo’s latest gem, the shiny Switch 2, and I’m excited about the forthcoming titles for the new console. While this remake may not be held in the same affection as Vicarious Visions’ 2020 remake of the first two games, once you get into the groove, the high-score thrill makes Pro Skater an exhilarating ride.

Available on: Switch 2, PS5, Xbox, PC
Estimated playtime:
20-2,000 hours based on your zeal


What to Read




Defend your rights… After Ubisoft shut down the servers for the online-only racing game The Crew, the “kill the game” movement has begun. Photo: Ubisoft
  • Stop Killing Games, a petition for online media preservation, has garnered 1.2 million signatures and spurred a response from Nicolae Ștefănuță, a vice-president of the European Parliament. The initiative emphasizes consumer rights amid the complexities of ownership when live-service games are terminated. It’s a commendable cause, and surprising that the movement stems from The Crew of all games. For further reading, check out PC Gamer.

  • Missed out on something from the PS5 30th Anniversary Range last year? Fear not, retro PS1-inspired controllers and consoles are set for restock on July 21st. I’ve grown fond of my anniversary controllers and wanted to share the news. Get all the details with Eurogamer.

  • In the aftermath of mass layoffs, some Xbox employees rubbed salt in the wound with two insensitive posts on LinkedIn. One suggested the remaining team members rely on AI for career advice, while another advertised a job posting using AI-generated images. A poignant recap of the situation is available here.

What to Click

Question Block




A shot of serotonin… Astro Bot. Photo: Sony/Team Asobi

Reader P Holck poses this question about bridging generational gaps in gaming.

“I really enjoyed my son’s Civilization III. Now I bought a PlayStation 5 and thought I would play a modern, more active game. But what I tried is simply too difficult! I’m stuck and don’t know how to move forward! Which games do you recommend for players over 70?”

First off, congrats on taking the plunge into gaming! Like discovering a new music genre or entering anime, reconnecting with gaming may initially feel overwhelming. Finding the right genre can be tricky, especially with complex controls and mechanics that seasoned players might take for granted.

Though not action-packed, I’d highly recommend Tetris Effect—a classic block-dropping puzzle adorned with psychedelic visuals, offering a surprisingly deep journey. Action titles like Uncharted 4: A Thief’s End and 2018’s God of War serve as accessible starting points, presenting engaging stories without overwhelming complexity, especially on easier settings.

For a deeper experience, The Witcher 3 is an immersive RPG, and Baldur’s Gate 3 uses turn-based combat, so you can consider each move at your own pace. The Mass Effect trilogy provides a balanced mix of RPG elements and third-person action. Last year’s Astro Bot delivers a vibrant platformer experience. For some thrills, Resident Evil 4 Remake and The Last of Us Part I are modern masterpieces, again with lower difficulty settings for accessibility. Happy gaming!

If you have a question or feedback about the newsletter, feel free to reply or email us at pushingbuttons@theguardian.com.

Source: www.theguardian.com

Major Study Links Nighttime Light Exposure to Heart Disease Risks


Optimizing Darkness in Your Night Environment

Tero Vesalainen/Shutterstock

Exposure to light at night significantly raises the risk of heart disease, according to extensive research.

Various environmental and behavioral signals synchronize the body’s circadian rhythms, the internal clocks that manage physiological functions. However, contemporary lifestyles often disrupt these biological mechanisms, heightening sensitivity to health issues.

Light is a primary regulator of circadian rhythms and has been linked to numerous health implications. For instance, shift workers exposed to nighttime light face a higher risk of heart disease.

Previous studies utilizing satellite data have indicated associations between residents of brightly lit urban areas and heart disease, focusing solely on outdoor light at night. Daniel Windred, from Flinders University in Adelaide, and his team sought to determine if overall light exposure impacts cardiovascular health.

They monitored approximately 89,000 individuals without pre-existing cardiovascular conditions, equipping them with light sensors for a week between 2013 and 2016. “This represents the largest research effort on personal light exposure patterns affecting cardiovascular health to date,” the researchers say.

The sensors captured both natural and artificial light sources, including emissions from mobile phones. Over the subsequent eight-year follow-up period, participants who had experienced the brightest nights showed a 23-56% higher risk of developing cardiovascular disease compared with those exposed to darker nights.

For example, individuals in the highest light exposure category included those who activated overhead lights for an hour from midnight to 6 AM. “This scenario places them within the 90th to 100th percentiles of nighttime light exposure,” Windred noted. He emphasized that the body continues to react to artificial light even after it is turned off, and short exposures can disrupt circadian rhythms.

Researchers accounted for factors such as gender, age, smoking habits, and shift work. They also demonstrated that the connection between light exposure and heart disease risk remained constant, regardless of sleep duration, sleep efficiency, or genetic predisposition.

Interestingly, although women generally have a lower incidence of heart disease than men of the same age, an effect attributed in part to estrogen, exposure to bright nighttime light appears to neutralize this protection. Evidence suggests that women experience greater melatonin suppression in response to bright light, making their circadian systems more sensitive than men’s.

Disruption of circadian rhythms can compromise glucose tolerance, elevating the risk for type 2 diabetes, which is a risk factor for heart disease. Such disruption also influences blood pressure and can increase the risk for abnormal cardiac rhythms due to conflicting signals between the brain and heart.

“The significance of these findings cannot be overstated,” stated Martin Young from the University of Alabama at Birmingham. “As a 24/7 society increasingly disrupts our circadian systems, this study underscores the notable health risks linked to such exposure.”

Windred suggests that individuals strive to maintain a darker nighttime environment. “Optimize your sleep schedule to ensure darkness during bedtime. If you awaken during the night, utilize dim lighting and avoid bright overhead lights.”


Source: www.newscientist.com

Nuclear Fusion Disasters: Why They’re Not a Major Concern

Modern atomic energy technologies primarily utilize nuclear fission. In this process, the nuclei of heavy atoms, such as uranium, are bombarded with neutrons, causing them to split apart and release lighter nuclei along with significant energy.

However, a major drawback of fission energy is that the resultant waste is often far more radioactive than the original fuel, with its hazardous nature persisting for extended periods. Moreover, managing the rate of fission reactions is crucial for ensuring safety.

A failure in this context can lead to catastrophic consequences.

An alternative to nuclear fission is fusion energy. In this process, lighter elements, specifically isotopes of hydrogen, merge to form heavier nuclei, releasing substantial energy in the process.

This is the fundamental reaction that powers stars, including our sun.
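
For concreteness, the reaction most terrestrial fusion programmes aim to harness is deuterium-tritium fusion (the figures below are standard textbook values, not taken from this article):

```latex
% Deuterium-tritium fusion: the easiest fusion reaction to ignite.
% About 80% of the 17.6 MeV released is carried away by the neutron.
{}^{2}_{1}\mathrm{H} \;+\; {}^{3}_{1}\mathrm{H}
\;\longrightarrow\;
{}^{4}_{2}\mathrm{He}\,(3.5\ \mathrm{MeV}) \;+\; n\,(14.1\ \mathrm{MeV})
```

Two heavy hydrogen isotopes fuse into inert helium plus a fast neutron; it is the neutron’s energy that a power plant would capture as heat.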

The byproducts of fusion are generally safe: the primary product is inert helium, and although some mildly radioactive substances are also generated, they are short-lived. The challenge with fusion energy lies in achieving the conditions required to initiate the reaction.

It requires temperatures in the millions of degrees, along with confinement of ultra-high-pressure fuel (usually by a magnetic field), which presents significant technical hurdles.

Like any industrial process, there are inherent risks, but the nature of a fusion reactor means that any failure would quickly halt energy production.

As a result, fusion energy “disasters” are considerably less probable than conventional industrial accidents; indeed, they lack the potential for the environmental and ecological crises associated with fission energy meltdowns.


This article responds to a question emailed in by Brandon Harris: “What would a fusion energy disaster look like?”

Feel free to email us your questions at questions@sciencefocus.com or reach out via Facebook, Twitter or Instagram (please remember to include your name and location).

Explore our ultimate Fun fact page for more fascinating science content.


Read more:


Source: www.sciencefocus.com

Study Suggests Major Challenges Ahead for Electric Car Boom in Five Years

The electric vehicle (EV) revolution could hit major roadblocks within five years, according to new research published in Cell Reports Sustainability.

The accelerating demand for lithium, an essential element of EV batteries, is expected to outstrip domestic supply in major markets by the decade’s end.

This analysis highlights China, the US, and Europe, which collectively represent 80% of current EV sales. Researchers caution that without significant changes, these regions may not fulfill their lithium requirements from local sources by 2030, leading to an increased reliance on imports and a heightened risk of global shortages.

“Many previous studies have examined the lithium necessary for low-carbon transitions,” said Dr. André Månberger, a co-author of the new study, in an interview with BBC Science Focus.

“The issue is that often we compare projected lithium demand with current mining rates and existing reserves. However, there’s a gap in the existing literature concerning mining feasibility.”

Globally, EV sales surpassed 17 million in 2024, marking a 25% increase from the previous year.

The International Energy Agency forecasts that electric vehicles could represent 40% of all car sales by 2030. However, this expansion hinges on a stable supply of lithium carbonate equivalents (LCE).

The study indicates that by 2030, annual LCE demand will reach 1.3 million tonnes in China, 792,000 tonnes in Europe, and 692,000 tonnes in the US. Yet even if all current and planned mining projects are counted, domestic supply remains inadequate: China could produce up to 1.1 million tonnes, the US 610,000, and Europe only 325,000.
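
Using the figures quoted above, the implied domestic shortfalls can be tallied in a few lines (this is simple arithmetic on the reported numbers, not a reproduction of the paper’s model):

```python
# Projected 2030 lithium carbonate equivalent (LCE) demand and maximum
# domestic supply, in tonnes per year, as quoted from the study.
demand = {"China": 1_300_000, "Europe": 792_000, "US": 692_000}
supply = {"China": 1_100_000, "Europe": 325_000, "US": 610_000}

# Shortfall = demand that cannot be covered by domestic mining.
shortfall = {region: demand[region] - supply[region] for region in demand}
for region, gap in shortfall.items():
    print(f"{region}: {gap:,} t/yr shortfall ({gap / demand[region]:.0%} of demand)")
```

The arithmetic makes the asymmetry plain: Europe would need to import well over half its projected demand, while China’s gap, though larger than the US’s in absolute terms, is a far smaller share of its needs.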

This shortfall could intensify global competition for lithium, primarily sourced from Australia, Chile, and Argentina. In 2023, these three countries accounted for nearly 80% of the world’s lithium.

Almost 50% of the world’s lithium was mined in Australia in 2023.

China currently dominates the global lithium market, and an increase in its imports could negatively impact other buyers. Researchers found that should China’s imports rise by 77%, the US and European imports could drop by 84% and 78%, respectively.

“Commodity trading tends to have a lot of continuity and path dependence,” Månberger explains.

“This is due to the established supply chain, contracts, and overall inertia in the market.”

Nonetheless, there are reasons for optimism. Increasing lithium prices may drive investments in new mining initiatives and motivate manufacturers to create more efficient battery technologies. Alternatives like sodium-ion batteries could also contribute to a more diverse market.

In the long term, recycling could assume a more substantial role. As first-generation EVs reach the end of their lifespans in the 2030s, materials extracted from older batteries could mitigate the need for new lithium extraction.

“I’m very optimistic,” says Månberger. “Historically, while it’s often straightforward to forecast potential bottlenecks and supply risks, innovations tend to emerge unpredictably when these challenges arise.”

Read more:

About our experts

André Månberger is a senior lecturer in Environmental and Energy Systems Studies at Lund University, Sweden. He leads the Mistra Mineral Governance Research Programme, initiated in 2024, which focuses on the rising demand for critical raw materials and on conflicts of interest in the low-carbon transition.

Source: www.sciencefocus.com

Could the Competition Among Microscope Manufacturers Spark the Next Major Breakthrough?

Feedback is New Scientist’s sideways look at the latest developments in science and technology. Share items that might captivate readers by emailing feedback@newscientist.com.

Get Ready…

Attention athletics fans, there’s an intriguing new competition to check out: Sperm Race.

It’s been reported that male fertility is on the decline, with reduced sperm motility (swimming speed) a significant contributing factor. To raise awareness, a teenage founder has introduced sperm racing as a sport. As they put it: “We’re creating the first racecourse for sperm: two competitors, two samples, one microscopic finish line.”

Their site showcases “microscopic racetracks” that mimic reproductive systems, using “high-resolution cameras” to “track all microscopic movements.” “It’s all streamed live,” they claim (a choice of phrase Feedback assumes is deliberate), with the victor being “the first sperm to cross the finish line, confirmed via advanced imaging.”

The inaugural race on April 25th featured entries from two California universities. Readers may wonder why Feedback is only covering this now; it’s due to a twist in the tale after the event.

Unfortunately for the organizers, journalists such as River Page, a reporter at The Free Press, revealed that “the winner was predetermined. The ‘race’ was computer-generated.”

The issue is that microscopes simply can’t work that way. A track long enough for sperm to race along is far too long to keep them in frame. In film, a camera operator can follow Tom Cruise sprinting along the roof of a moving train; keeping a microscope in focus is challenging even when the cells are nearly stationary.

The creators apparently ran a real race in a private setting, relying on computer-generated imagery to “depict” sperm racing for paying spectators.

This has led to speculation that a second round of the sperm race is improbable. I can’t help but recall how millions relish completely fabricated “sports entertainment” like wrestling, and outcomes in football often hinge on which teams have the wealthiest billionaires. Perhaps sperm racing could indeed be the next big sensation.

Water-Based Cooking

Feedback loves to explore the latest food trends, from cutting carbs to eating only lean meats, salt, and water! There’s even talk of “Air Protein,” which involves “microbial organisms that harness carbon dioxide.”

Just when I thought there couldn’t be more to discover, I stumbled upon “water-based cooking.” Given that the human body is roughly 60% water, my initial thought was that this might just be another way to say “cooking.” However, I later uncovered articles titled “Food Trends and Science – Why Cooking in Water May Help Slow Aging” and “What is Water-Based Cooking? And Why is it Healthier?”. Time to delve deeper.

Essentially, water-based cooking means using water rather than oil wherever possible: think boiling, stewing, or steaming instead of stir-frying or roasting. This reduces the formation of harmful advanced glycation end products (AGEs), compounds found in the crispy bits of fried foods and linked to health complications, which is why water-based cooking enthusiasts steer clear of those.

Driving this trend is Michelle Davenport, a UCSF- and NYU-trained nutrition scientist and founder of a digital children’s food company. She educates her followers on Instagram on managing metabolic health through water-based cooking inspired by family recipes.

Her TikTok posts read like this: “You’ve switched to water-based cooking, and now your skin is clear, your digestion is thriving, and you recover from illness rapidly.”

Feedback suspects this all hinges on fairly minor details, but it fits perfectly within wellness culture: if you’re not in peak health, it must surely be your own fault. Regardless, we find ourselves empathizing with Elle from Bruski, who aptly stated: “It’s just soup. They’re making soup.”

Pizza Insights

We sought examples of “obvious” scientific findings: studies whose conclusions go no further than what one might already have guessed. The first involved research indicating that an SUV poses a greater risk to pedestrians than a compact car.

In response, reader Roger Eldem shared a collection of findings that were decidedly unsurprising. One notable study, led by Steven Defroda and published in the Journal of Knee Surgery, stated: “NFL players sustain a higher incidence of knee extensor tears after brief rest periods compared with normal intervals.” Alternatively, see the press release here. This essentially confirms that NFL players are more prone to knee injuries after shortened rest. Well, yes.

Eldem’s second intriguing find came from research published in Nutrients, led by Iizuka, with the captivating title: “The Type of Food, Not the Sequence, Influences Meal Duration, Chewing Frequency, and Pace.”

The study examined whether specific food types are eaten more quickly, something that could contribute to obesity. A related article on MedicalXpress states: “Studies reveal that pizza is consumed more rapidly than meals that require chopsticks.” Clearly, food that takes more work to eat takes more time to eat.

Got a story for Feedback?

Send your stories to Feedback at feedback@newscientist.com. Please include your home address. Past Feedback columns can also be found on our website.

Source: www.newscientist.com