Parallel channels known as linear dune gullies can be observed within some of Mars' dunes. Contrary to what their name suggests, these gullies are frequently quite sinuous. It was previously believed that these landforms were created through debris flow processes involving liquid water. However, recent satellite imagery has revealed that they are active during the local spring, driven by processes involving carbon dioxide ice. During the Martian winter, a layer of carbon dioxide ice accumulates on the dunes, and blocks of it break off near the crest as temperatures rise in early spring. In new experiments conducted in the Mars Chamber, planetary researchers from Utrecht University, the University of Le Mans, the University of Nantes, the Grenoble Institute of Astrophysics, and the Open University have demonstrated that linear dune gullies form when these blocks of carbon dioxide ice slide down or burrow into the sandy slopes of dunes, displacing the nearby sand as they move downhill with considerable force. This burrowing action is driven by a powerful gas flow generated by the sublimation of the carbon dioxide ice as it transitions into carbon dioxide gas. Sliding blocks produce shallow channels, while burrowing blocks excavate the deep, sinuous channels seen in Martian dunes.
Two examples of Martian dunes with linear dune gullies: (a) linear dune gullies in the dune field of Gall Crater; (b) a linear dune gully in the dune field of an unnamed crater in the center of Hellas Planitia. Image credit: Roelofs et al., doi: 10.1029/2024GL112860.
Linear dune gullies are striking and enigmatic landforms found in the mid-latitude sand dune regions of Mars.
Despite their name, these parallel and often meandering channels, characterized by sharp bends, limited source areas, distinct banks, and hole-like channel terminations, have no equivalent on Earth.
They differ significantly from the classic gully topography found on steep slopes on both Mars and Earth, which typically features an erosional alcove, channel, and depositional apron, and is often larger than linear dune gullies.
“In our simulations, we observed how high gas pressures cause the sand to shift in all directions around the blocks,” stated Loneke Roelofs, a researcher at Utrecht University and lead author of the study.
"Consequently, the blocks become lodged in the slope and trapped within cavities, surrounded by small ridges of settled sand."
“However, the sublimation process persists, leading to continued sand displacement in all directions.”
"This phenomenon drives the block to gradually descend, resulting in a long, deep gully flanked by small sand ridges on either side."
"This is precisely the kind of gully we find on Mars."
In their research, Dr. Roelofs and colleagues combined laboratory experiments, in which blocks of carbon dioxide ice slid down sandy slopes under Martian atmospheric pressure, with observations of the linear dune gullies on the giant dune in Russell Crater.
“We experimented by simulating dune slopes of varying steepness.”
“We released chunks of carbon dioxide ice down a slope and observed the outcomes.”
“Once we discovered an appropriate slope, we began to see significant effects. The carbon dioxide ice chunks started to penetrate the slope and move downwards, resembling burrowing moles or dune sandworms. It was quite an unusual sight.”
"But how exactly do these ice blocks originate? They form on the desert dunes of the mid-latitudes of Mars' southern hemisphere."
“During winter, a layer of carbon dioxide ice develops across the entire surface of the dunes, reaching thicknesses of up to 70 cm. As spring arrives, this ice begins to warm and sublimate.”
“The last remnants of the ice persist on the shaded side of the dune’s summit, where blocks will break off once temperatures rise sufficiently.”
"When a block reaches the base of the slope and halts its movement, sublimation continues until all the carbon dioxide is gone, leaving behind a sand-filled cavity at the dune's base."
The study was published on October 8th in Geophysical Research Letters.
_____
Loneke Roelofs et al. 2025. Particle transport driven by explosive sublimation causes blocks of CO2 to slide and burrow, forming winding “linear dune valleys” in Martian dunes. Geophysical Research Letters 52 (19): e2024GL112860; doi: 10.1029/2024GL112860
Cardamom (Elettaria cardamomum) seed extract, notably its primary bioactive component, 1,8-cineole, has been highlighted in recent research for its potential as an antiviral agent by enhancing the production of antiviral proteins known as type I interferons.
Herbal remedies have long been utilized to address various health conditions, including viral infections.
Medicinal herbs and plants are abundant sources of bioactive substances and have been incorporated into antiviral products by pharmaceutical companies.
These substances interfere with different stages of various viruses’ life cycles and help modulate the body’s immune response to viral threats.
Recent research by Takeshi Kawahara and his team at Shinshu University suggests that cardamom seed extract might possess formidable antiviral properties.
“Even prior to the emergence of the recent coronavirus, we were investigating substances that could help prevent viral infections in daily life,” Dr. Kawahara stated.
“The pandemic has amplified public interest in the antiviral qualities of food, providing us more avenues to pursue this research.”
In earlier investigations, the researchers discovered that cardamom seed extract effectively prevented influenza virus infections.
The latest study involved conducting experiments on human lung cells, specifically A549 cells, treated with cardamom seed extract to simulate viral infection processes and better understand its effects on the production of antiviral molecules.
They found that cardamom seed extract, along with its key bioactive component, 1,8-cineole, activates intracellular nucleic acid sensors that recognize viral DNA and RNA.
These sensors trigger the production of various cytokines, which impact the virus at different phases of infection.
In this instance, treatment with cardamom seed extract or 1,8-cineole increased production, via these intracellular nucleic acid sensors, of a specific class of cytokines known as type I interferons, which are crucial to the body's defense against viral infections.
Given these findings, the researchers expressed significant interest in the potential therapeutic applications of their results.
“Traditionally, cardamom has been widely recognized as a medicinal spice, and based on our findings, we aspire to explore its use as an antiviral agent to combat various viral infections,” Dr. Kawahara noted.
“We hope this research sheds new light on the antiviral properties of foods and inspires further exploration of various food components that may aid in preventing viral infections in everyday life.”
These findings were published in the August 2025 issue of Foods.
_____
Abdullah Al Sufian Shuvo et al. 2025. Type I interferon-enhancing effect of cardamom seed extract via intracellular nucleic acid sensor regulation. Foods 14 (15): 2744; doi: 10.3390/foods14152744
Ancient volcanic eruptions on Mars may have led to ice deposits near the planet’s equator
Ron Miller/Science Photo Library
The hottest regions on Mars conceal an unexpectedly thick layer of ice beneath their surface, and researchers may have worked out where it came from. The water may have traveled from the planet's interior via unusual volcanic eruptions billions of years ago, and it could be a vital resource for future human expeditions.
While Mars is known for its polar ice caps, recent radar data from orbiting satellites indicates that ice also exists in equatorial zones. “There’s a frozen layer at the equator, which is curious given that it’s the warmest area of the planet,” says Saira Hamid from Arizona State University. At high noon, temperatures around the equator can soar to approximately 20°C (68°F).
Hamid and her team ran simulations of volcanic activity on Mars, revealing that explosive eruptions could have lofted water from the interior into the atmosphere over extensive periods. Ancient Mars had a denser atmosphere in which that water could freeze and fall as snow, producing the ice layers observed today. "This is a story that intertwines fire and ice," adds Hamid.
These eruptions would have differed substantially from those on Earth. Mars' reduced gravity allows plumes of volcanic ash, water, and sulfur to ascend as high as 65 kilometers above the surface, and under certain atmospheric conditions during eruptions, even reach space.
As the snow accumulates, it compacts into dust-laden ice layers, shielded by a blanket of volcanic ash. This covering prevents the ice from sublimating away and has preserved it to the present day.
"The potential for such ice-rich deposits has puzzled many," comments Tom Watters from the Smithsonian Institution in Washington, DC. A notable puzzle is the massive Medusae Fossae Formation near Mars' equator. "If the water suspected in the Medusae Fossae Formation were to melt, it could fill the Great Lakes. That's a substantial volume of water."
Another theory for the ice’s formation suggests that Mars’ axial tilt may have changed drastically over time, potentially shifting equatorial areas to pole-like conditions. “However, these volcanic eruptions are sufficient to generate ice without requiring shifts in axial tilt,” Hamid pointed out. “It’s the simpler explanation.”
Equatorial regions are also prime sites for landing missions to Mars, because the planet's thin atmosphere is thicker over these low-lying areas, helping to decelerate landers approaching the surface. The water there could be crucial for future human missions, although initial missions may not exploit this resource; subsequent landings could benefit from the ice.
“On our inaugural trips, we intend to carry plenty of water, just in case we misinterpret our radar readings,” says Watters. “Without enough water, venturing out with only a shovel expecting to strike water is unwise. Bring a shovel, but also ensure you have sufficient water.”
Homo heidelbergensis on the ancient banks of the River Thames in modern-day Swanscombe, England
Natural History Museum/Science Photo Library
This is an excerpt from Our Human Story, a newsletter focused on the advancements in archaeology. Subscribe to receive it directly to your inbox each month.
When contemplating regions that are challenging for human habitation, we often envision extreme environments: the Sahara Desert, the Arctic, and the peaks of the Himalayas. While the British Isles may not be as severe, they posed significant challenges for ancient inhabitants.
A recent study I came across in September examined some of the earliest signs of human presence in Britain. The occupations highlighted in this study date back over 700,000 years, which is relatively recent when considering the migration patterns of early humans out of Africa. For instance, these early adventurers reached Indonesia quite swiftly but took longer to make their way to England.
To put numbers to this timeline: the human lineage arose in Africa around six to seven million years ago. The oldest widely acknowledged evidence of humans outside Africa comes from Dmanisi, Georgia, where Homo erectus remains dating back 1.8 million years were uncovered. These ancient relatives appear to have expanded their range over time, eventually reaching places as distant as Java, Indonesia.
Nevertheless, the earliest evidence of human populations in Britain emerges within the last million years, indicating a significant gap.
Some scientists argue that hominins left Africa much earlier, which would imply an even larger delay. For instance, stone tools from Xihoudu in China have been dated to 2.43 million years ago, and artifacts from Shangchen to 2.12 million years ago. Over the last five years, I have covered reports of Jordanian tools believed to be over 2 million years old, as well as Indian artifacts thought to date back 2.6 million years. The validity of these claims remains contentious, since it is debated whether the objects are genuine human tools or merely stones shaped by natural forces, but the number of discoveries is growing, and I won't be surprised if firmer evidence surfaces before long.
Regardless, it seems that settling in Britain was a gradual process for our ancient ancestors.
Farewell, Clear Skies
Alternatively, perhaps early humans arrived, took one look at the environment, and decided against settling without leaving a trace. Although the UK’s climate is mild in terms of its lack of extreme heat or cold, its gloomy weather and frequent rains present unique challenges.
During discussions about the British climate with Nina Jablonski from Penn State University, she remarked that in the UK, "the harsher the weather, the lower the UV rays, and the higher the seasonality." Essentially, it's jarringly overcast. Unless you venture to polar regions, finding a place with less sunlight is quite rare.
This pattern persists even today, and there have been even colder periods. Since the onset of the Pleistocene epoch 2.58 million years ago, the climate has fluctuated between ice ages and warmer interglacial phases. We've been in an interglacial for the last 11,700 years; during the glacial phases, polar ice sheets expanded south, enveloping vast regions of Britain.
Historically, evidence of ancient humans predominantly comes from warmer interglacial phases, but that narrative has shifted recently.
Research has focused on excavations at Old Park, adjacent to Canterbury in southeast England. In the 1920s, this area was home to Fordwich Pit, a quarry that yielded numerous stone tools. Since 2020, Dr. Alastair Key from the University of Cambridge has led excavations in the region.
His team reported their initial findings in 2022: 112 artifacts from layers dated to between 513,000 and 570,000 years old. My colleague Jason Arunn Murugesu noted at the time that these artifacts represented the oldest of their kind discovered in Britain and Europe.
Three years later, Key's team extended the dig and uncovered even older tool-bearing layers, potentially pushing the hominin presence back to between 607,000 and 773,000 years ago.
Additionally, the researchers found two more recent layers with artifacts dating back to 542,000 and 437,000 years ago, coinciding with the earlier glacial periods.
This indicates that hominins occupied Old Park multiple times, even during the harshest climatic moments.
Ancient footprints uncovered in Happisburgh, England
Simon Parfitt
Heading North
In a broader perspective, while Old Park isn’t the earliest evidence of humankind in the British Isles, it comes very close. The oldest known evidence, however, has unfortunately vanished.
In 2013, while exploring a beach in Happisburgh, eastern England, researchers stumbled across 49 footprints preserved in layers of silt exposed by erosion. Sadly, these footprints were washed away weeks later, but archaeologists documented them and verified they were between 850,000 and 950,000 years old.
Happisburgh has also yielded findings of stone tools exceeding 780,000 years in age, while nearby Pakefield boasts artifacts dating to approximately 700,000 years ago. In stark contrast, the oldest human remains were found in Boxgrove, southeast England, dating back merely 500,000 years.
Of course, the archaeological record remains incomplete, making these sites only representative samples. In 2023, Key and colleague Nick Ashton suggested that humans might have already been in northern Europe as early as 1.16 million years ago. With fresh evidence emerging from Old Park, this date might need reconsideration.
And herein lies the mystery: Who were the ancient humans capable of surviving the often brutal climate of Britain?
Although Homo erectus seems to have been the first to venture out of Africa, concrete evidence of its presence in Europe is limited. Tools dating back 1.4 million years have been unearthed at Korolevo, Ukraine, but no hominin remains were found there. Similarly, I reported earlier this year on fragments of facial bone from Spain, dated to between 1.1 and 1.4 million years ago and tentatively attributed to "Homo aff. erectus."
Northern Spain was also home to another species, Homo antecessor, identified from cave deposits dated to between 772,000 and 949,000 years ago.
The Boxgrove hominins, on the other hand, are thought to belong to a distinct species, Homo heidelbergensis. Its classification is contentious; the species likely occupied Europe for hundreds of thousands of years, yet archaeological sites clearly linked to it remain scarce.
How these species interrelated, along with later groups like us and Neanderthals, remains a mystery. Consequently, the identities of the early Britons are still shrouded in uncertainty, fittingly, considering the cloudy weather.
Keith Thomas (right) was able to control other people’s hands.
Matthew Ribasi/Feinstein Institutes for Medical Research
A paralyzed man can now move and sense through the hands of others as if they were his own, thanks to an innovative "telepathic" brain implant. "We've established a mind-body connection between two distinct individuals," explains Chad Bouton from the Feinstein Institutes for Medical Research in New York.
Bouton theorizes this method could serve as a rehabilitation tool following spinal cord injuries, enabling paralyzed individuals to collaborate and potentially allowing shared experiences from a distance.
Bouton and his team collaborated with Keith Thomas, a man in his 40s who became paralyzed from the chest down after a diving accident in July 2020, losing all movement and sensation in his hands.
In a prior study in 2023, researchers inserted five sets of small electrodes into the part of Thomas’s brain responsible for movement and sensation in his right hand, enabling them to monitor his neural activity through a device affixed to his skull.
By processing these signals through a computer equipped with an artificial intelligence model, the researchers deciphered the neural activity and relayed signals wirelessly to electrodes on Thomas’ forearm, prompting muscle contractions and relaxations that allowed him to move his hand. Thomas also used force sensors on his hands, transmitting signals back to his brain implant, thereby creating a sense of touch. Consequently, he was able to use his mind to pick up and feel objects in his hands for the first time in years.
Now, the team has adapted a similar system that enables Thomas to control and sense through the hands of others. In one experiment, a non-disabled woman was fitted with forearm electrodes and numerous force sensors on her thumb and index finger. Although she did not attempt to move, Thomas was able to open and close her hand by merely imagining the action.
He could also perceive the sensation of her fingers gripping a baseball, a soft foam ball, and a firmer ball in his own hand, distinguishing between them based on their hardness while blindfolded. “It definitely feels strange,” Thomas remarked. “You’ll eventually get accustomed to it.”
Though Thomas could only identify the different balls with 64% accuracy, Bouton believes this figure could be improved by optimizing the number and placement of the force sensors. Thomas also could not discern the shapes of the balls, but Bouton is hopeful that additional brain electrodes and force sensors might enable him to recognize various objects.
In another similar trial, Thomas assisted a paralyzed woman named Kathy DeNapoli in picking up a can and drinking from it, a task she struggled to perform independently due to limited finger movement. “It was genuinely remarkable, how you can assist someone simply by thinking about it,” Thomas expressed.
Electrodes implanted in Keith Thomas’ brain are wired to a computer
Matthew Ribasi/Feinstein Institutes for Medical Research
After several months of working with Thomas, DeNapoli's grip strength nearly doubled, according to Bouton. DeNapoli's paralysis is not severe enough for invasive brain surgery of her own to be ethically justifiable. While similar gains in grip strength can be achieved through conventional treatments such as electrical muscle and spinal cord stimulation, Thomas and DeNapoli found collaborating far more appealing than rehabilitating alone, Bouton added.
“Just conversing about things like, ‘How’s your weekend going?’ can be beneficial. It enhances your self-esteem and theirs,” Thomas states. Bouton shared that the team intends to explore this approach with more individuals next year.
Rob Tyler, who has paralysis and is a lay member of the scientific committee of the spinal cord injury charity Inspire Foundation, sees potential value in this method for certain paralyzed patients.
“I view this as a convenient option,” he states. “It’s enjoyable to collaborate with other patients who likely share similar experiences. It can greatly enhance someone’s quality of life.” He emphasized that finding the right combination of people with compatible outlooks and motivations will be critical.
Bouton admits numerous ethical concerns regarding who could benefit from this method must be addressed before it can receive broader medical approval, which he aims to achieve within the next decade.
Nonetheless, Bouton asserts that such technology may have applications beyond medical use, such as allowing non-disabled individuals to remotely control or experience sensations through others. “This could represent a new frontier for human connection,” he suggests.
However, it opens up a plethora of ethical dilemmas. "Is it beneficial or detrimental for society if people can control and feel through others?" questions Harris Akram from University College London Hospital. "I can envision using your body to harm another individual, or controlling someone to perpetrate a crime, and then claiming, 'That wasn't me.'"
Greenland's bedrock is on the move, a phenomenon attributed to plate tectonics and to rock rebounding as the massive ice sheets above it melt, relieving the pressure on the ground below. A new study published in the Journal of Geophysical Research: Solid Earth shows that this pressure has been decreasing in recent years due to significant ice melt in Greenland, compounded by the lingering influence of the colossal ice masses that have melted since the peak of the last ice age around 20,000 years ago. As a result, the entire island has shifted northwest by approximately 2 centimeters annually over the past two decades.
Horizontal land movement observed by 58 GNET stations in Greenland. Image credit: Longfors Berg et al., doi: 10.1029/2024JB030847.
“Overall, this indicates that Greenland is gradually decreasing in size; however, with the accelerated melting currently observed, this could potentially change,” stated Dr. Danjal Longfors Berg, a postdoctoral researcher at the Technical University of Denmark and NASA’s Jet Propulsion Laboratory.
“The geophysical processes influencing Greenland’s structure are being exerted in various directions.”
“The region actually expanded during this timeframe, as the melting ice over the past few decades caused Greenland to extend outward and resulted in uplift.”
“Simultaneously, we are observing shifts in the opposite direction: Greenland is both rising and contracting due to alterations in the ancient ice mass associated with the last Ice Age and its conclusion.”
This marks the first detailed description of Greenland's horizontal bedrock motion.
"We have constructed a model illustrating movement over an extensive timescale, from around 26,000 years ago to the present," remarked Dr. Longfors Berg.
“Additionally, we are utilizing highly precise measurements from the past 20 years to scrutinize current movements.”
“This allows us to measure movement with great accuracy.”
The new measurements rely on data gathered from the 58 GNSS (GPS) stations of the Greenland GNSS Network (GNET) distributed across Greenland.
These stations monitor Greenland’s overall position, changes in bedrock elevation, and the dynamics of the island’s contraction and expansion.
"For the first time, we have measured with such precision how Greenland is evolving," commented Dr. Longfors Berg.
“It was previously believed that Greenland was primarily being stretched by dynamics related to recent ice melt.”
“However, unexpectedly, we also discovered extensive areas where Greenland is converging or contracting as a consequence of this movement.”
This new research offers valuable insights into the potential impacts of accelerated climate change in the Arctic, as observed in recent years.
"Understanding the movements of land masses is crucial," asserts Dr. Longfors Berg.
“While they are certainly of interest to geosciences, they also hold significance for surveying and navigation, as even Greenland’s fixed reference points are shifting over time.”
_____
D. Longfors Berg et al. 2025. Estimation and attribution of horizontal land motion measured by the Greenland GNSS network. JGR: Solid Earth 130 (9): e2024JB030847; doi: 10.1029/2024JB030847
Skeleton of a woman holding a baby in her left arm, interred in an Anglo-Saxon cemetery in Scremby, England
Dr Hugh Willmott, University of Sheffield
Researchers have now, in effect, carried out pregnancy tests on women who died centuries ago.
For the first time, scientists have identified levels of estrogen, progesterone, and testosterone in remains of women from the 1st to the 19th century. Some of these women were entombed with their unborn children. This revelation indicates that historic bones and teeth can retain identifiable traces of specific sex hormones, which might aid in discerning which individuals at archaeological sites were pregnant or had recently given birth at the time of their demise, according to Amy Barlow from the University of Sheffield, UK.
“The physiological and emotional impacts of pregnancy, miscarriage, and childbirth carry profound significance for women, yet they remain largely unexplored in archaeological records,” she notes. “This technique could revolutionize how we comprehend the reproductive narratives of ancient populations. We’re genuinely excited about it.”
Establishing pregnancy in ancient individuals can be challenging, particularly if the fetus has not yet developed a visible skeleton. Even second- and third-trimester fetuses may be overlooked, because their bones can resemble those of the mother's hands, which were often placed on the abdomen during burial.
Contemporary pregnancy tests measure levels of hormones such as human chorionic gonadotropin (hCG) in blood or urine. However, hCG degrades rapidly, leaving minimal evidence in the body.
In contrast, progesterone, estrogen, and testosterone can persist in tissues for extended periods. Recent studies have demonstrated that these steroid hormones, which circulate in human blood, saliva, and hair, can even be detected in samples from long-buried Egyptian mummies.
To explore the likelihood of identifying ancient pregnancies, Barlow and her team analyzed rib fragments and one neck bone from two men and seven women interred in four British cemeteries. They also examined teeth from another male.
Two of the women had fetal remains discovered within them, and another two were buried alongside their newborns. The sex of the others was established through DNA analysis.
The research team ground each sample into powder and employed chemical techniques to extract the steroid hormones. Laboratory tests subsequently identified the estrogen, progesterone, and testosterone levels in each of the 74 samples.
Estrogen was found in only four samples, with no discernible pattern. This may be because it breaks down more quickly than progesterone or testosterone and may not accumulate as efficiently in tissues.
However, elevated levels of progesterone were found in the spines of young women, buried between the 11th and 14th centuries, who died while carrying full-term fetuses. A pregnant woman interred later, in the 18th or 19th century, also exhibited elevated progesterone in her ribs. Moderate progesterone levels were noted in the dental plaque of two women buried with their infants in the 5th or 6th century.
Interestingly, no testosterone was detected in the bones or teeth of these four women. However, one woman who was buried with her premature infant had trace amounts of testosterone in her dental plaque. In contrast, three unrelated women from 8th- to 12th-century sites and Roman tombs showed testosterone in all layers of their ribs and teeth.
Low testosterone levels are known to play a crucial role in women’s health, so its discovery in these samples isn’t unexpected, Barlow states. “However, the absence of testosterone may indicate that she was recently or currently pregnant at the time of her death,” she adds.
“This intersection of archaeology and hormone science is exhilarating and unforeseen,” states Alexander Komninos from Imperial College London. “These methods will enhance our ability to detect pregnancy in human remains with greater precision, providing deeper insights into ancient pregnancies.”
Nevertheless, while the findings show promise, additional research is essential to clarify many aspects, according to Barlow. For instance, moderate progesterone levels were frequently found in the bones and inner teeth of men, and the reason for this remains unclear, she comments. "Interpretation is quite cautious at this juncture."
'Extremely serious' cyber-attacks have surged by 50% over the past year, with UK security agencies now addressing a new nationally significant attack every two days, according to the latest data from the National Cyber Security Centre (NCSC).
In what officials are calling a "call to arms," national security leaders and ministers are urging all organizations, from small businesses to major corporations, to develop contingency plans for the possibility that their IT infrastructure "is compromised" and that tomorrow all their screens "go blank."
The NCSC, a division of GCHQ, stated in its annual report released on Tuesday that a "highly sophisticated" China, along with a "competent yet reckless" Russia, as well as Iran and North Korea, represent the primary state threats. The rise in attacks is also fueled by ransomware campaigns from profit-driven criminals and by society's growing dependence on technology, which creates more potential targets for hackers.
Chancellor Rachel Reeves, security minister Dan Jarvis, and the technology and business secretaries, Liz Kendall and Peter Kyle, have contacted the leaders of hundreds of the UK's largest companies, urging them to elevate cyber resilience to a board-level concern and cautioning that hostile cyber activities in the UK are becoming "more intense, frequent, and sophisticated."
“We must not make ourselves an easy target,” stated Anne Keast-Butler, GCHQ’s director. “It’s critical to prioritize cyber risk management, integrate it into governance, and set a tone from the top.”
The NCSC dealt with 429 cyber incidents in the year to September, nearly half of which were deemed nationally significant, a figure that has more than doubled in the last year. Among these, 18 incidents were categorized as "very serious," meaning they profoundly affected government, essential services, the public, or the economy. Many of these were ransomware attacks, with Marks & Spencer and the Co-op Group among those heavily impacted.
“Cybercrime poses a significant threat to our economy’s security, businesses, and the lives of individuals,” Jarvis remarked. “We are working tirelessly to combat these threats and support organizations of all sizes, but we cannot do this alone.”
The NCSC refrained from commenting on reports suggesting it is investigating possible Russian involvement in the severe attack on Jaguar Land Rover, which has halted production. This report indicated that Russia is encouraging unofficial “hacktivists” to target the UK, the USA, as well as European and NATO nations.
Last month, a cyberattack disrupted passenger services at numerous European airports, including London Heathrow.
Photo: Isabel Infantes/Reuters
Overall, the attacks recorded in the year to September represent the highest level of cyber threat activity the NCSC has seen in its nine years of operation. Over the past year, the UK and its allies have for the first time attributed cyber-attacks to specific Russian military units, issued guidance against a China-linked campaign affecting thousands of devices, and raised alarms over cyber attackers affiliated with Iran, the NCSC said. Domestic threats also persist: two 17-year-old boys were arrested in Hertfordshire last week following an alleged ransomware hack of children’s data from the Kido nursery chain.
Hackers are increasingly incorporating artificial intelligence (AI) to enhance their activities, and although the NCSC has not yet encountered an AI-driven attack, they predict that “AI will almost certainly present cyber resilience challenges by 2027 and beyond.”
“We observe attackers improving their capacity to inflict significant damage on the organizations they compromise and those dependent on them,” commented Richard Horne, NCSC’s chief executive. “Their disregard for their targets and the harm they cause is clear. This is why all organizations must take action.”
He emphasized the psychological toll inflicted on victims of cyberattacks, stating, “I have been in numerous meetings with individuals profoundly affected by cyberattacks on their organizations. I am aware of the anxiety, the sleepless nights, and the consequent turmoil caused by such disruptions for employees, suppliers, and customers.”
Beginning Tuesday, Microsoft will cease offering standard free support for Windows 10, the operating system relied on by millions of computer and laptop users globally.
As of September, data indicates that four in ten Windows computers worldwide were still running Windows 10, despite the release of its successor, Windows 11, in 2021.
What’s Changing with Windows 10?
Effective October 14, 2025, Microsoft will no longer offer standard free software updates, security patches, or technical support for PCs running Windows 10.
While computers utilizing this software will continue to operate, their vulnerability to viruses and malware will increase as new bugs and security issues come to light.
Microsoft states that Windows 11, a more advanced system, “meets modern security demands by default.”
What Are the Risks?
If Windows users take no action, they might find themselves particularly exposed to hackers attempting to exploit vulnerabilities in large systems.
The consumer group Which? has highlighted that around five million British users intend to keep using devices running this software.
Regardless of location, continuing to operate on Windows 10 places users at risk for cyberattacks, data breaches, and fraud.
According to Lisa Barber, editor at Which?, criminals “will target individuals and exploit vulnerabilities to steal data.” – Technology magazine.
How Can I Mitigate the Threat?
The simplest solution is to upgrade to Windows 11 at no cost.
If your PC is less than four years old, it is likely capable of running Windows 11. To confirm, check your computer specifications. The minimum specifications for Windows 11 include 4GB of RAM and 64GB of storage, and the machine also requires a Trusted Platform Module 2.0 (TPM 2.0) that securely stores credentials, similar to modern smartphones.
Microsoft provides a free tool to determine if your Windows 10 PC is compatible with Windows 11. For additional compatibility checks, you can use online tools based on your CPU.
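In essence, these compatibility tools reduce to a threshold check against the minimums quoted above. As a loose illustration only — this is not Microsoft’s actual tool, and the function name and inputs are our own — the logic might look like this in Python:

```python
# Illustrative sketch of a Windows 11 minimum-spec check.
# Thresholds are the minimums quoted above (4GB RAM, 64GB storage, TPM 2.0);
# the function and parameter names are hypothetical, not Microsoft's.

def meets_windows11_minimums(ram_gb: float, storage_gb: float, tpm_version: float) -> bool:
    """Return True if the given hardware meets the quoted Windows 11 minimums."""
    return ram_gb >= 4 and storage_gb >= 64 and tpm_version >= 2.0

# An 8GB/256GB machine with TPM 2.0 passes; the same machine
# with only TPM 1.2 fails, regardless of its other specs.
print(meets_windows11_minimums(8, 256, 2.0))  # True
print(meets_windows11_minimums(8, 256, 1.2))  # False
```

In practice Microsoft’s tool also checks CPU generation and other details, which is why the online CPU-based checkers mentioned above exist.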
What If My Computer Lacks the Necessary Hardware to Upgrade to Windows 11?
If you don’t take any action, you could be exposed to malware and security risks. One option is to enroll in a one-year agreement with Microsoft for Extended Security Updates, which will be available until October 13, 2026.
This provides an additional year to plan for the end of support and arrange for replacements.
Registration is free if you log in to Windows 10 with a Microsoft account to sync your settings. Otherwise, it will cost $30 (excluding tax) or you can redeem 1,000 reward points.
Are There Alternatives to Windows 11?
You can use your PC safely with other operating systems if it cannot be upgraded to Windows 11.
A viable solution is installing Linux, a free family of operating systems that offers various distributions.
Ensure you back up all your files to an external drive or secure storage, as switching from Windows may delete or complicate file access.
Among the most popular and user-friendly versions of Linux is Canonical’s Ubuntu, which is free, open-source, and regularly updated for security. Installing it in place of Windows requires a USB flash drive; Canonical provides a step-by-step installation guide.
While many applications support Linux, be mindful that not all Windows software is available for Linux.
Alternatively, if your computing needs can be met via a web browser, Google provides a lightweight version of ChromeOS, which can be installed for free on many PCs. Ensure your model is supported and refer to Google’s installation guide, which also requires a USB flash drive.
Buying a New Computer
If you cannot install alternative software or still require Windows, consider purchasing a new PC equipped with Windows 11 and ongoing support.
Many retailers offer trade-in programs where you can recycle your old computer and get a small discount on a new model. Refurbished Windows 11 devices are also readily available from various retailers. Check out options like Currys, Back Market, and manufacturers like Dell for affordable options.
During my family’s vacations in the 1980s, primarily spent at classic British seaside resorts, I devoted all my time and pocket money to exploring arcades. From Shanklin to Blackpool, I dabbled in them all, drawn in by their vibrant bulb-lit facades and enticing names (Fantasy Land! Treasure Island!), alongside the alluring beeps of the video machines within. Although I spent countless hours on well-known classics like Pac-Man, Galaxian, and Kung Fu Master, one particular game has always captivated me. Its distinctive design is both quirky and exhilarating, offering a complete experience that feels like a blend of traditional arcade game, flight simulator, and roller coaster. At the time, it appeared remarkably futuristic. Now, it is 40 years old.
Launched by Sega in 1985, Space Harrier is a 3D shooter in which players control a jetpack-wearing hero named Harrier, who blasts surreal alien foes amid a psychedelic landscape. Designer Yu Suzuki initially envisioned a detailed military flight shooter, but the graphical constraints of the era made this impossible: the animations were too complex. So, drawing inspiration from the flying scenes in the fantasy film The NeverEnding Story, he conceived something surreal and different, replacing fighter planes with flying characters and creating alien adversaries reminiscent of stone giants and dragons. It was vividly colorful and wild, akin to a Roger Dean artwork animated by the Memphis Group.
However, what truly captivated players was the game’s motion cabinet. Sitting in a cockpit-style seat connected to two motors that provided rocking motion in eight directions, as Harrier leaped, so did you; as he tilted from side to side, you mirrored his movements. Enemies constantly approached from various angles, switching direction and altitude, keeping you swooping down, rising up, and spinning your body into action. Throughout, a synth-pop score by Hiroshi Kawaguchi, known for his work on Suzuki’s “Afterburner” and “Outrun,” resonated through the headrest speakers. Advanced speech synthesis enhanced the experience, allowing machines to shout encouragement and instructions: “Welcome to the Fantasy Zone, get ready!”
Space Harrier was a truly immersive experience and exemplified Suzuki’s talent for crafting engaging gameplay. It was one of Sega’s so-called taikan (“body sensation”) games: titles such as Out Run, Space Harrier, After Burner, and Power Drift arrived in arcades in large motorized or hydraulically driven cabinets designed to heighten the sense of realism. Suzuki and his team also created an animation technology termed Super Scaler, which manipulated thousands of 2D animation frames to simulate a 3D environment. What I cherished most about Space Harrier was the way this motion intertwined with its fantastical realms of checkered planets and surreal aliens. It felt akin to participating in a vibrant 1980s interactive pop video. Like Pac-Man or Tetris, its timelessness lies in its unique abstract world.
So why mark Space Harrier’s 40th anniversary now? The arcade cabinet could still entice players anywhere in the world (if one can still be located), but such opportunities are diminishing: the machinery is aging, and the expertise to repair and maintain it is fading. Aside from a few adaptations for home computers and consoles (the PC Engine and 32X versions being the most notable), I hadn’t engaged with the game in years. Now, as I settle into that familiar seat, insert two 10p coins into the slot, and grasp the joystick in anticipation, I wonder: will I rediscover that immersive gaming experience? Will I glimpse my 13-year-old self exploring an arcade in northern England again? Either way, Space Harrier still delivers.
The Perseverance rover from NASA has uncovered unusual leopard spot-like formations on rocks, suggesting potential evidence of ancient microbial life. Scientists consider this discovery to be the strongest and most definitive indication that life may have existed on Mars.
Erratic Weather Patterns
Sudden shifts in weather can lead to severe repercussions. Weather trends are swinging between extremes more rapidly and with greater frequency than ever before.
Experiencing Lucid Dreams
Imagine being able to slip into a lucid dream every night, where everything feels vivid and you have complete control—even the ability to fly. While there are techniques to master this skill yourself, researchers are also innovating technology that could unleash tremendous new experiences.
Breathing Techniques for Better Health
Breathing is often an automatic function, but consciously practicing deep breathing can offer numerous health benefits. Here’s what you need to know to enhance your well-being from the comfort of your couch (or bed!).
Additionally
A Key Tool in Combating Depression: Depression is a common affliction, and researchers are continually exploring quicker and more cost-effective treatment methods. Could the nutritional supplements favored by bodybuilders and athletes hold the key?
Artificial Intelligence: How much further can AI evolve, or has it already reached its peak?
Q&A: Your questions answered. This month: Are psychopaths born or made? What’s the most chilling experiment we’ve conducted? Which organs can we live without? Can animals detect death? What is the foulest smell in existence? Can you get a phone signal on the moon? Should I store my car keys in a Faraday box? Am I alexithymic? Should I start using rosemary scents? Plus more…
Issue 425 Releases on Tuesday, October 14, 2025
Don’t forget that BBC Science Focus is also accessible on all major digital platforms. You can find a version available for Android, on Kindle Fire and Kindle e-readers, as well as the iOS app for iPads and iPhones.
Bose has enhanced its flagship noise-cancelling headphones with its longest battery life yet, USB-C audio support, and premium materials, making them an even more appealing choice for commuters.
The second-generation QuietComfort Ultra headphones carry a hefty price tag, starting at £450 (€450/$450/AU$700), which surpasses many of its competitors, including the Sony WH-1000XM6.
They exude an air of luxury and comfort. With a refined sliding aluminum arm and updated color, they maintain the same design, weight, and fit as their predecessor, resulting in some of the most elegant and comfortable headphones available.
Available in bolder color options. Composite: Bose
Controls for noise cancellation, immersion mode, and playback are intuitive and user-friendly. A touch-sensitive volume slider also serves as a shortcut for features, such as activating your phone’s voice assistant or starting music from apps like Spotify.
The battery offers up to 30 hours of playback with noise cancellation — six hours more than the predecessor — keeping them toe-to-toe with the best competitors and ample for one or two long flights. New is the ability to listen in lossless quality via USB-C, including while charging, in addition to Bluetooth and the analog headphone cable.
Button and slider controls are located on the back of the right earcup, while USB-C and headphone ports are available on the left. Photo: Samuel Gibbs/The Guardian
They support Bluetooth 5.4 and can pair with two devices simultaneously, such as a smartphone for calls and a laptop for music. In addition to the standard SBC and AAC audio formats, Bose includes Qualcomm’s aptX Adaptive for enhanced audio quality, which requires a compatible Android device or a Bluetooth dongle.
Specifications
Weight: 250g
Size: 195×139×50.8mm
Connectivity: Bluetooth 5.4 with multipoint, 2.5/3.5mm, USB-C audio and charging
Bluetooth codec: SBC, AAC, aptX Adaptive
Battery life: 30 hours
Excellent Sound and Noise Cancellation
The headphone arms fit snugly against your head, with deep and well-cushioned ear cups offering a plush fit. Photo: Samuel Gibbs/The Guardian
Bose is a pioneer in noise cancellation technology, consistently delivering exceptional performance. The new Ultra headphones include advanced noise reduction features that effectively handle sounds from airplanes, commutes, and more. While they may not completely eliminate higher-pitched noises like background chatter, they are still highly effective.
A refined transparency mode enables the headphones to dampen sudden loud noises, allowing for awareness of surroundings while retaining comfort. Call quality is impressive as well, ensuring clear communication in both quiet and noisy environments.
The Bose app for Android and iPhone manages settings, updates, and custom options. Composite: Samuel Gibbs/The Guardian
These headphones excel in everyday listening, boasting a bass-heavy profile tailored to modern music. The bass is impactful yet well-balanced, ensuring clarity across the musical spectrum. While some may find Bose’s sound to be overly clean or lacking in grit, the excellent tonal separation and sound management provide a pleasant listening experience, making them ideal for travel, commuting, and work.
New with the Ultras is Bose’s immersive sound system, Cinema Mode, which emulates surround sound for movies and TV shows. It functions effectively across all connected devices, making it versatile for users with various brands of electronics. This complements the standard immersive audio mode that simulates a stereo speaker setup.
Sustainability
Bose combines luxury with durability, making it travel-friendly. Photo: Samuel Gibbs/The Guardian
The battery can withstand over 500 full charge cycles and is replaceable by Bose. The headphones are generally repairable, with components such as ear cushions available as spares, though they do not contain recycled materials. Bose has a trade-in program and offers refurbished products, but individual environmental impact reports are not available.
Price
The Bose QuietComfort Ultra headphones (2nd generation) retail for £449.95 (€449.95/$449.99/AU$699.95).
For context, the Sony WH-1000XM6 is priced at £399, the Sonos Ace is £299, the Beats Studio Pro costs £349.99, while the Sennheiser Momentum 4 Wireless is £199 and the Fairbuds XL is priced at £219.
Verdict
The second-generation Bose QuietComfort Ultra headphones represent a high-quality choice, delivering the brand’s trademark exceptional sound, leading noise cancellation, and luxurious comfort.
While the Sony WH-1000XM6 may have surpassed it in noise cancellation effectiveness, these remain among the most comfortable headphones available, perfect for both travel and extensive listening sessions.
They fold neatly for compact storage, boast a long battery life of 30 hours, and offer connectivity options through Bluetooth, an analog headphone cable, or USB-C, making them versatile for any device.
Although the price is high, it is in line with competitors’, and may fall during sales. Replacement ear cushions and other components can be obtained from Bose or third parties, helping ensure long-term value from the investment.
Pros: Extremely comfortable, leading noise cancellation, excellent sound quality, immersive/spatial audio capabilities, excellent connectivity (including Bluetooth multipoint and USB-C or analog audio), foldable design for travel, a comprehensive app for multiple platforms, and long battery life.
Cons: It is quite expensive, and while the sound and noise cancelling features are superb, the microphone cannot be used with an analog connection.
The headphones can be compactly folded and stored in their case. Photo: Samuel Gibbs/The Guardian
Premature babies may face language challenges later, but simple interventions can assist.
BSIP SA/Alamy
The first randomized controlled trial of this straightforward intervention suggests that playing recordings of a mother’s voice to premature infants could expedite their brain maturation processes. This method may eventually enhance language development in babies born prematurely.
Premature birth alters brain structure, leading to potential language disorders and affecting later communication and academic success. A mother’s voice and heartbeat can foster the development of auditory and language pathways. Unfortunately, parents may not always be able to physically be with their infants in the neonatal units.
To explore whether this absence could be compensated for through recordings, Katherine Travis and her team at Weill Cornell Medicine in New York conducted a study with 46 premature infants born between 24 and 31 weeks gestation, all situated in the neonatal intensive care unit.
The researchers recorded mothers reading from children’s books, including A Bear Called Paddington. Half of the infants listened to a ten-minute segment of the recording twice every hour overnight, between 10 PM and 6 AM, increasing their daily exposure to their mother’s voice by an average of 2.7 hours until they reached their original due date. The other infants received the same medical care but heard no recordings.
Upon reaching their due date, these infants underwent two MRI scans to evaluate the organization and connectivity of their brain networks. The results indicated that those who heard their mother’s voice at night exhibited more robust and organized connections in and around the left arcuate fasciculus, a crucial area for language processing. “The structure appeared notably more developed,” said Travis. “The characteristics matched what one might expect to find in older, more mature infants.”
The scans also suggested that this maturation could be linked to increased myelination— the creation of a fatty sheath that insulates nerve fibers, enhancing the speed and efficiency of signal transmission within the brain. “Myelination is crucial for healthy brain development, especially in pathways that support communication and learning,” noted Travis.
However, is it truly vital for infants to hear their mother’s voice rather than others? While this study did not address that, earlier research explains the phenomenon. Babies start hearing around the 24th week of pregnancy, and continue to recognize their mother’s voice after birth due to early exposure in the womb. Travis explained, “This voice is biologically significant and may be especially appealing to the developing brain.”
Nonetheless, Travis emphasizes that language exposure from other caregivers is also critical for language development, and future studies will explore this aspect further.
The intervention is straightforward and can easily be integrated into care protocols. However, David Edwards from Evelina London Children’s Hospital cautioned against overinterpreting the findings. “Given the small sample size, additional control groups, including different audio sources and forms of auditory stimulation, should be evaluated,” he suggested.
Travis and her research team aim to validate these results in larger trials involving medically vulnerable infants. They will continue to monitor current participants to determine if the observed brain differences result in tangible improvements in language and communication skills as these infants grow.
Our MG5 electric car went out of control, and MG seems unresponsive to the situation.
After utilizing a charger at one site, the vehicle experienced a power system failure at a highway service station.
The car became unresponsive to all controls, including the off button, so we called the AA. The patrol officer managed to start the engine and, as it was pouring rain, decided to take a test drive with my family onboard.
When the patrolman shifted the car into reverse, it surged forward and wouldn’t stop, even when he pressed the brakes. The vehicle collided with an AA van, and as it attempted to accelerate, its wheels spun and began smoking.
We all exited the vehicle safely, and eventually, a patrolman managed to switch the car off from outside. I was informed it wasn’t safe to drive.
The AA arranged for a tow truck to bring it to the dealership, covering the repairs (£2,500). A police vehicle was also damaged in the incident.
The dealership examined the car for the fault, at a cost of £500, but no issues were identified. MG insists the matter is resolved, and any further investigation would be at my own cost.
Six weeks on, the vehicle remains with the dealership. I am reluctant to drive until I know it’s safe, yet I can’t afford continued investigations.
I obtained a technical report from the AA, which confirms that the vehicle “jumped forward” when shifted into reverse.
It’s understandable to hesitate before getting behind the wheel until the issue is identified, and since the vehicle is still under warranty, it’s reasonable to avoid spending personal funds on the repairs.
MG Motor UK appears surprisingly indifferent, given the potential dangers posed by malfunctioning EVs. The dealership recommended that MG’s technical department investigate the issue and provide guidance, but MG did not take this up.
MG merely issued an apology for the “inconvenience” following your complaint.
My inquiries — about whether the dealer-requested investigation was conducted before the matter was closed, and about how many similar reports of power failures the company has received — were sidestepped.
However, the company promptly initiated a more comprehensive assessment of the vehicle and conducted a 45-mile test drive, using various public charging stations. There will be no charges for this test or any previous evaluations.
“MG considers all reported issues related to malfunctions a priority. No associated faults with onboard equipment or the vehicle’s charging capabilities were discovered.”
“MG and our authorized dealer have meticulously examined the vehicle and concluded that an error occurred that isn’t linked to the vehicle itself. We will continue to provide support to our customers with relevant information and advice.”
This places you in a challenging position. Your car is deemed healthy, yet your trust in it has waned. Consequently, you’ve decided that selling it is the best course of action.
We encourage letters, though we cannot respond individually. Email us at Consumer.champions@theguardian.com or write to Consumer Champions, Money, the Guardian, 90 York Way, London N1 9GU. Please include a daytime contact number. The following conditions apply to all letter submissions and publications: Our Terms of Use.
It’s hard for many in India to envision life before Aadhaar. Digital biometric IDs, which claim to be accessible to all Indians, were rolled out just 15 years ago, yet they have become an integral part of daily life.
An Aadhaar number is now essential for purchasing a home, securing employment, opening a bank account, paying taxes, receiving benefits, buying a vehicle, obtaining a SIM card, booking priority train tickets, and enrolling children in school. Infants receive their Aadhaar number immediately after birth. Although it is not obligatory, lacking an Aadhaar effectively renders one invisible to the state, according to digital rights advocates.
For Umesh Patel, 47, a textile businessman in Ahmedabad, Aadhaar has been a welcome change. He reminisces about the days of hauling stacks of paperwork to government offices just to verify his identity, often with little success. Now, with a quick glimpse of his Aadhaar, “everything flows smoothly,” he said, viewing it as “a testament to how our nation utilizes technology for its citizens’ benefit.”
“It’s a solid system that has simplified our operations,” Patel asserts. “Moreover, it enhances our country’s security by minimizing the risk of forged documents.”
“Aadhaar has become an integral part of Indian identity.”
The initiative has been deemed so effective that it caught the attention of the UK government, which considered the introduction of mandatory ID cards for its citizens. However, digital rights groups, activists, and humanitarian organizations highlight a less favorable perspective of Aadhaar and its effects on Indian society.
For some of India’s most underprivileged and least educated individuals (those unable to engage with the Aadhaar system due to issues like illiteracy, lack of education, or missing documentation), the system can be exclusionary and punitive, denying essential access to welfare and employment. With increasing moves to link Aadhaar to voting rights and citizenship, concerns arise that it may further disenfranchise and stigmatize the impoverished.
Apar Gupta, founder and director of the Internet Freedom Foundation in Delhi, said that Aadhaar has become a digital obligation for many people in India: Aadhaar-based verification is required to access government services, enter public venues, or simply go about daily life.
Mr. Gupta asserted that Aadhaar has “metastasized” since its inception, morphing into an extensive bureaucratic network of unique IDs required for business operations. “The essence of your existence is scrutinized at every juncture,” he remarked.
Critics contend that the current draft of India’s data protection and privacy law is inadequate for safeguarding privacy or preventing potential misuse of the invaluable Aadhaar database, which includes biometric data such as photos, facial and iris scans, and fingerprints of over a billion Indians.
Indian media has uncovered multiple instances of Aadhaar data breaches over the years, including a 2018 incident where data pertaining to 1.1 billion individuals was found to be sold online for a mere 500 rupees (£5).
Keir Starmer met Narendra Modi in Mumbai last week. During his visit, Mr. Starmer described the Aadhaar system as a “huge success”. Photo: Stéphane Rousseau/AFP/Getty Images
“According to this yet-to-be-notified law, there is no mechanism to ascertain if a data breach has been documented, and there is a lack of oversight on how Aadhaar data is consolidated with other databases, risking broader public surveillance and tracking,” Mr. Gupta noted. “Transparency is entirely absent.”
Although Aadhaar was initiated before Prime Minister Narendra Modi assumed office in 2014, his governing Bharatiya Janata Party (BJP) has significantly promoted and expanded the digital ID initiative. When India hosted the G20 summit in 2023, Modi cited Aadhaar as one of the flagship achievements of ‘Digital India’, which he describes as an incubator for innovation, asserting that India has saved over $22 billion by combating corruption in the welfare system.
The government highlights the extensive uptake of Aadhaar as an indicator of its success and inclusivity. As of last month, more than 1.42 billion Aadhaar numbers had been generated, corresponding to roughly the entire population of India, making it the largest digital identity program globally. Before this initiative, over 400 million Indians lacked any official identification and were unable to access banking services.
Yet the reality, particularly in rural and tribal regions, diverges sharply from the image portrayed by the government, notes Chakradhar Buddha, a senior researcher at LibTech India, an organization that assists those marginalized by India’s transition to digital systems.
“The deprivation of Aadhaar is pervasive among tribal communities, people in mountainous regions, and those in remote areas, and this reality is largely overlooked,” Buddha stated.
“This situation arises partly from a lack of proper documentation or incomplete documentation capture. Moreover, technological advancements create further obstacles that disproportionately affect the most vulnerable populations. Ultimately, this system undermines access to crucial social security and welfare for those most in need.”
Mr. Buddha challenged the government’s assertion that Aadhaar represents an infallible form of identification, recounting numerous instances where incorrect names and details led to significant issues for communities. For instance, in one village, tribal individuals lacked birth certificates and were assigned January 1 as their birthdate, while tribal names are often misspelled on Aadhaar cards due to unfamiliarity among officials.
Highlighting the recent example of millions of impoverished workers being erroneously removed from government support systems after the implementation of Aadhaar certification, Buddha cautioned that using Aadhaar as the universal standard for voting rights could result in “mass purges of the poorest from electoral registers.”
“These individuals have already been stripped of social equality. Now, Aadhaar is being utilized to deny them their right to political equality and universal suffrage,” Buddha stated.
Among those who know the cost of being without an Aadhaar card is Ahram Sheikh, 34, an uneducated labourer whose identification documents, including his Aadhaar card, were stolen while he was travelling on a train.
The aftermath was a nightmarish experience. He couldn’t recall his Aadhaar number from a decade earlier, rendering him unable to obtain a replacement card. Without it, he had to discontinue his construction job, losing crucial income for his family, and as a result, his son ultimately dropped out of school.
Months later, after traveling thousands of miles back to his village, Sheikh remained unable to resolve the issue and secure a new card. He now lives in constant fear of being declared an illegal alien without it.
“This Aadhaar system has turned into a nightmare for us. Why can’t the government establish proper institutions?” Sheikh lamented. “Everything in this country works against the poor, and this Aadhaar card is no exception.”
Inquiring about the health advantages of living near a golf course might come off as someone attempting to leverage scientific studies to persuade their partner that residing adjacent to Gleneagles is a wise choice.
Fair play. I genuinely respect this transparent application of science. So, here’s some evidence from the archives.
When you tee off, appreciate all that lush greenery. Research consistently indicates that residing near green spaces correlates with a diminished risk of conditions such as cardiovascular disease and obesity.
While quantifying these effects is challenging, research suggests that green space may lower stress hormones, encourage exercise, and benefit cognitive functions such as memory and attention.
In one investigation, researchers concluded that a 10 percent increase in access to green and blue spaces was associated with a 7 percent decrease in the risk of anxiety and depression.
It’s well recognized that playing golf offers health benefits. In 2023, a Finnish study compared the cardiovascular impacts of playing an 18-hole round of golf (walking – no cart) to one hour of brisk walking and one hour of Nordic walking.
All three activities were beneficial, but golf proved to be the most effective, reducing blood pressure, cholesterol, and blood sugar levels.
Additional research has shown that golf training can provide cognitive benefits, particularly for older adults. It’s also advantageous for mental health due to its focus on fostering social connections.
In summary, regular golfing may contribute to a longer and healthier life: researchers found that people who played golf consistently had an approximately 40 percent lower mortality rate.
That’s not a bad score, but there are some hazards to be aware of. At the start of 2025, a study explored the possible link between Parkinson’s disease and proximity to golf courses, highlighting potential exposure to pesticides.
Some chemicals used to maintain greens and fairways are neurotoxic, and numerous studies have associated them with Parkinson’s disease (although the risks are influenced by factors such as the type of pesticide and level of exposure).
Chemicals used on golf courses to maintain grass health may contribute to Parkinson’s risk – Credit: David Madison via Getty
In that study, researchers examined people living near 139 golf courses in the United States. They found that individuals living within one mile of a golf course had a 126 percent higher likelihood of developing Parkinson’s disease compared to those living more than six miles away.
The risk nearly doubled for those sharing the same water supply zone as a golf course, suggesting that groundwater contaminated with pesticides, along with airborne transmission, may also play a role.
It’s crucial to note that the risk of Parkinson’s disease arises from a complex interplay of genetic and environmental factors. Risks associated with these chemicals are predominantly linked to occupational exposure rather than recreational exposure.
If you happen to reside in the UK, your risk might be lower, as paraquat, a chemical linked to Parkinson’s disease, is prohibited.
Thus, living next to a golf course presents a multifaceted situation, much like residing anywhere else. Why not head to the 19th hole and ponder this?
This article answers the question “Will I be healthier if I move next to a golf course?”, asked by Paul Leach from Carlisle.
If you have any questions, email us at questions@sciencefocus.com or message us on Facebook, X, or Instagram (please include your name and location).
Explore our ultimate fun facts for more amazing science insights.
NGC 7496 is a barred spiral galaxy situated roughly 24 million light-years away in the constellation of Grus.
This Hubble image captures the barred spiral galaxy NGC 7496, located approximately 24 million light-years away in the constellation of Grus. Image credit: NASA / ESA / Hubble / R. Chandar / J. Lee / PHANGS-HST team.
NGC 7496 was discovered by British astronomer John Herschel on September 5, 1834.
The galaxy is also identified as ESO 291-1, LEDA 70588, and IRAS 23069-4341, and spans approximately 70,000 light-years in diameter.
NGC 7496 belongs to the NGC 7582 group, which comprises about 10 large galaxies.
This galaxy is classified as a Type II Seyfert galaxy, notable for a high star formation rate.
At its center lies an active galactic nucleus, in which a supermassive black hole is actively consuming gas.
According to Hubble astronomers, “Hubble observed NGC 7496 for the first time as part of the Physics at High Angular Resolution of the Nearby GalaxieS (PHANGS) program.”
“Alongside the NASA/ESA Hubble Space Telescope, this initiative utilizes the capabilities of various powerful observatories, including the Atacama Large Millimeter/Submillimeter Array, ESO’s Very Large Telescope, and the NASA/ESA/CSA James Webb Space Telescope.”
“NGC 7496 was the inaugural galaxy in the PHANGS sample to be observed by Webb.”
“Each of these observatories offers a unique perspective on this extensively studied galaxy.”
“With its exceptional ultraviolet capabilities and high resolution, Hubble’s observations reveal young star clusters emitting high-energy radiation.”
“Hubble’s insights into NGC 7496 will assist in determining the ages and masses of these young stars, as well as the degree to which their light is obscured by dust.”
“Previous Hubble images of NGC 7496 were released in 2022,” they noted.
“Today’s image incorporates fresh data showcasing the galaxy’s star clusters amid glowing red clouds of hydrogen gas.”
Small, isolated groups of the common hippopotamus (Hippopotamus amphibius) persisted in the upper reaches of the Rhine River in southwestern Germany during the last ice age: new research indicates their presence during the middle Weichselian, approximately 47,000 to 31,000 years ago.
Radiocarbon dating indicates that the common hippopotamus (Hippopotamus amphibius) was present in the upper reaches of the Rhine River, Germany, during the middle Weichselian. Image credit: Gemini AI.
Hippos likely made their way into Europe from Africa through multiple waves, involving various species within the Hippopotamus genus, including the common hippo, which currently inhabits only sub-Saharan Africa.
At their peak distribution in Europe, hippos were found from the British Isles in the northwest to the Iberian and Italian peninsulas in the south.
Their fossil record generally suggests they thrived in temperate climates, characterized by denser vegetation and abundant freshwater bodies.
Nevertheless, their origins and relation to today’s African hippos, as well as the precise timing of their extinction in central Europe, remain ambiguous.
“Previously, it was thought that the common hippopotamus disappeared from central Europe around 115,000 years ago, with the end of the last interglacial period,” stated co-senior author Professor Wilfried Rosendahl, general director of the Reiss-Engelhorn-Museen in Mannheim.
“Our findings reveal that hippos inhabited the Upper Rhine Valley in southwestern Germany from about 47,000 to 31,000 years ago.”
For this study, Professor Rosendahl and his team analyzed 19 hippo specimens collected from a fossil site in the Upper Rhine rift valley.
“The Upper Rhine rift valley serves as a significant continental climate archive,” noted study co-author Dr. Ronny Friedrich, a researcher at the Curt Engelhorn Center for Archaeometry.
“Animal bones preserved for millennia in gravel and sand deposits provide invaluable data for scientific inquiry.”
“It’s astonishing how well-preserved the bones are,” he added.
“We have successfully obtained samples suitable for analysis from many human remains, but such preservation is not to be expected after such extended periods.”
By analyzing ancient DNA, researchers discovered that Ice Age hippos in Europe share a close relationship with modern African hippos, being part of the same species.
Radiocarbon dating confirmed their existence during the mid-Weichselian temperate climatic phase.
Furthermore, extensive genome-wide analyses indicated very low genetic diversity, suggesting a small, isolated population in the upper Rhine region.
These results, in conjunction with additional fossil evidence, imply that the heat-loving hippos coexisted with cold-adapted species such as mammoths and woolly rhinos.
“This finding indicates that hippos did not vanish from central Europe at the end of the last interglacial period, as was previously thought,” stated study lead author Dr. Patrick Arnold, a researcher at the University of Potsdam.
“Thus, there’s a necessity to reevaluate other continental European hippo fossils typically considered to belong to the last interglacial period.”
“This study provides significant new insights that compellingly demonstrate that the Ice Age was not uniform everywhere but rather that regional specificities contributed to a complex picture,” remarked Professor Rosendahl.
“It would be intriguing and valuable to further examine other heat-loving animal species that have so far been linked to the last interglacial.”
The results were published on October 8, 2025, in the journal Current Biology.
_____
Patrick Arnold et al. Ancient DNA and dating evidence show hippos dispersed into central Europe during the last ice age. Current Biology published online October 8, 2025. doi: 10.1016/j.cub.2025.09.035
According to new analyses of fossils from two ancient mammal relatives, the evolution of the jaw joint in modern mammals was more intricate than previously understood: (i) Polystodon chuananensis, a Middle Jurassic herbivorous tritylodont known for its relatively large size and possibly fossorial (digging) lifestyle, and (ii) Camulochondylus rufengensis, a newly identified morganucodontan from the Early Jurassic.
“In mammals, the joint connecting the skull to the lower jaw consists of two bones: the squamosal bone and the dentary bone, in which the lower jaw teeth are situated,” stated Dr. Jin Meng, a curator at the American Museum of Natural History and a researcher at the City University of New York, along with colleagues.
“This configuration replaced the older temporomandibular joint seen in reptiles, which is composed of two different bones: the quadrate and the articular bone.”
“As organisms transitioned from early mammal-like reptiles to true mammals, various ‘experimental’ versions of this new temporomandibular joint arose to withstand the forces of mastication.”
“Ultimately, this culminated in a double jaw joint: the new dentary-squamosal joint handled most of the chewing stress, while the old quadrate-articular joint retained its function and began serving as an initial system for detecting airborne sounds.”
“Over time, the dentary-squamosal joint became the sole jaw joint, and the bones of the old quadrate-articular joint were transformed into diminutive bones within the mammalian middle ear, a critical feature that aids hearing.”
However, scientists still lack a comprehensive understanding of how this new temporomandibular joint evolved, primarily due to the scarcity of fossils from this era.
“The evolution of the mammalian temporomandibular joint represents one of the most fascinating yet incomplete chapters in vertebrate history, with gaps in fossil records obscuring significant transitions,” remarked Dr. Meng.
One of these species, Polystodon chuananensis, is an opossum-sized creature featuring “horns” potentially used for digging.
The other, Camulochondylus rufengensis, is a newly described squirrel-sized animal that lived during the Early Jurassic, approximately 174 to 201 million years ago.
Paleontologists identified new jaw structures in both ancient species.
In Polystodon chuananensis, they discovered a uniquely formed secondary jaw joint located between the zygomatic arch and the dentary. This marks the first identification of such a joint in any tetrapod.
In Camulochondylus rufengensis, they characterized a simple articular head of the dentary bone that likely indicates an evolutionary step towards a morphology adaptable to the new temporomandibular joint socket.
“These discoveries enhance the diversity of temporomandibular joints in mammalian evolution and broaden our comprehension of the evolutionary lineage of key mammalian features crucial for understanding how mammals process food and perceive airborne sounds,” the authors concluded.
Their study was published in the journal Nature in September 2025.
_____
F. Mao et al. Convergent evolution of diverse temporomandibular joints in mammals. Nature published online on September 24, 2025. doi: 10.1038/s41586-025-09572-0
On Monday, NASA’s Jet Propulsion Laboratory revealed plans to eliminate around 550 jobs, which represents about 10% of its workforce.
In a statement shared online, JPL Director Dave Gallagher indicated that the layoffs are part of a larger “workforce realignment” and are not connected to the ongoing government shutdown.
The positions affected by the layoffs will span various areas including technology, business, and support within the NASA center.
Gallagher emphasized, “Making these decisions this week will be difficult, but they are vital for ensuring JPL’s future by establishing a more streamlined infrastructure, concentrating on our primary technology capabilities, upholding fiscal responsibility, and positioning us for competitiveness within the changing space landscape, all while continuing to deliver critical contributions for NASA and the nation.”
He also mentioned that affected employees will receive notifications regarding their status on Tuesday.
Located in Pasadena, California, the Jet Propulsion Laboratory is a research and development center funded by NASA but managed by the California Institute of Technology, and is home to some of the agency’s most renowned missions, including Explorer 1, America’s inaugural satellite, launched in 1958.
Additionally, JPL scientists have designed, constructed, and operated all five of NASA’s rovers that have landed on Mars.
NASA is grappling with uncertainty surrounding its budget and future goals. Similar to many government entities, it has experienced considerable budget cuts and staffing reductions as part of a broader federal workforce downsizing initiated under the Trump administration.
Since the commencement of President Donald Trump’s term, approximately 4,000 NASA staff members have opted for deferred retirement programs, leading to a nearly 20% decrease in the agency’s workforce, which originally comprised 18,000 employees.
In July, Reuters reported that about 2,145 senior NASA employees were expected to leave under deferred resignation programs.
The layoffs come amid an ongoing government shutdown: last week, more than 4,000 additional federal workers were laid off across departments including Treasury and Health and Human Services, although those cuts do not appear to affect NASA.
Dean Lomax, a palaeontologist at the University of Manchester, along with his team, has unveiled a new genus and species of leptonectid ichthyosaur based on fossil remains found in Dorset, England.
Reconstruction of Siphodracon goldencapensis. Image credit: Bob Nichols.
The near-complete skeleton of this dolphin-sized ichthyosaur was unearthed near Golden Cap in 2001 by fossil collector Chris Moore from Dorset.
This specimen features a skull with large eye sockets and a long, sword-like snout, marking it as a new genus and species.
Dating back to the Pliensbachian period of the Early Jurassic, the fossil is estimated to be between 193 and 184 million years old.
“I vividly recall first seeing the skeleton in 2016. While we recognized its rarity then, we didn’t anticipate its significant contribution to our understanding of the intricate faunal turnover during the Pliensbachian period,” stated Dr. Lomax.
“This era is critical for ichthyosaurs, as certain families disappeared while new ones emerged, making this new species potentially a ‘missing piece of the ichthyosaur puzzle.’”
“It is more closely related to species from the Late Jurassic, and its discovery helps show that this faunal turnover began much earlier than we previously thought.”
“This marks the first Early Jurassic ichthyosaur genus to be described from this region in over a century.”
Skeleton and skull of Siphodracon goldencapensis. Image credit: Dean Lomax.
Named Siphodracon goldencapensis, this new ichthyosaur measures approximately 3 meters (10 feet) in length and likely preyed on fish and squid. Evidence of its last meal can also be observed in the remains.
According to Dr. Erin Maxwell, an ichthyosaur specialist at the State Museum of Natural History in Stuttgart: “This skeleton not only offers essential insights into the evolution of ichthyosaurs but also enhances our understanding of life in Britain’s Jurassic seas.”
“The limb bones and teeth appear malformed, suggesting the animal suffered significant injury or disease during its life, and indications show the skull may have been bitten by a large predator, possibly another larger ichthyosaur, leading to this individual’s death.”
“Life in the Mesozoic ocean was perilous.”
The researchers identified several traits in Siphodracon goldencapensis that have not been seen in any known ichthyosaur.
One of the most peculiar features is the lacrimal bone, which has a unique protruding structure around the nostril.
“Thousands of complete or nearly complete ichthyosaur skeletons exist from both pre- and post-Pliensbachian layers,” noted Judy Massare, an ichthyosaur expert at the State University of New York at Brockport.
“Although the overall ecosystem shows similarities, the two faunas differ significantly with no overlapping species.”
“Evidently, a substantial shift in species diversity took place at some point during the Pliensbachian period.”
“Siphodracon goldencapensis aids in pinpointing when this change happened, yet we still lack insight into the reasons.”
This work is detailed in a study published in this month’s edition of Papers in Palaeontology.
_____
Dean R. Lomax et al. 2025. A new species of ichthyosaur with an elongated snout reveals complex faunal alterations during the poorly sampled Early Jurassic (Pliensbachian) period. Papers in Palaeontology 11 (5): e70038; doi: 10.1002/spp2.70038
Understanding the political significance of rare earth elements
The U.S. depends on imports for nearly 80% of the rare earth elements necessary for critical electronics, making the securing of mining rights and import agreements a pivotal political issue. NBC News’ Zinhle Essamuah clarifies what rare earth elements are and their significance. October 13, 2025
To improve certain constipation outcomes, the new guidelines recommend kiwifruit, rye bread, high-mineral-content water, psyllium supplements, specific probiotic strains, and magnesium oxide supplements. Image credit: Aziz3625.
Constipation is a persistent condition that significantly affects quality of life and places a considerable economic strain on both individuals and healthcare systems.
Previous clinical guidelines offered limited and often outdated dietary advice, such as suggestions to increase fiber and fluid intake.
In contrast to older guidelines, the latest recommendations are founded on several thorough systematic reviews and meta-analyses, employing the GRADE framework to evaluate evidence quality.
Professor Kevin Whelan from King’s College London stated, “This new guidance represents a positive development towards empowering health professionals and their patients to manage constipation via dietary means.”
“This means individuals worldwide suffering from constipation can now receive current advice based on the best available evidence to enhance their symptoms and health outcomes.”
“With ongoing research, we have a genuine opportunity to significantly improve quality of life.”
Professor Whelan and his team analyzed over 75 clinical trials, formulating 59 recommendations and pinpointing 12 key research priorities.
“Chronic constipation can greatly influence your daily routine,” noted Dr. Eirini Dimidi from King’s College London.
“For the first time, we outline effective dietary strategies and identify advice lacking robust evidence.”
“By improving this condition through dietary modifications, individuals can better manage their symptoms and, hopefully, enhance their quality of life.”
The recommendations also examine constipation outcomes like stool frequency, consistency, straining, and overall quality of life, thereby allowing for more tailored care based on individual symptoms.
Clinician-friendly resources have also been created to facilitate the implementation of these guidelines in practices globally.
An analysis of the evidence indicates that while certain foods and supplements provide benefits, the overall quality of existing research remains low.
Most studies have concentrated narrowly on single interventions instead of comprehensive dietary strategies, emphasizing the pressing need for improved nutritional research in managing constipation.
“Adopting a high-fiber diet offers numerous health benefits and is generally advised for constipation,” Dr. Dimidi stated.
“However, our guidelines indicate there is insufficient evidence to confirm that it is effective specifically for constipation.”
“Instead, our research has revealed some novel dietary strategies that may genuinely assist patients.”
“Simultaneously, there is an urgent necessity for high-quality trials to reinforce our understanding of what works and what doesn’t.”
Experts are cautioning that the integration of artificial intelligence in healthcare may lead to a legally intricate blame game when determining responsibility for medical errors.
The field of AI for clinical applications is rapidly advancing, with researchers developing an array of tools, from algorithms for scan interpretation to systems for assisting in diagnosis. AI is also being designed to improve hospital operations, such as enhancing bed utilization and addressing supply chain issues.
While specialists acknowledge the potential benefits of this technology in healthcare, they express concerns regarding insufficient testing of AI tools’ effectiveness and uncertainties about accountability in cases of negative patient outcomes.
“There will undoubtedly be situations where there’s a perception that something has gone awry, and people will seek someone to blame,” remarked Derek Angus, a professor at the University of Pittsburgh.
The Journal of the American Medical Association hosted the JAMA Summit on Artificial Intelligence last year, gathering experts from various fields, including clinicians, tech companies, regulatory bodies, insurers, ethicists, lawyers, and economists.
The resulting report, of which Angus is the lead author, discusses the nature of AI tools, their application in healthcare, and the various challenges they present, including legal implications.
Co-author Glenn Cohen, a Harvard Law School professor, indicated that patients might find it challenging to demonstrate negligence concerning AI product usage or design. Accessing information about these systems can be difficult, and proposing reasonable alternative designs or linking adverse outcomes to the AI system may prove unwieldy.
“Interactions among involved parties can complicate litigation,” he noted. “Each party may blame the others, have pre-existing agreements redistributing liability, and may pursue restitution actions.”
Michelle Mello, a Stanford Law School professor and another author of the report, stated that while courts are generally equipped to resolve such legal questions, the process can be slow and inconsistent in the early years of a new technology. “This uncertainty increases costs for everyone engaged in the AI innovation and adoption ecosystem,” she remarked.
The report also highlights concerns regarding the evaluation of AI tools, pointing out that many fall outside the jurisdiction of regulatory bodies like the U.S. Food and Drug Administration (FDA).
Angus commented, “For clinicians, efficacy typically translates to improved health outcomes, but there’s no assurance that regulators will mandate evidence.” He added that once an AI tool is launched, its application can vary widely among users of differing skills, in diverse clinical environments, and with various patient types. There’s little certainty that what seems advantageous in a pre-approval context will manifest as intended.
The report details numerous obstacles to evaluating AI tools, noting that clinical application is often necessary for thorough evaluation, while current assessment methods can be prohibitively expensive and cumbersome.
Angus emphasized that investing in digital infrastructure is crucial and that adequate funding is essential for properly assessing how AI tools perform in healthcare. “One point raised during the summit was that the best-evaluated tools are often the least adopted, whereas the most widely adopted tools tend to be the least evaluated.”
The performing arts union Equity has issued a warning of significant direct action against tech and entertainment firms regarding the unauthorized use of its members’ likenesses, images, and voices in AI-generated content.
This alert arises as more members express concerns over copyright violations and the inappropriate use of personal data within AI materials.
General Secretary Paul W. Fleming stated that the union intends to organize mass data requests, compelling companies to reveal whether they have utilized members’ data for AI-generated content without obtaining proper consent.
Recently, the union declared its support for a Scottish actor who alleges that his likeness contributed to the creation of Tilly Norwood, an “AI actor” criticized by the film industry.
Bryony Monroe, 28, from East Renfrewshire, believes her image was used to create a digital character by the AI “talent studio” Xicoia, though Xicoia has denied her claims.
Most complaints received by Equity relate to AI-generated voice replicas.
Mr. Fleming mentioned that the union is already assisting members in making subject access requests against producers and tech firms that fail to provide satisfactory explanations about the sources of data used for AI content creation.
He noted, “Companies are beginning to engage in very aggressive discussions about compensation and usage. The industry must exercise caution, as this is far from over.”
“AI companies must recognize that we will be submitting access requests en masse. They have a legal obligation to respond. If a member reasonably suspects their data is being utilized without permission, we aim to uncover that.”
Fleming expressed hope that this strategy will pressure tech companies and producers resisting transparency to reach an agreement on performers’ rights.
“Our goal is to leverage individual data rights to move technology companies and producers toward binding collective rights,” Fleming explained.
He emphasized that with 50,000 members, a significant number of requests for access would complicate matters for companies unwilling to negotiate.
Under data protection laws, individuals have the right to request all information held about them by an organization, which typically responds within a month.
“This isn’t a perfect solution,” Fleming added. “It’s no simple task, since they might source data from elsewhere. Many of those involved are behaving recklessly and unethically.”
Ms. Monroe believes that Norwood not only mimics her image but also her mannerisms.
Monroe remarked, “I have a distinct way of moving my head while acting. I recognized that in the closing seconds of Tilly’s showreel, where she mirrored exactly that. Others observed, ‘That’s your mannerism. That’s your acting style.'”
Liam Budd, director of recorded media industries at Equity UK, confirmed that the union takes Ms. Monroe’s concerns seriously. Particle 6, the AI production company behind Xicoia, said it is collaborating with unions to address any concerns raised.
A spokesperson for Particle 6 stated: “Bryony Monroe’s likeness, image, voice, and personal data were not utilized in any way to create Tilly Norwood.”
“Tilly was developed entirely from original creative designs. We do not, and will not, use performers’ likenesses without their explicit consent and proper compensation.”
Budd refrained from commenting on Monroe’s allegations but said, “Our members increasingly report specific infringements concerning their image or voice being used without consent to produce content that resembles them.”
“This practice is particularly prevalent in audio, as creating a digital audio replica requires less effort.”
However, Budd acknowledged that Norwood presents a new challenge for the industry, as “we have yet to encounter a fully synthetic actor before.”
Equity UK has been negotiating with Pact (the Producers Alliance for Cinema and Television), the UK production industry body, regarding AI, copyright, and data protection for over a year.
Fleming mentioned, “Executives are not questioning where their data originates. They privately concede that employing AI ethically is nearly impossible, as they are collecting and training on data with dubious provenance.”
“Yet, we frequently discover that it is being utilized entirely outside established copyright and data protection frameworks.”
Max Rumney, deputy chief executive of Pact, highlighted that its members must adopt AI technology in production or risk falling behind companies without collective agreements that ensure fair compensation for actors, writers, and other creators.
However, he noted a lack of transparency from tech firms regarding the content and data used for training the foundational models of AI tools like image generators.
“The fundamental models were trained on our members’ films and programming without their consent,” Rumney stated.
“Our members favor genuine human creativity in their films and shows, valuing this aspect as the hallmark of British productions, making them unique and innovative.”
Researchers focused on the quest for extraterrestrial life are actively searching; aliens have yet to appear on Earth to invite us into a galactic federation. Nonetheless, there remains a chance that scientists will find extraterrestrial life close enough to observe, through the numerous probes and satellites dispatched throughout our solar system. Even an extrasolar celestial body passing near the Sun can generate a buzz within the scientific community.
Many astronomers and astrobiologists are venturing even farther, beyond our solar system and into the realms of other stars. As they cannot deploy instruments to such distant locations for at least several centuries, scientists rely on telescopes to search for indicators of life. These indicators are referred to as biosignatures, which can include elements, molecules, or other characteristics. However, caution is necessary when seeking biosignatures, as measurement inaccuracies and overlooked variables can lead to false positives.
A hypothetical false positive might work like this. Consider an exoplanet with an atmosphere rich in carbon dioxide and nitrogen gas, plus some hydrogen- and oxygen-bearing molecules, none of which necessarily indicates life. A powerful burst of matter and energy from the planet’s host star, known as a stellar flare, could deposit energy into the atmosphere and trigger chemical reactions producing oxygen gas, O2, and ozone, O3. Should astronomers detect these compounds in the exoplanet’s atmosphere, they might mistakenly consider the planet a candidate for life.
Recently, a group of scientists explored how such a scenario could play out on exoplanets and how it might produce false indications of life. They ran a series of six simulations of a flare striking an uninhabited Earth-like planet. They chose red dwarfs, the most prevalent star type near Earth, as the host stars, and drew on data about Earth’s atmospheric and surface chemistry from 4.5 to 4 billion years ago, a period dominated by CO2, N2, and water. They placed the planet at a distance from its star where it receives roughly the same amount of light as Earth receives from the Sun today.
In five of the simulations, they varied the amounts of CO2 and N2, adjusting CO2 to make up 3%, 10%, 30%, 60%, or 80% of the atmosphere. The sixth simulation used a different atmospheric composition with minimal water; this variant probed possible extremes in O2 and O3 levels, since hydrogen from water can bind with stray oxygen atoms. All simulated atmospheres contained trace amounts of O2 and O3.
Each simulated atmosphere was then subjected to two flares: one of typical strength observed from real red dwarfs, and the other, known as a super flare, which is 100 times stronger and exceedingly rare. The chemical outcomes of these flares were calculated using specialized software called atmos. Following this, they employed the Spectral Mapping Atmospheric Radiative Transfer (SMART) model to simulate observable effects from Earth-based telescopes.
During standard flare events, O2 and O3 levels initially decreased, showed a slight overshoot about five months after the flare, and then settled back to their original state roughly 30 years later.
Varying the CO2, hydrogen gas, and water content of the exoplanet atmospheres revealed that each can significantly alter how detectable oxygen molecules are to astronomers. Consequently, the impacts of typical flares are subtle and challenging to discern on actual exoplanets. However, in the rarer simulated cases involving super flares, notable increases in O2 and O3 occurred, though these levels also nearly returned to pre-flare conditions within 30 years.
Ultimately, the researchers concluded that flares likely have only a minimal and fleeting impact on life detection efforts on these exoplanets. Even if astronomers observed an exoplanet struck by a flare five months prior, the O2 and O3 levels, considering potential measurement errors, would not present as distinctly elevated. Nonetheless, the results from super flare scenarios indicate that further examination of false positives in biosignatures is warranted, as high-energy events can substantially disrupt the environmental conditions of exoplanets.
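The recovery behaviour the simulations describe, an initial dip, a small overshoot around five months, and relaxation over roughly 30 years, can be caricatured with a toy relaxation model. This is a sketch with made-up parameters, not the Atmos photochemistry code the researchers used:

```python
import math

def o3_anomaly(t_years, depletion=-0.2, overshoot=0.05,
               t_overshoot=5 / 12, t_recover=30.0):
    """Toy ozone anomaly after a flare: an initial depletion that relaxes
    toward zero, with a small overshoot that decays on a ~30-year timescale.
    All parameter values are illustrative, not fitted to the study."""
    # Fast relaxation of the initial depletion (months-scale)...
    fast = depletion * math.exp(-t_years / t_overshoot)
    # ...plus a small overshoot that fades over decades.
    slow = (overshoot * (1 - math.exp(-t_years / t_overshoot))
            * math.exp(-t_years / t_recover))
    return fast + slow

# Right after the flare the anomaly is strongly negative; a century on,
# the atmosphere is effectively back to its pre-flare state.
assert o3_anomaly(0.0) < -0.1
assert abs(o3_anomaly(100.0)) < 1e-2
```

The point of the toy model is only that a short-lived perturbation of this shape would look unremarkable to an observer who catches the planet more than a few decades after the flare.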
This orange dot represents a gamma-ray burst, thought to indicate an extraordinary event.
ESO/A. Levan, A. Martin-Carrillo et al.
A black hole that was swallowed by a star appears to have taken its revenge by devouring the star from within, generating a gamma-ray burst located approximately 9 billion light-years from Earth.
This burst, known as GRB 250702B, was initially identified by NASA’s Fermi Gamma-ray Space Telescope in July. Such bursts are brilliant flashes of light due to jets produced by high-energy occurrences, like massive stars collapsing into black holes or the merging of neutron stars, and generally last only a few minutes.
However, GRB 250702B lasted an astonishing 25,000 seconds, equating to about 7 hours, which makes it the longest gamma-ray burst on record. Researchers have struggled to account for this phenomenon, but Eliza Knights and her team at NASA’s Goddard Space Flight Center propose an unusual and rare scenario.
“The only [model] providing a natural explanation for the characteristics observed in GRB 250702B involves a stellar-mass black hole falling into the star,” the researchers mentioned in their published study.
In a typical long gamma-ray burst, a massive star collapses to create a black hole and emits a jet during its demise. In this situation, however, the research team posits the inverse. An existing black hole spiraled into a companion star, whose outer layers had expanded during its later stages, resulting in the black hole losing angular momentum and descending toward the star’s center.
The black hole then consumed the star from the inside, producing a powerful jet perceived as GRB 250702B and potentially a faint supernova, although any such supernova would be too dim at this distance to detect, even with the James Webb Space Telescope.
This theory is beneficial for understanding the mechanisms behind ultra-long bursts. Hendrik van Eerten from the University of Bath, UK, remarks, “The arguments presented in this paper are very persuasive.”
Knights and her team hope that, with the help of telescopes like the Vera Rubin Observatory in Chile, we may observe more such events in the future. Meanwhile, van Eerten describes the gamma-ray burst as “absurd.”
Your approach to chatting with AI may matter more than you realize
Oscar Wong/Getty Images
The manner in which you converse with an AI chatbot, especially using informal language, can significantly impact the accuracy of its replies. This indicates that we might need to engage with chatbots more formally or train the AI to handle informal dialogue better.
Researchers Fulei Zhang and Zhou Yu from Amazon explored how users begin chats with human representatives versus chatbot assistants that utilize large language models (LLMs). They employed the Claude 3.5 Sonnet model to evaluate various aspects of these interactions, discovering that exchanges with chatbots were marked by less grammatical accuracy and politeness compared to human-to-human dialogues, as well as a somewhat limited vocabulary.
The findings showed that human-to-human interactions were 14.5% more polite and formal, 5.3% more fluent, and 1.4% more lexically diverse than their chatbot counterparts, according to Claude’s assessments.
The authors noted in their study, “Participants adjust their linguistic style in human-LLM interactions, favoring shorter, more direct, less formal, and grammatically simpler messages,” though they did not respond to interview requests. “This behavior may stem from users’ mental models of LLM chatbots, particularly if they lack social nuance or sensitivity.”
However, embracing this informal style comes with challenges. In another evaluation, the researchers trained an AI model named Mistral 7B using 13,000 actual human-to-human interactions, then assessed 1,357 real messages directed at the AI chatbot. They categorized each conversation with an “intent” derived from a restricted framework summarizing the user’s purpose. Unfortunately, Mistral struggled with accurately defining the intentions within the chatbot conversations.
Zhang and Yu explored various methods to enhance Mistral AI’s understanding. Initially, they used Claude AI to transform users’ succinct messages into more polished human-like text and used these rewrites to fine-tune Mistral, resulting in a 1.9% decline in intent label accuracy from the baseline.
Next, they attempted a “minimal” rewrite with Claude, creating shorter and more direct phrases (e.g., asking about travel and lodging options for an upcoming trip with “Paris next month. Where’s the flight hotel?”). This method caused a 2.6% drop in Mistral’s accuracy. On the other hand, utilizing a more formal and varied style in “enhanced” rewrites also led to a 1.8% decrease in accuracy. Ultimately, the performance showed an improvement of 2.9% only when training Mistral with both minimal and enhanced rewrites.
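The evaluation described above boils down to comparing predicted intent labels against gold labels under each rewrite condition. Here is a minimal sketch of that comparison, using hypothetical messages and labels rather than the authors’ actual Mistral/Claude pipeline:

```python
# Hypothetical intent-labelling evaluation: the labels and predictions
# below are invented stand-ins, not data from the Amazon study.

def accuracy(predictions, gold):
    """Fraction of messages whose predicted intent matches the gold label."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

gold = ["book_flight", "find_hotel", "book_flight", "cancel"]
baseline = ["book_flight", "find_hotel", "cancel", "cancel"]     # 3/4 correct
minimal_rewrite = ["book_flight", "cancel", "cancel", "cancel"]  # 2/4 correct

delta = accuracy(minimal_rewrite, gold) - accuracy(baseline, gold)
print(f"baseline: {accuracy(baseline, gold):.2f}, "
      f"minimal rewrite: {accuracy(minimal_rewrite, gold):.2f}, "
      f"change: {delta:+.2f}")
```

The percentage changes the researchers report (−2.6%, −1.8%, +2.9%) are differences of exactly this kind of accuracy score across training conditions.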
Noah Giansiracusa, a professor at Bentley University in Massachusetts, said that while it’s expected that users communicate differently with bots than with other humans, this disparity shouldn’t necessarily be seen as a negative.
“The observation that people interact with chatbots differently from humans is often depicted as a drawback, but I believe it’s beneficial for users to recognize they’re engaging with a bot and adjust their communication accordingly,” Giansiracusa stated. “This understanding is healthier than a continual effort to bridge the gap between humans and bots.”
3D rendering of a quantum computer’s chandelier-like structure
Shutterstock / Phong Lamai Photography
Eleven years ago, I began my PhD in theoretical physics and honestly had never considered or written about quantum computers. Meanwhile, New Scientist was busy crafting the first “Quantum Computer Buyer’s Guide,” always ahead of its time. A glance through reveals how things have changed—John Martinis from UC Santa Barbara was recognized for developing an array of merely nine qubits and earned a Nobel Prize in Physics just last week. Curiously, there was no mention of quantum computers built using neutral atoms, which have rapidly transformed the field in recent years. This sparked my curiosity: how would a quantum computer buyer’s guide look today?
At present, around 80 companies globally are producing quantum computing hardware. My reporting on quantum computing has allowed me to witness firsthand how the industry evolves, complete with numerous sales pitches. If choosing between an iPhone and an Android is challenging, consider navigating the press lists of various quantum computing startups.
While there’s significant marketing hype, the difficulty in comparing these devices stems from the lack of an agreed standard for building quantum computers. Potential qubit options include superconducting circuits, trapped ions, and light, among others. With such diverse components, how does one assess the differences? In the end, it comes down to each quantum computer’s performance.
This marks a shift from the early days, when success was measured by the number of qubits, the foundational units of quantum information processing. A handful of research teams have surpassed the 1,000-qubit threshold, and the path to even more qubits is becoming clearer. Researchers are exploring standard manufacturing methods, such as silicon-based qubits, and leveraging AI to scale up the size and capabilities of quantum computers.
Ideally, more qubits should always translate to greater computational power, enabling quantum computers to tackle increasingly complex challenges. However, in reality, ensuring each additional qubit doesn’t impede the performance of existing ones presents significant technical hurdles. Thus, it’s not just the number of qubits that counts, but how much information they can retain and how effectively they can communicate without losing data accuracy. A quantum computer could boast millions of qubits, but if they’re susceptible to errors that disrupt computations, they become virtually ineffective.
The extent of this “glitch”, or noise, can be measured by metrics like “gate fidelity,” which reflects how accurately a qubit or pair of qubits can perform operations, and “coherence time,” which gauges how long a qubit can maintain a viable quantum state. Even with favorable metrics, we must also consider the practicalities of getting data into a quantum computer and reading results back out. The growth of the quantum computing industry is partly attributable to the emergence of companies focused on qubit control and on interfacing quantum internals with non-quantum users. A thorough buyer’s guide for quantum computers in 2025 should cover these essential add-ons: choosing a qubit also means selecting a qubit control system and an error correction mechanism. I recently spoke with a researcher developing an operating system for quantum computers, who suggested that such systems may become a necessity in the near future.
If I were to create a wish list for the short term, I would favor a machine capable of executing at least a million operations: a million-step quantum computing program with minimal error rates and robust error correction. John Preskill from the California Institute of Technology refers to this as the “Mega-Quop” machine. Last year, he expressed confidence that such machines would be fault-tolerant and powerful enough to yield scientifically significant discoveries. Yet, we aren’t there yet. The quantum computers at our disposal currently manage tens of thousands of operations, but error correction has only been effectively demonstrated for smaller tasks.
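A back-of-envelope calculation shows why gate fidelity matters as much as qubit count, and why a million-operation machine implies error rates near one in a million: if each operation independently succeeds with probability f, a circuit of n operations succeeds with probability roughly f^n. This is an illustrative estimate, not a vendor specification:

```python
def circuit_success(gate_fidelity: float, n_ops: int) -> float:
    """Rough success probability of an n-operation circuit when each
    operation independently succeeds with probability `gate_fidelity`."""
    return gate_fidelity ** n_ops

# Even excellent 99.9% gates fail almost surely over a million operations...
print(circuit_success(0.999, 1_000_000))      # vanishingly small

# ...whereas an error rate of 1e-6 per operation leaves a usable
# success probability of roughly exp(-1) ~ 0.37 over a million steps.
print(circuit_success(1 - 1e-6, 1_000_000))
```

This is why error correction, which trades many noisy physical qubits for one much more reliable logical qubit, dominates current roadmaps rather than raw qubit counts.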
Quantum computers today are akin to adolescents—growing toward utility but still faced with developmental challenges. As a result, the question I frequently pose to quantum computer vendors is, “What can this machine actually accomplish?”
In this regard, it’s vital to compare not only various types of quantum computers but also contrast them with classical counterparts. Quantum hardware is costly and complex to manufacture, so when is it genuinely the sole viable solution for a given issue?
One method to tackle this inquiry is to pinpoint calculations traditional computers cannot resolve without unlimited time. This concept is termed “quantum supremacy,” and it keeps quantum engineers and mathematicians consistently preoccupied. Instances of quantum supremacy do exist, but they raise concerns. To be meaningful, such cases must be applicable, facilitating the construction of capable machines that can execute them, while also being demonstrable enough for mathematicians to assure that no conventional computer could compete.
In 1994, physicist Peter Shor devised a quantum computing algorithm for factoring large numbers, a technique that could potentially compromise the prevalent encryption methods utilized by banks worldwide. A sufficiently large quantum computer that could manage its own errors might execute this algorithm, yet mathematicians have yet to convincingly demonstrate that classical computers can’t efficiently factor large numbers. The most prominent claims of quantum supremacy often fall into this gray area, with some eventually being outperformed by classical machines. Ongoing demonstrations of quantum supremacy appear currently to serve primarily as confirmations of the quantum characteristics of the computers accomplishing them.
Conversely, in the mathematical discipline of “query complexity,” the superiority of quantum solutions is rigorously demonstrable, but practical algorithms remain elusive. Recent experiments have also introduced the notion of “quantum information superiority,” wherein quantum computers solve tasks using fewer qubits than traditional computers would require, focusing on the physical components instead of time. Though this sounds promising—indicating that quantum computers may solve problems without extensive scaling—they are not recommended for purchase simply because the tasks in question often lack pivotal real-world applications.
It’s undeniable that several real-world challenges are well-suited for quantum algorithms, like understanding molecular properties relevant to agriculture or medicine, or solving logistic issues like flight scheduling. Yet, researchers lack full clarity on these applications, often opting to state, “it seems.”
For instance, recent research on the prospective applications of quantum computing in genomics by Aurora Maurizio from the San Raffaele Scientific Institute in Italy and Guglielmo Mazzola at the University of Zurich suggests that traditional computing methods excel so significantly that “quantum computing may, in the near future, only yield speedups for a specific subset of sufficiently complex tasks.” Their findings indicate that while quantum computers could potentially enhance research in combinatorial problems within genomics, their application needs to be very precise and calculated.
In reality, for the many problems not specifically designed to demonstrate quantum supremacy, there is a spectrum of what counts as “fast.” Quantum computers may ultimately run some algorithms faster than classical computers once noise and technical challenges are overcome, but that speedup may not always offset the hardware’s significant costs. For example, the second-best-known quantum algorithm, Grover’s search algorithm, offers a non-exponential speedup, reducing computation time by roughly a square-root factor. Ultimately, how fast is “fast enough” to justify the transition to quantum computing may depend on the individual buyer.
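The size of a square-root speedup is easy to quantify. For an unstructured search over N items, a classical computer needs on the order of N queries, while Grover’s algorithm needs on the order of √N oracle queries. A query-count sketch, ignoring constant factors and error-correction overhead:

```python
import math

def classical_queries(n: int) -> int:
    # Unstructured search: classically you expect to examine ~N items.
    return n

def grover_queries(n: int) -> int:
    # Grover's algorithm needs on the order of sqrt(N) oracle queries.
    return math.isqrt(n)

n = 1_000_000
print(classical_queries(n))  # on the order of a million queries
print(grover_queries(n))     # on the order of a thousand queries
```

A thousandfold reduction in queries is real, but it is far less dramatic than the exponential advantage Shor’s factoring algorithm promises, which is why the economics of quadratic speedups are debated.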
While it’s frustrating to include this in a purported buyer’s guide, my discussions with experts indicate that there remains far more uncertainty about what quantum computers can achieve than established knowledge. Quantum computing is an intricate, costly future technology; however, its genuine added value to our lives remains vague beyond serving the financial interests of a select few companies. This might not be satisfying, but it reflects the unique, uncharted territory of quantum computing.
For those of you reading this out of the desire to invest in a powerful, reliable quantum computer, I encourage you to proceed and let your local quantum algorithm enthusiast experiment with it. They may offer better insights in the years to come.
For centuries, the greatest minds have pondered the concept of time, yet its absolute nature remains elusive.
While physics does not dictate that time must flow in a specific direction or define its essence, it is widely accepted that time is a tangible aspect of the universe.
The two cornerstone theories of modern physics, general relativity and quantum mechanics, perceive time in distinct ways. In relativity, time functions as one coordinate in conjunction with three spatial coordinates.
Einstein demonstrated the intricate relationship between these dimensions, revealing that the flow of time is relative, not absolute. This implies that as you move faster, time appears to slow down in comparison to someone who remains “stationary.”
Interestingly, photons traveling at light speed experience no passage of time; for them, everything occurs simultaneously.
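The slowdown Einstein described is captured by the Lorentz factor, γ = 1/√(1 − v²/c²), which tells you how much slower a moving clock ticks relative to a “stationary” one. A short illustration:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def dilation_factor(v: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2): the ratio by which a
    moving clock appears slowed to a 'stationary' observer."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At 10% of light speed the effect is barely noticeable (~0.5%);
# at 99% of light speed clocks run about seven times slower.
print(dilation_factor(0.1 * C))   # ≈ 1.005
print(dilation_factor(0.99 * C))  # ≈ 7.09
```

As v approaches c the factor diverges, consistent with the statement above that photons experience no passage of time.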
On the other hand, quantum mechanics, which pertains to the microscopic realm, treats time as a fundamental parameter: a consistent, one-way flow from past to future, disconnected from spatial dimensions and from entities (like particles).
This divergence creates a conflict between these two prominent theories and poses a challenge for physicists attempting to unify gravitational and quantum theories into a singular “grand unified theory.”
Crucially, neither general relativity nor quantum mechanics defines time as a “field,” a physical quantity that permeates space and can affect particle characteristics.
Each of the four fundamental force fields (gravity, electromagnetism, strong nuclear force, and weak nuclear force) involves the exchange of particles.
These particles can be viewed as carriers of force. In electromagnetism, the carrier is a photon, while strong interactions are mediated by particles known as “gluons.”
Gravity, too, is thought to be transmitted by hypothetical particles called “gravitons,” yet a complete quantum description of gravity remains elusive.
Scientists continue to struggle with the concept of time, which appears to lack tangible properties like discrete chunks – Credit: Oxygen via Getty
Other “fields” confer specific properties on particles. For instance, interactions with the Higgs field, mediated by Higgs bosons, endow particles with mass.
In the realm of physics, time—regardless of its true essence—differs fundamentally from a “field.” It is not a physical quantity (like charge or mass) and does not apply forces or dictate particle interactions.
Thus, in contemporary physics, time is not characterized by mediating particles as are the four fundamental forces. The notion of “time particles” does not hold relevance.
Remarkably, recent studies indicate that time might actually be an illusion. This intriguing theory emerges from quantum “entanglement,” wherein the quantum states of particles are interlinked, regardless of their spatial separation.
This article addresses a question posed by Brian Roche from Cork, Ireland: “Is it possible for a time particle to exist?”
If you have any questions, please email us at questions@sciencefocus.com or reach out via Facebook, Twitter, or Instagram (please include your name and location).
Explore our ultimate fun facts and other amazing science content.
This year’s Nobel Prize in Economics has been awarded to three experts who explore the influence of technology on economic growth.
Joel Mokyr from Northwestern University receives half of the prize, amounting to 11 million Swedish kronor (£867,000), while the remaining portion is shared between Philippe Aghion from the Collège de France, INSEAD Business School, and the London School of Economics, alongside Peter Howitt from Brown University.
The Royal Swedish Academy of Sciences announced this award during a period marked by rapid advancements in artificial intelligence and ongoing discussions about its societal implications, stating that the trio laid the groundwork for understanding “economic growth through innovation.”
This accolade comes at a time when nations worldwide are striving to rejuvenate economic growth, which has faced stagnation since the 2008 financial crisis, with rising concerns about sluggish productivity, slow improvements in living standards, and heightened political tensions.
Aghion has cautioned that “dark clouds” are forming amid President Donald Trump’s trade war, which heightens trade barriers. He emphasized that fostering innovation in green industries and curbing the rise of major tech monopolies are crucial for sustaining growth in the future.
“We cannot support the wave of protectionism in the United States, as it hinders global growth and innovation,” he noted.
While accepting the award, he pointed out that AI holds “tremendous growth potential” but urged governments to implement stringent competition policies to handle the growth of emerging tech firms. “A few leading companies may end up monopolizing the field, stifling new entrants and innovation. How can we ensure that today’s innovators do not hinder future advancements?”
The awards committee indicated that technological advancements have fueled continuous economic growth for the last two centuries, yet cautioned that further progress cannot be assumed.
Mokyr, a Dutch-born Israeli-American economic historian, was recognized for his research on the prerequisites for sustained growth driven by technological progress. Aghion and Howitt were honored for their examination of how “creative destruction” is pivotal for fostering growth.
“We must safeguard the core mechanisms of creative destruction to prevent sliding back into stagnation,” remarked John Hassler, chairman of the Economics Prize.
The Economics Prize was established in the late 1960s by Sweden’s central bank and is awarded in memory of Alfred Nobel.
Since graphene was first synthesized at the University of Manchester in 2004, it has been recognized as a remarkable material—stronger than steel yet lighter than paper. Fast forward 20 years, and not all UK graphene enterprises have been able to harness its full capabilities. Some view the future with optimism, while others face significant challenges.
Derived from graphite, the same substance used in pencils, graphene consists of a lattice-like sheet of carbon just one atom thick, boasting impressive conductivity for both heat and electricity. Presently, China is the leading global producer, leveraging this to secure an edge in the race for microchip production and construction applications.
In the UK, graphene-enhanced low-carbon concrete, developed by the Graphene Engineering Innovation Centre (GEIC) at the University of Manchester in collaboration with Cemex UK, was recently installed at a Northumbrian Water site in July.
“The material had an overwhelming amount of hype as it came out of academia… the real challenge lies in transitioning it from the lab to actual production,” explains Ben Jensen, CEO of 2D Photonics, a startup that originated from the University of Cambridge, specializing in graphene-based photonics technology for data centers.
Jensen was also behind Vantablack, a coating made from carbon nanotubes (rolled-up graphene sheets) renowned as the “blackest black” for its ability to absorb 99.96% of light. He founded Surrey NanoSystems in 2007; exclusive rights to use the material in art were later sold to sculptor Anish Kapoor, and six years ago BMW featured it on an X6 coupé to achieve the “blackest black” effect.
Anish Kapoor’s untitled Vantablack piece was displayed in Venice in 2022. Photo: David Levin/The Guardian
“Shifting to new materials to replace existing technologies presents a significant challenge,” Jensen states. “The value proposition must be compelling, while also ensuring that the material can be manufactured efficiently at scale and priced competitively, otherwise, there’s little point in offering something ten times more costly than existing products.”
German company Bayer attempted to produce large quantities of carbon nanotube items but shuttered its pilot plant over a decade ago when a surge in demand failed to materialize. Currently, this material finds its primary use as a filler to enhance the strength of plastic products. Bayer has referred to the potential applications for nanotubes as “fragmentary.”
More promising is a graphene-based optical microchip created by CamGraPhIC, a branch of 2D Photonics, stemming from research at the University of Cambridge and CNIT in Italy.
Silicon photonics microchips currently translate electrical data into optical signals for transmission through fiber optic cables. The company claims its graphene-based chips can transmit more data in less time and at significantly lower costs.
Graphene single crystal. Photo: 2D Photonics
These chips consume 80% less energy and are capable of functioning across a broader temperature range, minimizing the requirement for costly water and energy-intensive cooling systems in AI data centers.
Transmitting data through silicon often leads to delays. Jensen compares this issue to a 16-lane highway unexpectedly narrowing down to one lane due to construction, slowing down traffic significantly. He argues that graphene photonics functions like an expansive highway with hundreds of lanes.
“Our breakthrough lies in the capability to cultivate stable, ultra-high performance graphene and effectively integrate it into devices,” he asserts. “Keep in mind, this material is only one atom thick, which makes the process particularly challenging.”
Ben Jensen, CEO of 2D Photonics. Photo: Ermanno Fissole
CamGraPhIC was established in 2018 by Professor Andrea Ferrari, a Cambridge Nanotechnology professor, who also heads the Cambridge Graphene Center, alongside Marco Romagnoli, head of advanced photonics at CNIT in Pisa and the startup’s chief scientific officer.
The parent company, 2D Photonics, recently acquired £25m in funding from a diverse group of investors, including Italy’s sovereign wealth fund, NATO, the Sony Innovation Fund, Bosch Ventures, and the UK’s Frontier IP Group. The firm will be based in the former Pirelli photonics research facility in Pisa and aims to launch a pilot manufacturing site in the Milan region designed for large-scale production of 200mm wafers, confident in receiving an additional €317m (£276m) in funding by year-end.
Aside from data centers, the company’s chips have potential uses in high-performance computing, 5G and 6G mobile systems, aviation technologies, autonomous vehicles, advanced digital radar, non-satellite space communications, and beyond.
Paragraf, a spin-out from the University of Cambridge located in the nearby village of Somersham, has thrived over the past decade with backing from the UK Treasury. The firm makes graphene-based electronic devices, including sensors for electric vehicles and biosensors for early disease detection, with further applications in medicine and agriculture. It recently secured $55 million (£41 million) from a group of investors, including a sovereign wealth fund from the United Arab Emirates, which acquired a 12.8% stake in Paragraf.
Graphene Innovations Manchester, a fledgling company started by Vivek Konchery in 2021, finalized a deal with Saudi Arabia in December for the first commercial production of graphene-enhanced carbon fiber. This material will be utilized in constructing roofs, facades, and light poles. Production has begun in Tabuk with local partners, with an expected output of 3,000 tons by 2026.
2D photonics cleanroom at the Pisa development facility. Photo: 2D Photonics
Conversely, other companies are facing harsher realities. One of the pioneering firms in this domain, Applied Graphene Materials, was launched in 2010 by Professor Karl Coleman as a spin-out from Durham University. It introduced various products, such as anti-corrosion primers and bike protection sprays, which were stocked in Halfords stores. However, the struggling company collapsed into administration in 2023, and its main operations were acquired by Canada’s Universal Matter.
Ron Mertens, the owner of Graphene-Info, remarked, “As is often true in the broader materials industry, the path to market can be lengthy. Many graphene producers and developers have yet to generate substantial revenue or profit.”
Versarien, located in Gloucestershire, expanded from a garage startup with support from the government agency Innovate UK. It developed graphene powder and other products for use in sensors, low-carbon concrete, paints, electronic inks, and textiles, including running gear and prototype stealth materials for the US military.
The AIM-listed firm sought to establish operations in Spain and South Korea but ran into financial trouble, and several of its subsidiaries entered administration or voluntary liquidation in July. Versarien is now looking to sell off assets, such as its patent portfolio, and currently has only enough funds to last until the end of October.
Depending on the nature of the upcoming transactions, this may trigger a liquidation process for the company or a financial shelter. Their investment agreement with a Chinese partner collapsed after the British government intervened to block any technological collaboration, marking a somber potential finale for what was once a promising graphene venture.
Just moments into the first round of the expansive multiplayer mode Conquest, you can’t help but feel the thrill of battle return. Fighter jets zoom overhead, tanks thunder by, and buildings crumble under the impact of rocket-propelled grenades. While Call of Duty has traditionally emphasized close-quarters combat in online matches, Battlefield 6 immerses you in a colossal military engagement that’s both bewildering and ear-piercing. Even in the quieter moments, you’re jolted back to reality by the distant sounds of rifle fire, urgent shouts for orders, and calls for medics.
EA’s legendary FPS series has faced significant challenges in recent years, and its futuristic installment Battlefield 2042 is widely regarded as a letdown. In response, the development team—comprising various studios including the original creator DICE—has returned to the stellar Battlefield 4 for inspiration. This time, the focus is on contemporary military warfare, delivering an authentic experience across expansive maps with numerous players involved. Similar to previous titles, Battlefield 6 offers four distinct classes: Assault, Support, Engineer, and Recon. Each class comes equipped with unique weapons and gadgets, which you can upgrade and customize as you gain experience and level up your soldiers. It’s a hybrid system that blends elements from earlier Battlefield games with features from modern Call of Duty titles, notably the Gunsmith system, which has revolutionized weapon customization in online shooters.
Brooklyn at war…Battlefield 6. Photo: Electronic Arts
The standout online modes are the large-scale ones like Conquest and Breakthrough, which concentrate on capturing objectives and seizing territory from rivals. There are also smaller modes such as King of the Hill and Domination, but for seasoned Battlefield players, these options feel like a different approach altogether. Since the groundbreaking Battlefield 1942 in 2002, the series has promoted strategic gameplay, encouraging teamwork among allies to infiltrate enemy bases, synchronizing assaults with helicopter cover, and gradually breaking through defenses. In a good game session, you may find yourself stealthily navigating the map or inching toward a heavily fortified structure. The rapid-fire nature of Call of Duty, characterized by quick skirmishes and instant respawns, seems worlds apart.
Yet, engaging in combat here feels invigorating. Whether you’re navigating the bustling streets of Brooklyn or the shores of Cairo, debris cascades, bullets ricochet, and tanks detonate in fiery explosions. The graphics and audio design are remarkably well-executed, channeling the gritty, camera-shaking documentary style of Generation Kill or Warfare rather than the polished action-movie mayhem typical of CoD. If you’re fortunate enough to join a solid team (I strongly recommend playing with one or two friends), you’ll forge genuine camaraderie.
However, the game does stumble with its lackluster campaign mode. The storyline is a standard techno-thriller set in a near-future world where a private military firm seeks global domination, and only a rugged team of American special forces stands in their way. This narrative feels clichéd and uninspired. By portraying the antagonist as a fictional military corporation, the developer sidesteps political controversy and avoids addressing the game’s potential market dynamics or its investors at Electronic Arts. Additionally, staying engaged with the cast of tough guys, who consistently deliver lines like “There’s no bureaucracy here” or quip, “I don’t know what’s more impressive, the scenery or the firepower” while staking out an enemy base in sunny Gibraltar can be a challenge. When Murphy, the protagonist, states, “There’s no one I want to join in this fight,” I seriously wished that defection had been an option.
Don’t let that discourage you. Overall, Battlefield 6 marks a triumphant return to form, delivering a thrilling, almost operatic shooter experience that masterfully blends explosive combat with tactical finesse. It remains to be seen how it will fare amidst the contemporary landscape of hero shooters and battle royale games, but it is undoubtedly worth your time.
Labor unions and online safety advocates are urging Members of Parliament to examine TikTok’s decision to eliminate hundreds of content moderation jobs based in the UK.
The social media platform intends to reduce its workforce by 439 positions within its trust and safety team in London, raising alarms about the potential risks to online safety associated with these layoffs.
Trade union bodies, including the Communication Workers Union (CWU), and prominent figures in online safety have written an open letter to Chi Onwurah MP, who chairs the Commons science, innovation and technology select committee, seeking an inquiry into these plans.
The letter references estimates from the UK’s data protection authority indicating that as many as 1.4 million TikTok users could be under the age of 13, cautioning that these reductions might leave children vulnerable to harmful content. TikTok boasts over 30 million users in the UK.
“These safety-focused staff members are vital in safeguarding our users and communities against deepfakes, harm, and abuse,” the letter asserts.
Additionally, TikTok has suggested it might substitute moderators with AI-driven systems or workers from nations like Kenya and the Philippines.
The signatories also accuse the Chinese-owned TikTok of undermining the union by announcing layoffs just eight days prior to a planned vote on union recognition within the CWU technology sector.
“There is no valid business justification for enacting these layoffs. TikTok’s revenue continues to grow significantly, with a 40% increase. Despite this, the company has chosen to make cuts. We perceive this decision as an act of union-busting that compromises worker rights, user safety, and the integrity of online information,” the letter elaborates.
Among the letter’s signatories are Ian Russell, the father of Molly Russell, a British teenager who took her life after encountering harmful online content, former Meta whistleblower Arturo Bejar, and Sonia Livingstone, a social psychology professor at the London School of Economics.
The letter also urges the committee to evaluate the implications of the job cuts for online safety and worker rights, and to explore legal avenues to prevent content moderation from being outsourced and human moderators from being replaced by AI.
When asked about the letter, Onwurah said the layoff plans raised questions about TikTok’s content moderation efforts, stating: “The role that recommendation algorithms play on TikTok and other platforms in exposing users to considerable amounts of harmful and misleading content is evident and deeply troubling.”
Onwurah mentioned that the impending job losses were questioned during TikTok’s recent appearance before the committee, where the company reiterated its dedication to maintaining security on its platform through financial investments and staffing.
She remarked: “TikTok has conveyed to the committee its assurance of maintaining the highest standards to safeguard both its users and employees. How does this announcement align with that commitment?”
In response, a TikTok representative stated: “We categorically refute these allegations. We are proceeding with the organizational restructuring initiated last year to enhance our global operational model for trust and safety. This entails reducing the number of centralized locations worldwide and leveraging technological advancements to improve efficiency and speed as we develop this essential capability for the company.”
TikTok confirmed it is engaging with the CWU voluntarily and has expressed willingness to continue discussions with the union after the current layoff negotiations are finalized.
This image seems to show a Martian wrench, but it’s just a stone
Brian Cory Dobbs Productions
Blue Planet Red, directed by Brian Cory Dobbs, is available on Amazon Prime Video
Blue Planet Red is a documentary about Mars. The world depicted by director Brian Cory Dobbs diverges from our scientific understanding but certainly possesses its allure. It showcases an advanced civilization of pyramid builders that either failed to avert their world’s demise or destroyed it in a catastrophic nuclear conflict.
Dobbs presents his assertions regarding advanced Martian life directly to the audience, complete with expressive gestures and confident poses. I found him quite engaging. Yet, after viewing his work, I wasn’t surprised to discover that a section of his portfolio includes questionable content (referring to dubious videos concerning cell phones, electromagnetic fields, and cancer).
Whether by design or not, Blue Planet Red serves as a historical record. It is a testament to a generation of researchers and enthusiasts raised under the imposing shadow of a two-kilometer geological mound in the Martian region of Cydonia. Back in 1976, NASA’s Viking 1 orbiter took a blurry photo of what seemed to be a giant human face, the famous “Face on Mars,” at the boundary between Mars’ southern highlands and northern plains.
There’s no need to delve into debunking topics that have already been convincingly dismantled many times before. If you enhance the resolution of the image, the so-called face vanishes. Features resembling tools or bones are simply rocks. Additionally, the presence of xenon-129 in Mars’ atmosphere suggests an ancient nuclear war only if we disregard the well-understood decay process of the now-extinct isotope iodine-129 into xenon-129 within Mars’ cooling lithosphere.
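The iodine-to-xenon point can be checked with back-of-the-envelope arithmetic: iodine-129 has a half-life of roughly 15.7 million years, so over Mars’ roughly 4.5-billion-year history essentially every atom of it has long since decayed into xenon-129, no nuclear war required. A minimal sketch (the half-life and age figures are standard values, not taken from the film):

```python
# Fraction of primordial iodine-129 still remaining after Mars' lifetime.
HALF_LIFE_MYR = 15.7   # iodine-129 half-life, in millions of years
AGE_MYR = 4_500        # ~4.5 billion years since the planet formed

remaining = 0.5 ** (AGE_MYR / HALF_LIFE_MYR)  # N/N0 = (1/2)^(t / t_half)
converted = 1 - remaining                     # fraction now xenon-129

print(f"remaining iodine-129: {remaining:.1e}")    # effectively zero
print(f"converted to xenon-129: {converted:.6f}")  # essentially 1.0
```

After nearly 300 half-lives, the surviving fraction is below 10⁻⁸⁰: the xenon-129 in Mars’ atmosphere is exactly what ordinary radioactive decay predicts.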
Yet, capturing this narrative holds a certain poignancy. The film gives voice to this generation of researchers. Individuals featured include Richard B. Hoover, who led NASA’s astrobiology research at the Marshall Space Flight Center in Alabama until 2011, where he helped demonstrate the existence of extremophiles on Earth. He is convinced he discovered microfossils in Martian meteorites. For all his enthusiasm, however, the film never explains why these supposed fossils sit on top of the rock samples rather than embedded within them.
Contributor John Brandenburg is a respectable plasma scientist, provided he avoids discussing nuclear war on Mars. Mark Carlotto, meanwhile, has dedicated 40 years to chronicling remnants of civilization on Mars where others see only rocks. Back on Earth, he proves to be an adept archaeologist.
After Apollo made its final moon landing in 1972, the initial thrill of the space race began to diminish. The images transmitted back by the Viking spacecraft signaled the next significant discovery. This hazy mixture of revolutionary yet unclear data served as a fertile ground for the emergence of fanciful ideas, particularly in the United States, where the Vietnam War and Watergate bred skepticism and paranoia.
Dobbs’ dynamic recounting frames the Martian narrative as the story of an event 3.7 billion years ago, when a wet, warm planet turned into a barren dust bowl. For me, it resonates more with what happened to the passionate groups glued to their screens and magazines in the 1970s. Let us momentarily set aside our disdain and engage with this generation: rarely has such fervent hope so thoroughly misled such kind hearts.
American and German (and Nazi) rocket scientists drew inspiration from Antarctic exploration to draft this foundational technical specification for a manned mission to Mars.
Simon Ings is a novelist and science writer. Follow him on X at @simonings
“Many scenarios can be represented using so-called game theory…”
Shutterstock/Anne Kosolapova
In a world where survival favors the strongest, the question arises: how do cooperative behaviors develop?
From the realm of evolutionary biology to the complexities of international diplomacy, numerous scenarios can be analyzed through the lens of game theory. These games consider not only the various actions and strategies available to each participant but also the corresponding payoffs—positive or negative outcomes that each player receives based on various results. Some games are classified as “zero-sum,” meaning one player’s gain directly translates to another player’s loss, while others are not.
A notable example of a non-zero-sum game is the Prisoner’s Dilemma, which presents a compelling situation. The basic scenario involves two “criminals” held in separate cells, unable to communicate with each other.
While there isn’t sufficient evidence to charge them with the most serious offenses, there’s enough to convict both on lesser charges. The authorities simultaneously present each prisoner with a deal: if one testifies against the other while the other stays silent, the betrayer walks free while the silent one serves three years. However, if both betray each other, they each face two years in prison. If they both choose to remain silent, they will each serve just one year for the lesser offense.
The “reward” each player receives can be viewed in terms of years served: if both stay silent, the outcome results in a payoff of -1 for each. If player A betrays player B, A’s payoff is 0 while B’s is -3. In the case of mutual betrayal, both players incur a payoff of -2. Therefore, how can players optimize their outcomes?
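The payoff structure above can be written out explicitly. The Python sketch below (illustrative only, using the years-served values from the text) tabulates the payoffs and finds the move pairs that are stable, in the sense that neither player can improve their own payoff by unilaterally switching moves:

```python
# Prisoner's Dilemma payoffs from the text, as (A's payoff, B's payoff).
# "C" = stay silent (cooperate), "D" = betray (defect).
PAYOFFS = {
    ("C", "C"): (-1, -1),  # both silent: one year each
    ("C", "D"): (-3, 0),   # A silent, B betrays
    ("D", "C"): (0, -3),   # A betrays, B silent
    ("D", "D"): (-2, -2),  # mutual betrayal: two years each
}

def is_stable(a, b):
    """True if neither player gains by unilaterally changing their move."""
    pa, pb = PAYOFFS[(a, b)]
    if any(PAYOFFS[(alt, b)][0] > pa for alt in "CD"):
        return False  # A would rather switch
    if any(PAYOFFS[(a, alt)][1] > pb for alt in "CD"):
        return False  # B would rather switch
    return True

equilibria = [(a, b) for a in "CD" for b in "CD" if is_stable(a, b)]
print(equilibria)  # [('D', 'D')]
```

Running this confirms that mutual betrayal is the only stable pair, even though both staying silent would serve fewer total years.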
In certain scenarios, each participant’s strategy is the best possible response to the other’s, a situation known as a Nash equilibrium: neither player can improve their payoff by unilaterally changing strategy. Crucially, a Nash equilibrium is stable, but it is not necessarily a good outcome for the players.
The challenge lies in how actions interact without prior knowledge of the other player’s intentions. Consider if you decide to remain silent; if your counterpart shares that thought, betrayal will yield a greater return for you. Conversely, if they plan to betray you, it’s in your best interest to do the same. Thus, the most logical option appears to be betrayal. This reasoning applies universally, leading both players to defect, resulting in a total payoff of -4.
Should both players trust one another and remain silent, their total payoff would be -2. This implication—that the so-called survival of the fittest can yield suboptimal results compared to cooperative strategies—hints at the potential for collaboration.
A famous series of tournaments from the 1980s pitted 62 computer programs against one another over 200 rounds of the Prisoner’s Dilemma. Crucially, these programs could adapt their strategies based on their opponent’s previous actions. Interestingly, purely self-serving strategies proved less successful than those grounded in reciprocity. The winning algorithm, known as “tit-for-tat,” cooperated initially and defected only when the opponent had defected in the previous round. It was also forgiving, returning to cooperation once the betrayal stopped.
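That tournament dynamic is easy to sketch. The strategies below are illustrative stand-ins (tit-for-tat and always-defect), not reconstructions of the actual tournament entries; the payoffs reuse the years-served values from the Prisoner’s Dilemma example above.

```python
# Payoffs as (A's payoff, B's payoff); "C" = cooperate, "D" = defect.
PAYOFFS = {("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
           ("D", "C"): (0, -3), ("D", "D"): (-2, -2)}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Iterated game: each strategy sees only the opponent's past moves."""
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (-200, -200): sustained cooperation
print(play(always_defect, tit_for_tat))  # (-398, -401): defection locks in
```

Two tit-for-tat players settle into cooperation and lose only one year per round; a relentless defector facing tit-for-tat wins exactly one free round before dragging both sides into the far worse mutual-defection payoff.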
Thus, while “pure” game theory can lead to unfavorable outcomes, incorporating a touch of kindness paves the way for better results: be generous, but remain vigilant against exploitation. Findings like these help explain how cooperation can emerge even among self-interested players.
When the Australian Christian College, a secondary school situated in Melbourne’s Casey suburb, enforced a mobile phone ban, it was driven by numerous factors. There was an escalation in peer conflicts online, students had difficulty maintaining focus, and teachers noticed students engaging in “code-switching on notifications.”
Caleb Peterson, the school’s principal, stated, “When a phone is within arm’s reach, a student’s attention is only half in the room. We aimed to reclaim their full attention.”
Typically, phone bans in schools require devices to be stored in bags or lockers during class hours, with confiscated phones held in the school office until the end of the day. This month marks two years since phone bans were introduced across many Australian states. Victoria pioneered the move, prohibiting mobile phone use in public primary and secondary schools back in 2020. By term 4 of 2023, Western Australia, Tasmania, New South Wales, and South Australia had implemented similar measures, and Queensland restricted mobile phone use from early 2024.
The announcement of the ban was endorsed by both parents and politicians, many of whom contended that restricting access to phones enhances focus and minimizes distraction, though some experts expressed doubts about its efficacy. Two years later, what has actually happened inside Australia’s phone-free schools?
At a high school in New South Wales, students’ mobile phones are being stored in a container after being “checked in.” Photo: Stephen Safoir/AAP
“The effects have been evident,” Peterson remarked. “Since the ban, lessons start more smoothly, there are fewer disruptions, and classroom dynamics have improved. Device-related conflicts have fallen, and recess and lunch have been transformed. We now see games, conversation, and positive interaction between students and staff. That’s the atmosphere young people are seeking.”
Research from South Australia—released earlier this March—indicated that 70% of educators noticed increased focus and engagement during learning periods, while 64% noted “a reduction in the rate of serious incidents” attributable to device usage.
Lucaya, who graduated from a western Sydney high school in 2024, views the ban as an “overreaction.” Having experienced both unrestricted phone use and the ban during her final year, she says students still find covert ways to use their devices.
“Teenagers regard cell phones as vital,” she asserts. “They provide a sense of safety and security. Taking away something that holds such significance will only add stress and anxiety, making things harder for teachers and staff rather than helping them cope.”
Several students believe that the removal of cell phones from the classroom has curtailed their options to cheat. Photo: Mike Bowers/The Guardian
Nevertheless, anecdotal evidence from dialogues with students and staff across various public and private institutions suggests a general consensus that the ban has yielded positive outcomes. An anonymous high school teacher noted that simply having mobile phones present in classrooms can prove distracting, even if not actively used. “They simply offer opportunities,” she commented. “You can distinctly notice the difference in their absence.”
Many students believe the ban has created a more equitable learning environment. Amy, a Year 11 student at a public high school in Sydney’s west, remarked that eliminating mobile phones in classrooms has curtailed misbehavior while also fostering social connections for those who spend excess time online.
“Students [feel more at ease] because it’s a safe environment where we don’t have to stress about people sharing pictures of us,” she stated.
Mariam, a Year 11 student at a public high school in Sydney’s south, felt that the phone ban was “unjust” and claimed that teachers occasionally used it to exert authority, but admitted it positively influenced learning outcomes. Aisha, a Year 11 student from a private Islamic school in Sydney’s west, noted that the phone ban has helped her “maintain attention longer and perform better academically.”
Dr. Tony Mordini, principal of Melbourne High School, a public selective institution, has observed this heightened attention firsthand. His school adopted a no-phone policy in January 2020, following guidelines from the Victorian Department of Education.
“From a professional perspective, this ban has clearly had a beneficial impact,” he stated. “Students exhibit increased focus during lessons and are less sidetracked by online distractions. Furthermore, the absence of phones has significantly curtailed opportunities for cyberbullying and harassment in classrooms.”
However, Mordini acknowledges that the ban also curtails certain student opportunities.
“It’s crucial to recognize what we’ve surrendered,” he remarks. “Mobile phones can serve as powerful educational tools, capable of storing extensive content, assisting with research, capturing photographs, creating videos, and hosting valuable applications. Lacking a mobile phone necessitates reliance on the traditional resources and devices provided by the school.”
Professor Neil Selwyn from Monash University’s School of Education, Culture, and Society, stated, “We’ve been informed that banning phones will curb cyberbullying, enhance concentration in class, and reduce the need for teachers to discipline for phone misuse.” Some politicians promised to boost student learning and mental health, but a significant impetus behind these bans was their popularity.
He suggested that phone bans in schools act as a proxy for wider concerns about children and their devices, but questioned whether schools are the right place to address them.
“Young people spend a significant amount of time outside school, thus parents and families must engage in discussions on regulating their children’s device usage at home,” he emphasizes. “Regrettably, this isn’t a priority for most policymakers, so enacting phone bans in schools feels like an easy way to address the broader issue of excessive digital device use.”
Selwyn said Australia’s phone bans were not implemented “with the intent of thoroughly investigating their effectiveness,” and described the research in this field as “not conclusive or particularly rigorous.”
He further asserted that recent government data from New South Wales and South Australia is “not particularly illuminating.”
“The critical concern remains how these bans will affect us over time,” he noted. “Claims suggesting these bans suddenly result in dramatic improvements may sound politically appealing, but the tangible impact of these bans necessitates more comprehensive and ongoing investigation.
“We must go beyond merely asking principals if they believe student learning has enhanced. We need to enter classrooms and engage students and teachers about their varied experiences with the ban, and the potential benefits they foresee moving forward.”
He referenced a recent UK study of 30 schools and over 1,200 students which concluded that “students in schools devoid of smartphones showed no notable differences in mental health, sleep, academic performance in English or mathematics, or even disruptive behavior in class.”
“Phone bans are not a silver bullet, but they serve as an important tool,” Peterson comments. Photo: Dan Peled/AAP
“While some studies imply a connection between phone bans and improved academic performance, they are not deemed to provide reliable evidence of direct causation,” he states. “It would be imprudent to assume a phone ban would singularly and significantly rectify these issues.”
Peterson is careful not to “exaggerate” the ban’s effects, saying its aim is to “foster conditions conducive to successful learning and friendships.” The school grants exemptions for medical management, disability support, and assistive translation apps, but he contends that academic flow has improved, conflict has fallen, and social cohesion is stronger. The school’s “health metrics” indicate “lessened psychological distress.”
“Phone bans are not a panacea,” he notes. “However, they are a valuable resource, particularly when paired with digital citizenship, mental health advocacy, and positive playground initiatives.”
Peterson conveyed that numerous students suggested the ban offers them a “reprieve.”
“Phone bans have now simply become the norm, with real and modest benefits that are genuinely worthwhile.”
Coral reefs are critically threatened by climate change
WaterFrame/Alamy
The recent surge in ocean temperatures has led to extensive bleaching and mortality of warm-water corals globally, marking the onset of the first climate tipping point in an ecosystem on Earth, as stated by scientists.
The deterioration of one of the planet’s most biodiverse and vulnerable ecosystems presents ‘risks to human health and safety’ for which governments are inadequately prepared, cautions Melanie McField, who oversees Florida’s “Healthy Reefs for Healthy People” conservation initiative under the Smithsonian Institution.
Warm-water coral reefs account for one-third of all known marine biodiversity and offer food, coastal protection, and livelihoods for approximately one billion individuals worldwide. Additionally, coral reefs contribute $9.9 trillion annually in goods and services globally.
However, corals are particularly vulnerable to fluctuations in water temperature. Record-breaking global temperatures in 2023 have elevated ocean heat levels to unprecedented highs, resulting in significant bleaching events impacting over 80 percent of the world’s corals. Bleaching occurs when corals react to elevated water temperatures by expelling the algae residing within their tissues, leading them to bleach white. This process can make corals more prone to disease, and prolonged bleaching can deplete their primary food supply, ultimately leading to their death.
The most recent bleaching event represented “an order of magnitude” beyond what scientists had previously witnessed, according to McField. “We are at a tipping point,” she acknowledged. This is generally understood as a crucial threshold that, if crossed, can trigger dramatic and potentially irreversible changes in the climate system.
McField contributed to the chapter on corals in the Global Tipping Points Report 2025, which has just been published. The report, the first update since 2023, was compiled by 160 scientists worldwide and coordinated by the University of Exeter and the campaign organization WWF. It warns that warm-water corals are the first component of the Earth system to reach a tipping point and are currently facing an “unprecedented crisis.”
Leading experts estimate that the thermal tipping threshold for warm-water corals lies at a 1.2 degrees Celsius rise in global average temperature above pre-industrial levels, with an upper bound of 1.5 degrees Celsius. In 2024, the world’s average temperature exceeded 1.5 degrees Celsius above pre-industrial levels for the first time, beyond the limits within which coral reefs can survive, noted Tim Lenton, who led the report at the University of Exeter.
“We assessed the world at a temperature of 1.5 degrees Celsius and confirmed the results,” he stated during a press conference ahead of the report’s release. “Most coral reefs are at risk of large-scale mortality or bleaching and are transitioning into a different state dominated by seaweed and algae.”
The most promising chance to save the world’s warm-water corals from near-total extinction lies in rapidly bringing global average temperatures back down to 1.2 degrees Celsius above pre-industrial levels, Lenton asserts. Whether this goal, more ambitious even than the 1.5°C target, is attainable remains uncertain.
Terry Hughes, a researcher from Australia’s James Cook University, emphasizes that “few unbleached coral reefs remain worldwide”. Nonetheless, there is still potential for improvement. “If global greenhouse gas emissions are swiftly curtailed, we can influence the future of coral reefs over the next few decades,” he states.
Although the timing of climate tipping points is often uncertain, researchers caution that significant declines in the Amazon rainforest, melting of polar ice sheets, and collapse of the crucial AMOC current may all occur at warming levels below 2°C.
Moreover, humans can also trigger “positive tipping points” to mitigate these risks, Lenton highlighted, pointing to the rapid advances in renewable energy and the growing adoption of electric vehicles over the past decade. Fast-tracking cleaner technologies could significantly reduce emissions and help keep global warming well below 2°C, the report suggests.
Lenton said immediate action from world leaders at the upcoming COP30 summit in Brazil is crucial to accelerate emissions cuts across the global economy and to minimize the time global temperatures spend above 1.5 degrees Celsius. “We are swiftly nearing tipping points in various Earth systems that could have catastrophic impacts on humanity and nature, fundamentally altering the planet. This necessitates immediate and unprecedented action from COP30 leaders and policymakers worldwide,” he urged.
This past month, independent musicians in San Francisco convened for a series of discussions titled “Death to Spotify,” where attendees delved into “the implications of decentralizing music discovery, production, and listening from a capitalist framework.”
Hosted at Bathers Library, the event featured speakers from indie radio station KEXP, record labels Cherub Dream Records and Dandy Boy Records, along with DJ collectives No Bias and Amor Digital. What began as a modest gathering quickly sold out, gaining international interest. Organizers were approached by individuals as far away as Barcelona and Bengaluru eager to replicate the event.
The “Death to Spotify” event held on September 23 at Bathers Library in San Francisco, California. Photo: Dennis Heredia
These discussions come as the global backlash against Spotify gains traction. In January, music journalist Liz Pelly released *Mood Machine*, a critical examination arguing that streaming services have hollowed out the industry, turning listeners into “passive, unstimulated consumers.” Pelly asserts that Spotify’s business model pays artists meagerly, particularly if they consent to be playlisted via its Discovery Mode, which favors a bland, ambient soundtrack that blends into the background.
While artists have long voiced concerns over inadequate compensation, this past summer the criticism turned personal, specifically targeting billionaire co-founder Daniel Ek’s backing of Helsing, a German company developing military AI technology. Prominent acts like Massive Attack, King Gizzard & the Lizard Wizard, Deerhoof, and Hotline TNT have pulled their songs from the platform in protest, though Spotify stresses that “Spotify and Helsing are entirely separate entities.”
“Mood Machine: The Rise of Spotify and the Cost of the Perfect Playlist” by Liz Pelly. Photo: Hodder
In Oakland, Stefani Dukic read *Mood Machine*, learned about the boycott, and felt inspired.
While not a musician, Dukic, who investigates city police complaints, describes her fascination with sound alongside her friend Manasa Karthikeyan, who works in an art gallery. They decided to foster a similar dialogue. “Spotify plays a vital role in our music interaction,” Dukic explains. “We thought it would be enriching to investigate our relationship with streaming, the significance of deleting a file, and the process involved.” Thus, Death to Spotify was conceived.
In essence, the aim was to “end algorithmic listening, cease royalty exploitation, and discontinue AI-generated music.”
Karthikeyan believes the onus of quitting Spotify falls on both listeners and musicians. “One must acknowledge that not everything is instantly available,” she states. “It prompts deeper consideration of what you support.”
Yet, will musicians and fans truly commit to a long-term boycott of the app?
Numerous prominent artists have previously pulled their catalogs from Spotify amid much fanfare, only to quietly return. Taylor Swift, one of the platform’s biggest stars, returned in 2017 after a three-year boycott over unfair payment practices. Thom Yorke, the frontman of Radiohead, removed some solo projects in 2013 for similar reasons, labeling Spotify as “the last desperate fart of a dying corpse,” yet he later reinstated them.
In 2022, Neil Young and Joni Mitchell left the platform over its exclusive deal with podcast host Joe Rogan, who had promoted anti-vaccine views; both musicians contracted polio as children in the 1950s. They, too, have since reinstated their catalogs on Spotify.
Eric Drott, a music professor at the University of Texas at Austin, suggests this latest wave of boycotts feels distinct. “These artists are not mainstream. Many have realized for years that streaming isn’t lucrative, but they still sought recognition. With the sheer volume of available music, people are questioning its overall value.”
Will Anderson of Hotline TNT asserted there is “0%” chance his band will return. “There’s no rationale for genuine music enthusiasts to be there,” he claims. “Spotify’s primary objective is to encourage you to stop pondering what’s being played.” When the band sold their new album, “Raspberry Moon,” directly via Bandcamp and a 24-hour Twitch stream, it sold hundreds of copies and generated substantial revenue.
Manasa Karthikeyan (left) and Stefani Dukic. Photo: Eva Tuff
Pop-rock artist Caroline Rose and others are also experimenting with alternative distribution methods. Her album *Year of the Slug*, influenced by Cindy Lee’s “Diamond Jubilee,” was exclusively released on vinyl and Bandcamp, initially available only on YouTube and the file-sharing platform Mega. “It’s disheartening to pour your heart and soul into something only to give it away online for free,” Rose articulates.
Rose is a member of the Union of Musicians and Allied Workers (UMAW), an advocacy organization established to protect music professionals since the onset of the COVID-19 pandemic. Joey DeFrancesco, a member of the punk band Downtown Boys and a UMAW co-founder, stated the group “clearly advocates for artists as agents, holding corporations accountable and facilitating necessary dialogue,” including efforts to remove music from Spotify. He also noted the “limitations” inherent in individual boycotts.
“Our goal in the labor movement and within UMAW is to act collectively,” he emphasized. Notable examples include UMAW’s successful campaign—in partnership with the Austin for Palestine coalition—to persuade the South by Southwest music festival to cut ties with U.S. military and arms manufacturers as sponsors for its 2025 event, as well as the introduction of the Living Wage for Musicians Act (a bill aimed at regulating payments to artists on Spotify) championed by Congresswoman Rashida Tlaib.
The organizers of Death to Spotify assert that their intent isn’t to dismantle the app but rather to prompt users to critically reflect on their music consumption habits. “We want to encourage a more thoughtful engagement with how we listen to music,” Karthikeyan explains. “Sticking to algorithmically generated comfort zones only serves to diminish the richness of our culture.”
If you’re contemplating an upgrade to a new laptop, you’re in good company. Many people consider replacing their device when it starts to feel sluggish or outdated. However, before you invest in the latest technology, it’s crucial to ask: Do you truly need a new laptop?
For the majority of users, the answer is: no. The reality is that the requirements for laptop performance haven’t evolved significantly even as technology progresses. Whether you’re browsing, participating in video conferences, or working on spreadsheets, your current laptop likely meets all your needs.
Reasons not to upgrade
If your laptop is still adequate for your daily tasks, an upgrade might be unnecessary. Here’s why:
Most applications don’t need cutting-edge specifications. Upgrading from an Intel Core i3 to a Core Ultra 5 won’t necessarily speed up report generation. Productivity is often influenced more by effective workflows than by raw CPU speed.
New ports are adaptable. Even without USB-C or Wi-Fi 7, adapters and dongles make it simple to connect to modern devices.
It’s both financially and environmentally responsible. A new laptop could set you back between $1,000 and $10,000, and its production utilizes rare minerals and significant energy. Repairing or upgrading your existing device is better for your finances and the environment.
When is buying a new laptop justified?
There are valid reasons for purchasing a new laptop in 2025.
Severe physical damage or hardware failure (e.g., a malfunctioning motherboard or a battery that can’t be replaced)
Display or keyboard issues that hinder daily usage
Incompatibility with essential new software vital for work or study
If any of these apply to your current device, it might be time for a replacement. Otherwise, with a few tweaks, you can restore your laptop to near-new condition.
Enhancing laptop performance without a new purchase
1. Optimize startup programs
Excessive startup applications can degrade performance.
On Windows, open Task Manager → Startup and disable any unnecessary apps.
On macOS, navigate to System Settings → Users and Groups → Login Items and remove what you don’t need.
2. Uninstall unused browser extensions
Extensions can consume resources and slow down your browsing experience. For Chrome, enter: chrome://extensions in the address bar and remove any outdated extensions.
3. Free up storage
Utilize a free disk analysis tool such as:
WinDirStat (Windows)
Disk Inventory X (Mac)
Delete large files, outdated downloads, and applications you no longer use. This can free up valuable space and help you avoid costly storage upgrades.
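If you prefer the command line to a graphical tool, the same audit can be done with standard Unix utilities. A minimal sketch, assuming a Linux or macOS system with GNU `du`, `sort`, and `find`; `$HOME` is just an example target, so point it at whichever folder you want to inspect:

```shell
#!/bin/sh
# List the ten largest items in the home directory, biggest first.
du -sh "$HOME"/* 2>/dev/null | sort -rh | head -n 10

# Find individual files larger than 500 MB anywhere under the home directory.
find "$HOME" -type f -size +500M -exec ls -lh {} \; 2>/dev/null
```

Reviewing the top entries usually surfaces forgotten downloads and old disk images, which are exactly the files worth deleting before paying for a storage upgrade.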
4. Consider hardware upgrades
Small upgrades can significantly enhance your system’s performance.
Add RAM (8GB is the minimum, but 16GB is recommended)
Switch from HDD to SSD for improved speed
Replace your battery if it drains quickly
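Before ordering parts, it helps to confirm what the machine actually has. A quick Linux-only sketch (it reads `/proc` and `/sys`, which don’t exist on Windows or macOS) for checking installed RAM and whether each drive is a spinning HDD or an SSD:

```shell
#!/bin/sh
# Show total installed RAM; values well under 16 GB may justify an upgrade.
grep MemTotal /proc/meminfo

# Check each block device's rotational flag: 1 = spinning HDD, 0 = SSD.
for disk in /sys/block/*/queue/rotational; do
  [ -e "$disk" ] || continue   # skip if no block devices are exposed
  printf '%s: %s\n' "$disk" "$(cat "$disk")"
done
```

If the rotational flag reads 1, swapping that drive for an SSD is typically the single most noticeable upgrade you can make.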
5. Reinstall the operating system
A clean OS installation can eliminate years of accumulated digital clutter. Be sure to back up your files first, then download and reinstall the operating system from the official site. You may be surprised by the enhanced performance.
Don’t overlook physical cleaning
One of the best parts of getting a new laptop is how fresh and tidy it is. To refresh your current device:
Shut down and disconnect.
Use a microfiber cloth to wipe down the screen and surfaces.
Turn the keyboard upside down and gently vacuum to remove any dust or debris.
A clean laptop not only looks appealing but also enhances airflow and minimizes overheating.
Final thoughts: Make wise upgrade decisions
Before you rush to purchase a new laptop, consider enhancing the performance of your current device. Through simple maintenance, a handful of upgrades, and good cleaning practices, you can extend the lifespan of your laptop by several years, saving money and benefiting the planet.
If you do decide to buy a new one, research thoroughly and focus on what truly matters: performance, reliability, and user experience, not just impressive specifications.