AI’s Profound Impact on Wealth: Is This What We Truly Desire? | Dusting Astera

Specifically, Palantir, a cutting-edge firm known for its five billionaire executives, recently announced that its second-quarter revenue exceeded $1 billion. This marks a 48% increase on the previous year, with a staggering 93% growth in its U.S. commercial business. These figures are astonishing, owing largely to the company’s embrace of AI.

The AI revolution is upon us, and its proponents remind us daily that, across the U.S., it is reshaping our world: enhancing efficiency and reducing errors in businesses and government agencies while unlocking extraordinary opportunities in science and technology. Managed well, this latest surge from Big Tech could catalyze unprecedented economic growth.

But who is this growth actually for?


Take OpenAI, the powerhouse behind ChatGPT. In a promotional video, CEO Sam Altman boasted that “You can write an entire computer program from scratch.” Shortly after, the New York Times reported that Computer Science alumni are “facing some of the highest unemployment rates” compared to other fields. This issue doesn’t only pertain to coders or engineers; AI-driven automation threatens jobs even within lower-skilled labor sectors. McDonald’s, Walmart, and Amazon are already deploying AI tools to automate tasks from customer service to warehouse operations.

While cost-cutting layoffs deliver immediate gains to AI entrepreneurs, the AI revolution appears to be primarily enriching those who are already wealthy. On Wall Street, AI stocks are rising at record speed, and hundreds of so-called “unicorns” have emerged: by one count, 500 AI startups are now valued at over $1 billion each. Bloomberg reports that AI companies have minted 29 new billionaires among their founders, and it’s worth noting that nearly all of these firms were founded in the past five years.

Why are investors so optimistic about the AI boom? Partly because this technology can replace human jobs faster than any recent innovation. The soaring valuations of AI startups are predicated on the notion that the technology could eliminate the need for human labor. The layoff trend is proving very lucrative, suggesting that the AI boom may represent the most efficient upward redistribution of wealth in modern history.

Some AI advocates argue that the fallout from these changes isn’t too detrimental for the average worker; Microsoft has even speculated that blue-collar workers may find advantages in the future AI economy. This perspective is unconvincing. Workers with specialized skills may maintain decent wages and steady employment for a time, but advances in self-driving technology, automated warehouses, and fully automated restaurants will likely hit workers without university degrees much sooner than the optimistic forecasts suggest.

All of this raises significant questions about our current economic trajectory and the wisdom of prioritizing high-tech innovation above all else. In the late 1990s, the emergence of the knowledge economy was hailed as a solution to various economic crises. As the decline of traditional industries wiped out millions of high-wage union jobs, people were encouraged to “upskill” and pursue higher education to secure jobs in Google’s new universe. Ironically, AI, the epitome of the knowledge economy, now threatens to eliminate knowledge-based work itself. As Karl Marx observed, the bourgeoisie digs its own grave by immiserating the proletariat. Today’s tech elites seem intent on fulfilling that prediction.

The information age has not only created a new class of oligarchs—from Bill Gates and Jeff Bezos to Elon Musk—but also widened class divides based on education and income. As computer-driven work gained respect, wage disparities between those with university degrees and those without expanded significantly.

Today, a person’s stance on cultural issues, from gender ideology to immigration, can often be tied to their economic standing. Those who still earn a living through manual labor are increasingly alienated from those who prosper by managing and manipulating “data.” In urban knowledge hubs, a near-medieval class structure has emerged: bankers and tech moguls at the top, a comfortable class of lawyers, healthcare professionals, and white-collar workers below them, then a squeezed segment of blue-collar and service workers, alongside a growing cohort of the semi-permanently unemployed.

This profound inequality has fed political dysfunction. Our civic landscape is characterized by hostility, suspicion, resentment, and extreme polarization. Ultimately, politics seems to favor only the financial and technological elites who maintain effective control over government. Under Joe Biden they benefited from incentives and subsidies; under Donald Trump they received tax cuts and deregulation. Whoever holds power, they always seem to get richer.

Societally, the anticipated benefits of the knowledge economy have not materialized. With the advent of global connectivity, we expected cultural flourishing and social vibrancy; instead, we got an endless scroll of mediocrity. Smartphone addiction has deepened our negativity, bitterness, and boredom, while social media has turned us into narcissists. Our attention spans have degraded under the incessant pull of notifications, and the proliferation of touchscreen kiosks has further reduced opportunities for human interaction. We are lonelier and less content, and the solution on offer is more AI, which perhaps indicates an even deeper psychosis. Do we truly need more?


Common labor is essential for achieving any semblance of shared prosperity. Rebuilding our aging infrastructure and modernizing the electrical grid requires electricians, steelworkers, and the skilled trades, not simply data centers. Keeping city streets clean requires more, better-compensated sanitation workers, not “smart” trash compactors. Addressing crime and social order requires more police officers on patrol, not fleets of robot police dogs. Improving transportation requires actual trains operated by people, not self-driving cars. In short, investing in a low-tech economy offers a multitude of opportunities. Moreover, the essentials of life, love, family, friendship, and community, remain fundamentally analog.

Beyond what is desirable, investing in a low-tech future may even become necessary. Despite the persistent hype surrounding AI, much of its promise remains illusory. The massive influx of investment capital into AI carries all the hallmarks of a speculative bubble that, if it bursts, could further destabilize an already precarious economy.

This is not an argument for Luddism; technological advancement should proceed at a measured pace. But technological development must not dominate our priorities. Shouldn’t government priorities center on social and human needs? In 2022, Congress approved around $280 billion for high-tech investment. In 2024, private funding in AI alone reached $2.3 trillion. This year, the largest tech companies have benefited from deregulatory measures and Wall Street’s exuberance, with plans to commit an additional $320 billion to AI and data centers. By contrast, Biden’s signature infrastructure investments totaled only $110 billion. The disparity underscores the need to balance technology against societal welfare.

Marx, despite his complexities, understood that technology should serve societal needs. We have inverted that model: society now exists to serve technology. Silicon Valley leaders like to tell a story in which the intricate challenges of the future demand ever-increasing R&D investment, even as ongoing deregulation primarily benefits the tech sector. The most pressing concerns are not the complexities of tomorrow but the enduring issues of wealth, class, and power.

Source: www.theguardian.com

Review of “How to Save the Internet with Nick Clegg” – Unpacking Silicon Valley’s Impact on Technology

Nick Clegg takes on challenging positions. He served as the British Deputy Prime Minister from 2010 to 2015, navigating the complex dynamics between David Cameron’s Conservatives and his own Liberal Democrats. A few years later, he embraced another tough role as Vice President of Meta and President of Global Affairs from 2018 until January 2025. In this capacity, he managed the contrasting landscapes of Silicon Valley and Washington, D.C., as well as other governments. “How to Save the Internet” outlines Clegg’s approach to these demanding responsibilities and presents his vision for fostering a more collaborative and effective relationship between tech companies and regulators in the future.

The primary threats Clegg discusses in his book do not originate from the Internet; rather, they come in the form of regulatory actions against it. “The true aim of this book is not to safeguard myself, Meta, or major technologies. It is to enhance awareness about the future of the Internet and the potential benefits of these innovative technologies.”

However, much of the book focuses on defending Meta and large technology firms, beginning with a conflation of the widely beloved Internet with social media, a far more ambiguous corner of online life. In exploring the “Techlash,” the swift public backlash against big tech in the late 2010s, he asks whether people would really wish these technologies away.

That brings me to a recent survey I conducted through the Harris Poll. I posed this question to a nationally representative sample of young American adults, the generation shaped by the proliferation of social media platforms, asking respondents whether they wished various platforms and products had never existed. Regret that the Internet exists is low, at 17%, and for smartphones it is only 21%. Regret over the major social media platforms, however, is considerably higher, ranging from 34% for Instagram (owned by Meta) to 47% for TikTok and 50% for X. A survey of parents found similarly high levels of regret about social media, and other researchers have reported comparable findings.

In other words, many of us would opt to disconnect from certain technologies if given the chance. Clegg presents this choice as binary: either fully embrace the Internet or shut it down. Yet, the real concern lies with social media, which can be regulated without dismantling the entire Internet and is consequently far more challenging to defend.

Nevertheless, Clegg attempts this defense. In the opening chapter, he addresses the twin accusations that social media has harmed global democracy and damaged teenage mental health. While he acknowledges both have deteriorated since the 2010s, he contends the decline merely coincides with the rise of social media rather than being caused by it. He cites academic research, yet his interpretations echo Meta’s standard narratives and overlook many critical counterarguments; the studies he leans on have published rebuttals. Ultimately, Clegg borrows many of his defensive phrases directly from a rebuttal Meta published in response to its critics, while my own work lays out the case that social media damages democracy.

In this book, Clegg aligns himself with Meta’s narrative, despite having privately taken a different view of teenage mental health. Lawsuits brought by multiple U.S. state attorneys general have surfaced documents showing that Clegg was aware of the problem. On August 27, 2021, prompted by an employee’s request for more resources to address teenage mental health concerns, Clegg emailed Mark Zuckerberg that it was “increasingly urgent” to tackle “issues concerning the impact of products on the mental health of young people,” noting that the company’s efforts were hampered by understaffing. Zuckerberg did not respond.

Clegg’s current stance, that the harm is merely correlational and such correlations lack significance, contradicts evidence from numerous Meta employees, contractors, and whistleblowers, as well as leaked documents. One example is a 2019 study commissioned by Meta and surfaced by the Tennessee attorney general, in which researchers told the company that, for teens, “Despite Instagram’s addictive nature and detrimental effects on mental health, it’s still irresistible.”

As for his suggestions for preserving the Internet, Clegg proposes two key principles: radical transparency and collaboration. He advocates for tech companies to be more open about how their algorithms work and how decisions are made, warning: “If Silicon Valley’s masters refrain from opening up, external forces will intervene.”

In terms of collaboration, he advocates for a “digital democratic alliance,” emphasizing the importance of providing a counter to China’s technology, which supports its authoritarian regime. Clegg envisions that world democracies should unite to ensure the Internet upholds the democratic ideals prevalent in the 1990s.

Does Clegg’s vision hold merit? Transparency is commendable in theory, but it may be too late to impose it on the companies that now dominate the Internet. As the tech journalist Kara Swisher put it, we built cities without infrastructure: no sanitation, no law enforcement, no signage. Imagine such a city. That lack of foundational design lets fraudsters, extremists, and others thrive on these platforms, preying on teenagers and large enterprises alike. A leap toward transparency in 2026 may prove insufficient to repair frameworks established two decades ago.


As for collaboration, envisioning a corporation like Meta relinquishing data and control seems implausible. The tech giant has enjoyed considerable support from the Trump administration, which seems unlikely to pressure it on other nations’ behalf. It thus remains unclear how “the choice will be taken out of their hands” should the companies resist cooperation. By whom, exactly?

The great biologist and ant expert E.O. Wilson once remarked that Marxism is “a good ideology for the wrong species.” After engaging with Clegg’s proposals, one might draw a parallel: a good plan for the wrong companies. His suggestions overlook the many critiques leveled in books documenting Meta’s unethical practices, the revelations of the 2021 leak known as the Facebook Files, and the ongoing legal challenges.

Jonathan Haidt is a social psychologist and the author of The Anxious Generation (Penguin). How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict by Nick Clegg is published by Bodley Head (£25). To support the Guardian, purchase a copy at Guardianbookshop.com. Shipping charges may apply.

Source: www.theguardian.com

Spacecraft Predicts Solar Storm 15 Hours Before Impact with Earth

Solar activity

Solar Storms Threaten Electronic Systems on Earth

Solar Dynamics Observatory, NASA.

Following successful tests of the technique with the Solar Orbiter spacecraft, it may soon be possible to forecast major solar storms capable of disrupting electronics on Earth more than half a day in advance.

The Sun periodically emits powerful bursts of plasma known as coronal mass ejections (CMEs), which carry strong magnetic fields that can damage electronics on our planet. While satellites and telescopes do monitor signs of CMEs, their danger depends on the magnetic field within each one, making it difficult to identify which eruptions will be hazardous.

Some of the most reliable measurements of these magnetic fields come from satellites stationed at gravitationally stable locations in the Sun–Earth system known as Lagrange points. Though these satellites sit hundreds of thousands of kilometres from Earth, that is only about 1% of the distance to the Sun, so they can warn of a CME’s intensity only about an hour before it strikes.
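The trade-off between distance and warning time comes down to simple arithmetic: lead time is roughly the spacecraft-to-Earth distance divided by the CME’s speed. The sketch below uses typical illustrative values (about 1.5 million km to an L1 monitor, a 500 km/s CME, a probe half an astronomical unit upstream), not figures from this study; actual lead times depend on where a spacecraft happens to sit and how fast a given CME travels.

```python
# Back-of-envelope CME warning times; all distances and speeds are
# illustrative assumptions, not measurements from the study.

AU_KM = 149.6e6  # kilometres per astronomical unit

def warning_minutes(distance_km: float, cme_speed_km_s: float) -> float:
    """Minutes between a CME passing the spacecraft and reaching Earth."""
    return distance_km / cme_speed_km_s / 60

# L1 monitor ~1.5 million km upstream, moderate CME at 500 km/s
l1 = warning_minutes(1.5e6, 500)            # about 50 minutes

# Hypothetical probe 0.5 AU from Earth, same CME speed
midway = warning_minutes(0.5 * AU_KM, 500)  # roughly 40 hours

print(f"L1 warning: {l1:.0f} min; midway probe: {midway / 60:.0f} h")
```

The same arithmetic explains why a spacecraft even partway toward the Sun, as Solar Orbiter was during these events, can stretch the warning from minutes to many hours.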

Now, Emma Davis and her colleagues at the Austrian Space Weather Office in Graz have found a way to use the European Space Agency’s Solar Orbiter to issue earlier alerts. “Solar Orbiter is primarily a science mission and not specifically designed for this purpose,” Davis explains. “This is an added benefit from a fortunate alignment during a CME event.”

On 17 and 23 March this year, two sets of CMEs headed toward Earth while Solar Orbiter was positioned between our planet and the Sun. Davis and her team used the spacecraft’s magnetic field and solar wind speed measurements to model the internal magnetic structure of each CME and predict the severity of the geomagnetic storms they would induce. Remarkably, the entire forecasting process took less than five minutes, allowing predictions 7 and 15 hours before the events reached Earth.

Davis noted that their predictions closely aligned with the actual geomagnetic strengths observed, which she found remarkable considering the dynamic changes the CME’s magnetic fields undergo as they approach Earth. “The fortunate aspect was that not many unexpected events occurred, and these CMEs behaved rather predictably,” she adds.

She cautions that upcoming storms may not follow the same predictable patterns and that determining the exact arrival time of these storms remains challenging, with uncertainties lasting several hours.

Nevertheless, she underscores the importance of real-time measurements once a CME departs from the Sun. Chris Scott from the University of Reading, UK, who was not part of this research, noted, “It provides an early indication of the potential configuration of the magnetic fields within each eruption.”

However, data from these two events alone are insufficient for fine-tuning predictive models, and further observations are essential before establishing reliable, specialized solar storm monitoring missions near the Sun, Scott concludes.


Source: www.newscientist.com

Polls Reveal Half of UK Adults Fear AI Will Impact Jobs

Half of adults in the UK express worries about artificial intelligence affecting their employment, prompting union leaders to advocate for a significant shift in the government’s strategy towards emerging technologies.

The primary concern for 51% of the 2,600 adults surveyed by the Trades Union Congress (TUC) was job loss or changes to contract terms.

AI is a particular worry for workers aged 25 to 34, with 62% of participants in this age group sharing such apprehensions.

The TUC’s survey results coincide with announcements from major employers, including BT, Amazon, and Microsoft, indicating potential job cuts due to advancements in AI over recent months.

The UK job market is slowing amid a cooling economy, with the official unemployment rate at a four-year high of 4.7%; however, most economists do not attribute this to increased investment in AI.

While the TUC recognizes that AI technology could benefit employees and enhance public services, it urges the government to involve both workers and unions in the deployment of AI to safeguard jobs and offer training for roles replaced by AI.

Half of those surveyed (50%) wish to have a say in how AI is implemented in the workplace and the broader economy, as opposed to leaving this decision solely to businesses, with only 17% against this idea.

As part of its AI strategy, the TUC is calling for conditions to be attached to the substantial public funds allocated for AI research and development, ensuring that workers are not displaced by innovative technologies.

Furthermore, it is essential for companies to share the “digital dividends” from productivity improvements achieved through AI by investing in employee training and skills, enhancing wages and working conditions, and involving workers in corporate decision-making processes, including representation on boards.


Union representatives have cautioned that without such regulations, allowing workers to influence AI usage, the rise of new technologies may result in “prolonged inequality,” worsened working conditions, and increasing social unrest.

The TUC has insisted on the need to strengthen the UK’s social security and skills systems to support and reskill workers whose jobs are threatened by AI advancements.

Kate Bell, the TUC’s assistant general secretary, said: “AI holds transformative potential, and if developed correctly, it can enhance productivity, benefiting workers.”

She further noted: “The alternative is grim. In unchecked and improper hands, the AI revolution might establish deepening inequality as jobs decline or vanish, with shareholders growing wealthier.”

Source: www.theguardian.com

Clamor Review: A Groundbreaking Book Examines the Global Impact of Noise

Noise-Canceling Headphones as a Solution to Unwanted Sound

pjrtransport/alamy

Clamor
Chris Berdik (W.W. Norton)

Noise is a constant presence, easily overlooked until its intensity shifts significantly. We recognize familiar sounds—the heartbeats and hums of daily life—just as readily as we notice the sights seen during commutes and other outings.

When those familiar sounds change, we pay attention. Many of us say we crave quiet, yet the science journalist Chris Berdik argues that what we really want overlaps intricately with the right kind of noise. In his book Clamor: How Noise Took Over the World – and How We Can Take It Back, he contends that sometimes we need to introduce pleasant sounds to mask more intrusive ones.

While noise-cancelling headphones sell well, Berdik emphasizes that they are not a universal fix. Introducing white or grey noise can mitigate harmful sounds, yet complete silencing can often have detrimental effects.

It is crucial to cultivate the right kind of noise, as it directly impacts our health. For instance, my neighbor’s piano while I read Berdik’s work doesn’t elevate my stress levels as much as the sounds of children playing soccer against my living room walls. The immediate effects are concerning, but the long-term implications offer even greater cause for reflection.

Nearly 40 million adults in the US experience noise-induced hearing loss, a figure projected to nearly double by 2060. The challenge is global: the World Health Organization estimates that over 1 billion young people worldwide face an avoidable risk of hearing damage from devices such as smartphones and audio players, and projects that by 2050 as many as one in four people will live with some degree of hearing loss.

I read this book amid rising public debate about noise. In the UK, recently proposed legislation would fine passengers who play loud music aloud on public transport, a surprisingly popular idea.

Yet, a contrasting perspective emerged during my recent days spent in the hospital listening to the beeping machines connected to my grandfather. As time passed, those sounds became familiar, aligning with Berdik’s observation about how our brains adapt.

This prevailing issue touches on another compelling story Berdik tells in Clamor: a healthcare professional working to redesign medical machines so that critical signals aren’t drowned out by unnecessary noise. Her innovations include auditory icons, concise sound cues that convey essential information, like the sound of breathing.

The ramifications of noise reach beyond human experiences. Berdik points out that from 1950 to 2007, ambient noise levels in the deep sea rose by 3.3 decibels per decade.

This increase has tangible effects on sea life; for instance, the sounds created by ocean vessels interfere with the communication frequencies of whales.
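Because decibels are logarithmic, the reported trend compounds dramatically over the 57-year span. The conversion below is standard acoustics; only the 3.3 dB-per-decade figure comes from the text.

```python
# Cumulative deep-sea noise rise, 1950-2007, at the reported 3.3 dB per decade.
decades = (2007 - 1950) / 10         # 5.7 decades
total_db = 3.3 * decades             # total rise in decibels (18.81 dB)
power_ratio = 10 ** (total_db / 10)  # decibels are logarithmic: dB = 10*log10(P/P0)
print(f"{total_db:.1f} dB is about a {power_ratio:.0f}-fold rise in acoustic power")
```

In other words, a steady-sounding 3.3 dB per decade works out to roughly 76 times the acoustic power by the end of the period.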

Change is essential, Berdik insists. That means less loud music on public transport, a cause that some, like the Liberal Democrats, already champion. It also calls for a reevaluation of our reliance on noise-cancelling headphones and a thoughtful assessment of which noise we wish to keep and which to eliminate.

Chris Stokel-Walker is a technology writer based in Newcastle, UK


Source: www.newscientist.com

Impact of Microbiota Composition, pH, and Temperature on Key Flavor Characteristics of Premium Chocolate

Cocoa (Theobroma cacao) bean fermentation is a natural process whose many interactions shape the flavor profile of high-quality chocolate. Understanding these interactions makes it possible to reproduce sought-after flavor characteristics in a controlled environment. Research on bean samples fermented on farms in Colombia has demonstrated that pH, temperature, and the composition of the microbiota, encompassing both bacteria and fungi, significantly affect the essential flavor qualities of premium chocolate. The findings lay the groundwork for fermentation starters that can consistently recreate the attributes of fine chocolate.

Gopaulchan et al. confirmed the previously suggested role of pH and temperature variations as reliable indicators of chocolate flavor properties. Image credit: Sci.News.

The creation of fermented products like chocolate relies on the metabolic activities of microbial communities.

These communities transform raw cocoa beans into essential precursors for chocolate production.

Once harvested, cocoa beans undergo several processing stages before becoming chocolate, but fermentation remains a spontaneous process.

“The distinctive flavor of chocolate is shaped by the fermentation of cocoa beans,” said a researcher at the University of Nottingham.

“In contrast to the fermentation of wine, cheese, or dough, where specific microorganisms are added to enhance flavor, cocoa bean fermentation occurs naturally, and the microorganisms involved are not well understood.”

“The flavor profile of the beans is closely tied to the geographical location of the farm, resulting in variations in chocolate quality and taste.”

In this research, Dr. Castrillo and co-authors performed DNA sequence-based analyses on fermented cocoa beans from three separate farms in Colombia.

They discovered that a distinct microbial community underpins the fermentations at the Antioquia farms, yielding a superior flavor, as validated by professional tasters.

By analyzing sequencing data, the authors identified the microbial interactions and metabolic pathways involved in fermentation.

This allowed for the design of microbial communities that could mimic the exquisite flavor of chocolate in laboratory settings. This was confirmed through evaluations by the same expert tasters and chocolate metabolite analyses.

Further studies could inform the development of industrial fermentation starters, eliminating the geographical limitations on chocolate flavor.

“The findings from this study enhance our understanding of how the composition of microbial communities during fermentation is a crucial factor in determining chocolate flavor properties,” stated the researchers.

“We have created a reliable methodology to design fermentation starters that facilitate the controlled domestication of the unpredictable microbial fermentations that occur on cocoa farms.”

“This paves the way for the evolution of the modern chocolate industry, akin to the beer and cheese sectors, based on regulated cocoa fermentation utilizing synthetic microbial starters that can consistently replicate the unique flavor characteristics of cocoa beans and chocolate.”

The team’s study was published in the journal Nature Microbiology this week.

____

D. Gopaulchan et al. A defined microbial community recreates the attributes of fine-flavour chocolate fermentation. Nat Microbiol, published online August 18, 2025; doi: 10.1038/s41564-025-02077-6

Source: www.sci.news

“I Didn’t Grasp Its Impact for Years”: The Genesis of the Original Football Manager

If you were a soccer enthusiast who owned a computer in the early 1980s, there’s one game that stands out in your memory. The box art featured an illustration of the FA Cup, with a photo of a cheerful man with curly hair and a goatee in the lower right corner. The same image appeared in gaming magazine advertisements. Despite its basic graphics and primitive sound, the game was a perennial bestseller, enjoying years of popularity. This was Football Manager, the world’s first football management simulation. The man on the cover was Kevin Toms, the game’s creator and programmer.

The story of the game’s creation is classic early-80s lore: a passionate coder holed up in his bedroom, crafting bestsellers for the ZX Spectrum and Commodore 64, eventually driving a Ferrari on the proceeds. Toms, an avid soccer fan who had designed games since childhood, first expressed his ambitions in the early 1970s through a board game, at a time when personal computers were not commonplace. “When my parents asked what I wanted to do, I said I wanted a job as a game designer,” Toms recounts. “They responded: ‘It’s just a phase, he’ll grow out of it.’”

Toms didn’t sway from his path. Through the 1970s he honed his programming skills on corporate mainframes, and for a time coded at the Open University. “It didn’t take long to realize that I could write a game themed around these interests,” he explains. “In fact, my first program was for a programmable calculator.” In 1980, Toms acquired a Video Genie computer, essentially a clone of one of the early home microcomputers, the TRS-80. “I realized I could finally turn the football manager board game I had long wanted to make into a computer game,” he says. “There were two big advantages: the computer could calculate the league table and run the algorithms for arranging fixtures.”




“In the first few months, we sold 300 games” … Football Manager for the ZX81. Photo: Kevin Toms/Moby Games

Although the Video Genie never gained traction, Toms purchased a ZX81 with a 16K RAM extension and ported the game over. “In January 1982, I ran a quarter-page ad in computer and video game magazines, and it began to gain traction,” he recalls. “I still remember the thrill of opening that first letter. We sold 300 games in those early months.”

At that time, the game was quite rudimentary. There were no graphics, only text. Players had the option to select from 16 teams and play the role of manager, where they could buy players, influence team selection, and make adjustments throughout the season. You would start at the bottom of the old 4th division and work your way up. Toms crafted an algorithm that generated fixtures and determined match outcomes based on team statistics.

“The challenging aspect was determining player attributes,” he explains. “I assigned each a skill rating out of five, and wanted to ensure that you couldn’t simply purchase the best players and keep them for the entire season—there had to be a reason to rotate them. The more players you utilized, the higher the chance of injury.”

Toms aimed to integrate long-term strategy into the game, but the highlight feature became the most engaging aspect: the transfer market. The original version allowed players to sign one new player a week, but the selection was randomized, so one never knew who would become available. “Three midfielders would come up, and you’d need to evaluate their ratings to see if they met your team’s needs. Do you spend your budget now, or wait for a five-rated player who could take weeks to appear? That created a thrilling pressure.”
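The mechanics Toms describes (a weekly randomised transfer offer rated out of five, and match results driven by team statistics) can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not Toms’s original code; the fee scale and goal probabilities are invented.

```python
import random

def weekly_transfer_offer(rng: random.Random) -> dict:
    """One randomly chosen player comes up for sale each week,
    mirroring the game's randomised transfer market."""
    skill = rng.randint(1, 5)                        # rating out of five, as in the original
    return {"skill": skill, "fee": skill * 100_000}  # fee scale is invented

def match_result(home_skill: float, away_skill: float,
                 rng: random.Random) -> tuple[int, int]:
    """Toy outcome model: each side gets five scoring chances, converted
    with probability proportional to its average skill rating
    (an assumption, not Toms's actual algorithm)."""
    home = sum(rng.random() < home_skill / 6 for _ in range(5))
    away = sum(rng.random() < away_skill / 6 for _ in range(5))
    return home, away

rng = random.Random(1982)
offer = weekly_transfer_offer(rng)
print(f"This week's offer: skill {offer['skill']}, fee £{offer['fee']:,}")
home, away = match_result(4.0, 2.5, rng)
print(f"Full time: {home}-{away}")
```

The tension Toms describes falls out of the randomness: spend the budget on a three-rated midfielder now, or hold out for a five who may not appear for weeks.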




Inspired by Match of the Day… Football Manager’s match highlights on the Commodore 64. Photo: Kevin Toms/Moby Games

One significant challenge was memory. The expanded ZX81 had only 16K, making certain aspects, like team names, particularly troublesome. “This was a while before all the licensing issues came into play,” he notes. “My question was whether I needed to license names like Manchester United. The memory constraints also meant I had to choose teams with shorter names, hence going with Leeds.”

Football Manager debuted during the nascent era of the gaming industry, when games were often sold via mail order or at computer fairs. By 1982, however, high-street stores began taking an interest in the burgeoning video game market. “WH Smith reached out and said, ‘We love your game, we want to stock it,’ and invited me to London. They eventually ordered 2,000 units. When I returned home, though, I realized I had the figure wrong: it was actually 10,000.”

Toms left his position at the Open University and established his own company, Addictive Games. Later versions of Football Manager for the ZX Spectrum and Commodore 64 added features such as match highlights depicting crucial moments like goals and near misses.

“It was inspired by Match of the Day. They capture the most exciting parts of the game,” says Toms. “I deliberately omitted the match timer from the screen, so players wouldn’t know how much time was left or if there was still an opportunity for another goal. This was an essential aspect of the design. A slight pause between highlights added to the tension.”

The game became a phenomenon, featuring on bestseller lists for years. My friends and I spent countless hours tweaking team and player names. “I didn’t fully grasp the impact of it all for quite some time,” admits Toms. “There was no internet back then. I would receive letters from players saying, ‘I played for 22 hours straight’ or ‘I failed my mock O Level because of the game.’ I later learned that professional footballers were also fans, including Arsenal striker Charlie Nicholas and Spurs manager Bill Nicholson, with Harry Redknapp serving as a mentor to competitive Football Manager players in 2010.”

Toms subsequently developed several other management simulations, such as Software Star, centering on the gaming industry. However, as Football Manager titles multiplied and the pressure increased, he eventually sold the company, stepped back from gaming, and returned to coding in business while traveling the globe. In 2003, Sports Interactive, the creators of the Championship Manager series, acquired the Football Manager name and rebranded their game accordingly.




“I had someone who played an original purchase for my kids”… Football Star Manager. Photo: Kevin Toms

However, the game was far from finished. A decade ago, Toms engaged with fans of the original game online and assessed their interest in a smartphone adaptation. The Football Manager legacy was revived with familiar visuals. The response was overwhelmingly positive, leading to the release of Football Star* Manager in 2016. Recently, he upgraded it again and introduced a PC version. “People enjoy it, and it resonates with them,” he says. “It’s central to my design philosophy: it may appear simple, but there’s subtle depth that keeps the interest alive. I’ve played through 500 seasons and my bank account now reads £5 billion. The balance is clearly well-crafted.”

Toms has evidently rekindled the spark that first propelled Football Manager into the gaming world four decades ago. He has ambitious plans for Football Star* Manager, as well as Software Star. “I still have many ideas yet to explore,” he affirms. “There are far more goals and concepts than I have time to implement at the moment. It’s not that I lack the determination to realize them; it’s a matter of timing.”

Source: www.theguardian.com

The Extraordinary Impact of Nature on Our Brains Uncovered in a New Book

Spend time in green spaces to enhance working memory and attention.

Luke Hayes/Millennium Images, UK


Nature and the Mind


Marc Berman (Vermilion, UK; S&S/Simon Element, US)

Marc Berman is on the verge of starting a revolution, and I consider myself already aligned with his vision. You might have encountered his insights in New Scientist regarding the remarkable advantages of nature walks, the therapeutic impact of plants, and the enchantment of urban greenery.

If this sounds familiar, you may presume that Berman’s research couldn’t offer anything new. You would be mistaken, however. Nature and the Mind caters to everyone, regardless of prior knowledge, and is designed not only to inform and entertain but also to motivate action.

The book recounts how a once-troubled boy came to forge a groundbreaking field, environmental neuroscience, after transitioning from the law studies shaped by his parents’ careers (his mother a nurse, his father a lawyer) to engineering as an undergraduate.

Central to this is a fortuitous encounter with psychologists Stephen and Rachel Kaplan of the University of Michigan, who introduced attention restoration theory (ART). This theory posits that engaging with nature helps us regain our capacity to focus, and by the time Berman met them as a graduate student, the Kaplans had already amassed substantial evidence to support it.

Berman’s audacious plan involved quantifying these effects by analyzing people, their environments, and their interactions through methods including brain imaging, behavioral testing, computational neuroscience, and statistical analyses.

In his book, Berman reflects on his initial experimental proposal, met with skepticism from John Jonides, a cognitive neuroscientist at the University of Michigan, who said, “It’s crazy. It won’t work.”

The author champions a revolution to ‘naturize’ our homes, schools, offices, and cities.

Yet Berman persevered, uncovering astonishing findings. A mere 50-minute walk in a park improved individuals’ working memory and attention by 20%, irrespective of whether they enjoyed the experience or of the weather conditions. Remarkably, participants did not even need to take the walk itself to see benefits.

This improvement is notable, but why does attention need restoring at all? As Berman highlights, attention is a crucial resource for many cognitive and emotional functions, and it is constantly drained by an overstimulating environment. By restoring this resource, nature acts as a kind of superpower: enhancing intelligence and happiness, reducing stress, increasing productivity, and fostering compassion.

Some of Berman’s discoveries are breathtaking. For instance, individuals suffering from clinical depression gained fivefold benefits from a walk in the park compared to participants in the original study. Moreover, having ten additional trees on a block in Toronto increased residents’ perception of well-being by 1%.

His research also leads to delightful and innovative findings. In one study, his team employed the JPEG standard in digital image compression to analyze how the human brain processes information regarding natural landscapes versus urban settings. This research demonstrated that urban and natural images with similar complexity levels taxed the brain differently, with nature being less taxing. They even created an app to provide “repair scores” for nearby walking routes.
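The study used JPEG compression of actual photographs; as a toy illustration of the underlying idea, that less compressible images carry more visual complexity, here is a sketch using Python’s standard-library zlib on synthetic pixel data (a stand-in for the team’s actual pipeline, not a reproduction of it):

```python
import random
import zlib

def complexity_score(pixels: bytes) -> float:
    """Compressed size relative to raw size; values near 1.0 mean the data
    is barely compressible, i.e. visually 'busier' for the same pixel count."""
    return len(zlib.compress(pixels)) / len(pixels)

rng = random.Random(42)
# A gentle repeating gradient compresses well, like a smooth natural scene.
smooth = bytes(128 + (i % 64) // 4 for i in range(4096))
# Random high-frequency clutter barely compresses at all.
noisy = bytes(rng.randrange(256) for _ in range(4096))
```

Comparing `complexity_score(smooth)` and `complexity_score(noisy)` shows the gradient scoring far lower than the noise, which is the sense in which compressed size can stand in for visual complexity.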

Berman’s research addresses significant questions. How does nature capture attention? Which elements of a scene encourage recovery? How can architecture leverage these effects? It also tackles more playful ones, such as why certain typefaces appeal to us (the curves of serif fonts) and why Jackson Pollock’s abstract paintings draw us in (they contain fractals).

Above all, he is driven by a desire to effect positive change. His work serves as a call to action, urging the implementation of a “natural revolution.” “We should fundamentally reevaluate the design of all constructed spaces,” he advocates. “The natural revolution necessitates a serious commitment from people on a grand scale.”

Source: www.newscientist.com

The Negative Impact of Excessive Fiber on Gut Health

You might have come across the term “fibremaxxing.” This recent health craze, popularized on platforms like TikTok, involves significantly increasing one’s daily fiber intake beyond the recommended 25-30g (0.9-1 oz) per day, with the goal of enhancing gut health.

Influencers are now dubbing fiber as the “new protein,” promoting high-fiber foods such as beans, lentils, whole grains, fruits, and vegetables. Some even resort to fiber supplements to meet their daily targets.

This theory suggests that such increases have potential health benefits, including improved digestion, more regular bowel movements, increased gut bacteria, and relief from symptoms associated with irritable bowel syndrome (IBS).

This trend arises from the belief that modern diets often lack sufficient fiber, and there is certainly some truth to this.

Current data indicate that 91% of people in the UK, and around 95% in the US, do not meet the recommended fiber intake.

So ramping up fiber consumption must be beneficial, right? It’s not that straightforward, particularly when it comes to diet and health.

Intestinal Reactions

Fiber plays a crucial role in gut health for several reasons.

Though it is a carbohydrate, fiber differs from other carbs like starch and sugar; it is not digested in the small intestine. Instead, it travels to the large intestine where gut bacteria utilize it. These bacteria offer various health benefits by producing short-chain fatty acids (SCFAs) that contribute positively to our health.

Insoluble fibers absorb liquid, increase stool bulk, and make elimination smoother. Thus, boosting fiber intake can help prevent constipation and regulate bowel movements.

Individuals with IBS often find that increased fiber intake soothes their symptoms, particularly if they experience constipation more than diarrhea.

Vegetables offer a variety of fibers, both soluble and insoluble, beneficial for gut health.

Research suggests that adequate fiber intake can reduce the risk of colon cancer. One SCFA produced in the large intestine, butyrate, is believed to have anti-inflammatory and protective properties for colon cells.

By facilitating stool passage, fiber minimizes the contact time colon cells have with harmful substances. A lack of fiber may therefore partly explain the rising incidence of colon cancers worldwide, particularly among young people who eat too few fiber-rich foods.

Nonetheless, while incorporating more fiber into your diet is generally beneficial, “fibremaxxing” requires a nuanced understanding of nutrition. It’s akin to running a marathon without proper training for your gut.

The Risks of Overconsumption

The gut is a complex ecosystem, finely tuned to maintain balance, and introducing excessive specific nutrients can lead to unintended consequences. Some proponents of “fibremaxxing” advocate for fiber intakes exceeding 50g (1.8oz) daily, possibly resulting in side effects such as bloating, cramps, and diarrhea.

There’s also a variety of fiber types to contemplate. Soluble fibers absorb water and slow digestion, while insoluble fibers pass through largely intact, hastening digestion. Moreover, fermentable fibers feed bacteria in the large intestine.

A healthy gut requires a balance among these fibers. Excessive insoluble fiber (found in wheat bran and some vegetables) may bulk up stool but irritate a sensitive digestive tract. Certain fermentable carbohydrates, namely oligosaccharides, disaccharides, monosaccharides, and polyols (known collectively as FODMAPs), are poorly absorbed and can aggravate IBS symptoms.

To effectively support gut health, a balance of different fiber types is essential. By aiming excessively high, such as 50g (1.8oz) daily, “fibremaxxers” might confine their diet to a limited array of fiber-rich foods.

For example, high-fiber breakfast cereals provide 12.4g of fiber per 100g (0.5 oz per half cup); to reach their lofty goals, individuals may feel compelled to consume multiple bowls throughout the day. However, such cereal mainly offers insoluble fiber, lacking soluble and fermentable fiber.

Consequently, excessive fiber can lead to increased digestive discomfort and disrupt the delicate equilibrium required for a healthy gut, particularly if individuals neglect to consider how to balance their nutrient intake.

Cleansing Trends

By prioritizing one nutrient for specific health benefits, fibremaxxing fits into a long tradition of cleansing and detox trends targeting digestive health.

While various gut cleansing rituals frequently trend on social media and appear enticing, they often cause more harm than good. For instance, juice cleanses can strip away vital nutrients, and “detox” teas and laxatives may result in dehydration and long-term harm to the intestinal lining.

Additionally, recent trends involving enemas can disrupt the natural rhythm of the intestines and eliminate beneficial bacteria.

Juice cleansing is neither a healthy nor a safe dieting approach.

Experts advise caution regarding trendy supplements and extreme elimination diets that exclude entire food groups without proper oversight, as these may reduce microbial diversity and impair digestion. Your gut won’t appreciate these drastic resets; what it truly needs is ongoing, careful, and balanced support.

Monitoring Your Fiber Intake

How can you determine if you’re consuming enough fiber in your diet and if you need to increase your intake?

Signs such as fatigue and weight gain are often early warnings. Soluble fiber slows digestion, contributing to prolonged feelings of fullness while gradually releasing sugar into the bloodstream.

In its absence, blood sugar levels can fluctuate unpredictably, leading to fatigue shortly after meals. Such energy dips can tempt you into snacking, resulting in weight gain.

While fibremaxxing may have begun as a sincere effort to boost well-being, like many health trends that go viral, it oversimplifies complex bodily processes and poses risks by overdoing things.

It’s evident that most individuals will benefit from increasing fiber consumption, especially from plant-based foods, which can aid digestion, lower disease risk, and help maintain healthy weight. However, this must be done cautiously; excessive, sudden, or unbalanced increases can be detrimental.

Your gut is a finely tuned ecosystem that flourishes through diversity, consistency, and balance—not through drastic changes or quick fixes.

Read more:

Source: www.sciencefocus.com

Transatlantic Social Media Clash: Impact of UK Online Safety Laws on Internet Safety

The UK’s new online safety laws are generating considerable attention. As worries intensify about the accessibility of harmful online content, regulations have been instituted to hold social media platforms accountable.

However, just days after their implementation, novel strategies for ensuring children’s safety online have sparked discussions in both the UK and the US.

Recently, Nigel Farage, leader of the populist party Reform UK, found himself in a heated exchange with a minister in the Labour government after announcing his intent to repeal the law.

In parallel, Republicans convened with British lawmakers and the communications regulator Ofcom. The ramifications of the new law are also keenly observed in Australia, where plans are afoot to prohibit social media usage for those under 16.

Experts note that the law embodies a tension between swiftly eliminating harmful content and preserving freedom of speech.

Senior Reform UK figure Zia Yusuf stated:

Responding to criticisms of the UK legislation, technology secretary Peter Kyle remarked, “If individuals like Jimmy Savile were alive today, they would still commit crimes online, and Nigel Farage claims to be on their side.”

Kyle referred to measures in the law that would help shield children from grooming via messaging apps. Farage condemned the technology secretary’s comments as “unpleasant” and demanded an apology, which is unlikely to be forthcoming.

“It’s below the belt to suggest we would do anything to assist individuals like Jimmy Savile in causing harm,” Farage added.

Concerns about the law are not confined to the UK. US Vice President JD Vance claimed that freedom of speech in the UK is “retreating.” Last week, Republican Rep. Jim Jordan, a critic of the legislation, led a group of US lawmakers in discussions with Kyle and Ofcom regarding the law.

Jordan labeled the law as “UK online censorship legislation” and criticized Ofcom for imposing regulations that “target” and “harass” American companies. A bipartisan delegation also visited Brussels to explore the Digital Services Act, the EU’s counterpart to the online safety law.

Scott Fitzgerald, a Republican member of the delegation, noted the White House would be keen to hear the group’s findings.

Worries from the Trump administration have even extended to threats of visa restrictions against Ofcom and EU personnel. In May, the State Department announced it would block entry to the US for “foreigners censoring Americans.” Ofcom has expressed a desire for “clarity” regarding the planned visa restrictions.

The intersection of free speech concerns with economic interests is notable. Major tech platforms including Google, YouTube, Facebook, Instagram, WhatsApp, Snapchat, and X are all based in the US and may face fines of up to £18 million or 10% of global revenue for violations. For Meta, the parent company of Instagram, Facebook, and WhatsApp, this could result in fines reaching $16 billion (£11 billion).

On Friday, X, the social media platform owned by self-proclaimed free speech advocate Elon Musk, issued a statement opposing the law, warning that it could “seriously infringe” on free speech.

Signs of public backlash are evident in the UK. A petition calling for the law’s repeal has garnered over 480,000 signatures, making it eligible for consideration in Parliament, and was shared on social media by far-right activist Tommy Robinson.

Tim Bale, a professor of politics at Queen Mary University of London, is skeptical that the law will become a major voting issue.

“No petition or protest has significant traction for most people. While this resonates strongly with those online—on both the right and left—it won’t sway a large portion of the general populace,” he said.

According to a recent Ipsos Mori poll, three out of four UK parents are worried about their children’s online activities.

Beeban Kidron, a crossbench peer and prominent advocate for online child safety, told the Guardian that she is “more than willing to engage Nigel Farage and his colleagues on this issue.”


“If companies focus on targeting algorithms toward children, why would reforms place them in the hands of Big Tech?”

The new under-18 protections that prompted the latest controversy mandate age verification on adult sites to prevent underage access. There are also measures to protect children from content that endorses suicide, self-harm, and eating disorders, and to curtail the circulation of material that incites hatred or promotes harmful substances and dangerous challenges.

Some content appears to have been flagged as violating these regulations even though it was age-appropriate. In an article in the Daily Telegraph, Farage alleged that footage of anti-immigrant protests had been “censored,” as had content related to the Rotherham grooming gang scandal.

These instances were observed on X, which flagged a speech by Conservative MP Katie Lam regarding the UK’s child grooming scandal. The content was labeled with a notice stating, “local laws temporarily restrict access to this content until X verifies the user’s age.” The Guardian could not access the age verification service on X, suggesting that, until age checks are fully operational, the platform defaults many users to a child-friendly experience.

X was contacted for comment regarding its age checks.

On Reddit, forums dedicated to alcohol abuse support and even pet care will implement age checks before granting access. A Reddit spokesperson confirmed that the checks are enforced under the online safety law to limit content that is illegal or harmful to users under the age of 18.

Big Brother Watch, an organization focused on civil liberties and privacy, noted that examples from Reddit and X exemplify the overreach of new legislation.

An Ofcom representative stated that the law aims to protect children from harmful and criminal content while simultaneously safeguarding free speech. “There is no necessity to limit legal content accessible to adult users.”

Mark Jones, a partner at London-based law firm Payne Hicks Beach, cautioned that social media platforms might over-censor legitimate content out of caution about complying with their obligations to remove material that is illegal or harmful to children.

He added that disputes over Ofcom’s content rules are likely, given the pressure on platforms to address harmful content quickly while respecting freedom of speech.

“To effectively curb the spread of harmful or illegal content, decisions must be made promptly; however, that urgency can lead to incorrect choices. Such is the reality we face.”

The latest initiatives from the online safety law are only the beginning.

Source: www.theguardian.com

The Limited Impact of the Tsunami on the U.S. Does Not Indicate an Inaccurate Forecast

The 8.8 magnitude earthquake off the coast of Russia’s Kamchatka Peninsula on Wednesday generated water waves traveling at jetliner speeds toward Hawaii, California, and Washington state.

Yet, when the tsunami reached the U.S., it appears not to have inflicted widespread devastation, with some areas where warnings were issued showing no signs of significant flooding.

This doesn’t mean the tsunami was a “bust” or poorly predicted, according to earthquake and tsunami researchers.

“When you hear ‘tsunami warning,’ people often think of dramatic scenes from movies, and when it arrives at just three feet, they might wonder, ‘What’s going on?’,” remarked Harold Tobin, director of the Pacific Northwest Seismic Network and professor at the University of Washington. “We should view this as a success; we received a warning, but the situation wasn’t catastrophic.”

Here’s what you should know.

How intense was the Kamchatka earthquake? What caused the initial discrepancies?

Initially, the US Geological Survey assessed the Kamchatka earthquake at magnitude 8.0, which was later adjusted to 8.8.

“It’s not unusual for major earthquakes to see such adjustments in the first moments,” Tobin explained. “Our standard methods for calculating earthquake sizes can quickly saturate, akin to turning up the volume on a speaker until it distorts.”

A buoy located approximately 275 miles southeast of the Kamchatka Peninsula gave the first direct signs of the tsunami, showing bigger waves than the initial report implied.

This buoy belongs to the National Oceanic and Atmospheric Administration’s DART (Deep-ocean Assessment and Reporting of Tsunamis) system and is connected to a seafloor pressure sensor roughly four miles deep.

That sensor detected waves measuring 90 centimeters (over 35 inches), which caught the attention of tsunami researchers.

Vasily Titov, a senior tsunami modeler at NOAA’s Pacific Marine Environmental Laboratory, noted:

Titov reflected on the 2011 Tohoku earthquake and tsunami, which tragically claimed nearly 16,000 lives in Japan.

Subsequent earthquake models confirmed the Wednesday earthquake’s magnitude as 8.8, according to USGS calculations.

In comparison, Tohoku, at magnitude 9.1, was significantly larger.

Tobin estimated that the Kamchatka quake released two to three times less energy than the Japan quake, and that the tsunami generated in Japan was approximately three times as severe.
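That “two to three times” figure is consistent with the standard Gutenberg-Richter energy relation, under which each 0.2 step in magnitude roughly doubles the radiated energy. A back-of-envelope check, taking Tohoku at magnitude 9.1:

```python
def radiated_energy_joules(magnitude: float) -> float:
    """Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8, E in joules."""
    return 10 ** (1.5 * magnitude + 4.8)

# Tohoku (9.1) vs Kamchatka (8.8): a 0.3 magnitude gap.
ratio = radiated_energy_joules(9.1) / radiated_energy_joules(8.8)
# ratio = 10 ** (1.5 * 0.3), roughly 2.8: "two to three times" more energy
```

Note that the ratio depends only on the magnitude difference, so the constant in the relation cancels out.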

He further noted that the Tohoku event “created a notably large seafloor displacement.”

Tobin speculated that the Kamchatka quake likely had less seafloor displacement than what could occur in a worst-case 8.8 scenario, though more research is needed for substantiation.

Emergency services experts assess damage on Sakhalin Island in the Russian Far East after the earthquake. Russia’s Ministry of Emergency via Getty Images / AFP

How did researchers generate predictions? How accurate were they?

Within two hours, researchers produced tsunami predictions for various “warning points” across the Pacific and along US coasts, forecasting tide gauge readings and flood levels.

The tsunami took around eight hours to reach Hawaii and twelve hours to arrive at the California coast.
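Those travel times follow from the shallow-water wave speed formula, c = √(g·d), which is why deep-ocean tsunamis move at jetliner speeds. A rough check, using an assumed mid-Pacific depth of 4,500 m and an approximate Kamchatka-to-Hawaii distance of 5,500 km (both illustrative figures, not values from the report):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed c = sqrt(g * depth), converted to km/h."""
    return math.sqrt(G * depth_m) * 3.6

speed = tsunami_speed_kmh(4500)   # roughly 756 km/h at the assumed depth
hours_to_hawaii = 5500 / speed    # on the order of 7-8 hours over the assumed path
```

The result lands close to the reported eight-hour arrival time, and at these speeds the comparison to a jetliner is literal.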

Titov, who assisted in developing the model used by predictors in the National Tsunami Warning Centers in Hawaii and Alaska, explained that the model relies on seismic data and a network of over 70 DART buoys along the Pacific edge. The U.S. operates more than half of these buoys.

Titov indicated that the model projected tsunami waves hitting Hawaii’s North Shore region at approximately two meters.

“Hawaii was predicted to have waves of about 2 meters [6.5 feet], and actual measurements were around 1.5 meters [5 feet]. That aligns well with our expectations,” Titov stated.

A similar trend was observed in parts of California, according to Titov.

As assessments of flooding continue to come in, it will take time to determine how well the model performed.

“We know there were floods in Hawaii. We can’t ascertain the full extent yet, but initial reports seem to align closely with our predictions,” Titov shared.

Tsunami alerts were triggered at the Pacifica Municipal Pier in California on Wednesday following the earthquake. Tayfun Coskun/Anadolu via Getty Images

Why did residents in Hawaii evacuate for a 5-foot wave?

Yong Wei, a tsunami modeler and senior research scientist at the University of Washington and NOAA’s tsunami research center, indicated that 1.5 meters (5 feet) of tsunami waves could be highly perilous, particularly in Hawaii’s shallow waters.

Tsunami waves carry significantly more energy than typical wind-driven waves, with far longer wavelengths and far longer intervals between successive waves.

Wei noted that tsunami waves of this stature could surge several meters inland, producing hazardous currents and endangering boats and other objects.

Visitors stand on the balcony of the Alohilani Resort facing Waikiki Beach in Hawaii, responding to warnings of potential tsunami waves. Nicola Groom / Reuters

“People can get hurt. If you ignore the warning and stay, even a wave of two meters can be deadly,” Wei warned. “Being on the beach can expose you to powerful currents that may pull you into the ocean, which can lead to fatalities.”

Tobin expressed that he viewed the initial warning as conservative yet necessary.

“It’s essential not to belittle warnings. If nothing happens, people shouldn’t think, ‘Oh, we had alerts and nothing transpired.’ Warnings need to be cautious, allowing for some margin of error.”

Was this earthquake a surprise?

No. The Kamchatka Peninsula has a long history of seismic activity.

“This area has been expected to produce another earthquake, and several recent ones indicated heightened risk,” researchers noted.

In 1952, before plate tectonics was well understood, a 9.0 magnitude quake struck the Kamchatka Peninsula in a similar location, generating a tsunami that devastated the town of Severo-Kurilsk.

“The Russian populace was caught off guard. It was an immensely powerful quake, leading to a massive tsunami, and they were unprepared,” McInnes shared.

McInnes explained that the tsunami measured between 30 to 60 feet in height in the southern section of the peninsula.

“Thousands perished, and the town suffered considerable destruction,” stated Joanne Bourgeois, a professor emeritus of sedimentology at the University of Washington.

How will the tsunami warning system function if an earthquake threatens your area?

The Kamchatka tsunami arose from a massive earthquake along a subduction zone fault, where one tectonic plate is pushed below another. A comparable fault exists offshore the U.S. West Coast, known as the Cascadia Subduction Zone, stretching from Northern California to Northern Vancouver Island.

“It’s akin to a mirror image across the Pacific Ocean,” remarked Tobin. “An 8.8 is certainly a plausible scenario for Cascadia.”

In fact, Cascadia has the potential to produce significantly larger earthquakes, as modeling suggests it could generate tsunami waves reaching heights of 100 feet.

Typically, earthquakes in subduction zones yield tsunamis that reach the nearby coast within 30 minutes to an hour, and forecasters are developing better methods for estimating tsunami impacts along the U.S. West Coast before flooding occurs.

Titov emphasized that enhancing predictions will necessitate advancements in underwater sensors, improved computing infrastructure, and AI algorithms.

Tobin noted that the success of Wednesday’s tsunami warning should inspire more investment in underwater sensors and earthquake monitoring stations along subduction zones.

“This incident highlights the significant role of NOAA and USGS. Many questioned these agencies’ relevance, but without NOAA, no alert would have been issued. The next warning could be for a more imminent threat. They truly demonstrated their importance,” he asserted.

Source: www.nbcnews.com

Increasing Economic Impact of Wildfires, Severe Storms, and Earthquakes

A report published on Tuesday by German multinational insurance company Munich Re revealed that weather-related disasters in the first half of this year caused $93 billion in damages within the United States.

An analysis from Munich RE, the largest reinsurer in the world, indicated that over 70% of the global damages from this year’s weather disasters occurred in the United States, leading to a burden of $22 billion on uninsured Americans and their local governments.

The report underscores the increasing economic impact of wildfires, severe storms, and other extreme weather events both in the US and globally. It also highlights the escalating insurance crisis in nations frequently afflicted by such disasters.

“Approximately 90% of all insured losses worldwide, $72 billion out of $80 billion, occurred in the US,” stated Tobias Grimm, chief climate scientist at Munich Re. “That is remarkable.”

The catastrophic wildfires in Southern California in January ranked as the most expensive disaster in the country during the first half of 2025. The two major fires, responsible for at least 30 fatalities and the displacement of thousands, swept through the Pacific Palisades and Altadena neighborhoods.

Munich RE estimated the wildfire losses at $53 billion, including costs affecting uninsured residents. The reinsurer noted that these flames in the Los Angeles area resulted in “the highest wildfire loss ever recorded.”

The significant economic and social impacts of wildfires can be partly attributed to the increasing development in fire-prone areas.

“In many instances, losses are growing because more property development lies in harm’s way,” Grimm explained. “People continue to settle in high-risk zones.”

Urbanization in disaster-prone areas can similarly escalate the costs associated with other weather-related events, like hurricanes and floods, which are becoming more frequent and severe due to climate change.

Research indicates that wildfires are becoming increasingly frequent as temperatures rise and drought conditions worsen, and their intensity is increasing as well.

A report by the World Weather Attribution group issued in late January found that the hot, dry, and windy conditions conducive to fire spread in Southern California were approximately 35% more likely due to human-induced global warming.

Source: www.nbcnews.com

Have You Discovered an Unexpected Solution to the Environmental Impact of Air Travel?

To fly, or not to fly? This question is increasingly raised by those mindful of the environment. Boarding a plane can seem like the only realistic choice, especially in an emergency or when loved ones live far away.

We can certainly engage in some air travel as part of a sustainable future, but we must first dispel certain misconceptions and clearly outline feasible ways to lessen our global warming footprint.

The most common myth is that sustainable aviation fuels (SAFs) can resolve our issues. This label is misleading, as SAFs often don’t live up to their name.

Here’s why: there are three primary categories of SAF. The first type is derived from waste, particularly used cooking oil, but the supply is enough to cover only about 2-3% of global flights. The second type consists of synthetic SAFs produced from raw materials like captured carbon dioxide, using renewable energy. The efficiency of these processes is quite low (at least 2 kilowatt-hours of energy are needed to generate 1 kWh of fuel), making them a misguided use of limited renewable resources. The third type is made from crops, which puts immense pressure on farmland and the food system, posing major challenges. In reality, sustainable aviation fuel is not the revolutionary solution many hope for.
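The energy penalty of synthetic SAF can be sanity-checked from the figure quoted above; since 2 kWh of electricity per 1 kWh of fuel is a stated lower bound on input, this gives an upper bound on efficiency:

```python
# Upper bound on synthetic-SAF conversion efficiency, using the
# article's figure of at least 2 kWh of electricity per 1 kWh of fuel.
energy_in_kwh = 2.0   # renewable electricity required (lower bound)
fuel_out_kwh = 1.0    # chemical energy delivered as fuel

max_efficiency = fuel_out_kwh / energy_in_kwh
print(f"Maximum conversion efficiency: {max_efficiency:.0%}")
```

In other words, at least half of the renewable electricity is lost before the fuel ever reaches an aircraft.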

Another hopeful concept I often encounter is the idea that electrification or hydrogen fuel could decarbonize aviation. However, electrification is practical only for short-haul flights; battery weight makes it unfeasible for long distances. Hydrogen poses its own challenges due to its bulky storage requirements, even when compressed to 700 times atmospheric pressure.

On a brighter note, there are significant opportunities that haven’t garnered enough attention.

One overlooked opportunity concerns contrails, the high, wispy trails produced by aircraft exhaust. These account for over 60% of the climate impact of flights, and carry even more weight when considering their short-term influence over the next two decades.

These contrails trap heat radiating from Earth’s surface, functioning somewhat like a blanket. Their overall impact is complex, however: they can also reflect incoming sunlight, creating a cooling effect that occurs mainly during the day, particularly over dark surfaces like oceans. Unfortunately, the warming effect tends to dominate at night, when there is no sunlight to reflect.

By making small adjustments to flight paths, we can manage contrail formation. Changing an aircraft’s altitude or trajectory in specific weather conditions, such as when flying over sunlit waters, can be beneficial. Lengthening flight routes by just 1.7% could potentially reduce contrail warming by almost 60%. Real-time modeling is essential for integrating this into flight planning, similar to current practices for avoiding storms and managing air traffic.
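Combining the two figures above, the contrail share of climate impact and the rerouting benefit, gives a rough sense of the payoff (both numbers are taken from the text, not independent estimates):

```python
# Rough payoff estimate for contrail-aware rerouting, using only
# the figures quoted in the article.
contrail_share = 0.60      # contrails: ~60% of a flight's climate impact
contrail_cut = 0.60        # ~60% reduction in contrail warming
extra_distance = 0.017     # ~1.7% longer flight routes

# Reduction in a flight's *total* climate impact from rerouting alone
net_reduction = contrail_share * contrail_cut
print(f"~{net_reduction:.0%} lower total climate impact "
      f"for ~{extra_distance:.1%} extra distance flown")
```

On these numbers, a roughly 36% cut in total climate impact would cost under 2% in extra distance, which is why contrail management is such a cost-effective lever.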

This presents a relatively cost-effective solution that requires industry leadership. Once contrail management becomes established, the role of SAF might shift significantly, allowing it to contribute to cleaner burning and mitigate the worst impacts of contrails on more challenging flights.

Does this imply we can ignore the climate ramifications of flying? Unfortunately, no. Yet, understanding these factors provides a legitimate reason for optimism.

Mike Berners-Lee is the author of A Climate of Truth: Why We Need It and How to Get It


Source: www.newscientist.com

Study: Common Sweetener Erythritol May Impact Brain Cells and Elevate Stroke Risk

A recent study from the University of Colorado Boulder indicates that erythritol, a widely used non-nutritive sweetener, may be linked to a higher risk of cardiovascular and cerebrovascular events.



Berry et al. demonstrated that erythritol, at concentrations commonly found in standard-size sugar-free beverages, increases oxidative stress and ET-1 expression in cerebral microvascular endothelial cells while impairing eNOS activation, NO production, and t-PA release in vitro. Image credit: Tafilah Yusof.

Erythritol is a popular non-nutritive alternative to sugar due to its minimal effects on blood glucose and insulin levels.

This four-carbon sugar alcohol is 60-80% as sweet as sucrose while contributing almost no calories, and it commonly replaces sugar in baked goods, confections, and beverages.

Authorized by the FDA in 2001, erythritol is recommended for individuals with obesity, metabolic syndrome, and diabetes, as it aids in regulating calorie consumption, sugar intake, and minimizing hyperglycemia.

Found naturally in small amounts in certain fruits, vegetables, and fermented foods, erythritol is quickly absorbed in the small intestine through passive diffusion.

In humans, erythritol is produced endogenously from glucose and fructose by erythrocytes, liver, and kidneys via the pentose phosphate pathway, making its levels dependent on both endogenous production and external intake.

“Our findings contribute to the growing evidence that non-nutritive sweeteners, often considered safe, could pose health risks,” stated Professor Christopher Desouza from the University of Colorado.

A recent study involving 4,000 participants from the US and Europe revealed that individuals with elevated erythritol levels are at a significantly increased risk of experiencing a heart attack or stroke within three years.

Professor Desouza and his team sought to determine what factors were contributing to this heightened risk.

They exposed human cells lining blood vessels in the brain to erythritol for three hours, using concentrations similar to those found in standard sugar-free beverages.

The treated cells exhibited several alterations.

Notably, they produced significantly less nitric oxide, a molecule critical for dilating blood vessels, while increasing the expression of endothelin-1, which constricts blood vessels.

Furthermore, when challenged with thrombin, a clot-promoting compound, the treated cells were significantly slower to produce t-PA, a naturally occurring compound that dissolves clots.

Cells treated with erythritol also generated more reactive oxygen species, or free radicals, which can lead to cellular damage and inflammation.

“We’ve been diligently working to share our findings with the broader community,” noted Auburn Berry, a graduate student at the University of Colorado in Boulder.

“Our research indicates that erythritol may indeed heighten the risk of stroke.”

“Our study used only the amount found in a single serving,” emphasized Professor Desouza.

“For individuals consuming multiple servings daily, the potential impact could be even more pronounced.”

The researchers caution that their findings are based on lab research conducted on cells, necessitating larger-scale studies involving human subjects.

Nonetheless, they advise consumers to check product labels for erythritol or “sugar alcohol.”

“Considering the epidemiological evidence informing our research, along with our cellular discoveries, monitoring the intake of such non-nutritive sweeteners seems wise,” Professor Desouza remarked.

The study was published today in the Journal of Applied Physiology.

____

Auburn R. Berry et al. 2025. The non-nutritive sweetener erythritol negatively affects brain microvascular endothelial cell function. Journal of Applied Physiology 138(6):1571-1577; doi:10.1152/japplphysiol.00276.2025

Source: www.sci.news

The Impact of Government AI Usage on Democracy

AI can streamline government paperwork, yet significant risks exist

Brett Hondow / Alamy

A number of nations are exploring how artificial intelligence might assist with various tasks, ranging from tax processing to decisions about welfare benefits. Nonetheless, research indicates that citizens are not as optimistic as their governments, potentially jeopardizing democratic integrity.

“Focusing exclusively on immediate efficiency and appealing technologies could provoke public backlash and lead to a long-term erosion of trust and legitimacy in democratic systems,” states Alexander Wuttke at Ludwig Maximilian University in Munich, Germany.

Wuttke and his team surveyed around 1,200 individuals in the UK to gauge their perceptions of whether human or AI management was preferable for government functions. These scenarios included handling tax returns, making welfare application decisions, and assessing whether a defendant should be granted bail.

Participants were divided; some learned only about AI’s potential to enhance governmental efficiency, while others were informed about both the advantages and the associated risks. The risks highlighted included the challenges in discerning how AI makes decisions, an increasing governmental reliance on AI that may be detrimental in the long run, and the absence of a straightforward method for citizens to challenge or modify AI determinations.

When participants became aware of these AI-related risks, there was a marked decline in their trust towards the government and an increased feeling of losing control. For instance, the percentage of those who felt government democratic control was diminishing rose from 45% to over 81% when scenarios depicted increasing governmental dependence on AI for specific functions.

After learning about the risks, the percentage of individuals expressing skepticism regarding government use of AI surged significantly. It jumped from under 20% in the baseline scenario to over 65% when participants were informed of both the benefits and risks of AI in the public sector.

Regardless of these findings, democratic governments assert that AI can be utilized responsibly to uphold public trust, according to Hannah Quay-de la Vallee from the Center for Democracy and Technology in Washington, DC. However, she notes that there have been few successful applications of AI in governance to date, and several failures have already been observed, which can have serious consequences.

For instance, attempts by various US states to automate the processing of public benefits claims have resulted in tens of thousands of individuals being incorrectly accused of fraud. Some affected individuals faced bankruptcy or lost their homes. “Mistakes made by the government can have significant, long-lasting repercussions,” warns Quay-de la Vallee.


Source: www.newscientist.com

Tesla Vehicle Deliveries Decline Significantly as Musk Backlash Hits Demand

Tesla has experienced a notable decline in quarterly deliveries, marking its second consecutive year of falling sales as demand wanes, influenced by CEO Elon Musk’s political views and the aging vehicle lineup.

In the second quarter, Tesla reported delivering 384,122 vehicles, a decrease of 13.5% from the 443,956 units delivered in the same period last year. Analysts had anticipated deliveries of approximately 394,378 vehicles, based on an average estimate from 23 analysts compiled by financial research firm Visible Alpha. However, forecasts from 10 analysts over the last month had been revised down to around 360,080 units. Analysts view delivery numbers as crucial indicators for evaluating vehicle sales and production success.
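The headline decline follows directly from the two delivery figures above:

```python
# Year-over-year delivery decline from the reported figures
q2_2025_deliveries = 384_122
q2_2024_deliveries = 443_956

decline = (q2_2024_deliveries - q2_2025_deliveries) / q2_2024_deliveries
print(f"Year-over-year decline: {decline:.1%}")  # 13.5%
```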


Seth Goldstein, senior equity analyst at Morningstar, commented, “The market is reacting less negatively than previously anticipated as several analysts have lowered their forecasts over the past week.”

This year, Tesla’s stock has fallen by 25%, driven by concerns over brand erosion in Europe, where sales are experiencing the most significant downturn, attributed to Musk’s alignment with right-wing politics and his role in the Trump administration’s cost-cutting measures. Following the public fallout between Trump and Musk in early June, Tesla saw a dramatic loss of about $150 billion in market value. Although there was a partial recovery in stock value the next month, tensions between Trump and Musk intensified amidst discussions of Trump’s expansive tax reforms.

Despite Musk asserting that sales increased in April, Tesla’s delivery dip comes in the context of a steadily expanding global EV market.

Earlier this year, the company revamped its top-selling Model Y crossover to stimulate demand, but the redesign resulted in production delays, leading some customers to postpone purchases while awaiting the updated model.

A significant portion of Tesla’s revenue and profit stem from its core electric vehicle business, while much of its trillion-dollar valuation hinges on Musk’s ambitious projections regarding the conversion of its vehicles to Robotaxis.

Last month, Tesla launched its Robotaxi service in a limited area of Austin, Texas, under several restrictions, including invitation-only access and safety monitors in the front passenger seat. The pilot remains small, with around a dozen Robotaxis operational. The National Highway Traffic Safety Administration has begun investigating the rollout of Tesla’s autonomous driving services.


The automaker anticipates beginning production of more affordable vehicles and enhancing the Model Y by the end of June.

While the introduction of less expensive models may provide a sales boost, Wall Street projects a second consecutive annual decline in sales. To achieve Musk’s objective of returning to growth for the year, Tesla will need to deliver 1 million units in the latter half of the year, a monumental challenge despite the historically strong sales numbers during this period.

Source: www.theguardian.com

The Impact of Sleep on the Aging Process

Aging often impacts sleep, leading to challenges as we grow older. Factors such as changes in circadian rhythms, increased nighttime bathroom visits, anxiety, and chronic health conditions can all compromise sleep quality.

Yet, let’s examine the flip side: the influence of sleep on the aging process.

Despite bold assertions from various hyperbaric oxygen therapy centers, nothing can halt our body’s natural aging. However, a closer look at the physiological changes that occur during sleep reveals that cultivating healthy sleep habits can help mitigate the effects of time on our bodies.

What occurs when we sleep?


Our bodies engage in powerful recovery processes during sleep to restore, reset, and rejuvenate organs and cells. Each night serves as a mini-reboot: muscles undergo repair, hormone levels stabilize, and the brain executes a version of waste removal.

Key changes that happen in the body during sleep include:

• Integration of emotional and procedural (long-term, implicit) memory during REM sleep.
• The brain experiences a neurochemical reset, with significant reductions in dopamine and serotonin levels during slumber.
• Growth hormone is released, promoting muscle repair, the restoration of glycogen levels, and the production of anti-inflammatory cytokines that assist recovery.
• Hormones like melatonin are produced, while others are regulated; for instance, cortisol (the “stress” hormone) decreases, and leptin (which controls hunger) is maintained.


Why is sleep increasingly crucial as we age?


Waste removal
The glymphatic system operates while we sleep to clear neurotoxic waste, such as beta-amyloid. This process becomes increasingly critical with age. The National Library of Medicine states, “The aging process involves a range of neurobiological changes in the brain, including the accumulation of toxic proteins like beta-amyloid plaques and tau tangles.”*

Immune support
As the immune system naturally declines, deep sleep becomes vital for enhancing immune cell activity, thereby supporting our immunity.

Cardiovascular health
Those with a Fitbit will attest that heart rates drop during sleep, allowing blood pressure to lower, which in turn gives the cardiovascular system a chance to rest.

Insulin sensitivity
Sleep quality, duration, and timing all influence insulin sensitivity; inadequate sleep can increase insulin resistance and elevate the risk of developing type 2 diabetes.**

Maximizing quality sleep


Hästens, a Swedish bed manufacturer, recognizes the significance of a good night’s sleep. The luxury brand has been crafting handmade beds since 1852, each taking up to 600 hours to create using only natural materials.

“Miracles happen while we sleep,” Hästens states. “It’s the sleep that makes a difference. This is a natural process that cannot be replicated or bought over the counter. You can’t cheat your way to perfect sleep, but understanding its importance and implementing good practices can improve your chances of a restful night.”

For more on the advantages of sleep and to explore the full collection of beds and accessories, visit Hästens’ website.

Book a sleep spa bed test online at www.hastens.com or visit your nearest certified retailer.

Source: www.sciencefocus.com

The Enigmatic Lizard: Surviving the Chicxulub Asteroid Impact

Yellow spotted tropical night lizard (Lepidophyma flavimaculatum)

Dante Fenolio/Science Photo Library

A unique and elusive group of lizards survives today, thought to be the only terrestrial vertebrates to have withstood the catastrophic Chicxulub asteroid impact in the region where it struck, the event that drove the non-avian dinosaurs extinct.

Xantusiid night lizards were already recognized as an ancient lineage that has survived for tens of millions of years, but Chase Brownstein at Yale University and his team propose that the lineage originated even earlier than previously estimated.

The end of the Cretaceous period was marked by a colossal asteroid strike near the Yucatán Peninsula in Mexico, creating a crater more than 150 kilometers wide and leading to the extinction of most animal and plant species globally.

Today, the night lizard—despite its name, not actually nocturnal—continues to inhabit Cuba, Central America, and the southwest region of the United States.

Brownstein and his researchers utilized previously published DNA sequencing data from Xantusiids to construct evolutionary trees for these groups. They integrated findings from skeletal anatomy of current species and fossil records, allowing them to estimate the lineage’s age and the quantity of offspring produced by the ancestral night lizard.

The team identified a shared ancestor living deep in the Cretaceous period, more than 93 million years ago, which likely produced only one or two offspring at a time.

“It’s highly probable that these ancient populations were situated close to the impact site, much like their modern counterparts,” remarks Brownstein. “It’s as though the distribution of Xantusiid lizards encircles the impact zones.”

Based on fossil records, Brownstein argues that it is improbable that ancient night lizards died out locally and simply recolonized the affected areas later.

“Our reconstructions suggest that the common ancestors of living species most likely originated in North America, as the fossil evidence of Xantusiids is relatively continuous on both sides of the boundary layer,” he adds.

Numerous night lizard species inhabit rock crevices and possess a slow metabolism akin to other survivors of mass extinction, like turtles and crocodiles. “This likely enabled them to endure the aftermath of the impact,” states Brownstein.

Nathan Lo from the University of Sydney expresses amazement at their survival. “These lizards resided near the asteroid’s impact site; despite the asteroid’s devastating effects within hundreds of kilometers, they managed to survive.”

Remarkably, they achieved this despite lacking many common characteristics typically associated with mass extinction survivors. “Species that endure these extinction events tend to be small, reproduce rapidly, and have extensive geographical ranges,” explains Lo. “Conversely, these lizards generally have slower reproduction rates and appear to cover a minimal range.”


Source: www.newscientist.com

Guide #195: The Impact of Reddit on Our Culture

It only concluded a few years ago, yet Westworld already seems to be fading into a TV footnote. You may scarcely recall the mid-2010s reimagining of the 70s Yul Brynner film: HBO’s robotic cowboy saga endured four underwhelming seasons before its cancellation.

Nonetheless, when it debuted, Westworld generated a lot of buzz as HBO’s sci-fi answer to Game of Thrones. The series boasted high production values and a starry cast, including Evan Rachel Wood, Ed Harris, Thandiwe Newton, and Jeffrey Wright, led by the showrunning duo of Lisa Joy and Jonathan Nolan. The project held significant promise in a period flooded with repetitive content, and there was genuine weariness of shows that “make it up as they go along” (as a devoted fan, I must assert that “they haven’t made it up as they go along,” but that’s a discussion for another newsletter).

However, even the most elaborately planned television shows can unravel. The first hint that Westworld might not ascend to greatness came when forum/social media platform Reddit users began accurately predicting plotlines. Redditors anticipated the twists and turns of the first season, often well in advance and even familiar with the show’s rhythm and patterns. Things escalated to such an extent that in the second season, Joy and Nolan had to rewrite the script to divert the course already hinted at by Reddit users. This not only indicated Westworld’s fragility but also highlighted the formidable influence of Reddit and its community, capable of shaking seasoned showrunners.

Of course, Reddit has since eclipsed Westworld. This month marks the 20th anniversary of the site often characterized as the “front page of the internet.” The milestone brings to mind February’s 20th anniversary of YouTube, which also debuted in 2005 and whose impact on popular culture has been even more seismic, surpassing traditional television.

Reddit’s emergence coincided with an era marked by intense fandom and parasocial relationships. Dedicated fan forums existed prior to Reddit, from band and solo artist message boards to TV show discussions. However, Reddit streamlined and amplified these communities, fostering an environment where niche musical microgenres and discussions could flourish openly under one large digital umbrella.




Simon Quarterman and Thandiwe Newton from Westworld Season 2. Photo: HBO

This newfound freedom and openness, however, comes at a cost. Reddit has faced heavy scrutiny for misogyny, racism, conspiracy theories, and threats of violence. In contrast to many other social media platforms today, Reddit has made substantial strides in community moderation over the past decade. Pop culture discussions can sometimes spiral into more troubling territory, as seen in the long and complicated history surrounding the Rick and Morty subreddit.

Yet, discussions surrounding Reddit often focus too heavily on its negative aspects, neglecting what a surprisingly positive space it can be. Thanks to dedicated moderation efforts, it’s one of the last bastions of the old internet: quirky, supportive, and a bit eccentric. As The Atlantic aptly puts it, Reddit is “both niche and vast.” This duality allows it to be explored superficially or in depth, via communities such as Build a Gurdy. In many ways, it represents a mainstream obsession where hyper-specific communities are no longer hidden away but are easily accessible under one broad Reddit umbrella.

I wouldn’t classify myself as a prominent member of this community. At best, I am a Reddit lurker, not bold enough to engage actively and post, but as someone chronicling pop culture, I find it endlessly beneficial. Whether I’m delving into the puzzling narrative threads of a show through its insightful subreddit or seeking out an obscure 70s paranoid thriller, I turn to r/MovieSuggestions. And I can’t even count the number of bands I’ve discovered on major boards like r/indieheads (boasting 3.6 million members and growing). The last time I visited r/indieheads, a user had commemorated Brian Wilson’s death, sharing everything I wanted to hear in a lively, informed exchange.


That essence prompts me to wonder whether corporate pressures could one day tarnish the site, despite the buoyancy of its stock. Perhaps this fear will dissipate, or perhaps not. Maybe Reddit is simply too significant, too unique, and too defiantly independent to be tamed by large corporations. I hope we can celebrate it again in another twenty years, as it picks apart yet another pedestrian TV series.

If you would like to read the full version of this newsletter, subscribe to receive your guide in your inbox every Friday.

Source: www.theguardian.com

Nvidia Surpasses Wall Street Expectations Despite Trump’s Impact on China Sales | Technology

Nvidia surpassed Wall Street’s projections in its quarterly revenue report on Wednesday, continuing a streak of financial successes for the technology leader. For the quarter ending in April, revenue reached $44.1 billion, a 69% increase from the previous year.

The company outperformed an investor forecast of $43.3 billion. Adjusted earnings per share were reported at $0.81, falling short of the anticipated 88 cents. Additionally, data center revenue soared to $39.1 billion, marking a 73% growth year-over-year.
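Working backwards from the reported figures and growth rates gives the implied year-ago baselines, a quick consistency check on the numbers above:

```python
# Implied year-ago revenue, derived from this quarter's figures
# and the stated year-over-year growth rates.
total_revenue_bn = 44.1   # quarterly revenue, $bn
total_growth = 0.69       # 69% year-over-year growth
dc_revenue_bn = 39.1      # data center revenue, $bn
dc_growth = 0.73          # 73% year-over-year growth

implied_prior_total = total_revenue_bn / (1 + total_growth)
implied_prior_dc = dc_revenue_bn / (1 + dc_growth)
print(f"Implied year-ago total revenue: ${implied_prior_total:.1f}bn")
print(f"Implied year-ago data center revenue: ${implied_prior_dc:.1f}bn")
```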

Nvidia remains optimistic about the AI sector, both in terms of its advanced hardware and the regulatory challenges on the horizon, which investors are keenly monitoring.

“Nvidia has once again surpassed expectations, but maintaining this lead is growing more challenging,” observed Jacob Bourne, an analyst at Emarketer. “China’s export restrictions highlight immediate geopolitical pressures, but Nvidia also faces competition as rivals like AMD strengthen their positions based on certain cost-effectiveness metrics in AI workloads.”

CEO Jensen Huang stated, “The global demand for Nvidia’s AI infrastructure is remarkably strong. Countries worldwide see AI as a vital utility, comparable to electricity and the Internet.”

The chipmaker anticipates revenues of $45 billion for the second quarter of its 2026 fiscal year.

Nvidia’s quarterly reports over the past year reflect explosive growth. However, the company is under increasing pressure from U.S. regulations.

Donald Trump’s announcement in April regarding tightened computer chip export regulations effectively barred Nvidia from selling its primary revenue source, the H20 AI chip, to China.

“H20 products were primarily designed for the Chinese market,” the company’s first quarter revenue report stated. Consequently, Nvidia expects to miss out on $8 billion in revenue for its second quarter.

Despite this setback, Huang expressed optimism about Trump’s intentions to allow companies to export chips with limited capabilities to China.

“The president has a plan and a vision. I trust him,” he noted.

However, Huang cautioned that losing access to China’s potential $50 billion AI market could jeopardize U.S. leadership in the global AI infrastructure race. “China is one of the largest AI markets, serving as a launchpad for global success,” he stated during the revenue call.

“China’s AI will progress with or without U.S. chips,” he remarked. “The issue isn’t whether China has AI—it’s already happening; the real question is if one of the world’s largest AI markets will rely on American chips.”

The company had disclosed in a securities filing that the export restrictions could cost it up to $5.5 billion. It ultimately recorded $4.5 billion in first-quarter charges tied to H20 excess inventory and purchase obligations, noting that some materials could be reused, which affected the forecast.

In an interview with Ben Thompson, Huang described the loss as “deeply painful.” Reports suggest a revenue loss of $15 billion. In the first quarter alone, the company could not ship an additional $2.5 billion in H20 revenue.


“We have never written off so much inventory in history,” Huang remarked. “We’re not just losing $5.5 billion; we’ve also missed out on $15 billion in sales… and potentially… $3 billion in taxes.”

The export tensions extend beyond hardware: a committee of the U.S. Congress has sought information from Nvidia regarding DeepSeek, the breakout Chinese AI firm whose models rival products from U.S. AI companies without the same computational power.

The committee’s report alleges that Deepseek “secretly leaked American user data to the Chinese Communist Party, manipulated information to align with CCP propaganda, and trained on materials unlawfully acquired from the company.”

Despite the tightening export restrictions, analysts believe Nvidia has shown remarkable resilience this quarter.

“Amid industry integration and rising competition, geopolitical tensions have created a tougher business landscape. Nevertheless, the company has effectively focused on its operational core,” Investing.com commented.

“We’ve effectively managed supply and demand dynamics within data centers, and the $4.5 billion impact from H20 during the quarter underscores Nvidia’s ability to adapt to market changes,” they added.

Analysts also speculate that U.S.-China negotiations “might yield positive outcomes for Nvidia,” according to Wedbush analyst Dan Ives.

“Nvidia is the sole chipmaker propelling the AI revolution. This narrative is underscored by their results and Jensen’s optimistic remarks,” Ives stated. “This indicates a significant lead in the broader tech landscape, suggesting the AI revolution is poised for further growth, despite the tariff challenges posed by Trump.”

Though Nvidia’s Chinese operations remain uncertain, analysts note a surge in demand for Nvidia chips in Saudi Arabia and the UAE. The company has benefited from AI deals arising from Trump’s visit to the region, which secured $600 billion in Saudi investment commitments to U.S. businesses.

Nvidia announced plans to sell hundreds of thousands of AI chips in Saudi Arabia, including 18,000 of its latest chips to a startup backed by the nation’s sovereign wealth fund.

Source: www.theguardian.com

Republican Proposal to Eliminate EV Tax Credits May Impact GM and Ford Negatively

In recent years, the popularity of electric vehicles has surged, fueled by a $7,500 tax credit from the federal government aimed at making purchases more affordable.

However, the budget bill unveiled by House Republicans on Monday suggests eliminating this tax credit. This proposal also introduces new limitations on other tax incentives that motivate automakers to invest significant sums into establishing new battery facilities in the United States.

Starting next year, the legislation is set to abolish the $7,500 tax credit for new electric vehicle buyers, as well as a $4,000 credit applicable to used car and truck acquisitions.

If signed into law, these changes could lead to a spike in electric vehicle sales in the near term, as consumers rush to take advantage of tax credits before they vanish. Nonetheless, analysts predict that sales may drop or slow drastically once the credits are no longer available.

“This will undoubtedly slow down the adoption rate significantly,” remarked Stephanie Valdez Streaty, director of industry insights at Cox Automotive.

Cox anticipates that electric vehicles will comprise 10% of all new vehicle sales this year. If Congress leaves the tax credit in place, that figure is expected to rise to nearly a third of sales by 2030, according to their estimates.

However, if Congress eliminates the credits, Valdez Streaty projects that electric vehicles could make up only 20-24% of new car sales by 2030.

Eliminating these credits would further financially burden automakers who are already dealing with increased costs stemming from a 25% tariff on imported cars and auto parts established during the Trump administration.

The Republican tax proposals could adversely affect numerous automakers striving to launch new models, particularly General Motors and Ford, both of which have made substantial investments in their manufacturing facilities and supply chains with the goal of producing millions of electric vehicles annually.

GM has opened two battery plants, in Ohio and Tennessee, developed through a joint venture with LG Energy Solution. Ford is building three battery plants: one in Michigan, and two in Kentucky and Tennessee in collaboration with the South Korean firm SK On.

Both Detroit-based automakers are also investing in mining operations to secure domestic lithium supplies, which is crucial for battery production.

Tesla, the leading electric vehicle seller in the U.S., is also facing challenges. Its sales have decreased in recent months due to consumer backlash against CEO Elon Musk’s association with the Trump administration, coupled with the absence of a new affordable model.

However, Tesla enjoys several advantages. While most manufacturers still incur losses on electric vehicles, Tesla has been profitable for over a year, allowing the company to lower prices to stimulate demand if credits are eliminated. Additionally, Tesla relies less on imported components compared to other U.S. manufacturers.

Many large automakers are racing to catch up with Tesla in the electric vehicle landscape, establishing numerous new factories, many of them in states represented by Republican lawmakers.

Toyota has constructed a battery facility in North Carolina, while Hyundai is set to begin electric vehicle production at its Georgia site, which will also house battery manufacturing. Stellantis, along with its partners, is currently developing two battery plants in Indiana, with the local economies relying on the jobs these plants will create.

Should tax regulations undergo significant changes, automakers may reconsider, scale back, or postpone their plans.

“If the government wishes for the U.S. to effectively compete with China and the rest of the world in the expansive EV sector, as well as encourage GM and Ford to make considerable long-term investments in EV development and domestic production, we must enhance the tax credits instead of causing whiplash,” Valdez Streaty stated.

China dominates global electric vehicle production and is a primary supplier of essential materials for batteries and electric motors, such as processed lithium and rare earth minerals. The elimination of the tax credit would significantly hinder the U.S. automotive industry’s ability to keep pace.

“This could adversely impact our global standing and the competitive capabilities of the U.S. automotive sector,” Valdez Streaty remarked. “It’s likely to slow us down when we are already trailing China.”

Neither Ford nor Stellantis had comments to share, nor did the industry policy group, the Alliance for Automotive Innovation.

The federal government initially introduced $7,500 in credits during President Barack Obama’s administration, maintaining this incentive throughout President Trump’s first term. These credits were subsequently updated and expanded under the Inflation Reduction Act, enacted by President Joseph R. Biden Jr.

Given the higher costs of electric vehicles compared to traditional combustion engines, such credits have been vital in encouraging consumer purchases.

The credits are applicable to sports utility vehicles and pickups priced under $80,000, as well as sedans priced below $55,000. The vehicle must be assembled in North America, with the battery meeting specifications based on the country of origin for its materials. Additionally, to qualify, individual buyers must earn less than $150,000 per year, while joint filers must earn under $300,000.
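The eligibility rules above can be condensed into a short check. This is an illustrative sketch only: the function name and inputs are hypothetical, and the real rules include further conditions (the battery-sourcing requirements are omitted here).

```python
# Hypothetical eligibility check based on the criteria described above:
# price caps by body style, North American assembly, and income limits.
# Battery material sourcing rules are omitted for simplicity.

def ev_credit_eligible(body_style: str, price: float,
                       assembled_in_north_america: bool,
                       income: float, joint_filers: bool) -> bool:
    price_cap = 80_000 if body_style in ("suv", "pickup") else 55_000
    income_cap = 300_000 if joint_filers else 150_000
    return (price < price_cap
            and assembled_in_north_america
            and income < income_cap)

# A $48,000 sedan bought by a single filer earning $120,000 qualifies;
# an $85,000 SUV does not, since it exceeds the $80,000 cap.
print(ev_credit_eligible("sedan", 48_000, True, 120_000, False))  # True
print(ev_credit_eligible("suv", 85_000, True, 120_000, False))    # False
```

Note that, as the following paragraph explains, leased vehicles sidestep most of these criteria, which is why leasing has become the main path to the credit for many buyers.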

Many of these criteria do not apply to leased vehicles. However, tax credits for cars and trucks are typically transferred to leasing companies, which are divisions of automakers. Many leasing firms have passed on their savings to customers, contributing to the notable increase in electric vehicle leases.

According to Valdez Streaty, approximately 595,000 electric vehicles were leased in 2024, a significant rise from roughly 96,000 in 2022, prior to the availability of leasing incentives.

Source: www.nytimes.com

Uncovering the Impact of the LA Wildfire: Key Estimates Lacking After Trump’s Management Changes

As President Donald Trump took office, the wildfires in Los Angeles were still burning. He quickly moved to roll back Biden-era directives for federal agencies addressing the climate crisis.

January’s fire conditions, exacerbated by climate change, played a significant role in the Palisades and Eaton fires, which burned nearly 40,000 acres. By March, Adam Smith, the lead scientist of the billion-dollar weather and climate disasters program at the National Oceanic and Atmospheric Administration (NOAA), was still assessing the severe impact of the LA wildfires when he received informal orders to cease all work-related communications.

Smith’s team maintained and updated monthly an extensive online database tracking more than 400 natural disasters since 1980, each causing over $1 billion in damages. Following the LA wildfires, Smith reported receiving restrictions that prevented him from updating this database and sharing initial findings with the public. The wildfires incurred damages amounting to at least $50 billion.

In early May, Smith resigned due to concerns about the agency’s plans for the future. The billion-dollar weather and climate disaster online database Smith had developed over 15 years at NOAA was subsequently shut down. Days later, NOAA confirmed it would cease updates for this important resource, which provides essential data for scientists, citizens, and insurance firms evaluating climate risk.

A NOAA spokesperson stated that the database would no longer be updated “due to changing priorities and staffing adjustments.” The White House did not provide any comments regarding the matter.

According to Smith, the database’s economic-loss figures are particularly vital, as billion-dollar disasters like hurricanes and widespread wildfires become increasingly common. In 2023, the US set a record of 28 separate billion-dollar disasters. Over the past five years, the US has averaged about 24 such disasters annually, a significant rise from an average of roughly three per year during the 1980s.

“We need to be more prepared than ever,” Smith told NBC News. “Some have access to the data and insights for better preparation. Unfortunately, discontinuing resources like these creates a gap in knowledge.”

Researchers have identified rising global temperatures as a key driver in these changes over recent decades. Long-term droughts and increased wildfire risks are affecting regions across the western United States, where warming atmospheres retain more moisture, resulting in more intense storms and hurricanes.

This increase in extreme weather events presents significant challenges for insurance policyholders in areas susceptible to natural disasters. Rates in hurricane-prone states like Louisiana and Florida have surged, with some homeowners facing nearly $10,000 in annual insurance premiums. In California, major insurance firms, including State Farm, have rescinded policies due to escalating fire risks.

A study from the National Bureau of Economic Research revealed that the heightened risk of disasters would drive up annual insurance costs for households affected by climate issues by an estimated $700 over the next three decades. On a global scale, reports from German insurance giant Munich RE indicated that natural disasters resulted in record insurance losses of $140 billion worldwide in 2024.

“You cannot conceal the costs of climate change from those who are already incurring those costs through their insurance premiums,” stated Carly Fabian, a civic policy advocate from a consumer rights nonprofit. “The insurance and reinsurance sectors are built to withstand a limited number of major multi-billion dollar disasters, but are not equipped for consecutive disasters occurring with such frequency.”

Data compiled in the multibillion-dollar disaster database illustrates the financial toll of hurricanes, severe storms, and wildfires across the nation, serving as a critical resource for private insurers modeling climate risks and establishing rates for homeowners in vulnerable areas. Although insurance companies utilize various datasets for their climate risk assessments, the scale of NOAA’s database remains unmatched.

Jeremy Porter, a climate risk expert at the First Street Foundation, emphasized that the database is one of the most effective tools for illustrating the economic impact of climate-related disasters. First Street utilizes the $1 billion disaster database for its national risk assessment reports.

The NOAA database also serves as an essential resource for homeowners facing rising rates, non-renewals, and cancellations in home insurance.

“We are navigating an industry where insurers have extensive access to private data while the average consumer lacks insight into that data,” remarked the policy director for Americans for Financial Reform, a nonprofit advocating for stricter regulations. “The removal of public data sources exacerbates this imbalance, hindering individuals’ ability to understand their risks and the challenges they face from financial service providers.”

Madison Condon, an environmental law professor at Boston University, highlighted that the cuts to NOAA’s $1 billion disaster database are part of a broader trend involving rollbacks of national climate assessments and data resources, including the annual report detailing the impacts of climate change in the US released in late April. The Trump administration notably rejected numerous scientific contributions to these reports.

Additionally, the Trump administration has eliminated data products related to melting Antarctic glaciers and sea ice cover, marking yet another setback for US Antarctic research. Leaked documents obtained by ProPublica indicated that Trump intended to reduce NOAA funding by 27%, particularly for innovative climate-related initiatives, and proposed nearly 75% cuts to the Bureau of Ocean and Atmospheric Research, responsible for maintaining global climate models essential for insurers’ climate risk assessments.

Source: www.nbcnews.com

Soviet Spacecraft Makes Impact on Earth After Fifty-Year Voyage

After 53 years traversing the cosmos, a quirky Soviet spacecraft known as Kosmos 482 has made its way back to Earth, penetrating the atmosphere at 9:24 am on Saturday, according to Roscosmos, the Russian state corporation overseeing the space program.

Kosmos 482, designed to land on Venus, may have survived its descent. As reported by Roscosmos, it came down in the Indian Ocean west of Jakarta, Indonesia.

Launched on March 31, 1972, Kosmos 482 became stranded in Earth orbit after one of its rocket stages shut down prematurely. Its return evokes memories of the Cold War space race, when terrestrial rivalries were projected into the solar system.

“It takes me back to a time when the Soviet Union was bold in space exploration. We might all be more adventurous in space,” remarks Jonathan McDowell, an astrophysicist at the Harvard & Smithsonian Center for Astrophysics, who monitors orbiting objects. “In that context, it is a bittersweet occasion.”

While the U.S. triumphed in the lunar race, the Soviet Union set its eyes on Venus through its Venera program.

Between 1961 and 1984, the Soviets dispatched 29 spacecraft toward this enigmatic world, though more than a dozen of the missions failed. The Venera missions observed Venus from orbit, gathered atmospheric data, descended through its caustic clouds, collected and analyzed soil samples, and transmitted the first images from the planet’s surface.

“Kosmos-482 serves as a reminder of the Soviet Union’s encounter with Venus 50 years ago, a tangible relic of that endeavor,” states Asif Siddiqi, a historian at Fordham University focusing on Soviet space activities. “It’s oddly fascinating how the past continues to linger in orbit around the Earth.”

Fifty years later, as spacefaring nations aim to return to the moon and dispatch probes to Mars, Jupiter, and various asteroids, only a lone Japanese spacecraft remains at Venus, while proposed missions face delays, uncertain timelines, and an unpredictable future.

While landing astronauts on the moon was the marquee achievement of the space race, the rest of our solar system beckoned as well. As the U.S. increasingly focused on Mars, the Soviet Union turned its attention to the second planet from the sun.

“Back then, both nations were intrigued by Mars, but Venus proved a more accessible target,” asserts Kathleen Lewis, curator of the International Space Program at the Smithsonian’s National Air and Space Museum.

Often referred to as Earth’s twin due to its similar size, Venus is shrouded in a dense atmosphere of carbon dioxide and veiled under thick layers of sulfuric clouds. Its surface endures scorching temperatures reaching 870 degrees Fahrenheit, coupled with atmospheric pressure nearly 90 times greater than Earth’s.

“How do you create technology capable of surviving a months-long journey across the solar system, entering a thick atmosphere, and capturing images without being destroyed?” Dr. Siddiqi questioned. “It’s an astonishing challenge to consider solving back in the 1960s.”

Venera 9 descent craft and lander. Credit: via NASA

The Soviets, unbothered by the challenges presented by such a hostile world, persistently launched hardware towards Venus. At that time, no blueprint existed for such endeavors.

“You were essentially inventing the technology to send to Venus,” Dr. Siddiqi explained. “Today, if a country like Japan wishes to send a mission to Venus, they have decades of knowledge and engineering guidebooks. In the ’60s, there was nothing.”

The Soviet Venera program achieved many milestones, including the first probe to enter the atmosphere of another planet, the first spacecraft to land successfully on another planet, and the first to capture sounds from an alien landscape.

The breakdown of Kosmos-482 occurred midway through this timeline, and its re-entry wasn’t the first encounter with Earth for the intended Venus lander.

Around 1 am on April 3, 1972, merely days after the troublesome launch, several 30-pound titanium spheres, each the size of a beach ball and inscribed with Cyrillic letters, descended upon the town of Ashburton, New Zealand.

One landed in a turnip field, leaving local residents cautious. The New Zealand Herald reported in 2002 that one of these spheres was ultimately confined in a police cell in Ashburton.

According to space law, ownership of a downed space object belongs to the country that launched it; however, the Soviets did not claim ownership of the sphere initially. The “space ball” was eventually returned to the farmers who discovered it.

Although Kosmos 482 was lost, its twin, launched days earlier, successfully reached Venus and relayed data from the surface for 50 minutes. When Venera 9 and 10 followed in 1975, the Soviets ensured redundancy by launching both spacecraft.

The Venera program concluded in the mid-1980s with the ambitious Vega probes, launched in 1984, which deployed landing craft on Venus’s surface in 1985 and flew by Halley’s Comet in 1986.

“The legacy of Soviet Venus exploration in the 70s and 80s was a point of pride for the Soviet Union,” Dr. Lewis noted.


The re-entry of Cosmos-482 holds unique historical significance but isn’t particularly unusual today, as nations and companies continue to launch more technology into orbit, resulting in an increase of objects descending from the sky.

“We see frequent re-entries nowadays,” says Greg Henning, an engineer and space debris specialist at the Aerospace Corporation, a nonprofit that tracks objects in orbit. “We observe dozens of instances each day, most of which go unnoticed.”

This is particularly true now, as heightened solar activity expands the Earth’s atmosphere, increasing drag on orbiting objects.

Some of these re-entries create spectacular light displays, whether through controlled descents like SpaceX’s cargo and crew capsules or unintentional ones, such as the failed test flight of SpaceX’s Starship prototype. Others, like China’s Long March 5B rocket booster, are uncontrolled and potentially hazardous.

However, in rare instances, spacecraft such as Cosmos-482 return to Earth as remnants of humanity’s formative endeavors.

“There exists an archive of the space race that continues to circle Earth. Many objects released in the 1950s, ’60s, and ’70s remain in orbit,” Dr. Siddiqi remarked. “At times, pieces of this living museum may fall on my head, reminding me of its presence.”

Jonathan Wolf contributed to this report.

Source: www.nytimes.com

Does Video Game Monetization Impact Children? Australia’s Response Explained | Games

Over the last ten years, Dean* has built a robust collection of video games, ranging from mainstream blockbusters to niche favorites. His digital library is akin to a cinematic treasure trove, allowing instant access with a simple click. Yet his son, Sam, has set his sights on just one game: Roblox, the expansive virtual universe that ranks as the leading title worldwide.

The company reports over 97 million daily active users on Roblox, with around 40% of them, like Sam, under the age of 13. In 2024, Roblox generated approximately A$5.6 billion (US$3.6 billion) in revenue, mainly from purchases of “Robux,” its in-game currency, with the average user spending about $25 a month.


Amid concerns about children’s exposure to bullying and inappropriate content, a recent report highlights the impacts of game monetization on young users.

Experts argue that Australia’s current classification system does not adequately assist child gamers and their parents in navigating the tricky monetization landscape.

New reports from Australian researchers scrutinize the manipulative “dark design patterns” in gaming that encourage spending and confuse children with opaque virtual-currency transactions.

One recent report from Monash University and the Center for Consumer Policy Research (CPRC) focused on players aged 18 and older, revealing that games designed with dark patterns are almost unavoidable. Of the 800 surveyed, 83% reported “negative effects” from these designs, and 46% faced economic disadvantages, feeling pressured to purchase items and overspending.

Another recent study from University of Sydney researchers sought to understand how children, who represent one-fifth of the gaming population, recognize these mechanisms and perceive the design of video games.

“Concerns about children’s interaction with digital media often lead to panic and policy decisions that overlook the actual experiences of children,” states Taylor Hardwick, lead author of the study.

Hardwick and her team interviewed 22 children aged 7 to 14 and their parents. Each child received a $20 debit card and was instructed to explain their purchases.

Among the 22 children, 18 played Roblox, with 12 spending their entire $20 on Robux. The remaining five used the funds on other games like Call of Duty, Fallout 76, and Minecraft.

Participants expressed concerns about being misunderstood and frustrated by their purchases, especially if they suddenly lost access to their accounts or items.

Sam’s father shared that Sam has spent around $400 a year on Roblox over the past four years, with a recent purchase leaving him disheartened.

Sam had used some of his Robux to buy Godzilla “skins” (digital costumes) in a popular Roblox game called Monster Universe. However, upon logging in, he found his skin had vanished unexpectedly after the game was shut down by Toho, the copyright holder. He did not receive a refund from Roblox.

One major concern raised by Sydney researchers is the impact of “random reward mechanisms” (RRMs) on children. RRMs, like loot boxes, offer players mystery items through lottery-style draws.

While children in this study accepted RRMs as part of gaming, many expressed dissatisfaction with them.

“Even when children can talk about the odds and percentages in these games, they don’t entirely grasp the risks of navigating these digital experiences,” the authors note. “Gambling-like mechanisms such as RRMs are harmful and inappropriate for children’s games.”

Recommendations include eliminating RRMs, simplifying refund processes, enhancing account protections for children, and improving transparency around in-game currency.

Christopher Ferguson, a psychologist at Stetson University, found the study interesting but highlighted the small sample size and questioned the researchers’ definition of “harm.” He argued that while children may feel deceived, the monetization aspects could be more annoying than harmful.

“It’s encouraging that researchers are inquiring about children’s perspectives on their experiences,” he said.


Australia has attempted to shield children from monetized RRMs by introducing a new classification system, implemented in September 2024. Games containing RRMs or loot boxes are now not recommended for those under 15.

However, these new regulations apply only to newly classified games, and pre-existing games are not required to update their classifications.

Leon Xiao, a researcher from City University of Hong Kong studying loot box regulation, states that Australia faces implementation issues rather than legal ones. He argues that several video games were misrated after the new law came into effect, indicating flaws in consumer education.

A preliminary study by Marcus Carter, co-author of the University of Sydney research, suggests that about 20% of the top 100 grossing mobile games on the Apple App Store and Google Play Store do not comply with Australian regulations. Hardwick and Carter recently noted that Australia’s guidelines “do not fulfill their intended purpose.”

Roblox, with its extensive user-generated content, exemplifies the confusion surrounding ratings. Xiao argues, “Roblox should either be rated or not recommended for players under 15.” However, the game is rated PG on the Google Play Store.

In contrast, Apple’s App Store lists an Australian age rating of 15+, unlike Apple’s global ratings, which set the limit at 12.

A Roblox spokesperson informed Guardian Australia that developers must use its PolicyService API to comply with jurisdictional requirements, ensuring that paid random items are shown only to eligible users. Thanks to an update rolled out to developers in September 2024, paid random items are currently unavailable to users in Australia.

“As a platform for user-generated content, we provide developers with tools, information, and guidelines applicable to various gameplay aspects within games and experiences.

“We are committed to addressing reported content that fails to adhere to guidelines or does not effectively use tools necessary to meet Australia’s local compliance requirements.”

The company says it strives to inform parents about their children’s purchasing habits, does not store billing information by default, and warns users during initial transactions that real money is being spent. Parents are also alerted via email about high spending activity.

“Our parental controls enable parents and caregivers to receive notifications about their child’s spending on Roblox and set monthly spending limits for their accounts,” said the spokesperson.

Hardwick believes navigating monetization is challenging for parents, who are often time-poor, under-informed, and under-resourced. She feels they aren’t equipped to manage children’s in-game spending effectively.

Dean is making every effort to guide Sam through these trends, discussing what Sam spends Robux on and why. While Dean acknowledges Sam’s disappointment over the Godzilla skin, he has encouraged Sam to explore a gardening game where he can utilize Robux to purchase new species.

*Name changed

Source: www.theguardian.com

Soviet Probe’s Imminent Crash with Earth: The Impact Location Remains Unknown

Model of Kosmos 482, originally set for Venus

Wikimedia Commons

Over 50 years after its launch, the Soviet spacecraft Kosmos 482 is set to return to Earth. Initially designed to land on Venus, it broke apart in low Earth orbit and never completed its intended mission. After circling our planet for decades, it is finally on a path to re-enter.

Kosmos 482 was launched in 1972; however, much about its mission and structure remains classified due to its Cold War origins. The intention to reach Venus is inferred from other Soviet missions focused on the planet at that time, and indications suggest that the spacecraft attempted a maneuver in orbit before fragmenting. The exact reason for its failure is unclear, but three out of four pieces landed in New Zealand shortly after launch.

The last fragment drifted in an elliptical orbit, approximately 210 km from Earth at its closest and about 9,800 km at its farthest. Over time, drag from particles in Earth’s upper atmosphere has gradually slowed the spacecraft, bringing it closer to re-entry. It is projected to come down on May 9th or 10th.

The remaining capsule is estimated to be over one meter wide and to weigh nearly 500 kilograms. Given its size, and the possibility that it was engineered to withstand the intense conditions of a Venusian descent, impact speeds may exceed 200 km/h.
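As a rough plausibility check on that figure, a back-of-the-envelope terminal-velocity estimate from the stated size and mass gives a speed in the low hundreds of km/h. The drag coefficient and air density below are assumed round numbers for a blunt, sphere-like body, not reported values.

```python
import math

# Rough terminal-velocity estimate for the capsule using the article's
# figures: roughly 1 m wide and nearly 500 kg. Drag coefficient and
# sea-level air density are assumed values, not reported ones.
mass = 495.0      # kg (approximate)
diameter = 1.0    # m (approximate)
c_d = 1.0         # assumed drag coefficient for a blunt capsule
rho = 1.22        # kg/m^3, sea-level air density
g = 9.81          # m/s^2

area = math.pi * (diameter / 2) ** 2               # frontal area, ~0.79 m^2
v = math.sqrt(2 * mass * g / (rho * c_d * area))   # terminal speed, m/s
print(f"~{v * 3.6:.0f} km/h")  # a few hundred km/h, consistent with >200 km/h
```

A heavier or more streamlined body falls faster, so the exact speed depends strongly on the assumed drag; the point is only that the 200+ km/h figure is the right order of magnitude.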

Predicting the exact impact site for Kosmos 482 is challenging. Based on its current trajectory, it could land anywhere between the latitudes of 52° south and 52° north, a vast band stretching from the southern tip of South America to parts of Canada and Russia. Fortunately, despite the extensive range of potential landing sites, the likelihood of it striking a populated area is minimal. “The numbers are infinitesimally small,” said Marcin Pilinski of the University of Colorado Boulder in a statement. “The ocean is a likely landing zone.”

Pilinsky is part of a team monitoring the debris. As the re-entry date approaches, landing predictions will become more accurate. Instances of space debris falling to Earth are not rare; for instance, NASA tracks one orbital object entering the atmosphere daily, with most either burning up or landing in oceans. However, Kosmos 482 is notably larger and more robust than typical space debris.

Source: www.newscientist.com

Tech Giant Surpasses Quarterly Expectations Amid Trump’s Tariff Impact on Sector

Hello, and welcome to TechScape! I’m your host, Blake Montgomery. In this week’s tech news: Trump’s tariffs are hitting tech firms that deal in physical goods harder than those that are purely digital. We dig into two stories highlighting the dark implications of AI for the labor market. Additionally, Meta has launched a standalone AI application, boasting an impressive claim of nearly 1 billion users thanks to its rapid adoption. OpenAI has walked back a controversial version of ChatGPT, and we look back on Elon Musk’s time in government.

Big Tech revenue: bits rake it in, atoms face uncertainty

Four of the seven major tech giants reported their quarterly earnings last week. Meta, Microsoft, Apple, and Amazon all exceeded Wall Street projections, yet their outlooks revealed a clear divide between companies moving physical products and those thriving in the purely digital realm: atoms v bits.

Meta and Microsoft’s earnings skyrocketed, surpassing expectations and offering optimistic guidance for the next quarter.

In contrast, uncertainty loomed over Apple and Amazon. While both companies outperformed Wall Street expectations, recent news emphasized the adverse effects of Trump’s tariffs. At the end of Apple’s earnings call, CEO Tim Cook revealed that import tariffs would cost the company $900 million in the upcoming quarter. Apple managed to adapt, shipping around $2 billion worth of iPhones from India to the US before the tariffs took full effect, but the sum is still significant.

Last week, Amazon drew the ire of the Trump administration after Punchbowl News reported that the company might begin displaying tariff-related costs on individual items, much as the discount retailers Shein and Temu have done. White House press secretary Karoline Leavitt condemned the move as “hostile and political.” Amazon quickly clarified that the idea had been considered only for Amazon Haul, its low-cost competitor to Shein and Temu, and would not be implemented.

Is AI taking jobs?

Photo: Science Photo Library/Alamy

Artificial intelligence (AI) is set to greatly disrupt the job market. Reports detail the direct impacts on jobs, leaving many employees in the lurch.

Technology critic Brian Merchant discusses Duolingo’s recent shift to an “AI-first” model, phasing out contractors for tasks that AI can manage. His piece, in his newsletter Blood in the Machine, features a former Duolingo contractor who expressed disbelief at how rapidly workers were swapped out for AI. Similarly, artists and illustrators reported losing opportunities as clients opted for AI solutions instead.

On a larger scale, however, the immediate disruption expected after the launch of ChatGPT hasn’t materialized. Research indicates AI’s broader labor-market impact has been slower than predicted. A working paper from the University of Chicago and the University of Copenhagen finds that in Denmark, “AI chatbots have had no significant impact on earnings or recorded hours.” Rather than displacing jobs outright, AI appears to be enhancing productivity, streamlining tasks, and fostering new ideas. The study analyzed two large-scale surveys covering 25,000 workers and 7,000 workplaces across 11 occupations considered vulnerable to AI.

Thanks to The Register for surfacing this paper.

Mark Zuckerberg will be speaking at Llamacon 2025, an AI developer conference in Menlo Park, California, on April 29th. Photo: Jeff Chiu/AP

Personally, I’ve never engaged with Meta’s AI chatbots intentionally. I accidentally tapped the discreet blue circle in Instagram’s search bar in the spring of 2024, triggering a chat with an AI agent: the chatbot enthusiastically prompted me to “imagine paradise” instead of surfacing my recent search queries. Meta has integrated its AI into the most frequented corners of its core apps.

The strategic placement of the Meta AI search bar and its integration into existing apps is deliberate. For example, you can easily tap the Meta AI button at the bottom right corner of WhatsApp on the iPhone. Meta has routed the search functions of Instagram, Facebook, and WhatsApp toward its AI, inflating its rapidly expanding user base through prominently featured entry points. Meta recently stated its AI is “on track to become the world’s most utilized AI assistant,” with nearly 1 billion users reportedly having engaged with it.

Last week, the company unveiled a standalone AI app, raising the question of whether users will seek it out on their own. For now, executives anticipate most users will continue to encounter Meta AI through the conspicuous blue circles within its popular social apps.

Meta isn’t the only player; Google also claims a massive user base for its AI features, citing over 1 billion users of AI-driven search (recently revised to 1.5 billion). While it’s hard to gauge how deeply users engage, companies benefit from any interaction with their AI tools, and neither Google nor Meta is likely to be compelled to stop using that data for AI training. In the US, users can only request that Meta delete their data or refrain from using it for AI training; that data includes chats with Meta AI as well as posts and profile details.

The reality is grim: AI products appear designed to pull users into their ecosystems early on. In the US, where privacy regulation is minimal, users are often effectively training AI systems without their consent.

Sam Altman’s Rollback and Debut

“Missed the mark”… Sam Altman. Composite: Carlos Barría / Reuters / Guardian Design

Last week, OpenAI confirmed it would roll back its latest ChatGPT update, with Sam Altman stating, “I missed the mark with last week’s GPT-4o update.” He described the update as overly sycophantic and annoying.


According to one venture capital investor, the update was an unusual misstep for the makers of ChatGPT. The investor’s firm, Andreessen Horowitz, is among OpenAI’s backers.

The day after announcing the rollback, Altman shared news of a launch by his other startup, World, whose Orb devices scan users’ irises for identity verification. He proudly tweeted “We did that!” alongside an image of himself in front of an American flag, creatively modified with the company’s logo.

Doge Days

“No modern precedent”… Elon Musk’s extraordinary role in the government. Composite: Guardian/Getty Images

The wealthiest individual in the world and a prominent figure in technology held a position in the White House for roughly 100 days. What impact did he have?

My colleague Nick Robins-Early notes:

“Musk left little of the federal government untouched. In just a few months, he dismantled agencies and public services built up over decades, while amassing considerable political power.”

“Musk’s influence within the Trump administration is unparalleled. The world’s richest person took on a role that allowed him to undermine the very institutions overseeing his enterprises. His attempts to radically reshape government branches significantly increased his influence, incorporating allies into key positions across federal agencies and gaining access to personal data from millions of Americans while laying off tens of thousands of workers. His leadership at SpaceX positioned the company to capitalize on billions in government contracts, leaving chaos in his wake.”

Discover more about Doge’s initial 100 days.


Broader Technology Landscape

Source: www.theguardian.com

The Incredible Impact of Siblings on Our Lives

Joshua Goodman, an associate professor of education and economics at Boston University, observed similarly remarkable outcomes at the university level. He analyzed a dataset of students whose test scores fell close to the cutoff for entry into selective “target colleges.” Candidates within 10 points of the cutoff are nearly identical, so whether a student landed just above or just below it came down to minor variations. Generally, those just above the threshold were admitted, while those just below were not. Goodman found that the younger siblings of students who cleared the cutoff were significantly more likely to attend equally selective universities than the younger siblings of students who missed it by a few points. Seeing a clear path ahead thanks to an older sibling’s experience, it seems, elevated the younger siblings’ aspirations.
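Goodman’s threshold comparison resembles what statisticians call a regression discontinuity design. A toy simulation, with every score, probability, and effect size invented purely for illustration, shows the logic:

```python
import random

random.seed(0)
CUTOFF = 1200  # hypothetical admissions cutoff score

# Simulate sibling pairs: the older sibling's score lands within 10 points
# of the cutoff, and the younger sibling's chance of enrolling at a
# similarly selective school shifts if the older one was admitted.
pairs = []
for _ in range(10_000):
    older_score = CUTOFF + random.randint(-10, 10)  # "nearly identical" band
    admitted = older_score >= CUTOFF
    # Invented effect: an admitted older sibling raises the younger
    # sibling's enrollment probability from 30% to 45%.
    p_younger = 0.45 if admitted else 0.30
    pairs.append((admitted, random.random() < p_younger))

def enroll_rate(admitted_flag):
    """Younger-sibling enrollment rate within one side of the cutoff."""
    group = [young for old_admitted, young in pairs if old_admitted == admitted_flag]
    return sum(group) / len(group)

gap = enroll_rate(True) - enroll_rate(False)
print(f"younger-sibling enrollment gap near the cutoff: {gap:.2f}")
```

Because students within a few points of the cutoff are essentially interchangeable, a gap like this in their younger siblings’ outcomes can be attributed to the admission itself rather than to family background.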

Michelle Obama’s university experience mirrors Goodman’s findings, even though his research was conducted decades later. Raised in a working-class neighborhood on Chicago’s south side, Obama had parents who did not initially consider Ivy League schools for her. Her older brother Craig excelled academically and athletically, earning a spot on the basketball team at Princeton University. In her book, Becoming, Obama reflects on the impact of watching her brother’s journey: “No one in my family had any first-hand experience with university, so there was little discussion or exploration.” Wanting to emulate her brother’s achievements ultimately made Princeton a viable choice for her. Despite a guidance counselor suggesting she “wasn’t Princeton material,” Obama remained undeterred, confident in her abilities and potential.

Research by Zang and Goodman indicates that positive interventions for one child from a low-income family can ripple out to benefit their siblings. This suggests that interventions may yield greater overall impact than previously thought, enhancing not just the individual child’s experience, but potentially altering the life trajectories of their entire family.

Zang’s study revealed that nearly one-third of the academic similarity among siblings could be attributed to these spillover effects, rather than to shared environments or common genetics. But ripple effects can also be detrimental, particularly in disadvantaged families, where children often face academic challenges due to a variety of obstacles. Zang posits that when one sibling suffers a setback, an academically ambitious child may be dragged down with them; and because academic performance is a reliable predictor of future income, such negative spillovers can translate into diminished earnings across the whole family.

Both Zang and Goodman emphasize that the spillover effect is most pronounced in underprivileged families, suggesting researchers need to recognize that sibling influences operate differently depending on socioeconomic status. For instance, a 2022 study published in *Frontiers in Psychology* complicated the well-known finding that the eldest siblings are the strongest academic performers within families. It found that the oldest children in high-risk families, and in families with non-native-English-speaking parents, do not go on to perform better on cognitive tests, even though they are more prepared for school at age two, suggesting that sibling dynamics in these families operate differently under their unique challenges.

Source: www.nytimes.com

Halting Submissions: The Impact of NIH Budget Cuts on Scientific Journals

Environmental Health Perspectives, widely regarded as the premier journal in its field, has announced a suspension of new research submissions amid uncertainty over federal funding cuts.

For over 50 years, this journal has been supported by the National Institutes of Health to evaluate research on the impacts of environmental toxins, including persistent chemicals and air pollution, publishing findings at no cost.

Joel Kaufman, the journal’s editor-in-chief, opted to halt new submissions because of a “lack of confidence” that critical expenses, such as copyediting and editing-software updates, would be funded.

He declined to comment on the publication’s future outlook.

“If the journal were to disappear, it would be a tremendous loss,” stated Jonathan Levy, Chair of the Department of Environmental Health at Boston University. “It diminishes access to crucial information needed for insightful decision-making.”

The editor of the New England Journal of Medicine described a letter his journal received from a federal prosecutor as vaguely threatening. The journal Obstetrics & Gynecology, published by the American College of Obstetricians and Gynecologists, recently reported receiving similar letters.

Scientific journals have been under scrutiny from leading health officials during the Trump administration.

In a book published last year, Dr. Marty Makary, the newly appointed FDA commissioner, accused journal editors of “gatekeeping” that allows only information aligned with “groupthink narratives” to be disseminated.

In an interview last year on the “Dr. Hyman Show” podcast, current HHS Secretary Robert F. Kennedy Jr. said he intended to take legal action against medical journals under federal anti-corruption law.

“If you don’t establish a plan to publish credible science now, I will find a way to sue you,” he warned.

Still, the uncertainty surrounding EHP has left researchers perplexed. They noted that funding cuts seem to conflict with the Trump administration’s declared priorities.

For instance, Kennedy has consistently highlighted the significance of investigating environmental factors in chronic diseases. The new administration has also shown interest in transparency and public access to scientific journals, a principle EHP pioneered.

EHP was among the first “open access” journals, accessible to anyone without a subscription, and unlike many other open access journals that impose substantial fees, EHP’s federal backing allowed researchers at smaller institutions to publish without financial concerns.

“There are several layers of irony in this situation,” Dr. Levy remarked.

EHP isn’t the only journal affected by funding cuts at the Department of Health and Human Services.

A draft budget obtained by The New York Times suggests that two journals published by the CDC—Emerging Infectious Diseases and Preventing Chronic Disease—may face cuts. Both are available at no cost to authors and readers and are among the leading journals in their fields.

HHS spokesman Andrew Nixon stated that there was “no final decision” on the forthcoming budget.

Published monthly, Emerging Infectious Diseases provides state-of-the-art insights on global infectious disease threats.

Jason Kindrachuk, a virologist at the University of Manitoba who has published studies on Marburg virus and mpox in the journal, noted its importance in shaping response strategies during outbreaks.

The news is “very disheartening,” he remarked.

Source: www.nytimes.com

The Impact of the Gaza Conflict on Israel’s AI Innovation

In late 2023, Israel set out to assassinate Ibrahim Biari, a top Hamas commander in the northern Gaza Strip who had helped plan the October 7th massacre. But Israeli intelligence could not find Mr. Biari, who was believed to be hiding in the network of tunnels beneath Gaza.

Israeli officers turned to a new military technology infused with artificial intelligence, Israeli and American officials said. The technology had been developed a decade earlier but had never been used in combat. The hunt for Mr. Biari provided fresh incentive to improve the tools, and engineers in Israel’s Unit 8200, the country’s equivalent of the National Security Agency, soon integrated AI into them.

Israel then intercepted Mr. Biari’s communications and tested the AI audio tool, which gave an approximate location for where he was speaking. Using that information, Israel ordered airstrikes on the area on October 31, 2023, killing Mr. Biari. More than 125 civilians were also killed in the attack, according to Airwars, a London-based conflict monitor.

Audio tools were just one example of how Israel leveraged the conflict in Gaza to quickly test and deploy AI-backed military technology.

Over the past 18 months, Israel has combined AI with facial recognition software to match partially obscured or injured faces to identities, used AI to compile potential airstrike targets, and built Arabic-language AI models powering chatbots that scan and analyze text messages, social media posts, and other Arabic-language data.

Many of these initiatives were collaborations between enlisted soldiers in Unit 8200 and reservists who work at tech companies like Google, Microsoft, and Meta. Unit 8200 set up an innovation hub known as “the Studio” to connect experts with its AI projects.

Despite Israel’s advances, the deployment of these tools has at times resulted in mistaken identifications and arrests, as well as civilian casualties, Israeli and American officials said. Some officials have raised ethical concerns about the AI tools, warning that they may expand surveillance and lead to further civilian harm.

European and American defense officials said no other nation has been as active as Israel in experimenting with real-time AI tools in battle, offering a preview of how such technologies may be used in future conflicts.

Hadas Lorber, head of the Institute for Responsible AI at the Holon Institute of Technology in Israel and a former senior director at the Israeli National Security Council, stated, “It has led to groundbreaking techniques on the battlefield and valuable benefits in combat.”

However, she emphasized the serious ethical questions raised by the technology and stressed the importance of checks and balances on AI, with humans making the final decisions.

An Israeli military spokesperson refrained from commenting on specific technologies due to their classified nature. Still, she affirmed Israel’s commitment to the legal and responsible use of data technology tools, mentioning an ongoing investigation into the strike against Biari.

Meta and Microsoft declined to comment, while Google said that work its employees do as military reservists in various countries is not connected to Google.

Israel has previously used conflicts in Gaza and Lebanon to test and advance military technologies, including drones, phone-hacking tools, and the Iron Dome missile defense system.

… (continued)

Source: www.nytimes.com

AI-Powered Humanoid Workers and Surveillance Buggies: The Impact on Daily Life in China

On a Saturday afternoon in Shenzhen’s Central Park, a gaggle of teenage girls shelters from the drizzle under a concrete canopy. With bags of potato chips stacked in front of them, they huddle around smartphones, singing along to a Mandopop ballad. Their laughter rings out across the surrounding grass, until it is pierced by a mechanical whirring sound. Someone has ordered dinner.

A few meters from the impromptu karaoke session stands an “airdrop cabinet,” one of more than 40 across Shenzhen run by Meituan, China’s largest food delivery platform. Hungry parkgoers can order almost anything, from rice noodles to Subway sandwiches and bubble tea.

Loaded with items from a shopping mall less than 3km away, the drones hover over the delivery station before lowering their cargo into a sealed cabinet that can be unlocked only by entering the customer’s phone number. Dinner is served, with no humans involved. Meituan aims to beat human delivery times by about 10%, a worthwhile trade, perhaps, for a journey through the clouds in a thin polystyrene box.

A drone takes off from the rooftop of a shopping mall in Shenzhen, China, on April 3, 2025. Photo: Anthony Kwan/The Guardian

Drones are just a part of the broader robotics and artificial intelligence industry that China intends to expand this year.

With the trade war raging, demographic challenges dragging on the economy, and the prospect of a productive relationship with the world’s largest economy seeming farther away than ever, China’s leaders see artificial intelligence as key to solving several problems at once: a shrinking workforce, the modernization of its military power, and a source of public pride, especially if Chinese companies can get around US-led sanctions on core technologies. And having spent years cracking down on technology companies whose wealth and influence grew beyond the control of the Xi Jinping state, Chinese leaders have welcomed entrepreneurs back into the fold as they seek to restore confidence in the private sector and encourage domestic innovation.

In March, Premier Li Qiang promised to “unleash the creativity of the digital economy,” with a special focus on “embodied AI.” Guangdong province, home to the Shenzhen high-tech hub, is at the forefront of this movement. The provincial government recently announced 60m yuan (£6.4m) in new funding for an innovation centre. Shenzhen in particular is known as China’s drone capital thanks to its permissive approach to drone regulation, which has allowed the “low-altitude economy” to develop faster than in the rest of the country. China’s civil aviation authority predicts the sector’s value will rise nearly fivefold to 3.5tn yuan over the next decade.

Drones are not the only machines promising, or threatening, to change the tempo of Chinese urban life. Humanoid robots have been a particular highlight: a dance performed at this year’s Spring Festival gala by a troupe of humanoid robots, built by a company called Unitree, has been viewed almost 1.7 billion times. And on Saturday, the world’s first humanoid-versus-human half marathon took place in the suburbs of Beijing.

Robots participating in Saturday’s race. Photo: Ng Han Guan/AP

Rui Ma, a China technology analyst and investor based in San Francisco, says a shift to reinforcement learning, which trains robots to learn from experience rather than relying on hard-coded models, will let the industry grow much faster in 2025 than in the past few years, with humanoid robots trained in months rather than years. Robot dogs are already part of everyday life in China. At the Yiwu wholesale market, a trade hub in Zhejiang province in eastern China, mothers haggle with exporters over the price of false eyelashes while their children play with robotic dogs. On the streets of Shanghai, a woman walks a robot dog that carries her shopping basket on its back.

A Meituan drone, loaded with products, takes off from a shopping mall rooftop in Shenzhen, China, on April 3, 2025. Photo: Anthony Kwan/The Guardian

The development of China’s robotics industry is closely linked to advances in AI. For years, China has been catching up to the US, and Xi wants to drive economic growth through “new quality productive forces” built on advanced technology.

Many in Washington fear that the US lead is narrowing. One of the main tools in the US arsenal is control over a critical part of the semiconductor supply chain: the microchips used to train advanced AI models. The US has restricted exports of its most sophisticated chips to China, part of a strategy that former national security adviser Jake Sullivan described as keeping a “small yard, high fence” around the United States’ most strategically valuable technologies.

However, in January, a previously little-known Chinese company called DeepSeek upended the tech scene by releasing R1, a large language reasoning model that performs on a par with its US competitors at a fraction of the price. The release wiped $1tn off Wall Street’s main technology index, as investors feared that the US pole position in the high-tech race was no longer guaranteed.

“You can’t stress enough how crazy it is,” says Ma.

Since then, China’s AI industry has been filled with optimism. The government had already been promoting AI as the answer to China’s long-term, sustainable growth, and now the public is beginning to believe it, says Ma.

A Meituan drone drops off an order at an “airdrop cabinet” in Shenzhen, China. Photo: Meituan

Li Shuhao, a Guangzhou-based tech entrepreneur who founded the AI marketing company TEC-DO in 2017, was in the US when the DeepSeek moment happened. Suddenly, he says, “it was much easier to arrange an interview and a meeting with other AI scientists.”

“DeepSeek is like a symbol of the oriental way of doing business,” says Li, a confessed “metal head” surrounded by electric guitars and drum kits in his Guangzhou office. He points to the strategy of DeepSeek founder Liang Wenfeng, who funded the company through his own hedge fund rather than seeking external venture capital. “This is how a typical Chinese entrepreneur thinks: survive first and then do something new.”

DeepSeek published its work as open source, an approach the government has long supported and one that encouraged widespread adoption of the model. Robotics has been a particular beneficiary.

Technology is the top priority

The robot supply chain can be roughly divided into three areas: the brain, the body, and real-world applications. China has long been confident in its capabilities in the latter two. The advanced supply chains of its other high-tech industries, such as electric vehicles and autonomous drones, show that China can both mass-produce industrial components and assemble them into complex products. The hardest part of the puzzle, however, has remained elusive: building a robotic brain that can learn human-like behavior and movement requires sophisticated AI.

DeepSeek’s R1 model is changing the game, opening a path for domestic humanoid robot companies to keep pace with their international competitors, Goldman Sachs analysts said in a recent note. The fact that DeepSeek’s open-source model runs on less advanced chips could help level the playing field for Chinese companies.

Engineers train humanoid robots at the Humanoid Robot Innovation Centre in Shougang Park, Beijing, China, on March 28, 2025. Photo: Beijing Youth Daily/VCG/Getty Images

The industry still has its challenges. AI models require huge amounts of data to train. While the LLMs used for chatbots can draw on a vast universe of content (the internet), data for robotic AI models is relatively scarce: information on how to physically move through spaces and interact with objects and people.

A self-driving car, another sector on which China is focused, must navigate six axes, or “degrees of freedom”: forward and backward, left and right, up and down, plus rotations around each. The same goes for simpler robots, such as Meituan’s delivery drones. But to mimic humans at everyday tasks such as cooking, humanoid robots need up to 60 degrees of freedom; Unitree’s H1 model, which caused a splash at the Spring Festival gala, has 27.

A robot does not have to be fully humanoid to be useful. Wheeled robots, or humanoids with limited movement, can take on tasks ripe for automation, such as dangerous or repetitive factory work. Shenzhen-based UBTech is already supplying humanoid robots to car factories. With a shrinking workforce, China is keen to automate as much as possible.

At the Boao Forum for Asia, a business gathering held last month, organizers were keen to show off a booth where robotic arms, resembling the claw machines that grab toys in arcades, cooked jianbing, the savory pancakes that are a staple of Chinese street food (the resulting snacks were not as crispy as the human-made kind). Beijing’s parks, meanwhile, have boosted their surveillance capabilities by mounting cameras on autonomous buggies that patrol the paths.

A humanoid robot performs at the opening ceremony of the 2025 Zhongguancun Forum (ZGC Forum) annual meeting in Beijing, China, on March 27, 2025. Photo: Xinhua/Rex/Shutterstock

… (continued)

Source: www.theguardian.com

Exploring the Impact and Intrigue of 100 Years of Quantum Theory

David Parker/Science Photo Library

You might say it all started with hay fever. In June 1925, a young physicist named Werner Heisenberg retreated to the barren island of Helgoland in the North Sea, seeking respite from his allergies. There he scrawled the equations that would set Europe’s intellectual world alight, forming the basis for ideas that would ultimately shake our view of how reality works. That idea was quantum theory.

In recognition of quantum theory’s 100th anniversary, the United Nations has designated 2025 the International Year of Quantum Science and Technology, with celebrations, exhibitions, and meetings taking place all over the world.

This article is part of a special series celebrating the 100th anniversary of the birth of quantum theory.

If you know only one thing about quantum theory, it’s probably that it is “strange.” Certainly, the idea that the quantum world is too strange to fully understand has seeped into our culture. There are even products, “quantum”-branded cosmetics, for instance, whose names implicitly signal powers beyond our understanding.


It is true that quantum theory paints a strange picture of the subatomic world, but stopping there overlooks its true importance. This centenary is a chance to celebrate the theory’s power and provocation, as the trio of articles in this special issue does.

Physicist Carlo Rovelli gives us his view on the origins of quantum mechanics and its bold claims. We see how these ideas have revolutionized technology, and how they continue to do so. And we explore the deep questions quantum theory forces us to ask about what it means to be “real.” That it paints such an unsettling picture of the subatomic world suggests we are still missing something about the workings of the universe, but new interpretations and experiments are guiding us toward a fresh understanding.

Quantum theory has also been a huge success: few other scientific ideas have passed so many experimental tests. Its origin may owe something to hay fever, but its legacy is irresistible.



Source: www.newscientist.com

The Unintended Environmental Impact of Trump’s Policies on Online Shopping Emissions

Fashion giants like Shein and Temu have seen significant growth in the US due to tariff exemptions that kept prices low for packages shipped from China.

President Trump has ordered the closure of this loophole, starting with packages from China and Hong Kong, potentially reducing the air-freight emissions associated with the fashion industry.

Last year, 1.36 billion packages entered the US through this loophole, mostly from China. The exemption allows items valued under $800 to enter duty-free, and the resulting flood of small parcels has driven up emissions from shipping packages by air.

Flying packages across the ocean is 68 times as carbon-intensive as moving them by marine cargo, according to the Climate Action Accelerator.
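The 68x ratio implies a large absolute gap even for a single small parcel. A back-of-envelope sketch, in which the sea-freight intensity, parcel weight, and distance are all invented for illustration (only the 68x multiplier comes from the article):

```python
# Rough illustration of the air-vs-sea emissions gap for one small parcel.
SEA_G_CO2_PER_TONNE_KM = 10                         # assumed baseline intensity
AIR_G_CO2_PER_TONNE_KM = 68 * SEA_G_CO2_PER_TONNE_KM  # 68x figure from the article

parcel_kg = 0.5       # a typical fast-fashion parcel (invented)
distance_km = 12_000  # roughly China to the US (invented)

def emissions_kg(intensity_g_per_tonne_km):
    """CO2 for moving the parcel the full distance, in kilograms."""
    tonne_km = (parcel_kg / 1000) * distance_km
    return intensity_g_per_tonne_km * tonne_km / 1000  # grams -> kg

air = emissions_kg(AIR_G_CO2_PER_TONNE_KM)
sea = emissions_kg(SEA_G_CO2_PER_TONNE_KM)
print(f"air: {air:.2f} kg CO2, sea: {sea:.2f} kg CO2")
```

Under these invented numbers, a half-kilogram parcel flown across the Pacific accounts for kilograms of CO2, versus mere tens of grams by sea; the ratio between the two routes is the 68x figure itself.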

In many countries, freight below a certain value is exempt from taxes. The US set the minimum exemption at $800, allowing foreign e-commerce platforms to compete with domestic retailers like Amazon.

This exemption helped Shein establish a niche in the US market with affordable apparel. However, President Biden announced a crackdown on these imports citing various concerns.

The number of shipments to the US has increased significantly, leading to environmental concerns and the need for stricter regulations.

President Trump took steps to end the de minimis exemption, aiming to impose taxes on packages from Hong Kong and mainland China.

New rules will phase out the exemption over the next few weeks, with steep tariffs taking effect on June 1st. The move is expected to significantly affect air-freight emissions.

The increase in air freight usage has led to a rise in greenhouse gas emissions. Efforts to reduce emissions in this sector are minimal, posing a challenge for sustainability initiatives.

Shein and Temu did not respond to requests for comment regarding the new regulations.

Trump’s actions to close the loophole in February resulted in declining sales for Shein and Temu, indicating potential shifts in e-commerce practices.

Companies might opt for larger cargo shipments using marine transport to avoid high tariffs and reduce emissions, a change that could impact the industry significantly.


Source: www.nytimes.com

The impact of Trump’s tariffs on iPhone prices and available affordable alternatives

Amid a tariff frenzy that caused panic among consumers eyeing iPhones, President Trump announced tariff exemptions for electronic devices like smartphones and computers on Friday. This brought relief as there were concerns about the possibility of a $2,000 iPhone.

However, just two days later, the Trump administration hinted that smartphones and computers might face new tariffs targeting semiconductors or chips, potentially leading to a more expensive iPhone. Talk about a rollercoaster!

Despite the uncertainty over iPhone prices due to tariffs, there are still cheaper alternatives available, such as purchasing previous models.

The key lesson here is that to save money in the high-tech world, it’s best to use your devices for as long as possible.

“Buy the best and hold on,” advised Ramit Sethi, a personal finance expert. “Keeping an item for longer reduces the overall cost of ownership.”

The future costs of high-tech hardware remain uncertain. Nintendo recently postponed plans to launch the $450 Nintendo Switch 2 due to tariff uncertainty. Additionally, prices for accessories like phone chargers are increasing on platforms like Amazon.

To navigate future technology purchases effectively, consider holding onto your devices for longer periods to maximize their value.

Replacing your tech frequently adds up. Calculating the true cost of ownership can help you make informed decisions when purchasing new devices.

By holding onto your devices and using them for a longer period, you can significantly reduce the total cost of ownership over time.
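The arithmetic behind that advice is simple. A toy cost-of-ownership comparison, with the phone price, battery price, and upgrade cycles all invented for illustration:

```python
# Toy cost-of-ownership comparison: upgrading every 2 years vs. every 5.
# All prices here are invented for illustration.
PHONE_PRICE = 1000  # hypothetical flagship phone

def annual_cost(years_kept, battery_swaps=0, battery_price=89):
    """Average yearly cost of owning the phone, including battery swaps."""
    total = PHONE_PRICE + battery_swaps * battery_price
    return total / years_kept

frequent = annual_cost(years_kept=2)                   # upgrade every 2 years
patient = annual_cost(years_kept=5, battery_swaps=1)   # keep 5 years, one new battery
print(f"2-year cycle: ${frequent:.0f}/yr, 5-year cycle: ${patient:.0f}/yr")
```

Even after paying for a replacement battery, the longer cycle cuts the yearly cost by more than half in this sketch, which is the "buy the best and hold on" principle in numbers.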

This principle applies not just to smartphones but also to computers and tablets. The longer you keep your devices, the more value you can extract from them.

High-tech products are designed to be long-term investments. Many devices today are built to last for several years, yet consumers tend to upgrade frequently, similar to how people buy new cars more often than necessary.

Developing the habit of replacing your device’s battery periodically can help extend its lifespan and save you money in the long run.

As manufacturers improve repairability, replacing components like batteries becomes more accessible and cost-effective.

In times of uncertainty regarding tariffs and rising prices, opting for refurbished or second-hand phones can provide a cost-effective alternative to buying new models.

Even in the face of potential price increases due to tariffs, there are plenty of affordable options available in the market, similar to buying used cars instead of brand new ones.

By exploring refurbished options and older models, you can find cost-effective solutions to high-tech purchases.

Rather than worrying about the hypothetical $2,000 iPhone, focus on more pressing financial matters like building an Emergency Savings Fund.

In challenging economic times, it’s essential to prioritize your financial stability over luxury purchases like the latest smartphones. Focus on what truly matters to secure your financial well-being.

Source: www.nytimes.com

The Potential Risks of Cryonics: How They Could Impact Your Chance at Immortality

In these turbulent times, there is a growing interest in cryonics as a way to freeze and preserve human remains for potential revival in the future when medical technology is more advanced.

The concept is intriguing – it’s like a savepoint in a video game where you can “undo” your life experiences and start anew when revived.

Despite the increasing enthusiasm for cryonics, there are significant challenges that need to be addressed before it can be considered a viable option.

Freezing Limitations

Freezing living organisms at ultra-low temperatures often results in irreparable damage, leading to death. The human body, being primarily composed of water, cannot withstand the formation of ice crystals that can cause extensive harm to cells and tissues.

While anti-freeze agents can help mitigate this damage at a cellular level, the complexity of the human body poses a greater challenge when trying to freeze it effectively.

Freezing the human body for cryonics often causes irreversible cell damage, especially in the brain, making revival virtually impossible with current technology. – Photo credit: Getty


Freezing and thawing the human brain poses a particularly daunting task due to the complexity and vulnerability of brain cells. Neurons, being highly energy-dependent and structurally intricate, are difficult to preserve and repair through cryogenic processes.

Challenges with Brain Preservation

Many proponents of cryonics opt to freeze only the head or brain under the assumption that advancements in medicine can facilitate the replacement of the rest of the body. However, reanimating a frozen brain presents significant hurdles.

Neurons, the building blocks of brain function, are fragile and sensitive to damage. The intricate connections between neurons, which form the basis of memories and identity, are easily disrupted during the freezing process, making reconstruction a monumental task.

Even if future technologies can restore neuronal connections, the complexity of mapping these connections accurately without prior brain scans poses a significant challenge.

Ultimately, while cryonics offers hope for the future, it also requires a substantial amount of optimism given the current limitations and uncertainties surrounding the process.

Source: www.sciencefocus.com

The Impact of the Season on Your Metabolism: How Your Thinking Can Make a Difference

Ah, the circle of life! Your parents engage in intimate activities, and nine months later, you make your grand entrance into the world (apologies for that mental image).

However, did you know that the temperature during your parents’ romantic encounters could impact your metabolism for the long haul?

According to recent research from the University of Tokyo in Japan, this might be the case.

The study analyzed the season when 642 Japanese adults were conceived and discovered that individuals conceived during colder months tend to have lower body mass index (BMI), less visceral (abdominal) fat, and a faster metabolism compared to those conceived in warmer months.
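For reference, the BMI figure used in the study is the standard weight-for-height ratio. A minimal sketch (the numbers below are illustrative, not taken from the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Illustrative example: a 70 kg adult who is 1.75 m tall
print(round(bmi(70, 1.75), 1))  # 22.9
```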

This correlation is linked to brown fat, a type of fat that burns energy even at rest, helps keep the body warm, and assists in regulating blood sugar levels.

“People conceived during colder seasons tend to have more active brown fat as adults,” explained Takeshi Yoneshiro, an associate professor at Tohoku University Graduate School of Medicine, in an interview with BBC Science Focus.

Having more active brown fat means the body burns more energy while resting, potentially resulting in a faster metabolism compared to individuals with lower levels of brown fat.

Our bodies utilize white fat for calorie storage, but brown fat is essential for maintaining warmth. – Credit: nopparit via Getty

Professor Jaswinder Sethi, an expert in immuno-metabolism at the University of Southampton who was not involved in the research, stated to BBC Science Focus: “Brown fat’s primary role is to produce heat and maintain body temperature.

“Moreover, brown fat activity significantly contributes to energy expenditure, aiding in reducing the need for storage and potentially preventing the risk of obesity and metabolic disorders.”

Yoneshiro suggested that parental exposure to cold temperatures could lead to epigenetic modifications, influencing how our genes are expressed.

“In modern times, this metabolic system may help regulate energy balance and protect against metabolic diseases by acting as a heater and air conditioner,” Yoneshiro explained.

However, Sethi cautioned: “It’s crucial to note that, similar to many known genetic variations associated with obesity, these changes are not the sole contributors to future health issues, as individuals may have genetic predispositions affecting their metabolism.”

Additionally, Dr. Adam Collins, an Associate Professor of Nutrition at the University of Surrey not involved in the study, stated to BBC Science Focus that the significance of brown fat in metabolic regulation may be overemphasized.

“Having abundant brown fat might not necessarily equate to a higher metabolic rate,” Collins noted. “The benefit of brown fat lies in its ability to generate heat, particularly in cold conditions, rather than simply burning calories.”

Since this study is observational, it cannot definitively prove that the season of conception impacts a child’s metabolism in the long term.

Nonetheless, Yoneshiro expressed hope: “If other factors can reproduce this effect, targeted interventions may be developed to enhance metabolic resilience in future generations.”

About our experts:

Dr. Takeshi Yoneshiro is an associate professor of biomedical sciences specializing in molecular physiology and metabolism at Tohoku University’s Graduate School of Medicine. Prior to joining Tohoku University in 2023, he served as an associate professor at the Center for Advanced Science and Technology Research at the University of Tokyo.

Jaswinder Sethi is a professor of immuno-metabolism at the University of Southampton. She is also an Honorary NHS Foundation Trust Research Fellow and a member of the Life Sciences Institute. Her research focuses on immune metabolism, obesity, metabolic diseases, and tissue remodeling.

Dr. Adam Collins is an Associate Professor of Nutrition at the School of Biological Sciences, University of Surrey. With over 20 years of experience as a qualified nutritionist, he leads BSc and MSc nutrition programs at the university. His research includes studying exercise intensity and energy balance, intermittent fasting, dietary composition and timing, and carbohydrate manipulation for metabolic health.

Source: www.sciencefocus.com

The Impact of Elon Musk’s Failing Satellites on the Ozone Layer

At present, there are around 13,000 satellites orbiting Earth, with roughly 10,000 of them functioning. However, the number of satellites in orbit is set to increase drastically by 2030, with 50,000 new satellites expected to be launched.

This significant increase is primarily due to the rise of Internet megaconstellations like SpaceX’s Starlink and other satellite projects. Currently, there are approximately 8,000 satellites in low Earth orbit, with nearly 6,500 of them being Starlink satellites.

SpaceX plans to deploy 12,000 satellites and is seeking approval for an additional 30,000, while other companies, like Amazon, are also planning their own megaconstellations.

The influx of satellites in low Earth orbit raises concerns about potential collisions and environmental impacts. Scientists warn that megaconstellations could harm the ozone layer, which protects the planet from harmful UV rays.

When satellites are decommissioned, they re-enter the Earth’s atmosphere and release aluminum oxide particles, which can damage the ozone layer by catalyzing chemical reactions. These particles can linger in the atmosphere for decades, further depleting the ozone.

Research published in Geophysical Research Letters in 2024 revealed that a single satellite can release a significant amount of aluminum oxide particles, which can accumulate over time and contribute to ozone depletion.

The continuous deployment of megaconstellations could inject large amounts of aluminum oxide into the upper atmosphere every year, significantly increasing the risk of ozone layer damage.

The short lifespan of internet satellites in low Earth orbit poses an additional challenge: once decommissioned, they fall back into the atmosphere and burn up. SpaceX’s Starlink satellites, for example, are designed to drop out of orbit within about five years of the end of their service.

The constant re-entry of decommissioned satellites could release a stream of burnt-out material into the atmosphere, exacerbating the environmental impact. Scientists predict a significant increase in satellite re-entries in the coming years, which could further impact the ozone layer.

It may take several decades before the full extent of satellite re-entry impacts the ozone layer, but the rapid growth of megaconstellations poses a significant risk to ozone layer recovery efforts.

Future research collaborations are being formed to study the direct link between decommissioned satellites and ozone depletion, aiming to quantify the environmental risks associated with satellite combustion.


This article addresses the query posed by Claudine Best from Dorset: “Do satellites burning in the atmosphere pose a threat to the environment?”

To submit your questions, please email Question@sciencefocus.com or message us on Facebook, Twitter, or Instagram (don’t forget to include your name and location).


Source: www.sciencefocus.com

The impact of smoking and vaping: it all depends on your perspective

Smokers are becoming more doubtful about the benefits of e-cigarettes, which heat nicotine-containing liquid to produce an inhalable vapor, compared with inhaling smoke from burning cigarettes.

Research in the UK last year showed that over a third of smokers now believe vaping is more harmful to health than smoking, compared to 12% four years ago, while another third think vaping is just as bad.

This shift in perception matters: despite clear scientific evidence of smoking’s harms, and Cochrane reviews suggesting that vaping helps more people quit than other nicotine products, smokers are more likely to switch only if they perceive vaping as less harmful.

While both vaping and smoking have known health effects, experts agree that vaping is less harmful than smoking, exposing individuals to fewer toxins at lower levels. This understanding is supported by research conducted by various experts in the field.

Vaping generally involves inhaling aerosols that may contain nicotine, flavorings, and other chemicals. – Photo credit: Getty

Dr. Jamie Hartmann-Boyce, a health policy expert, emphasizes that while e-cigarettes are not completely safe, they are significantly less deadly than smoking.

Although media discussions often focus on the harms of vaping, these should be weighed against the well-documented risks of smoking, which typically manifest later in life.

Health risks

Smoking remains a major risk factor for various health issues, including cancer, heart disease, infertility, and pregnancy complications, resulting in over 8 million deaths annually. Vaping, by contrast, exposes users to fewer toxic substances, at lower levels, than burning tobacco.

While more research is needed on the long-term effects of vaping, current evidence suggests that it is less harmful than smoking. Dr. Sarah Jackson highlights the importance of acknowledging potential long-term risks while focusing on the existing evidence supporting the relative safety of vaping.

Research suggests that switching to vaping is a more effective way to give up smoking than other nicotine replacement products – Photo Credit: Getty

Ongoing research by experts like Dr. Maxime Boidin is aimed at understanding the long-term impact of vaping on health, particularly its effects on blood vessels and the cardiovascular system.

As research progresses, it is crucial to rely on peer-reviewed studies to accurately assess the outcomes and implications of vaping. Media reports on ongoing research can sometimes lead to misconceptions and premature conclusions.

Non-smokers turning to vaping

Evidence suggests that vaping can be an effective method for smoking cessation, with e-cigarettes proving to be more useful than traditional nicotine replacements. However, concerns arise when considering individuals who have never smoked and are now turning to vaping.

It is essential to weigh the risks and benefits of vaping, especially for non-smokers, considering factors like exposure to chemicals and potential nicotine addiction. Choosing between vaping and smoking should be approached with caution, prioritizing health and well-being.


About our experts

Dr. Jamie Hartmann-Boyce: An assistant professor of health policy and management at the University of Massachusetts, whose work is published in reputable journals.

Dr. Sarah Jackson: A leading researcher in the UCL Alcohol and Tobacco Research Group, with work published in esteemed scientific journals.

Dr. Maxime Boidin: A senior lecturer in cardiac rehabilitation at Manchester Metropolitan University, focusing on cardiovascular health research.

Source: www.sciencefocus.com

The impact of tariffs on digital commerce businesses

This year was supposed to be a banner moment for digital commerce companies.

Digital payment giant Klarna was preparing for its initial public offering, as was the financial services company Chime. StubHub, the online ticketing business, had been talking to bankers for months about its own pursuit of an IPO.

But after President Trump announced a barrage of tariffs this week, businesses across the industry rushed to deal with the fallout.

Among other moves, Klarna, Chime and StubHub are all planning to suspend their IPO plans until market volatility subsides, people with knowledge of the matter said. Companies that provide payment-processing services to online merchants, such as Shopify, are calling for changes to Trump’s tariff policy and advising customers on how to weather potential financial difficulties. Stripe, a payments startup, and Block, the payments and remittance company previously known as Square, are making similar moves.

It may seem counterintuitive that tariffs bring pain to digital commerce companies. However, these businesses are set up to be affected in a roundabout way.

Retailers like Amazon, which act as clearing houses for online merchants, feel the impact when fewer people buy foreign goods on their platforms. Companies like Klarna depend on the fees they charge small businesses for processing digital payments.

“If this game of chicken continues through 2025 and beyond, it will be extremely painful for the retail industry as a whole,” said Sucharita Kodali, an analyst at Forrester who covers retail and e-commerce. “That would be bad for everyone.”

On Wednesday, Trump said tariffs would reverse decades of what he called unfair treatment in other parts of the world, bringing factories and jobs back to the United States. “The market will be booming,” he said, “the country will be booming.”

However, the tariffs are far broader and more severe than expected, and many tech companies quickly began to feel the pain. Apple, Oracle and Dell, whose global supply chains are likely to be disrupted by the tariffs, were the most obvious candidates facing fallout.

Digital-first companies dealing in online sales stand to lose just as much. Meta and Google, for example, face the threat that companies, particularly Chinese ones, will pull back on buying e-commerce ads on their platforms.

Amazon, the largest e-commerce company, has seen its stock slide more than 9% since the tariffs were announced; it relies on millions of third-party sellers who ship goods from China, one of the countries hit hardest by Trump’s tariffs.

TD Cowen analyst John Blackledge has lowered his estimates for Amazon’s revenue and operating profit by 3 to 4 percent, as Trump’s “worse than expected” tariffs hurt the company’s marketplace.

Some digital commerce companies could survive the chaos. StubHub, which sells tickets to live events, bounced back from downturns during the Covid pandemic and the 2008 financial crisis. And Chime, which provides digital services such as mobile banking apps and checking accounts, has customers who tend to use its products to buy items like gasoline and groceries, purchases that are usually not sensitive to economic fluctuations.

But Shopify, Klarna and Stripe are all vulnerable to Trump’s tariffs. Payment-processing platforms like Stripe tend to track the global economy and the strength of online shopping. If large companies raise prices because of tariffs, consumers may buy fewer products online. And because these companies earn a large portion of their revenue from commissions on sellers’ sales, lower sales volumes hit their entire business.
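The mechanism is simple arithmetic: a payment processor’s revenue scales directly with the sales volume it processes. A hedged sketch (the take rate and volumes below are illustrative, not figures from the article):

```python
def processing_revenue(sales_volume: float, take_rate: float) -> float:
    """Commission revenue a payment processor earns on processed sales."""
    return sales_volume * take_rate

# Illustrative only: a 2.9% take rate on $1m of merchant sales,
# then the same rate after tariffs depress volumes by 20%.
before = processing_revenue(1_000_000, 0.029)  # about $29,000
after = processing_revenue(800_000, 0.029)     # about $23,200
print(f"revenue lost: ${before - after:,.0f}")
```

A 20% drop in merchant sales translates one-for-one into a 20% drop in commission revenue, which is why processors are exposed even though they pay no tariffs themselves.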

Klarna, StubHub, Chime and Stripe declined to comment. The Wall Street Journal and Axios earlier reported on Klarna’s, StubHub’s and Chime’s IPO plans.

A Shopify spokesperson pointed to a recent blog post advising sellers on how to navigate a choppy environment if tariffs hinder their business.

“Without small business protection, legitimate entrepreneurs suffer under policies aimed at curbing exploitation,” the company said in a blog post. “These rising costs will disrupt supply chains and hinder trade across borders.”

The company said it supported Trump’s effort to address loopholes in the tariff system, including the “de minimis exemption,” which exempts shipments to the United States worth less than $800 from customs duties.
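As described, the exemption is a simple value threshold. A minimal sketch of that rule, which deliberately ignores the many real-world carve-outs and country-specific conditions:

```python
DE_MINIMIS_THRESHOLD_USD = 800

def qualifies_for_de_minimis(shipment_value_usd: float) -> bool:
    """True if a shipment falls under the de minimis exemption as described
    in the article: declared value below $800 enters duty-free."""
    return shipment_value_usd < DE_MINIMIS_THRESHOLD_USD

print(qualifies_for_de_minimis(799.99))  # True
print(qualifies_for_de_minimis(800.00))  # False
```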

But it warned against overreach. “Dealing with this abuse is justified, but small businesses cannot become collateral damage,” Shopify said.

Michael J. de la Merced contributed reporting.

Source: www.nytimes.com

The Impact of Different Coffee Types on Cholesterol Levels

When you arrive at work, what is the first thing you do? Do you unpack your bag, set up your desk, and then head straight for the coffee machine? You’re not alone.

According to the National Coffee Association, the average American drinks more than three cups of coffee a day. In moderation, coffee is often considered part of a healthy lifestyle for good reason. It is linked to a reduced risk of conditions like diabetes and certain types of cancer.

However, your morning brew may not be as healthy as you think. Coffee contains natural compounds that can raise cholesterol levels, and depending on how it is prepared, your daily cup may contain more of these compounds than ideal.

A team of Swedish researchers investigated coffee machines in workplaces and found that many people brewed coffee with high levels of these cholesterol-raising substances.

“For decades, we’ve known that certain types of coffee can elevate cholesterol levels,” Dr. David Igman, co-author of the new research published in the journal Nutrition, Metabolism and Cardiovascular Diseases, told BBC Science Focus.

In particular, unfiltered or boiled coffee is known to contain two cholesterol-raising compounds (cafestol and kahweol) that belong to a group of naturally occurring fats called diterpenes.

Liquid-model coffee machines produce coffee with lower levels of diterpenes than other brewers. – Getty

These compounds are associated with an increased risk of high cholesterol and cardiovascular disease, as well as a slight reduction in “good” cholesterol (HDL).

In contrast, filtered coffee typically contains much lower levels of these compounds and is considered a safer choice in terms of cholesterol levels.

Dr. Igman explained, “At work, many people get their coffee from machines, and yet no one has actually tested these machines to see if they produce filtered or unfiltered coffee.”

To investigate, the team tested 14 coffee machines in different workplaces, collecting samples brewed on different days and measuring the levels of cafestol and kahweol in the final cup.

In their analysis, they also examined other common types of coffee, such as Scandinavian-style drip coffee, percolators, French presses, espresso, and boiled coffee.

The results showed significant variation between the machines; only some produced coffee with very low diterpene levels similar to paper-filtered coffee.

Paper-filtered coffee contains minimal cholesterol-raising cafestol. – Erik et al. Nutrition, Metabolism, Cardiovascular Disease

Dr. Igman concluded, “From our data, liquid model machines are definitely a better option, producing coffee with very low diterpene levels similar to paper coffee.”

Liquid model machines do not brew coffee in the traditional way; they combine liquid coffee concentrate with hot water to create a cup.

In contrast, traditional brewers use ground or whole beans, passing hot water through a metal filter, resulting in higher levels of cholesterol-raising compounds.

In summary, Dr. Igman advised, “Don’t worry about drinking coffee, as it is associated with various health benefits. However, if you regularly consume machine-made coffee at work, pay attention to how it is brewed, especially if you are monitoring your cholesterol levels.”

“While we don’t fully understand how these machines affect blood lipids, it’s likely dependent on the amount of coffee consumed. Using a paper filter or instant coffee is the safest option for cholesterol levels,” he added.

About our experts

David Igman is a research associate at the Centre for Clinical Research Dalarna at Uppsala University, Sweden. His work has been published in journals including the American Journal of Clinical Nutrition, Diabetes, and Internal Medicine.

Source: www.sciencefocus.com

The potential impact of Trump’s tariffs on the US battery boom

President Trump’s recent tariffs may impact the use of grid batteries in the US energy sector. These batteries are crucial for storing excess wind and solar energy to enhance the electric grid’s reliability. Grid batteries have seen significant growth in states like Texas and Arizona over the past five years, being used to store solar power and reduce reliance on natural gas.

Despite their importance, the majority of US lithium-ion batteries are imported, with a large portion coming from China. With the new tariffs imposed by Trump, grid batteries will face significant taxes when imported from China, potentially hindering their deployment and impacting grid reliability.

Jason Burwen, vice president of policy and strategy at battery developer Gridstor, expressed concerns about the implications of these tariffs on the energy storage deployment, labeling it as detrimental to both business and grid reliability.

Grid battery capacity in the US was projected to reach a record 18,200 megawatts this year, according to the US Energy Information Administration. This growth in battery capacity, along with wind and solar power, was expected to contribute significantly to the grid expansion.

Grid batteries have been instrumental in addressing the intermittency of renewable energy sources like wind and solar power. States like California and Texas have seen an increase in battery installations to mitigate the risk of blackouts during peak demand periods.

Besides supporting renewable energy integration, grid batteries also help stabilize power flow, manage disruptions, and alleviate congestion on transmission lines. The decreasing cost of lithium-ion technology has fueled the installation of grid batteries, paralleling the EV battery trend.

Antoine Vagneur-Jones, head of trade and supply chains at BloombergNEF, highlighted the US clean energy sector’s reliance on Chinese battery imports. He warned that the tariffs could hit batteries harder than other technologies.

The US has taken steps to develop a domestic battery supply chain, but the future remains uncertain due to potential policy changes. While investments were made in new battery plants under the Biden administration, clean energy policies are facing challenges from President Trump and Congressional Republicans.

Vagneur-Jones noted the complexity of assessing the impact of tariffs on the energy mix, particularly in the competition between batteries and natural gas plants to support renewable energy fluctuations.

Utility companies may find it challenging to increase their reliance on gas due to global supply chain constraints and tariffs affecting the oil and gas industry. While tariffs may benefit fossil fuels, they could hinder clean energy progress, ultimately impacting energy solutions for all.

Source: www.nytimes.com