Urgent Climate Consequences Arriving Ahead of Schedule Could Drain Trillions from the Global Economy

Wildfires in California – January 2025

David McNew/Getty Images

The impact of climate change is accelerating faster than anticipated, with governments and businesses continuing to underestimate associated risks. These risks could lead to economic losses reaching trillions of dollars by 2050.

According to reports from climate scientists and financial experts, the world might be significantly underestimating the speed of global warming, facing the prospect of “planetary bankruptcy.” This means climate change could cause extensive damage to both the environment and economic growth.

Decision-makers often concentrate on intermediate climate impact estimates. However, with phenomena such as extreme precipitation occurring sooner than projected, preparations for worst-case scenarios are necessary, as indicated in the report.

“Urgent global cooperation on a solvency plan is essential,” says David King, former chief climate adviser to the UK government, who contributed to the report. “We’re experiencing an acceleration in temperature rise. While the future is uncertain, it’s reasonable to assume that this trend won’t reverse.”

The initial step towards such a plan could involve reevaluating the assumption that the global economy will continue to grow indefinitely. Sandy Trust, a British investment manager at Baillie Gifford, remarked that according to the Network for Greening the Financial System, the world could incur trillions in annual losses by 2050 due to climate impacts. However, the network believes that a recession is unlikely, as global economic growth might outpace these losses.

“This is akin to Titanic risk modeling, predicting a smooth journey from the deck of the Titanic in April 1912,” Trust adds. “Such assumptions overlook fundamental principles of risk management—most notably, the importance of planning for worst-case scenarios.”

The case for preparing for the worst is underscored by a report from the European Union’s Copernicus Climate Change Service. It found that 2025 was the third warmest year on record, with average temperatures 1.47 degrees Celsius above pre-industrial levels. Temperatures in 2024 were even higher, and the three-year average now exceeds 1.5 degrees Celsius for the first time.

That brings the world a step closer to breaching the Paris Agreement goal of limiting temperature rises to 1.5 degrees Celsius, which refers to a 20-to-30-year average rather than a single year. Ten years after the agreement was signed, projections had indicated that the 1.5 degrees Celsius threshold would be reached around 2045. However, if current trends persist, Copernicus data suggests we could breach this critical limit by 2030.

Scientists say the rate of global warming is speeding up, largely because of declining air pollution, including sulphur emissions from coal-fired power plants and shipping. With clearer skies, more sunlight reaches the Earth’s surface, an effect estimated to account for about 0.5 degrees Celsius of warming.

However, the primary factor behind breaching the 1.5 degrees Celsius threshold sooner than predicted is the relentless rise in greenhouse gas emissions. Samantha Burgess from Copernicus emphasizes that fossil fuel emissions are expected to hit record levels in 2025.

“Emissions are not decreasing as quickly as anticipated,” Burgess comments.

With each increment of warming, extreme weather events become more frequent and severe. The January 2025 wildfires in Los Angeles may prove to be the costliest natural disaster in US history, and they were exacerbated by the climate crisis, which is estimated to make such fires twice as likely and to amplify their severity 25-fold. Hurricane Melissa, the most powerful storm to make landfall in the Atlantic, had wind speeds at least 10 miles per hour higher than would be expected without climate change.

“This figure represents a global average; thus, 1.5 degrees Celsius of global warming means that heatwaves can be 3 to 4 degrees, or even 10 degrees hotter than usual,” Burgess explains. “The younger generation will face even more extreme heat and climate risks than we did.”

The polar regions are warming at a pace faster than others, mainly due to feedback mechanisms, such as the loss of reflective snow and ice. In fact, last year witnessed record warmth in Antarctica, attributed to an unusual stratospheric heating event. The extent of sea ice across the Arctic and Antarctic has now reached unprecedented lows.

On a positive note, global emissions are showing a leveling-off trend, specifically in China, where emissions have stabilized.

“With CO2 emissions plateauing, we anticipate continued warming, but not at an accelerated rate,” states Timothy Osborne of the University of East Anglia, UK.

Addressing methane leaks from infrastructures like gas pipelines and aging coal mines could provide a short-term solution, King suggests. Reducing methane emissions by 30% over the next decade could mitigate global warming by at least 0.2 degrees Celsius by 2050.

“We must also tackle other slow-moving issues, which are vital elements of our path forward,” King asserts. “An overshoot beyond 1.5 degrees Celsius presents significant challenges for humanity.”


Source: www.newscientist.com

AI Companies Will Face Legal Consequences from Copyright Holders Starting in 2025

Disney says the AI image generator Midjourney was developed using films like ‘The Lion King’

Maximum Film/Alamy

Since the launch of ChatGPT, OpenAI’s generative AI chatbot, three years ago, we have witnessed dramatic shifts across many aspects of our lives. One thing that has not changed, however, is copyright law: we are still trying to apply standards written for a pre-AI world.

It’s widely recognized that leading AI firms have developed models by harvesting data from the internet, including copyrighted content, often without securing prior approval. This year, prominent copyright holders have retaliated, filing various lawsuits against AI companies for alleged copyright violations.

The most notable lawsuit was initiated in June by Disney and Universal, claiming that the AI image generation platform Midjourney was trained using their copyrighted materials and enabled users to produce images that “clearly included and replicated Disney and Universal’s iconic characters.”

The proceedings are still under way. In its response in August, Midjourney asserted that “the limited monopoly granted by copyright must yield to fair use,” arguing that training AI models on copyrighted works is transformative and should therefore be permitted.

Midjourney’s arguments highlight that the copyright debate is more complex than it might seem at first glance. “Many believed copyright would serve as the ultimate barrier against AI, but that’s not entirely true,” remarks Andres Guadamuz at the University of Sussex, UK, expressing surprise at how little impact copyright has had on the progress of AI enterprises.

This is occurring even as some governments engage in discussions on the matter. In October, the Japanese government made an official appeal to OpenAI, urging the company behind the Sora 2 AI video generator to honor the intellectual property rights of its culture, including its manga and beloved video games like those from Nintendo.

Sora 2 is embroiled in further controversy due to its capability to generate realistic footage of real individuals. OpenAI recently tightened restrictions on representations of Martin Luther King Jr. after family representatives raised concerns about a depiction of his iconic “I Have a Dream” speech that included inappropriate sounds.

“While free speech is crucial when portraying historical figures, OpenAI believes that public figures and their families should ultimately control how their likenesses are represented,” the company stated. This restriction was only partially effective, as celebrities and public figures must still opt out to prevent their images from being used in Sora 2. Some argue this remains too permissive. “No one should have to tell OpenAI if they wish to avoid being deepfaked,” says Ed Newton-Rex, a former AI executive and founder of the campaign group Fairly Trained.

In certain instances, AI companies face legal challenges over their practices, as highlighted by one of the largest proposed lawsuits from the past year. In September, three authors accused Anthropic, the firm behind the Claude chatbot, of deliberately downloading over 7 million pirated books for training its AI models.

A judge reviewed the case and concluded that even if the firm had utilized this material for training, it could be considered a sufficiently “transformational” use that wouldn’t inherently infringe copyright. However, the piracy allegations were serious enough to warrant trial proceedings. Anthropic ultimately decided to settle the lawsuit for at least $1.5 billion.

“Significantly, AI companies appear to be strategizing their responses and may end up paying out a mix of settlements and licensing deals,” Guadamuz notes. “Only a small number of companies are likely to collapse due to copyright infringement lawsuits,” he adds. “AI is here to stay, even if many established players may fail due to litigation and market fluctuations.”


Source: www.newscientist.com

I Felt It Was My Destiny: Social Media Rumors Sparked Pregnancy Speculation, Leading to Unforeseen Consequences

I cannot recall the exact moment my TikTok feed presented me with a video of a woman cradling her stillborn baby, but I do remember the wave of emotion that hit me. Initially, it resembled the joyous clips of mothers holding their newborns, all wrapped up and snug in blankets, with mothers weeping—just like many in those postnatal clips. However, the true nature of the video became clear when I glanced at the caption: her baby was born at just 23 weeks. I was at 22 weeks pregnant. A mere coincidence.

My social media algorithms seemed to know about my pregnancy even before my family, friends, or doctor did. Within a day, my feed transformed. On both Instagram and TikTok, videos emerged featuring women documenting their journeys as if they were conducting pregnancy tests. I began to “like,” “save,” and “share” these posts, feeding the algorithm and indicating my interest, and it responded with more content. But it didn’t take long for the initial joy to be overtaken by dread.

The algorithm quickly adapted to my deepest fears related to pregnancy, introducing clips about miscarriage stories. In them, women shared their heartbreaking experiences after being told their babies had no heartbeat. Soon, posts detailing complications and horror stories started flooding my feed.

One night, after watching a woman document her painful birthing experience with a stillbirth, I uninstalled the app amidst tears. But I reinstalled it shortly after; work commitments and social habits dictated I should. I attempted to block unwanted content, but my efforts were mostly futile.

On TikTok alone, over 300,000 videos are tagged with “miscarriage,” and another 260,000 are tagged with related terms. One video titled “Live footage of me finding out I had a miscarriage” has garnered almost 500,000 views, and videos of women giving birth to stillborn babies have amassed close to 5 million between them.

Had I encountered such content before pregnancy, I might have viewed the widespread sharing of these experiences as essential. I don’t believe individuals sharing these deeply personal moments are in the wrong; for some, these narratives could offer solace. Yet, amid the endless stream of anxiety-inducing content, I couldn’t shake the discomfort of the algorithm prioritizing such overwhelming themes.


“I ‘like,’ ‘save,’ and ‘share’ the content, feeding it into the system and prompting it to keep returning more”…Wheeler while pregnant. Photo by Kathryn Wheeler

When I discussed this experience with others who were pregnant at the same time, I found nods of recognition and similar stories. They too described their own personalized concoctions of fear, as their algorithms zeroed in on their particular anxieties. Being bombarded with such harrowing content radically expanded the range of what felt like normal concern. This is what pregnancy and motherhood are like in 2025.

“Some posts are supportive, but others are extreme and troubling. I don’t want to relive that,” remarks 8-month-pregnant Cerel Mukoko. Mukoko primarily engages with this content on Facebook and Instagram but deleted TikTok after becoming overwhelmed. “My eldest son is 4 years old, and during my pregnancy, I stumbled upon upsetting posts. They hit closer to home, and it seems to be spiraling out of control.” She adds that the disturbing graphics in this content are growing increasingly hard to cope with.

As a 35-year-old woman of color, Mukoko noticed specific portrayals of pregnant Black women in this content. A 2024 analysis of NHS data indicated that Black women faced up to six times the rate of severe complications compared to their white counterparts during childbirth. “This wasn’t my direct experience, but it certainly raises questions about my treatment and makes me feel more vigilant during appointments,” she states.

“They truly instill fear in us,” she observes. “You start to wonder: ‘Could this happen to me? Am I part of that unfortunate statistic?’ Given the complications I’ve experienced during this pregnancy, those intrusive thoughts can be quite consuming.”

For Dr. Alice Ashcroft, a 29-year-old researcher and consultant analyzing the impacts of identity, gender, language, and technology, this phenomenon began when she was expecting. “Seeing my pregnancy announcement was difficult.”

This onslaught didn’t cease once she was pregnant. “By the end of my pregnancy, around 36 weeks, I was facing stressful scans. I began noticing links shared by my midwife. I was fully aware that the cookies I’d created (my digital footprint) influenced this feed, which swayed towards apocalyptic themes and severe issues.” Now with a 6-month-old, she says the experience continues to haunt her.

The ability of these algorithms to hone in on our most intimate fears is both unsettling and cruel. “For years, I’ve been convinced that social media reads my mind,” says 36-year-old Jade Asha, who welcomed her second child in January. “For me, it was primarily about body image. I’d see posts of women who were still gym-ready during their 9th month, which made me feel inadequate.”

Navigating motherhood has brought its own set of anxieties for Asha. “My feed is filled with posts stating that breastfeeding is the only valid option, and the comment sections are overloaded with opinions presented as facts.”

Dr. Christina Inge, a Harvard researcher specializing in tech ethics, isn’t surprised by these experiences. “Social media platforms are designed for engagement, and fear is a powerful motivator,” she observes. “Once the algorithm identifies someone who is pregnant or might be, it begins testing content similar to how it handles any user data.”


“For months after my pregnancy ended, my feed morphed into a new set of fears I could potentially face.” Photo: Christian Sinibaldi/Guardian

“This content is not a glitch; it’s about engagement, and engagement equals revenue,” Inge continues. “Fear-based content keeps users hooked, creating a sense of urgency to continue watching, even when it’s distressing. Despite the growing psychological toll, these platforms profit.”

The negative impact of social media on pregnant women has been a subject of extensive research. A systematic review examining social media use during pregnancy highlights both benefits and challenges. While it offers peer guidance and support, it also concludes that “issues such as misinformation, anxiety, and excessive use persist.” Dr. Nida Aftab, an obstetrician and the review’s author, emphasizes the critical role healthcare professionals should play in guiding women towards healthier digital habits.

Pregnant women may not only be uniquely vulnerable social media consumers; studies show they often spend significantly more time online. A research article published last year in a midwifery journal reported a marked increase in social media use during pregnancy, peaking around week 20. Moreover, 10.5% of participants reported symptoms of social media addiction, as defined by the Bergen Social Media Addiction Scale.

In the broader context, Inge proposes several improvements. A redesigned approach could push platforms to feature positive, evidence-based content in sensitive areas like pregnancy and health. Increased transparency regarding what users are viewing (with options to adjust their feeds) could help minimize harm while empowering policymakers to establish stronger safeguards around sensitive subjects.

“It’s imperative users understand that feeds are algorithmic constructs rather than accurate portrayals of reality,” Inge asserts. “Pregnancy and early parent-child interactions should enjoy protective digital spaces, but they are frequently monetized and treated as discrete data points.”

For Ashcroft, resolving this dilemma is complex. “A primary challenge is that technological advancements are outpacing legislative measures,” she notes. “We wander into murky waters regarding responsibility. Ultimately, it may fall to governments to accurately regulate social media information, but that could come off as heavy-handed. While some platforms incorporate fact-checking through AI, these measures aren’t foolproof and may carry inherent biases.” She suggests using the “I’m not interested in this” feature may be beneficial, even if imperfect. “My foremost advice is to reduce social media consumption,” she concludes.

My baby arrived at the start of the year, and I finally had a moment to breathe as she emerged healthy. However, that relief was brief. In the months following my transition into motherhood, my feed shifted yet again, introducing new fears. Each time I logged onto Instagram, the suggested reels displayed titles like: Another baby falls victim to danger, accompanied by the text “This is not safe.” Soon after, there was a clip featuring a toddler with a LEGO in their mouth and a caption reading, “This could happen to your child if you don’t know how to respond.”

Will this content ultimately make me a superior, well-informed parent? Some might argue yes. But at what cost? Recent online safety legislation emphasizes the necessity for social responsibility to protect vulnerable populations in their online journeys. Yet, as long as the ceaseless threat of misfortune, despair, and misinformation assails the screens of new and expecting mothers, social media firms will profit from perpetuating fear while we continue to falter.

Do you have any thoughts on the issues raised in this article? If you would like to submit a response of up to 300 words for consideration in our Letters section, please click here.

Source: www.theguardian.com

Trump’s Tax Bill Aims to Thwart AI Regulation, Experts Warn of Potential Global Consequences

US Republicans are pushing to pass major spending legislation that contains measures to block states from regulating artificial intelligence. Experts caution that the unchecked expansion of AI could further overheat an already perilously warming planet.

Analysis of research from Harvard University suggests that the industry’s massive energy consumption could result in around 1 billion tonnes of carbon dioxide being emitted in the US by AI over the next decade, according to the Guardian.

Over that ten-year span, during which Republicans aim to “suspend” state-level regulation of AI, a substantial amount of electricity will be consumed by data centers running AI applications, contributing greenhouse gas emissions in the US that surpass those of Japan. Each year, those emissions would be roughly three times higher than those of the UK.


The actual emissions will depend on the efficiency of power plants and how much clean energy is deployed in the coming years, but blocking regulation will also play a part, noted Gianluca Guidi, a visiting scholar at Harvard’s School of Public Health.

“Restricting oversight will hinder the shift away from fossil fuels and diminish incentives for more energy-efficient AI technologies,” Guidi stated.

“We often discuss what AI can do for us, but we rarely consider its impact on our planet. If we genuinely aim to leverage AI to enhance human welfare, we mustn’t overlook the detrimental effects on climate stability and public health.”

Donald Trump has declared that the United States will become the “world capital of artificial intelligence and crypto,” planning to eliminate safeguards surrounding AI development while dismantling regulations limiting greenhouse gas emissions.

The “Big Beautiful” spending bill approved by Republicans in the House of Representatives would prevent states from adopting their own AI regulations, with the GOP-controlled Senate also likely to pass a similar version.

However, the unrestricted growth of AI could significantly undermine efforts to combat the climate crisis by increasing demand on a US grid that still relies heavily on fossil fuels such as gas and coal. AI is particularly energy-intensive, with a single ChatGPT query consuming about ten times more power than a Google search.
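To make the scale of that comparison concrete, here is a rough, hypothetical back-of-the-envelope sketch. The per-query energy figures and the query volume below are illustrative assumptions, not numbers from the article; only the roughly ten-to-one ratio comes from the text.

```python
# Illustrative sketch of the "ten times a Google search" comparison.
# The figures below are ballpark assumptions for illustration only:
# roughly 0.3 Wh per web search and ten times that per chatbot query.

GOOGLE_SEARCH_WH = 0.3                      # assumed energy per web search, watt-hours
CHATGPT_QUERY_WH = 10 * GOOGLE_SEARCH_WH    # the article's ~10x multiplier
QUERIES_PER_DAY = 1_000_000_000             # hypothetical daily query volume

daily_kwh = CHATGPT_QUERY_WH * QUERIES_PER_DAY / 1000   # Wh -> kWh
annual_gwh = daily_kwh * 365 / 1_000_000                # kWh -> GWh

print(f"Assumed energy per chatbot query: {CHATGPT_QUERY_WH} Wh")
print(f"Daily consumption at {QUERIES_PER_DAY:,} queries: {daily_kwh:,.0f} kWh")
print(f"Annual consumption: {annual_gwh:,.0f} GWh")
```

Under these assumed inputs the total comes to roughly a terawatt-hour per year; the point is only that a small per-query difference compounds quickly at data-center scale.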

The carbon emissions from US data centers have increased threefold since 2018, with recent Harvard research indicating that the largest “hyperscale” centers constitute 2% of the nation’s electricity usage.

“AI is poised to transform our world,” states Manu Asthana, CEO of PJM Interconnection, the largest grid in the US. Predictions suggest that nearly all increases in future electricity demand will arise from data centers. Asthana asserts this will equate to adding a new home’s worth of electricity to the grid every five years.

Meanwhile, the rapid escalation of AI is intensifying the rollback of climate pledges by major tech companies. Last year, Google acknowledged that its greenhouse gas emissions had surged by 48% since 2019, driven by its AI push, and admitted that as AI becomes more deeply embedded, “reducing emissions may prove challenging.”

Supporters of AI, along with some researchers, contend that AI advances could aid the fight against climate change by making grid management and other systems more efficient. Others remain skeptical. “It’s merely an operation for greenwashing, and it’s clear as day,” critiques Alex Hanna, research director at the Distributed AI Research Institute. “Much of what we’ve heard is absolutely ridiculous. Big tech is mortgaging the present for a future that may never materialize.”

So far, no state has definitive regulations on AI, but state lawmakers may be aiming to establish such rules, especially in light of diminished federal environmental regulation, the kind of effort the proposed ban would block. “If you were anticipating federal regulations around data centers, that’s definitely off the table right now,” Hanna observed. “It’s rather surprising to watch it all unfold.”

But Republican lawmakers are undeterred. The proposed moratorium on state and local AI regulations recently cleared a significant procedural hurdle in the Senate over the weekend, after it was determined that the ban could remain in Trump’s tax megabill. Texas Senator Ted Cruz, who chairs the Senate Committee on Commerce, Science and Transportation, reworked the language to comply with rules that prevent spending bills from addressing extraneous issues.

The revised clause refers to a temporary “suspension” of regulations rather than an outright moratorium. It also ties an extra $500 million in grant funding for expanding nationwide broadband internet access to the condition that states will not receive the money if they attempt to regulate AI.

The suggestion to suspend AI regulations has raised significant alarm among Democrats. Massachusetts Senator Ed Markey, known for his climate advocacy, has indicated his readiness to propose amendments that would strip the bill of its “dangerous” provisions.

“The rapid advancement of artificial intelligence is already impacting our environment – raising energy prices for consumers, straining the grid’s capacity to keep the lights on, depleting local water resources, releasing toxic pollutants into our communities, and amplifying climate emissions,” Markey told the Guardian.

“But Republicans want to prohibit AI regulations for ten years, rather than enabling the nation to safeguard its citizenry and our planet. This is shortsighted and irresponsible.”


Massachusetts congressman Jake Auchincloss also labeled the proposal “terrible and unpopular.”

“I believe we must recognize that it is profoundly reckless to allow AI to swiftly and seamlessly fill various sectors such as healthcare, media, entertainment, and education while simultaneously imposing a ban on AI regulations for a decade,” he commented.

Some Republicans also oppose these provisions, including Tennessee Senator Marsha Blackburn and Missouri Senator Josh Hawley. The amendment to eliminate the suspension from the bill requires the backing of at least four Republican senators.

Hawley is reportedly ready to propose amendments to remove this provision later in the week if they are not ruled out beforehand.

Earlier this month, Georgia Representative Marjorie Taylor Greene admitted that she overlooked the provisions in the House’s bill, stating she would not support the legislation if she had been aware. Greene’s group, the Far-Right House Freedom Caucus, stands against the suspension of AI regulations.

Source: www.theguardian.com

Elon Musk’s Feud with Trump Reveals the Consequences of Unregulated Money in Politics

Elon Musk has made loud and public the role that money plays in American politics, something that is usually a far quieter affair.

“Without me, Trump would have lost the election, the Democrats would control the House, and Republicans would be 51-49 in the Senate. Such ingratitude,” he stated on his X social media platform amid an ongoing feud with Donald Trump.

When the right-wing commentator Laura Loomer noted that Capitol Hill Republicans were debating which side to take in the intraparty conflict, Musk hinted at the extent of his influence. “Some food for thought as they ponder this question: Trump has 3.5 years left as president, but I will be around for 40+ years…” Musk wrote on X.

US billionaires frequently wield significant influence in politics, using their wealth to sway government actions. However, few have been as overt and impactful as Musk in the past year, demonstrating the transactions and dysfunction within US governance.

The Trump-Musk war offers a unique snapshot of American politics. As the world’s richest individual, Musk financially backed his preferred candidates and then played a notable role in a new governmental initiative targeting the dismantling of agencies he disfavored.

We find ourselves amidst a clash between a billionaire president and an even wealthier Republican donor, both vying over how to reduce aid to the impoverished. As one satirical website observed: “Yeah! These billionaires are arguing over how much money they can siphon from the poor.”

Fifteen years ago, the US Supreme Court determined that corporations and outside groups could spend unlimited amounts on elections, in a ruling authored by conservative Justice Anthony Kennedy.

Since then, it has become clear that such wealth injections are undermining democracy. Musk’s actions exemplify the already soaring levels of money’s influence in politics, with reports indicating he spent nearly $300 million to support Trump in 2024. We are now witnessing a government dominated by billionaires.

“Fifteen years post-decision, we observe the full consequences of living in a society where not just elections are for sale, but the entire government structure is for sale,” he told Bluwork earlier this year.

Musk is not alone in this arena. During election cycles, ultra-wealthy donors routinely fund candidates of their choice; this has become the standard landscape of current American politics across both parties. Bernie Sanders challenged the Democrats at last year’s convention, insisting that billionaires in both parties should not be able to buy elections, including primaries.

Earlier this year, Musk invested heavily in Wisconsin’s judicial elections but lost to a Democratic opponent. He also donated a smaller amount to Republicans seeking to oppose a judge who resisted the Trump administration. Despite an inconsistent success record, his financial threats remain significant for both parties.

However, as an unelected figure, Musk has had limited means to block Trump’s key spending bill. Trump’s “big, beautiful bill” did not deliver the budget cuts Musk wanted, and once the administration stopped catering to his wishes, he publicly expressed his discontent.

This reflects the volatile alliance between Trump and Musk, which began with mutual affection and a central role for billionaires during Trump’s administration. The fact that Musk has such sway over the budget process is troubling. Trump indicated that Musk was aware of the bill’s contents, suggesting that the administration sought his approval before any public fallout.

Musk has adopted a bold approach to political spending, which is rare among the ultra-wealthy, who generally let their financial contributions do the talking. A charitable expert previously noted to the Guardian that Musk’s distinctiveness lies in his “permanent discretion as a mode of political engagement.”

Now, Musk rallies his followers on X to sway Congress and halt the bill. This could prove effective as Republican lawmakers grapple with the ideological pressures of a president and a mega-donor known for his vindictive tendencies.

Within right-wing media, these conflicts have created divisions. At Breitbart, one commentator remarked that Trump “pokes a finger in the eyes of his biggest donor and it never ends well.” Another piece in American Spectator claimed Musk hadn’t picked Trump. However, the Washington Examiner praised Musk’s opposition to the bill, suggesting that Trump’s budget plan “deserves to fail.”

“I don’t care if Elon disagrees with me, but he should have voiced that a few months ago,” Trump said as he wrapped up a series of critiques targeting Musk. The president also remarked that Musk had “lost his nerve” during a recent television interview.

So far, Republican figures have rallied behind Trump, with JD Vance proclaiming, “President Trump has done more than anyone else in my lifetime to gain the movement’s trust.”

If Musk ultimately falters, he could take his wealth and seek influence elsewhere. He has floated the idea of forming a third political party, a notion that has failed in the past, but his financial clout and forceful personality might invigorate this endeavor. The Democrats already rely heavily on wealthy benefactors and would welcome a potential shift from Musk. Democratic Representative Ro Khanna proposed that the party should reach out to him.

Khanna, who represents Silicon Valley and encourages the left to embrace economic populism, faced significant backlash from his party for his comments but stood by them.

“If Biden criticized a major supporter, Trump would have embraced him the next day,” he posted on X.

Source: www.theguardian.com

AI Companies Caution: Assess the Risks of Superintelligence or Face the Consequences of Losing Human Control

Before deploying all-powerful AI systems, companies are being urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test.

Max Tegmark, a prominent advocate for AI safety, carried out calculations akin to those performed by the American physicist Arthur Compton before the Trinity test and found a 90% probability that a highly advanced AI would pose an existential threat.

The US government went ahead with Trinity in 1945, after providing assurances that there was minimal risk of the atomic bomb igniting the atmosphere and endangering humanity.

In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they propose calculating a “Compton constant”, defined as the probability that an all-powerful AI escapes human control. Compton said in a 1959 interview with the American author Pearl Buck that he had approved the Trinity test after calculating the odds of a runaway reaction to be “slightly less” than one in three million.

Tegmark asserted that AI companies must diligently assess whether artificial superintelligence (ASI)—the theoretical system that surpasses human intelligence in all dimensions—can remain under human governance.

“Firms developing superintelligence ought to compute the Compton constant, which indicates the chances of losing control,” he stated. “Merely expressing a sense of confidence is not sufficient. They need to quantify the probability.”

Tegmark believes that achieving a consensus on the Compton constant, calculated by multiple firms, could create a “political will” to establish a global regulatory framework for AI safety.
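Put slightly more formally, the proposal amounts to a probability-threshold test. The notation below is a minimal sketch of that idea as described here, not the paper’s own formulation.

```latex
% A minimal sketch of the "Compton constant" idea as described above;
% the notation is illustrative and not taken from Tegmark's paper.
\[
  p_{\mathrm{Compton}} \;=\; \Pr\bigl(\text{ASI escapes human control} \mid \text{deployment}\bigr)
\]
\[
  \text{deploy only if}\quad p_{\mathrm{Compton}} \;<\; p_{\mathrm{tolerance}},
  \qquad \text{where Compton's own bound for Trinity was } p_{\mathrm{tolerance}} \approx \tfrac{1}{3\,000\,000}.
\]
```

Estimates of this quantity from several firms could then be compared and combined, which is the consensus-building step Tegmark argues could generate the political will for a global regulatory regime.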

A professor of physics at MIT and an AI researcher, Tegmark is also a co-founder of the Future of Life Institute, a nonprofit advocating for the safe development of AI. The organization released an open letter in 2023 calling for a pause in the development of powerful AI systems, garnering over 33,000 signatures, including notable figures such as Elon Musk and Apple co-founder Steve Wozniak.

This letter emerged several months post the release of ChatGPT, marking the dawn of a new era in AI development. It cautioned that AI laboratories are ensnared in “uncontrolled races” to deploy “ever more powerful digital minds.”

Tegmark discussed these issues with the Guardian alongside a group of AI experts, including tech industry leaders, representatives from state-supported safety organizations, and academics.

The Singapore consensus, outlined in the Global AI Safety Research Priorities report, was crafted by the distinguished computer scientist Yoshua Bengio and Tegmark, with contributions from leading AI firms such as OpenAI and Google DeepMind. It establishes three broad research priority areas for AI safety: developing methods to measure the impact of existing and future AI systems; specifying how AI systems should behave and designing them to meet those objectives; and managing and controlling system behavior.

Referring to the report, Tegmark noted that discussions around safe AI development have regained momentum following remarks by US Vice President JD Vance asserting that the future of AI will not be won through hand-wringing about safety.


Source: www.theguardian.com

Potential long-term consequences of measles: immune system memory loss and encephalitis

Measles is not just a rash and fever.

A measles outbreak in West Texas has sent 29 people, most of them children, to the hospital, and case numbers continue to grow. Two people have died, including a six-year-old child.

It remains to be seen how many people have become ill in the outbreak. There have been at least 223 confirmed cases, but experts believe hundreds more people may have been infected since late January. As public health officials try to slow the spread of the highly contagious virus, some experts are worried about long-term complications.

Measles is different from other childhood viruses that come and go. In severe cases, it can cause pneumonia. According to the Centers for Disease Control and Prevention, approximately one in 1,000 patients develops encephalitis (swelling of the brain), and one to two in 1,000 die.

The virus can also wipe out the immune system’s memory, a complication known as “immune amnesia.”

When you get sick with a virus or bacteria, the immune system has the ability to form memories that can quickly recognize and respond to pathogens if they are encountered again.

Measles targets the cells in the body, such as plasma cells and memory cells, that hold our immunological memory, and it destroys some of them in the process.

“No one can escape this,” said Dr. Michael Mina, a vaccine expert and a former professor of epidemiology at the Harvard Chan School of Public Health.

In a 2019 study, Mina and his team discovered that measles infections can wipe out anywhere from 11% to 73% of a person’s antibody stockpile, depending on how serious the infection is. This means that if people had 100 different antibodies to chickenpox before they developed measles, they might be left with just 50 afterward, potentially leaving them vulnerable to catching it and getting sick.

Akiko Iwasaki, a professor of immunology at Yale University School of Medicine, said: “You forget who the enemy is.”

Virtually everyone who contracts measles weakens the immune system, but some are hit harder than others.

“There’s no world where you get measles and it doesn’t destroy some [immunity],” he said. “The question is whether it will destroy enough to have a clinical impact.”

In an earlier study from 2015, Mina estimated that before vaccination, when measles was common, the virus may have been linked to as many as half of all childhood deaths from infectious disease, mainly from other illnesses such as pneumonia, sepsis, diarrheal diseases, and meningitis.

Researchers found that after a measles infection, the immune system was suppressed almost immediately and remained suppressed for two to three years.

“Immune amnesia begins as soon as the virus replicates in those [memory] cells,” Mina said.

The best protection against serious complications is the measles vaccine. Two doses of the vaccine are 97% effective in preventing infection.

What is “immune amnesia”?

Our bodies are constantly exposed to a variety of bacteria and viruses in our environment. Over time, our immune system learns to remember a particular intruder and can take action immediately if we find something that doesn't belong to our body.

“Children are in contact with all sorts of microorganisms, and most of those encounters don’t lead to illness,” said Dr. Adam Ratner, a pediatrician and director of pediatric infectious diseases at NYU Langone Health. “Children recover and retain that memory, so if they see the same strain of a virus that causes diarrhea again, they won’t get as sick the second time they’re exposed to it.”

With immune amnesia, he said, if people are exposed to the same strain of a virus again, their bodies act as if it were the first time they had encountered it, without that robust protection.

This means the measles virus can destroy the immunity people have accumulated over time to other pathogens, the ones that cause pneumonia, colds, flu, bacterial infections, and more.

Mina drew a comparison with HIV, saying that the level of immunosuppression in a severe measles infection can be compared to years of untreated HIV. However, he noted that HIV attacks different parts of the immune system, and that people’s immune systems can ultimately recover from measles.

How does measles destroy the immune system?

The highly contagious virus can destroy long-lived plasma cells, which reside in the bone marrow and are essential to the immune system. These cells are like factories that churn out antibodies to protect us from intruders entering our bodies.

“It’s almost like bombing a sacred city,” Mina said.

Measles also targets cells in our body called memory cells. These are the cells that remember what intruders look like, allowing the immune system to quickly identify and fight them in the future.

When you breathe in the virus, it is engulfed by cells called macrophages, which function as “Trojan horses,” carrying the virus into the lymph nodes, Iwasaki said.

Once there, the virus can bind and destroy these memory cells, wiping away some of our built-in immunity in the process.

“Once [memory cells] are eliminated, we basically no longer have any memory of those specific pathogens, so we become more susceptible to most infectious diseases, even ones unrelated to measles,” Iwasaki said.

Will the immune system recover?

The way the body begins to regain immune memory after measles has ravaged it is to be exposed to other viruses and bacteria, get sick again, and rebuild its defenses.

That immunity can be relearned, says University of Pennsylvania immunologist John Wherry, but in the meantime people are particularly susceptible to other infectious diseases.

“As every parent of a daycare kid knows, you’re building a lot of immunity at that time, but you’re suffering through it,” Wherry said.

Mina compared relearning our immunity to the reason babies seem to get sick so frequently.

“The illnesses a baby gets are not because the baby is more vulnerable; it’s because they don’t have the same set of immunological memories yet,” he said. “They have to spend several years accumulating it through exposure, which is kind of what people experience after measles.”

How Measles Causes Brain Inflammation

Even more frightening is a measles complication with no treatment: subacute sclerosing panencephalitis (SSPE), a brain disease that can emerge a decade or more after someone recovers from the infection and is almost always fatal.

For poorly understood reasons, the measles virus can cause a persistent infection in the brain, leading to damage that results in cognitive decline, coma, and death.

SSPE was once considered rare, but researchers believe it is more common than previously realized. A review of measles cases in California from 1998 to 2015 found that SSPE occurred at a higher rate than expected among children who were unvaccinated.

Dr. Bessey Geevarghese, a pediatric infectious disease expert at Northwestern Medicine, said the disease is progressive, with symptoms emerging in stages.

“It can start with just a change in personality and a change in behavior,” she said. In children, it can be as subtle as worsening performance in school.

The disease then progresses and can eventually lead to seizures and abnormal movements, Geevarghese said. Finally, the parts of the brain that regulate vital functions such as breathing, heart rate, and blood pressure can be damaged, leading to death.

There is no cure for the disease, and it is almost always fatal; patients usually survive one to three years after diagnosis. In the US, there are usually four to five cases each year, though that may be an undercount, says Ratner of NYU Langone Health.

“It’s probably more common than we think because it’s not always diagnosed,” he said. “But as these outbreaks become more common, I think we will clearly see more cases of SSPE.”

Source: www.nbcnews.com

Elon Musk’s Potential Ownership of OpenAI Could Have Negative Consequences, Despite Possibility of it Occurring

Elon Musk and Sam Altman are not exactly best friends. Altman’s pursuit of a for-profit structure for OpenAI, the company founded in 2015, seems to have irked Musk: Altman’s focus on making money rather than advancing humanity’s interests clashed with Musk’s vision for OpenAI.

As a result, Musk, who previously acquired Twitter and turned it into X, has now made a play for the entity that controls OpenAI.

Musk, currently occupied with making the US government lean and efficient, made a substantial bid of nearly $100 billion ($97.4 billion) for OpenAI’s nonprofit arm, emphasizing the need for OpenAI to return to its original open-source, safety-focused mission. The bid was rejected by Altman, who joked that he would buy Twitter for $9.74 billion instead.

Musk framed the bid as being not about enriching investors or inflating corporate valuations, but about steering AI development toward societal benefit. Although the bid to reclaim control of OpenAI’s nonprofit was significant, the outcome remains uncertain.

The ongoing feud between Musk and Altman may escalate further, especially considering the history of their disagreements. Musk’s bid to take over OpenAI’s nonprofit could be seen as an attempt to thwart Altman’s for-profit ambitions for the company.

Elon Musk and Donald Trump, Washington, January 19, 2025. Photo: Brian Snyder/Reuters

Musk’s bid for OpenAI’s nonprofit could have multiple interpretations, ranging from a strategic move to a mere publicity stunt. Given Musk’s penchant for unconventional actions, the true motives behind his bid remain uncertain.

There are various theories regarding the significance of the bid, including references to literature and playful numbers. However, the bid’s seriousness cannot be discounted, especially in light of potential political implications.

The bid may also reflect Musk’s attempt to disrupt the status quo and reshape the future trajectory of AI development. The possibility of Musk and OpenAI merging in the future cannot be ruled out entirely, given the unpredictable nature of the current situation.

Source: www.theguardian.com

The Evolution of Human Brains: The Potential Consequences for Our Future

No one doubts that Albert Einstein had a brilliant mind, but the Nobel prizewinner famous for his theories of special and general relativity wasn’t blessed with a particularly big brain, says Jeremy DeSilva at Dartmouth College in New Hampshire.

This seems surprising. Big brains are a defining feature of human anatomy, something we are proud of. Other species may be faster or stronger, but we thrive using the ingenuity that comes from our big brains. At least, that’s what we tell ourselves. Einstein’s brain suggests that the story is not so simple. And recent fossil discoveries bear this out. In the past two decades, we’ve learned that small-brained hominin species persisted on Earth long after species with larger brains emerged. Moreover, there is growing evidence that they were behaviorally sophisticated. For example, some of them made complex stone tools that could only have been made by humans with language.

These findings turn questions about the evolution of the human brain upside down: “Why would large brains be selected for when humans with small brains can survive in nature?” says DeSilva. Nervous tissue consumes a lot of energy, so large brains must have undoubtedly provided an advantage to the few species that evolved them. But what was the benefit?

The answer to this mystery is beginning to emerge. It appears that brain expansion began as an evolutionary accident that then led to changes that accelerated brain growth. Amazingly, the changes that drove this expansion also explain the recent 10 percent shrinkage of the human brain. What’s more, this suggests that our brains could shrink even further, potentially causing our demise.

There’s no denying that…

Source: www.newscientist.com

Unintended Consequences: The Scrutiny of Mental Health Apps and their Impact on Users

“What would happen if I told you that one of the most powerful choices you can make is to ask for help?” a young woman in her 20s wearing a red sweater says, before encouraging viewers to seek counselling. The ad, promoted on Instagram and other social media platforms, is just one of many campaigns created by BetterHelp, a California-based company that connects users with therapists online.

In recent years, demand for digital alternatives to traditional face-to-face therapy has grown sharply. The latest data shows that 1.76 million people were referred to the NHS Talking Therapies service for treatment in 2022-23, with 1.22 million actually starting to work directly with a therapist.

Companies like BetterHelp aim to address some of the barriers that prevent people from receiving therapy, such as a lack of locally trained practitioners and a lack of empathetic therapists. But many of these platforms have a worrying aspect: what happens to the large amounts of highly sensitive data collected in the process? The UK is currently considering how to regulate these apps, and there is growing awareness of their potential harms.

Last year, the US Federal Trade Commission hit BetterHelp with a $7.8m (£6.1m) fine after the company was found to have misled consumers and shared sensitive data with third parties for advertising purposes, despite promising to keep it private. A BetterHelp representative did not respond to the Observer’s request for comment.




The number of people seeking mental health help online has increased rapidly during the pandemic. Photo: Alberto Case/Getty Images

Research suggests that such privacy violations are not isolated exceptions but are all too common across the vast industry of mental health apps, which includes virtual therapy services, mood trackers, mental fitness coaches, digitized cognitive behavioural therapy, chatbots, and more.

Independent watchdogs such as the Mozilla Foundation, a global nonprofit that works to police the internet against bad actors, have identified platforms that exploit opaque regulatory grey areas to share or sell sensitive personal information. When the foundation examined 32 leading mental health apps for a report last year, it found that 19 of them did not adequately protect user privacy and security. “We found that too often your personal and private mental health issues were being monetized,” says Jen Caltrider, who leads Mozilla’s consumer privacy advocacy efforts.

In the United States, Caltrider explains, the Health Insurance Portability and Accountability Act (HIPAA) protects communications between doctors and patients. However, she says many users are unaware that there are loopholes that digital platforms can exploit to circumvent HIPAA. “You may not be talking to a licensed psychologist, you may be just talking to a trained coach, and none of those conversations are protected under medical privacy laws,” she says. “But metadata about that conversation, the fact that you’re using the app for OCD or an eating disorder, could also be used and shared for advertising and marketing purposes. People don’t necessarily want that collected and used to target products at them.”

Like many others studying this rapidly growing industry (the digital mental health apps market is predicted to be worth $17.5bn, or £13.8bn, by 2030), Caltrider feels that increased regulation and oversight of these platforms, which target particularly vulnerable segments of the population, is long overdue.

“The number of these apps exploded during the pandemic. When we started our research, it was really disappointing to realize how many companies seemed to be capitalizing on a gold rush around mental health issues rather than focusing on helping people,” she says. “Like many things in the tech industry, these apps grew rapidly and, for some, privacy took a backseat. We suspected things might not be great, but what we found was much worse than expected.”

A push for regulation

Last year, the UK regulators the Medicines and Healthcare products Regulatory Agency (MHRA) and the National Institute for Health and Care Excellence (Nice) began a three-year project, funded by the charity Wellcome, to explore the best way to regulate digital mental health tools in the UK and to work with international partners to help foster consensus on digital mental health regulation around the world.

Holly Coole, the MHRA’s senior manager for digital mental health, explains that while data privacy is important, the main focus of the project is to reach agreement on minimum standards of safety for these tools. “We are more focused on the efficacy and safety of these products. It is our duty as regulators to ensure that patient safety is paramount for devices that are classified as medical devices,” she says.

At the same time, leaders in the mental health field are beginning to call for strict international guidelines to assess whether these tools truly have a therapeutic effect. “Actually, I’m very excited and hopeful about this field, but we need to understand what good looks like for digital therapeutics,” says Dr Thomas Insel, a neuroscientist and former director of the US National Institute of Mental Health.

Psychiatric experts acknowledge that while new mood-boosting tools, trackers and self-help apps have become wildly popular over the past decade, there has been little hard evidence that they actually help.

“I think the biggest risk is that many apps waste people’s time and may delay them getting effective treatment,” says Dr John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, Harvard Medical School.

Currently, he says, companies with enough marketing capital can easily bring their apps to market without having to demonstrate that they maintain user interest or add any value. In particular, Torous criticizes the poor quality of many purported pilot studies, which set very low standards for app efficacy and produce results that are virtually meaningless. He gives the example of one trial in 2022 that compared an app delivering cognitive behavioural therapy to patients with schizophrenia experiencing an acute psychotic episode against a “sham” app consisting of a digital stopwatch. “When you look at the research, apps are often compared against looking at a wall or being on a waiting list,” he says. “But anything is better than nothing.”

Manipulation of vulnerable users

But the most concerning question is whether some apps may actually perpetuate harm and worsen the symptoms of the patients they are meant to help.

Two years ago, the US healthcare giants Kaiser Permanente and HealthPartners set out to test the effectiveness of a new digital mental health tool. It was based on a psychological approach known as dialectical behaviour therapy, which includes practices such as emotional mindfulness and steady breathing, and was expected to help prevent suicidal behaviour in at-risk patients.

Over a 12-month period, 19,000 patients who reported frequent suicidal thoughts were randomly divided into three groups. A control group received standard care, a second group received usual care plus regular outreach to assess suicide risk, and a third group was offered the digital tool in addition to usual care. When the researchers evaluated the results, however, they found that the third group actually fared worse: using the tool appeared to significantly increase the risk of self-harm compared with receiving usual care alone.

“They thought they were doing a good thing, but it made people even worse, so that was very alarming,” Torous says.

Some of the biggest concerns relate to AI chatbots, many of which are touted as safe spaces for people to discuss mental health and emotional struggles. But Caltrider worries that without better monitoring of the responses and advice these bots provide, the algorithms could manipulate vulnerable people. “With these chatbots, you can create something that lonely people can potentially relate to, so the possibilities for manipulation are endless,” she says. “This algorithm could be used to push that person to buy expensive things or even to commit violence.”

These concerns are not unfounded. A user of the popular chatbot Replika shared a screenshot on Reddit of a conversation in which the bot appears to actively encourage his suicide attempt.




Telephone therapy: But how secure is your sensitive personal data? Photo: Getty Images

In response, a Replika spokesperson told the Observer: “Replika continuously monitors the media and social media and spends a lot of time talking directly with users to find ways to address concerns and fix issues within the product. The interface in the screenshot above is at least eight months old and may date back to 2021. There have been over 100 updates since 2021, and 23 in the last year alone.”

Because of these safety concerns, the MHRA believes that so-called post-market surveillance will be as important for mental health apps as it is for medicines and vaccines. Coole points out that the Yellow Card reporting site, which is used in the UK to report side effects and defects in medical products, could in the future allow users to report adverse experiences with certain apps. “The public and health professionals can be very helpful in providing vital information to the MHRA about adverse events using Yellow Cards,” she says.

But at the same time, experts still strongly believe that, if properly regulated, mental health apps could play a big role in the future of care: improving access, collecting useful data to help make accurate diagnoses, and filling gaps in an over-medicalized system.

“What we have today is not great,” Insel says. “Mental health care, as we have known it for the past 20 to 30 years, is clearly an area ripe for change and in need of some transformation. Perhaps regulation will come in the second or third act, and we need it, but there are many other things that are necessary too, from better evidence to interventions for people with more severe mental illnesses.”

Torous believes the first step is to be more transparent about how an app’s business model works and what its underlying technology is. “Otherwise, the only way a company can differentiate itself is through marketing claims,” he says. “If you can’t prove that you’re better or safer, all you can do is market it, because there’s no real way to verify or trust that claim. The result is that huge amounts of money are being spent on marketing, which is starting to erode clinician and patient trust. You can only make so many promises before people become skeptical.”

Source: www.theguardian.com

The Consequences of a Fat Cat: The Perspectives of Scientists


A study from the University of Illinois at Urbana-Champaign revealed the effects of overfeeding on cats’ digestive systems and gut microbiota. The study involved 11 cats and showed that an unrestricted diet led to significant weight gain, changes in gastrointestinal transit time, and changes in fecal microbiota and acidity. These findings contribute to the understanding of obesity in pets and inform weight management strategies such as feeding restriction and promotion of physical activity.

Cat owners want their pets to be happy, but overfeeding can have unintended consequences. The prevalence of obesity in cats is increasing, impacting their health, lifespan, and overall well-being. A new study from the University of Illinois at Urbana-Champaign looks at what happens to cats’ digestive systems and gut microbiota when they overeat.

“About 60% of cats in the United States are overweight, which can lead to health problems such as diabetes and chronic inflammation. While a lot of research has been done on weight loss in cats, there has been little focus on the reverse process. In this study, we wanted to learn more about the metabolic and gastrointestinal changes that occur as a result of overeating and weight gain in cats,” said study co-author Kelly Swanson, professor of animal sciences and interim director of the Division of Nutritional Sciences (DNS), part of the U of I College of Agricultural, Consumer, and Environmental Sciences (ACES).

Methodology and initial findings

The study included 11 spayed adult cats. They were fed standard dry cat food and allowed to eat as much as they wanted after 2 weeks of baseline measurements. Researchers regularly took blood and fecal samples and monitored physical activity.

Once the cats were allowed to overeat, their food intake immediately increased significantly and they began to gain weight. The mean body condition score (BCS) at the start of the study was 5.41 on a 9-point scale; after 18 weeks of overeating it rose to 8.27, which corresponds to being roughly 30% overweight. According to Swanson, BCS is comparable to a person’s body mass index (BMI), and anything above 6 is considered overweight.

Researchers at the University of Illinois have discovered that when cats overeat and gain weight, it affects their digestive systems and gut microbiota.Credit: Lauren Quinn, University of Illinois

Source: scitechdaily.com