At first glance, the current landscape of artificial intelligence policy suggests a strategic step back from regulation. Recently, AI leaders in the United States and beyond have echoed this sentiment. J.D. Vance has described AI policy as having a "deregulation flavor." Congress seems poised to impose a 10-year suspension of state AI laws. On cue, the Trump administration's AI action plan warns against smothering the technology "through bureaucracy at this early stage."
However, the deregulation narrative rests on a significant misunderstanding. Although the U.S. federal government takes a hands-off stance toward applications like chatbots and image generators, it is deeply engaged with the fundamental layers of AI. Both the Trump and Biden administrations, for instance, have actively managed AI chips, the crucial hardware behind advanced AI systems. The Biden administration restricted access to these chips to keep them out of the hands of competitor nations such as China; the Trump administration has sought deals to sell them to countries like the UAE.
Both administrations have significantly shaped AI systems, each in its own way. The United States is not deregulating AI; rather, it is regulating where many are not looking. Beneath the rhetoric of a free market, Washington is stepping in to shape the components of AI systems.
Taking in the full AI technology stack—the hardware, data centers, and software operating behind applications like ChatGPT—reveals that nations are targeting different components of AI systems. Early frameworks, such as the EU's AI Act, prioritized prominent applications, banning high-risk uses in sectors like health, employment, and law enforcement to mitigate social harm. Nations are now turning to the fundamental building blocks of AI. China restricts certain models to combat deepfakes and misinformation. Citing national security concerns, the U.S. has limited exports of advanced chips and, under the Biden administration, of model weights—the "secret sauce" that converts user inputs into results. These AI regulations are buried in dense administrative terminology like "implementation of additional export controls" and "end uses of supercomputers and semiconductors," obscuring their underlying rationale. Nevertheless, clear trends emerge behind the jargon, indicating a shift from regulating AI applications to regulating their foundational elements.
The initial wave of regulations targeted applications within jurisdictions like the EU, emphasizing issues such as discrimination, surveillance, and environmental damage. Subsequently, rival nations like the United States and China took a national security approach, aiming to retain military dominance and prevent malicious actors from leveraging AI to obtain nuclear weapons or spread disinformation. A third wave of AI regulation is now emerging as countries tackle social and security challenges in parallel. Our research indicates that this hybrid approach is more effective because it breaks down silos and minimizes redundancy.
Overcoming the allure of laissez-faire rhetoric necessitates a more thorough analysis. Viewed through the lens of the AI stack, U.S. AI policy resembles a redefinition of regulatory focus rather than an abdication of responsibility. This translates to a facade of leniency while maintaining a firm grip on core elements.
No global framework can be effective if the United States—home to the world's largest AI research enterprise—continues to project an image of complete deregulation. The country's proactive stance on AI chips undermines that narrative. U.S. AI policy is anything but laissez-faire; its decisions about where to intervene reflect strategic preference. However politically convenient, the deregulation story is largely a myth.
The public demands enhanced transparency concerning the rationale and framework of government regulations on AI. It is difficult to rationalize the ease with which the U.S. government intervenes in chip regulation for national security while remaining muted on social implications. Awareness of all regulatory aspects—ranging from export controls to trade policies—is the first step toward fostering effective global cooperation. Without such clarity, discussions surrounding global AI governance will remain superficial.
Researchers from the University of California, Berkeley and Stanford University have investigated the brain circuits that regulate the release of growth hormone during sleep. Their findings reveal new feedback mechanisms that keep growth hormone levels finely tuned. This discovery could lead to advancements in treating individuals with sleep disorders associated with metabolic issues like diabetes, as well as degenerative diseases such as Parkinson’s and Alzheimer’s.
Sleep is known to promote tissue growth and regulate metabolism partly by promoting growth hormone (GH) release, but the underlying circuit mechanism was unknown. Ding et al. show how GH release, which is enhanced in both rapid eye movement (REM) and non-REM (NREM) sleep, is regulated by the sleep-wake-dependent activity of distinct hypothalamic neurons that express GH-releasing hormone (GHRH) and somatostatin (SST). Arcuate nucleus SST neurons inhibit GH release by targeting nearby GHRH neurons that stimulate it, while periventricular SST neurons project to the median eminence to inhibit GH release. GH release is associated with strong surges of both GHRH and SST activity during REM sleep, whereas NREM sleep brings moderate increases in GHRH and reductions in SST activity. Furthermore, Ding et al. identified a negative feedback pathway in which GH increases the excitability of locus coeruleus neurons, promoting arousal. Image credit: Ding et al., doi: 10.1016/j.cell.2025.05.039.
“We have gained significant insights into this area,” said Xinlu Ding, a postdoctoral researcher at the University of California, Berkeley.
“We directly recorded the neural activity of mice to understand the underlying processes.”
“Our findings provide a foundational circuit to explore various treatment options moving forward.”
Neurons that manage growth hormone release during the sleep-wake cycle—specifically growth hormone-releasing hormone (GHRH) neurons and two types of somatostatin neurons—are located deep within the hypothalamus, an ancient brain region present in all mammals.
Once released, growth hormone enhances the activity of locus coeruleus neurons, a brainstem region involved in arousal, attention, cognition, and curiosity.
Dysregulation of locus coeruleus neurons is linked to numerous psychiatric and neurological disorders.
“Understanding the neural circuits involved in growth hormone release could ultimately lead to new hormone therapies aimed at enhancing sleep quality and restoring normal growth hormone levels,” explained Daniel Silverman from the University of California, Berkeley.
“Several experimental gene therapies have been developed that target specific cell types.”
“This circuit could serve as a new approach to modulate the excitability of the locus coeruleus, which has not been effectively targeted before.”
The researchers investigated neuroendocrine circuits by implanting electrodes into the mouse brain and measuring activity changes triggered by light stimulation of hypothalamic neurons.
Mice have short sleep bouts (lasting several minutes at a time) throughout day and night, providing ample opportunities to study fluctuations in growth hormone during the sleep-wake cycle.
Utilizing advanced circuit mapping techniques, researchers found that the two peptide hormones (GHRH and somatostatin) regulating growth hormone release operate differently during REM and non-REM sleep.
Both somatostatin and GHRH neurons surge in activity during REM sleep; during non-REM sleep, somatostatin activity decreases while GHRH sees only moderate increases, which still permits growth hormone release.
Growth hormone release regulates locus coeruleus activity through a feedback mechanism, creating a homeostatic balance.
During sleep, growth hormone accumulates at a gradual pace, stimulating the locus coeruleus and fostering arousal, according to the new findings.
However, excessive activation of the locus coeruleus can paradoxically lead to drowsiness.
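The feedback loop described above—sleep enhances growth hormone release, growth hormone excites the locus coeruleus, and the resulting arousal curtails sleep—can be caricatured in a few lines of code. Below is a deliberately toy simulation: every constant is invented for illustration, it is not a model from the study, and the paradoxical drowsiness from over-activation is not modeled.

```python
# Toy negative-feedback loop: sleep promotes GH release; GH drives
# locus coeruleus (LC) arousal, which in turn ends sleep. All constants
# are invented for illustration; this is not a model from the study.
GH_RELEASE_RATE = 0.8   # GH secreted per time step while asleep
GH_CLEARANCE = 0.1      # fraction of circulating GH cleared per step
AROUSAL_GAIN = 0.5      # how strongly GH excites the LC
WAKE_THRESHOLD = 2.0    # LC arousal level that triggers waking

gh, asleep = 0.0, True
for t in range(60):
    if asleep:
        gh += GH_RELEASE_RATE      # sleep enhances GH secretion
    gh *= 1 - GH_CLEARANCE         # hormonal clearance
    arousal = AROUSAL_GAIN * gh    # GH excites the locus coeruleus
    asleep = arousal < WAKE_THRESHOLD
    if t % 10 == 0:
        print(f"t={t:2d}  GH={gh:5.2f}  arousal={arousal:4.2f}  asleep={asleep}")
# GH builds during sleep until arousal crosses the threshold and wakes
# the animal; waking halts release, GH clears, and sleep resumes --
# a self-limiting cycle like the homeostatic balance described above.
```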
“This indicates that sleep and growth hormone form a delicate balance. Insufficient sleep diminishes growth hormone release, while excessive growth hormone may drive the brain toward wakefulness,” Dr. Silverman noted.
“Sleep facilitates growth hormone release, which in turn regulates arousal. This equilibrium is crucial for growth, repair, and metabolic health.”
Growth hormone acts partly through the locus coeruleus to influence overall brain alertness during wakefulness, underscoring the importance of a proper balance for cognitive function and attention.
“Growth hormone is pivotal not only for muscle and bone development and reducing fat tissue, but it also offers cognitive benefits and can elevate overall arousal levels upon waking,” stated Dr. Ding.
US Republicans are pushing to pass significant spending legislation that contains measures to thwart states from regulating artificial intelligence, even as experts caution that the unchecked expansion of AI could exacerbate the planet's already perilous overheating climate.
Research from Harvard University, shared with the Guardian, indicates that the industry's massive energy consumption will cause around 1 billion tonnes of carbon dioxide to be emitted in the US by AI over the next decade.
Over the same ten-year span in which Republicans aim to "suspend" state-level AI regulation, the electricity consumed by AI applications in data centers is projected to generate US greenhouse gas emissions that surpass Japan's annual total—roughly three times the UK's annual emissions.
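As a rough sanity check on those comparisons, here is a minimal sketch. The 1-billion-tonne decade figure comes from the reporting above; the annual baselines for Japan and the UK are ballpark assumptions of ours, not numbers from the article.

```python
# All figures in tonnes of CO2. The decade total is from the Harvard
# projection cited above; the national baselines are rough assumptions.
AI_DECADE_TOTAL = 1.0e9   # projected US AI emissions over ten years
JAPAN_ANNUAL = 0.9e9      # approximate annual CO2 emissions of Japan
UK_ANNUAL = 0.3e9         # approximate annual CO2 emissions of the UK

print(f"vs Japan's annual output: {AI_DECADE_TOTAL / JAPAN_ANNUAL:.1f}x")
print(f"vs the UK's annual output: {AI_DECADE_TOTAL / UK_ANNUAL:.1f}x")
# -> the decade total slightly exceeds Japan's annual emissions and is
#    roughly three times the UK's, matching the comparison in the text.
```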
The actual emissions will depend on the efficiency of power plants and the degree of clean energy deployed in the coming years; but blocking regulation will also play a part, noted Gianluca Guidi, a visiting scholar at Harvard's School of Public Health.
“Restricting oversight will hinder the shift away from fossil fuels and diminish incentives for more energy-efficient AI technologies,” Guidi stated.
“We often discuss what AI can do for us, but we rarely consider its impact on our planet. If we genuinely aim to leverage AI to enhance human welfare, we mustn’t overlook the detrimental effects on climate stability and public health.”
Donald Trump has declared that the United States will become the “world capital of artificial intelligence and crypto,” planning to eliminate safeguards surrounding AI development while dismantling regulations limiting greenhouse gas emissions.
The “Big Beautiful” spending bill approved by Republicans in the House of Representatives would prevent states from adopting their own AI regulations, with the GOP-controlled Senate also likely to pass a similar version.
However, the unrestricted use of AI may significantly undermine efforts to combat the climate crisis while adding demand to a US grid still heavily dependent on fossil fuels like gas and coal. AI is particularly energy-intensive, with a single ChatGPT query consuming about ten times more power than a Google search.
The carbon emissions from US data centers have increased threefold since 2018, with recent Harvard research indicating that the largest “hyperscale” centers constitute 2% of the nation’s electricity usage.
“AI is poised to transform our world,” states Manu Asthana, CEO of PJM Interconnection, the largest grid in the US. Predictions suggest that nearly all increases in future electricity demand will arise from data centers. Asthana asserts this will equate to adding a new home’s worth of electricity to the grid every five years.
Meanwhile, the rapid escalation of AI is driving the recent rollback of climate pledges by major tech companies. Last year, Google acknowledged that its greenhouse gas emissions had surged by 48% since 2019 due to its AI push, admitting that as AI becomes more deeply embedded in its products, “reducing emissions may prove challenging.”
Supporters of AI, along with some researchers, contend that advances in AI could aid the fight against climate change by making grid management and other systems more efficient. Others remain skeptical. “It’s merely a greenwashing operation, and it’s clear as day,” critiques Alex Hanna, research director at the Distributed AI Research Institute. “Much of what we’ve heard is absolutely ridiculous. Big tech is mortgaging the present for a future that may never materialize.”
So far, no state has definitive AI regulations on the books, but state lawmakers had been expected to establish such rules, especially as federal environmental regulation recedes—precisely what the congressional ban would foreclose. “If you were anticipating federal regulations around data centers, that’s definitely off the table right now,” Hanna observed. “It’s rather surprising to watch it all unfold.”
But Republican lawmakers are undeterred. The proposed moratorium on state and local AI regulation cleared a significant hurdle over the weekend, when the Senate parliamentarian determined that the ban could remain in Trump’s tax megabill. Texas Senator Ted Cruz, who chairs the Senate Committee on Commerce, Science and Transportation, had reworked the provision’s language so that the spending bill would not be deemed to address “extraneous issues.”
The revised clause replaces the outright moratorium with a “temporary suspension” of regulations. It also adds an extra $500 million to grant programs aimed at expanding nationwide broadband internet access, stipulating that states will not receive those funds if they attempt to regulate AI.
The suggestion to suspend AI regulations has raised significant alarm among Democrats. Massachusetts Senator Ed Markey, known for his climate advocacy, has indicated his readiness to propose amendments that would strip the bill of its “dangerous” provisions.
“The rapid advancement of artificial intelligence is already impacting our environment—raising energy prices for consumers, straining the grid’s capacity to keep the lights on, depleting local water resources, releasing toxic pollutants into our communities, and amplifying climate emissions,” Markey told the Guardian.
“But Republicans want to prohibit AI regulations for ten years, rather than enabling the nation to safeguard its citizenry and our planet. This is shortsighted and irresponsible.”
Massachusetts Congressman Jake Auchincloss likewise labeled the proposal a “terrible and unpopular idea.”
“I believe we must recognize that it is profoundly reckless to allow AI to swiftly and seamlessly fill various sectors such as healthcare, media, entertainment, and education while simultaneously imposing a ban on AI regulations for a decade,” he commented.
Some Republicans also oppose these provisions, including Tennessee Senator Marsha Blackburn and Missouri Senator Josh Hawley. The amendment to eliminate the suspension from the bill requires the backing of at least four Republican senators.
Hawley is reportedly prepared to propose an amendment to remove the provision later in the week if it is not stripped out beforehand.
Earlier this month, Georgia Representative Marjorie Taylor Greene admitted that she had overlooked the provision in the House’s bill, stating she would not have supported the legislation had she been aware of it. The far-right House Freedom Caucus, to which Greene belongs, also opposes the suspension of state AI regulations.
Proposals for regulating artificial intelligence have been delayed by at least a year, as UK ministers plan a major bill addressing both the technology and its use of copyrighted content.
Technology Secretary Peter Kyle intends to present a “comprehensive” AI bill in the next parliamentary session to tackle pressing issues, including safety and copyright concerns.
That pushes the legislation back to the next King’s speech. No date has been confirmed for the event, but some reports suggest it may occur in May 2026.
Initially, Labour had intended to introduce a concise, targeted AI bill shortly after taking office, focused specifically on large language models like ChatGPT.
The proposed legislation would have mandated companies to provide their models for assessment by the UK AI Security Institute, aiming to address fears that advanced AI models might pose threats to humanity.
However, with the bill behind schedule, ministers have opted to align with the approach of Donald Trump’s administration in the US, fearing that excessive regulation might drive AI companies away from the UK.
Ministers are now keen to fold copyright rules for AI firms into the same AI bill.
“We believe this framework can help us tackle copyright issues,” a government source commented. “We’ve been consulting with both creators and tech experts, and we’ve uncovered some intriguing ideas for the future. Once the data bill is finalized, our efforts will begin in earnest.”
The government is currently locked in a dispute with the House of Lords over copyright provisions in a separate data bill. Under its proposals, AI companies could use copyrighted materials for model training unless rights holders opt out.
This has led to a strong backlash from the creative community, with notable artists like Elton John, Paul McCartney, and Kate Bush lending their support to a campaign against these changes.
Recently, peers backed an amendment to the data bill that would require AI companies to declare whether they are using copyrighted materials for model training, with the aim of ensuring compliance with existing copyright law.
Although concerns about the government’s approach have been pressed on Kyle, he has resisted calls to backtrack. The government contends that the data bill is not the right vehicle for copyright matters and has vowed to publish an economic impact assessment along with several technical reports on copyright and AI.
In a letter to legislators on Saturday, Kyle further pledged to create a cross-party working group on AI and copyright.
Beeban Kidron, a film director and crossbench peer advocating for the creative sector, remarked on Friday that the minister “has neglected the creative industry and disregarded Britain’s second-largest industrial sector.”
Kyle mentioned in Commons last month that AI and copyright should be included in another “comprehensive” legislative package.
An overwhelming majority of the UK public (88%) believes the government should have the power to halt the use of an AI product deemed a serious risk. The finding, published in March by the Ada Lovelace Institute and the Alan Turing Institute, also shows that over 75% of people think safety oversight of AI should rest with governments or regulators rather than with private companies alone.
Scott Singer, an AI specialist at Carnegie Endowment for International Peace, noted: “The UK is strategically navigating between the US and the EU. Similar to the US, the UK is aiming to avoid overly stringent regulations that could stifle innovation while exploring meaningful consumer protection methods.”
Distal regulation—the capacity to control genes across vast distances, spanning tens of thousands of DNA letters—emerged during the early stages of animal evolution, approximately 650-700 million years ago (the Cryogenian period).
Diagram of DNA molecules. Image credit: Christoph Bock, Max Planck Institute for Informatics / CC BY-SA 3.0.
Distal regulation relies on the physical folding of DNA and proteins into intricate loops.
This mechanism enables regions far from a gene’s start site to switch the gene on.
This additional regulatory layer may have assisted the first multicellular organisms in developing specialized cell types and tissues without necessarily inventing new genes.
The key innovation likely originated in a marine organism, the common ancestor of all living animals.
Ancient organisms developed the ability to fold DNA in a controlled manner, forming 3D loops that facilitated direct contact between different segments of DNA.
“These organisms can utilize their genetic toolkit in various ways, akin to a Swiss Army knife, which allows them to fine-tune and explore innovative survival strategies,” explains Dr. Iana Kim, a postdoctoral researcher at the Centro Nacional de Análisis Genómico and the Centre for Genomic Regulation.
“I was surprised to find that this level of complexity dates back so far.”
Dr. Kim and colleagues reached these insights by examining some of the oldest branches of the animal family tree, including walnut-shaped comb jellies (Mnemiopsis leidyi), placozoans, cnidarians, and sponges.
They also investigated the single-celled organisms that are animals’ closest living relatives.
“Studying unique sea creatures enables us to uncover much new biology,” states Professor Arnau Sebe-Pedrós, a researcher at the Centre for Genomic Regulation.
“Previously, we focused on comparing genomic sequences, but thanks to new techniques, we can now analyze the gene regulatory mechanisms that influence genomic function across species.”
A large individual of Mnemiopsis leidyi with two aboral ends and two apical organs. Image credit: Jokura et al., doi: 10.1016/j.cub.2024.07.084.
The researchers applied a method known as Micro-C to map the physical folding of the genome in each of the 11 species analyzed. For context, each human cell nucleus contains approximately 2 meters of DNA.
The scientists sifted through 10 billion sequencing data points to create detailed 3D genome maps for each species.
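Conceptually, Micro-C data boils down to pairs of genomic positions caught in physical contact; binning those pairs produces a contact matrix in which loops show up as enriched spots far off the diagonal. The sketch below is a toy illustration of that idea only—the chromosome length, bin size, synthetic read pairs, and loop anchors are all invented, and this is not the pipeline used in the study.

```python
import numpy as np

# Toy contact map: each "read pair" is two genomic coordinates found in
# physical contact. Binning the pairs yields a matrix; a loop between
# two distant anchors appears as an enriched off-diagonal spot.
GENOME_LENGTH = 1_000_000   # hypothetical 1 Mb chromosome
BIN_SIZE = 10_000           # 10 kb bins
n_bins = GENOME_LENGTH // BIN_SIZE

rng = np.random.default_rng(0)
# Background: most contacts are between nearby loci (near the diagonal).
pos_a = rng.integers(0, GENOME_LENGTH, size=50_000)
pos_b = np.clip(pos_a + rng.normal(0, 30_000, size=50_000).astype(int),
                0, GENOME_LENGTH - 1)
# Synthetic "loop": anchors near 200 kb and 700 kb in frequent contact.
loop_a = 200_000 + rng.integers(-5_000, 5_000, size=2_000)
loop_b = 700_000 + rng.integers(-5_000, 5_000, size=2_000)

contacts = np.zeros((n_bins, n_bins))
for a, b in zip(np.concatenate([pos_a, loop_a]),
                np.concatenate([pos_b, loop_b])):
    i, j = sorted((int(a) // BIN_SIZE, int(b) // BIN_SIZE))
    contacts[i, j] += 1

print("Contacts at the loop anchors (bins 20, 70):", contacts[20, 70])
print("Typical far-off-diagonal background (bins 20, 60):", contacts[20, 60])
```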
Although no evidence of distal regulation was found in single-celled relatives of animals, early branches such as comb jellies, placozoans, and cnidarians exhibited numerous loops.
In the sea walnut in particular, over 4,000 loops were identified across the genome.
This is remarkable considering its genome consists of roughly 200 million DNA letters.
In contrast, the human genome contains 3.1 billion characters, with our cells housing tens of thousands of loops.
Previously, distal regulation was believed to have first emerged in the last common ancestor of bilaterians, which lived around 500 million years ago.
However, the comb jelly’s lineage branched off early from other animal lineages roughly 650-700 million years ago.
“The debate over whether the comb jelly or the sponge came first in the tree of life has long persisted in evolutionary biology, but either way this study suggests that distal regulation emerged at least 150 million years earlier than previously thought,” the authors concluded.
A paper detailing these findings was published today in the journal Nature.
____
I.V. Kim et al. Chromatin loops are an ancestral hallmark of the animal regulatory genome. Nature, published online May 7, 2025; doi: 10.1038/s41586-025-08960-w
Ministers have delayed plans to regulate artificial intelligence in order to align with Donald Trump’s administration, the Guardian has learned.
Three Labour sources revealed that the AI bill, originally expected before Christmas, is now likely to be delayed until the summer.
Ministers had intended to introduce a short, focused AI bill shortly after taking office.
The bill was to address concerns that advanced AI models could pose risks to humanity, and to clarify how AI companies may use copyrighted material.
However, Trump’s election prompted a rethink. Senior Labour sources said the bill was being carefully reconsidered and that there were no firm proposals yet on its content. One source added that the aim had been to pass it before Christmas, but it is now delayed until the summer.
Another Labour source familiar with the legislation said earlier drafts of the bill had been prepared months ago but are now being held back because of Trump’s actions, which could negatively impact British businesses. They expressed reluctance to proceed without addressing those concerns.
Trump’s actions have undermined Biden’s plans for AI regulation, including revoking an executive order aimed at ensuring technology safety and reliability. The future of the US AI Safety Institute is uncertain following the resignation of its director. Additionally, US Vice President JD Vance opposed planned European technical regulations at the AI Summit in Paris.
The UK government opted to align with the US by not signing the Paris Declaration endorsed by 66 other countries at the summit. UK Ambassador to Washington Peter Mandelson reportedly proposed making the UK a major US AI investment hub.
During a December committee hearing, Science and Technology Secretary Peter Kyle suggested that the AI bill was at an advanced stage. However, science minister Patrick Vallance said earlier this month that no bill is currently in place.
A government spokesperson stated, “This government remains committed to enacting legislation that will ensure the safe realization of the significant benefits of AI for years to come.
“We are actively refining our proposals for publication soon, to ensure an effective approach to this rapidly evolving technology. Consultation will commence shortly.”
Ministers also face pressure over separate plans to allow AI companies to use online materials, including creative works, to train their models without requiring copyright permission.
Artists like Paul McCartney and Elton John have criticized this move, warning that it could undermine traditional copyright laws protecting artists’ livelihoods.
We've reached a point where the CEO of a major social network has been arrested and detained. This is a big change, and it happened in a way that nobody expected. From Jennifer Rankin in Brussels:
French judicial authorities on Sunday extended the detention of Telegram's Russian-born founder, Pavel Durov, who was arrested at a Paris airport on suspicion of offences related to the messaging app.
Once this detention phase is over, a judge will decide whether to release him or to charge him and detain him further.
French investigators had issued a warrant for Durov's arrest as part of an investigation into charges of fraud, drug trafficking, organized crime, promoting terrorism and cyberbullying.
Durov, who holds French citizenship in addition to that of the United Arab Emirates, St Kitts and Nevis, and his native Russia, was arrested as he disembarked from a private jet after returning from the Azerbaijani capital, Baku, on Sunday evening. Telegram released a statement:
⚖️ Telegram complies with EU law, including the Digital Services Act, and its moderation is within industry standards and is constantly being improved.
✈️ Telegram CEO Pavel Durov has nothing to hide and travels frequently to Europe.
😵💫 It is absurd to claim that the platform or its owners are responsible for misuse of their platform.
French authorities said on Monday that Durov's arrest was part of a cybercrime investigation.
Paris prosecutor Laure Beccuau said the investigation concerns crimes related to illicit trading, child sexual abuse material, fraud, and refusal to provide information to the authorities.
On the surface, the arrest looks like a decisive break with the past. Governments have talked tough with messaging platforms before, but arrests have been few and far between—and when platform operators have been arrested, as in the cases of Silk Road's Ross Ulbricht and Megaupload's Kim Dotcom, authorities could argue that the platforms would not have existed without the crimes.
Telegram has long operated as a lightly moderated service, partly because of its roots as a chat app rather than a social network, partly because of Durov's own experience dealing with Russian censors, and partly (as many argue) because it is simply cheaper to have fewer moderators and less direct control over the platform.
But while weaknesses in a company's moderation can expose it to fines under laws such as the UK's Online Safety Act or the EU's Digital Services Act, they rarely lead to personal charges, and more rarely still to executives being jailed.
Encryption
But Telegram has one feature that makes it slightly different from its peers, such as WhatsApp and Signal: the service is not end-to-end encrypted.
WhatsApp, Signal and Apple's iMessage are built from the ground up to ensure that content shared on the services cannot be read by anyone other than the intended recipient, including not only the companies that run the platforms but also law enforcement agencies that may be called upon to cooperate.
This has caused endless friction between the world's largest tech companies and the governments that regulate them, but for now, it seems the tech companies have won the main battle: No one is seriously calling for end-to-end encryption to be banned anymore, and regulators and critics are instead calling for messaging services to be monitored differently, with approaches such as “client-side scanning.”
Telegram is different. The service offers end-to-end encryption through a little-used opt-in feature called “Secret Chats,” but by default, conversations are encrypted only in transit—enough to be unreadable by anyone snooping on your Wi-Fi network, but not hidden from Telegram itself. Messages sent outside “Secret Chats” (including all group chats, and all messages and comments in the service's broadcast “channels”) are effectively unencrypted as far as Telegram is concerned.
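The distinction is easiest to see in code. In an end-to-end scheme, ciphertext is produced with keys held only by the two endpoints, so a relaying server learns nothing; in Telegram's default cloud-chat model, the server itself holds keys to the conversation. Here is a minimal sketch using the PyNaCl library—the keys and message are illustrative, and this shows the trust model schematically, not Telegram's actual MTProto protocol.

```python
from nacl.public import PrivateKey, Box

# --- End-to-end model (WhatsApp/Signal-style, schematically) ---
# Only Alice and Bob ever hold their private keys.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")
# A relaying server sees only `ciphertext` and, holding neither private
# key, cannot read it. Bob can:
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at noon"

# --- Default cloud-chat model (Telegram-style, schematically) ---
# Traffic is encrypted between client and server, but the server holds
# the keys, so it can read (and hand over) the message contents.
server_key = PrivateKey.generate()
to_server = Box(alice_key, server_key.public_key).encrypt(b"meet at noon")
seen_by_server = Box(server_key, alice_key.public_key).decrypt(to_server)
assert seen_by_server == b"meet at noon"  # the server sees the plaintext
```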
This product decision sets Telegram apart from the pack, yet oddly enough, the company's marketing suggests that the difference is almost the exact opposite. Cryptography expert Matthew Green:
Telegram CEO Pavel Durov continues to aggressively promote the app as a “secure messenger.” Most recently, he issued a scathing criticism of Signal and WhatsApp on his personal Telegram channel, suggesting that those systems were rigged with US government backdoors and that only Telegram's independent encryption protocol could truly be trusted.
Watching Telegram urge people to forego using a messenger that's encrypted by default while refusing to implement a key feature that would broadly encrypt messages for its own users is no longer amusing. In fact, it's starting to feel a bit sinister.
Can't v won't
Paper planes placed near the French Embassy in Moscow in support of Pavel Durov, who was arrested in France. Photo: Yulia Morozova/Reuters
The result of Telegram's mismatch between technology and marketing is a disappointing one: The company, and Durov personally, are selling the app to people who worry that even the gold standards of secure messengers — WhatsApp and Signal — aren't secure enough for their needs, especially from the U.S. government.
At the same time, when governments knock on a platform's door asking for information about actual or suspected criminals, Telegram lacks the shield other services have. End-to-end encrypted services can honestly tell law enforcement that they cannot help: they do not hold the data. That stance can create a rather hostile atmosphere, but the resulting argument at least becomes one about general principles of privacy and policing.
Telegram, by contrast, faces a choice each time: cooperate with law enforcement, ignore it, or declare that it will not cooperate. This is no different from the choice facing the vast majority of online companies, from Amazon to Zoopla—except that Telegram has courted a user base that specifically demands protection from law enforcement.
Every time Telegram says “yes” to police, it infuriates its user base; every time it says “no,” it plays a game of chicken with law enforcement.
The contours of the dispute between France and Telegram will inevitably be swamped by conversations about “content moderation,” with partisans rallying accordingly (Elon Musk has already weighed in with “#FreePavel”). But those conversations are usually about publicly available material and what X or Facebook should or shouldn't do to moderate discussion on their sites. Private and group messaging are fundamentally different services—which is why mainstream end-to-end encrypted services exist. By trying to straddle both markets, Telegram may have lost both defenses.
Final Question
My last day at the Guardian is fast approaching, and next week's email will be handed over to you, the readers. If you have a question you'd like answered, a doubt that's been simmering in the back of your mind for years, or simple curiosity about the inner workings of Techscape, please reply to this email or get in touch with me directly at alex.hern@theguardian.com. Ask me anything.
If you'd like to read the full newsletter, sign up to receive TechScape in your inbox every Tuesday.
WASHINGTON – The Environmental Protection Agency issued a rule on Thursday that will require coal-fired power plants to capture smokestack emissions or shut down. This new regulation aims to limit greenhouse gas emissions from fossil fuel-fired power plants, which are a major contributor to global warming. It is part of President Joe Biden’s pledge to eliminate carbon pollution from the power sector by 2035 and the entire economy by 2050.
The rule includes measures to reduce toxic wastewater pollutants from coal-fired power plants and safely manage coal ash in unlined retention ponds. EPA Administrator Michael Regan stated that the rule will reduce pollution, protect communities, and improve public health while ensuring a reliable electricity supply for the nation.
Industry groups and Republican-leaning states are expected to challenge the rule, citing concerns about the reliability of the power grid. However, environmental groups have praised the EPA’s actions as crucial in combating climate change and protecting public health.
The rule sets standards for existing coal-fired power plants to control carbon emissions, with future plants required to capture up to 90% of their carbon pollution. Coal-fired power plants must reduce or capture 90% of their carbon emissions by 2032 to continue operating beyond 2039. Plants scheduled to be retired by 2039 will also face stricter standards.
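Schematically, the carbon-capture conditions just described reduce to a simple decision rule. Here is a toy encoding in Python—the 2032 and 2039 dates come from the summary above, while the function and its simplifications are illustrative only, not the EPA's actual regulatory text.

```python
def coal_plant_may_keep_operating(retirement_year: int,
                                  capture_pct_by_2032: float) -> bool:
    """Toy sketch of the rule as summarized above (greatly simplified).

    Plants retiring by 2039 are governed by separate (stricter interim)
    standards; plants intending to run beyond 2039 must reduce or
    capture 90% of their carbon emissions by 2032.
    """
    if retirement_year <= 2039:
        return True  # handled by the separate interim standards
    return capture_pct_by_2032 >= 90.0

print(coal_plant_may_keep_operating(2050, 92.0))  # True: compliant capture
print(coal_plant_may_keep_operating(2050, 50.0))  # False: close or comply
```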
The EPA rule does not mandate carbon capture and storage technology but sets a cap on carbon pollution that power plant operators must adhere to. The regulation also addresses toxic wastewater pollution from coal-fired power plants and the safe management of coal ash, a hazardous byproduct of coal combustion.
Overall, the EPA’s new rule represents a significant step in reducing carbon pollution, protecting public health, and moving towards a cleaner energy future for the United States.
Until recently, visitors to New York essentially had two options: a hotel room, or a short-term rental booked through a platform like Airbnb. But in September 2023, the city began enforcing a 2022 law that prohibits renting out a home for less than 30 days (unless the host stays in the home with the guest).
Today, hotel rooms are the only legitimate option for most visitors, but they are out of reach for many. Most Times Square hotels don’t have rooms for less than $300 a night: searches on Thursday, May 2nd found the Muse for $356, the Hampton Inn for $323, and the Hard Rock for $459 (though with dynamic pricing, these change regularly). And they keep getting more expensive. Hotel prices rose at twice the rate of inflation between the first quarter of 2023 and the first quarter of this year, said Jan Freitag, an analyst at real estate data firm CoStar Group.
Many visitors and New Yorkers are turning to the underground rental market, where Facebook groups, Craigslist posts, Instagram listings, and reviews have become the go-to for finding short-term rentals in the five boroughs.
If you have friends in New York, you’ve probably seen their Instagram stories: “Hello everyone! I’m renting out my room in my 5-bed apartment again for 4 days over Easter! Comes with a dog and a rude roommate! DM me if you’re interested!”
Other travelers have headed to New Jersey, making the cities across the Hudson the nation’s fastest-growing Airbnb demand market, according to analytics site AirDNA. Others are booking hotels, which are expected to become even more expensive in the coming years. For many tourists, a good answer to the so-called Airbnb ban has yet to be found.
Yoya Busquets, 56, had been considering an Airbnb in New Jersey, but she really wants to stay in the city itself when she visits from Barcelona with her husband and two teenage daughters in early September. She took to Facebook, chatting on Messenger with people advertising short-term rentals. The last time she visited New York, in 2012, she stayed at an Airbnb in Brooklyn, and she hopes for a similar experience. She might get lucky.
“I’ve been in contact with a girl who has a room available for a week, and it’s listed on Airbnb as in New Jersey, but when I contacted her, she said it was in Brooklyn,” she said.
The apartment happened to be close to the area she had previously stayed in and was within her $160 per night budget. Considering the cost of a hotel and the space her daughters needed to relax after a busy day, it was the best option she found. But that setup is probably in violation of the new law, which is why the apartment is listed in Jersey.
The Williamsburg Bridge in Brooklyn. For a hotel, “you have to pay about $400 a night, and we don’t have that kind of money,” said one New Yorker trying to accommodate her parents. Photo: Ryan DeBerardinis/Alamy
AirDNA, which tracks data from short-term rental sites like Airbnb and Vrbo, says listings for stays of under 30 days have declined by 83% since August 2023, when the regulations began taking effect. New York City once had 22,200 short-term properties available; that number now stands at just 3,700, according to AirDNA.
Tesin Parra, 24, was juggling her thesis and classes, a search for a job that would let her keep living in the United States, and the hunt for a place for her family to stay when she graduates from New York University’s journalism program in May.
“This is their first time in New York City, so I want them to have a good experience,” Parra, who is originally from India, said of her parents and grandmother. “She wanted to do an Airbnb so she could also cook,” she said of her grandmother. So she was disappointed to learn that short-term rentals were no longer really an option.
Parra wants a place with space for her family to gather. As a sign of her gratitude and respect, she wants to cover the cost of her family’s accommodation and has budgeted around $200 (£160) per night for a week-long stay.
“I’m kind of stuck as to what to do,” Parra said. “Probably a hotel, but I’d have to pay about $400 a night, and I don’t have that kind of money.”
Now, with the double stress of finishing school and facing hotel bills she can’t afford, she’s at a crossroads: choose a hotel and have her parents pay for it, or rent something short-term that is technically illegal in New York.
Without the accountability and protections that platforms like Airbnb offer, dodging scams has become a routine part of the short-term rental search, so Parra skipped Craigslist altogether. She is now considering booking an Airbnb in New Jersey, but worries that the PATH train journey will be an inconvenience for her grandmother.
The regulation was passed with the goal of keeping rents in check for New Yorkers by putting apartment inventory back on the market, but it also cut off a major source of income for New York renters and homeowners who used to list their apartments while they were out of town. Some of them are still looking for ways to bring that money in.
Kathleen, whose last name is withheld for privacy reasons, only recently began listing her East Village apartment on the underground rental market. The 29-year-old travels frequently for her personal finance job and to visit her family in North Carolina. By her account, she’s out of town about four months a year—and, of course, she still has to pay $2,600 a month in rent while she’s away. To recoup some of that money, she started connecting with strangers through Facebook groups.
In 2015, Airbnb protesters gathered at New York City Hall. Photo: Shannon Stapleton/Reuters
“I thoroughly vetted a lot of people,” she said, voicing concern about how her space would be treated given the lack of protections that short-term rental platforms offer hosts. So far she has had two guests: one a weekend visitor, the other staying at her apartment for three weeks in the summer. They pay her $50 a night.
“I always have a side hustle,” she said. “If I can make extra money, why not? I live in a great place. I thought it would be a nice, cute spot.”
This is where a visitor like Juan José Tejada comes in. Tejada, a wellness influencer from Bogotá, Colombia, is visiting New York for nine days in July with his best friend. He began his search by looking at hotels, but soon realized they were too expensive.
“I’m 25 years old. I’m traveling with my best friend. And, you know, we don’t have that much of a budget,” he said. At the suggestion of a cousin who lives in the city, Tejada used Facebook to search for short-term rentals. What he found was four times his budget of $100 to $200 per night. And that wasn’t the only problem.
“When I was looking for short-term rental properties, the payment situation was a little tough,” Tejada said; some of the payment methods hosts requested are not available in Colombia.
Tejada and his friend ended up booking a hostel called HI New York City on the Upper West Side, at about $55 a night for a bunk room with a shared bathroom. Tejada said he considered Airbnbs with an on-site host but couldn’t find a suitable option. It’s not the apartment he dreamed of breezing in and out of as if he were a local, but it’s good enough.
People are coming up with their own solutions for short stays. On Instagram, there are accounts like Book That Sublet NYC, where over 4,000 followers tune in to frequently posted daily and weekly sublets, as well as the endless “book my apartment!” and apartment-swap callouts shared on Instagram Stories. And long-standing home exchange sites like HomeExchange and HomeLink offer visitors another way to get a foot in the door of a city apartment.
Supporters of the new regulations hoped that limiting short-term rentals would bring long-term rentals back onto the market and perhaps help lower rents in the notoriously expensive city. But after nearly seven months, said Jamie Lane, chief economist at AirDNA, there was still no sign of widespread impact.
Jonathan Miller, CEO of appraisal firm Miller Samuel, said that although a small number of apartments have returned to the rental market since the law changed, rents have continued to climb: with mortgage rates high, prospective buyers are holding off on purchases and staying in the rental market, keeping demand elevated.
Parra, the New York University student, doesn’t think the regulations are the most effective way to address New York’s housing crisis. “I don’t understand how this regulation makes sense—not in terms of relieving the burden of the number of Airbnbs, and considering that New York City is an immigrant city. Is it fair?” she said.
But Busquets, who will be visiting in September, has seen firsthand the impact of tourism and short-term rentals on the world-renowned destination.
“I come from a city where the Airbnb craziness is actually displacing local residents and people who have lived there for years,” she said. “Owners wanted to keep the apartments just for short-term rentals because it was more profitable.”
Busquets said Airbnb made Barcelona uninhabitable and she eventually left for the suburbs herself. She added: “It’s changed. It’s not the same city it was 10, 15 years ago.”
The European Parliament has approved the EU’s proposed AI law, marking a significant step in regulating the technology. The next step is formal approval by EU member states’ ministers.
The law will come into force in stages over the next three years, addressing consumer concerns about AI technology.
Guillaume Couneson, a partner at law firm Linklaters, emphasized the importance of users being able to trust that the AI tools they have access to have been vetted and are safe, much as they trust that their banking apps are secure.
The bill’s impact extends beyond the EU as it sets a standard for global AI regulation, similar to the GDPR’s influence on data management.
The bill’s definition of AI includes machine-based systems with varying autonomy levels, such as ChatGPT tools, and emphasizes post-deployment adaptability.
Certain risky AI systems are prohibited, including those manipulating individuals or using biometric data for discriminatory purposes. Law enforcement exceptions allow for facial recognition use in certain situations.
High-risk AI systems in critical sectors will be closely monitored, ensuring accuracy, human oversight, and explanation for decisions affecting EU citizens.
Generative AI systems are subject to copyright laws and must comply with reporting requirements for incidents and adversarial testing.
Deepfakes must be disclosed as artificially generated or manipulated, with appropriate labeling for public understanding.
AI and tech companies have varied reactions to the bill, with concerns about limits on computing power and potential impacts on innovation and competition.
Penalties under the law range from fines for false information provision to hefty fines for breaching transparency obligations or developing prohibited AI tools.
The law’s enforcement timeline and establishment of a European AI Office will ensure compliance and regulation of AI technologies.
In 1914, on the eve of the first world war, H.G. Wells published The World Set Free, a novel about the possibility of an even bigger conflagration. Thirty years before the Manhattan Project, Wells imagined atomic weapons that would allow humankind “[to] carry around enough potential energy in a handbag to destroy half a city.” A global war breaks out, precipitating a nuclear apocalypse. Peace is achieved only by establishing a world government.
Wells was concerned not just with the dangers of new technology, but with the dangers of democracy. His world government is not created by democratic will; it is imposed as a benign dictatorship. “The ruled will show their consent by silence,” King Ecbert of England says menacingly. For Wells, the “common man” was a “violent idiot in social issues and public affairs.” Only an educated, scientifically minded elite could “save democracy from itself.”
A century later, another technology inspires similar awe and fear: artificial intelligence. From Silicon Valley boardrooms to the backrooms of Davos, political leaders, technology moguls, and academics exult in the immense benefits of AI while also fearing that it may announce the end of humanity, as super-intelligent machines come to rule the world. And, as a century ago, questions of democracy and social control are at the heart of the debate.
In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI, the technology company that seized public attention two years ago with the release of ChatGPT, its seemingly human-like chatbot. Fearful of the potential impact of AI, the Silicon Valley moguls had founded the company as a nonprofit charitable trust with the goal of developing the technology ethically, to benefit “all of humanity.”
Levy asked Musk and Altman about the future of AI. “There are two schools of thought,” Musk mused. “Do you want many AIs, or a few? I think more is probably better.”
“Wouldn’t that empower a Dr. Evil?” Levy asked. Altman responded that Dr. Evil is more likely to be empowered if only a few people control the technology: “In that case, we’d be in a really bad situation.”
In reality, that “bad place” is being built by the technology companies themselves. Musk resigned from OpenAI’s board six years ago and is developing his own AI project; he is now suing his former company for breach of contract, accusing it of prioritizing profit over the public interest and of neglecting to develop AI “for the benefit of humanity.”
In 2019, OpenAI created a commercial subsidiary to raise money from investors, particularly Microsoft. When it released ChatGPT in 2022, the inner workings of the model were hidden. Ilya Sutskever, one of OpenAI’s founders and the company’s chief scientist at the time, responded to criticism of this secrecy by claiming that it prevented malicious actors from using the technology to “cause significant damage.” Fear of the technology became a cover for creating a shield from scrutiny.
As AI develops, Sutskever wrote to Musk, it would make sense to become less open: “The ‘open’ in openAI means that everyone should benefit from the results of AI once its [sic] built, but it’s totally fine if you don’t share the science.” “Yes,” Musk replied. Whatever the merits of the lawsuit, Musk, like other tech industry moguls, has hardly been a champion of openness himself. The legal challenge to OpenAI is more a power struggle within Silicon Valley than an attempt at accountability.
Wells wrote The World Set Free at a time of great political turmoil, when many people were questioning the wisdom of extending the suffrage to the working class.
Was it desirable, and even safe, the Fabian Beatrice Webb wondered, to entrust [the masses] with “the ballot box that creates and controls the British government with its vast wealth and far-flung territories”? This was the question at the heart of Wells’s novel: to whom can one entrust the future?
A century later, we are once again engaged in heated debates about the virtues of democracy. For some, the political turmoil of recent years is a product of democratic overreach, the result of allowing irrational and uneducated people to make important decisions. “It’s unfair to put the responsibility of making a very complex and sophisticated historical decision on an unqualified simpleton,” Richard Dawkins complained after the Brexit referendum. Wells would have agreed.
Others respond that such contempt for ordinary people is precisely what ails democracy, leaving large sections of the population feeling deprived of a say in how society is run.
It’s a disdain that also infects discussions about technology. Like The World Set Free, the AI debate is about not only technology but questions of openness and control. For all the alarm, we are far from creating “super-intelligent” machines. Today’s AI models, such as ChatGPT, or Claude 3, released last week by another AI company, Anthropic, are so good at predicting the next word in a sequence that they can fool us into believing we are having a human-like conversation. But they are not intelligent in the human sense, have negligible understanding of the real world, and are not about to destroy humanity.
The problems posed by AI are not existential but social: from algorithmic bias to the surveillance society, from disinformation and censorship to copyright theft. Our concern should not be that machines might someday exercise power over humans, but that they already function in ways that reinforce inequalities and injustices, providing those in power with tools to consolidate their authority.
That is why what we might call “the Ecbert maneuver”—the argument that some technologies are so dangerous that they must be controlled by a select few, insulated from democratic pressure—is so threatening. The problem isn’t just Dr. Evil; it’s the people who use the fear of Dr. Evil to shield themselves from scrutiny.
Britain wants to lead the world in AI regulation. However, AI regulation is a rapidly evolving and contested policy area, with little agreement on what a good outcome looks like, let alone the best way to get there. And being the world’s third most important center of AI research does not confer much power when the first two are the United States and China.
How do we cut this Gordian knot? Simple: Act quickly and decisively and do nothing.
The UK Government has today taken the next step towards legislation to regulate AI. From our story:
The government will admit on Tuesday that binding measures to oversee cutting-edge AI development will be needed at some point, but not immediately. Instead, ministers will set out “initial thoughts on future binding requirements” for advanced systems and discuss them with technical, legal and civil society experts.
The Government will also give regulators £10m to help tackle AI risks and require them to develop their approach to the technology by April 30th.
When the first draft of the AI white paper was published in March 2023, the reaction was negative. The government’s proposal dropped on the same day as the now-infamous open letter calling for a six-month “pause” on AI research to control the risks of out-of-control systems. Against that backdrop, the white paper looked feeble.
The proposal gave regulators no new powers and assigned responsibility for guiding AI development to no single body. Instead, the government planned to coordinate existing regulators, such as the Competition and Markets Authority and the Health and Safety Executive, setting out five principles to guide their approach when considering AI.
This approach was criticized by the UK’s leading AI research body, the Ada Lovelace Institute, as having “significant gaps”, and it ignored the fact that a multi-year legislative process would leave AI unregulated in the interim.
So what has changed? Well, the government has found a (really rather modest) £10m to help regulators “upskill”, and has set an April 30 deadline for the largest regulators to publish their plans for AI. A Department for Science, Innovation and Technology spokesperson said: “The UK Government is in no hurry to legislate and will not risk introducing ‘ready-to-read’ rules that quickly become outdated or ineffective.”
It is a strange definition of “global AI leadership” to lead with, in effect, “we’re not doing anything yet.” The government is, though, “considering” actual regulation, envisioning “future binding requirements that may be introduced for developers building cutting-edge AI systems.”
A second, considerably larger fund of “almost” £90m will launch “nine new centers of excellence across the UK”. The government also announced £2m in funding to support “new research projects that help define what responsible AI looks like”.
There is an element of bathos in reading a government press release triumphantly announcing £2m in funding when, just a week later, Yoshua Bengio, one of the three “godfathers” of AI, called on Canada to spend $1bn building publicly owned supercomputers to keep up with the big tech companies. It’s like bringing a spoon to a knife fight.
You can call it agility in the face of conflicting demands, but after more than 11 months it looks more like an inability to commit. The day before the latest update to the AI white paper was published, the Financial Times broke the news that another pillar of AI regulation had collapsed. From that story:
The Intellectual Property Office, the UK government agency that oversees copyright law, has been in discussions with AI companies and rights holders to produce guidance on text and data mining, in which AI models are trained on existing materials such as books and music.
But a group of industry executives convened by the IPO to oversee the work was unable to agree on a voluntary code of conduct, handing responsibility back to officials at the Department for Science, Innovation and Technology.
Unlike broader AI regulation, a quagmire of conflicting opinions and vague long-term goals, copyright reform is a very clear trade-off. On one side are creative and media companies that own valuable intellectual property; on the other, technology companies that can use that intellectual property to build valuable AI tools. One group or the other will be frustrated by the outcome, and a perfect compromise simply means that both are.
Last month, the head of Getty Images was one of many to call on the UK to back its creative industries, which make up a tenth of the UK economy, over the theoretical benefits that AI could bring in the future. And, faced with a difficult choice with no right answer, the government chose to do nothing. That way, at least, you cannot lead the world in the wrong direction. And isn’t that what leadership is all about?
Completely fake
Joe Biden poses with his smartphone while on the campaign trail. The President of the United States was the subject of a fake video posted on Facebook. Photo: Evan Vucci/AP
To be fair to the government, there are obvious problems with moving too quickly, and social media offers a few examples. Facebook’s rules do not prohibit a deepfake video of Joe Biden, the company’s Oversight Board (also known as its “supreme court”) has found. But in fairness, it is not clear what the rules should prohibit, and that question is only going to get harder. From our story:
Meta’s oversight board found that a Facebook video falsely suggesting that US President Joe Biden is a pedophile did not violate the company’s current rules, but said those rules were “incoherent” and too narrowly focused on AI-generated content.
The board, which is funded by Facebook’s parent company Meta but operates independently, took up the Biden video case in October in response to user complaints about a doctored seven-second video of the president.
Facebook rushed out its “manipulated media” policy several years ago, amid growing concern about deepfakes and before ChatGPT made large language models an AI trend. The rules prohibited misleading, altered videos created with AI.
The problem, the oversight board said, is that the policy is all but impossible to apply, because it has little clear rationale behind it and no clear theory of the harm it seeks to prevent. How can moderators distinguish between videos created by AI (which are prohibited) and videos created by skilled video editors (which are allowed)? And even if they could, why is only the former problematic enough to be removed from the site?
The oversight board proposed updating the rules to remove the reference to AI altogether and instead require labels identifying manipulated audio and video content, regardless of the method of manipulation. Meta said it would update its policy.
Brianna Ghey’s mother is calling for stricter restrictions on smartphones and social media. Photo: Family handout/Cheshire Police/PA
Brianna Ghey’s mother is calling for a revolution in how teenagers access social media after her daughter was murdered by two of her classmates. Under-16s, she says, should be limited to devices built for teenagers, age-restricted by governments and tech companies, that allow parents to easily monitor their children’s digital lives.
I spoke to Archie Bland, editor of the daily newsletter First Edition, about her plea:
This lament will resonate with many parents, but in Brianna’s case it carries a special power. She was “secretly accessing sites on her smartphone that promoted anorexia and self-harm”, according to the petition created by her mother, Esther Ghey. And prosecutors said her killers used Google to search for poisons, “serial killer facts” and ways to combat anxiety, and searched Amazon for rope.
“You don’t need new software to do everything Esther Ghey wants,” says Alex Hern. “But there’s a broader problem here. Just as this sector has historically moved faster than governments can keep up with, it’s also moving faster than parents can. What’s available varies from app to app and changes regularly, so keeping track is a large and difficult job.”
You can read Archie’s full email here (and sign up here to get First Edition every weekday morning).
Wider TechScape
Taylor Swift is one of the Universal Music artists whose work has been stripped from TikTok. Photo: Natasha Pisarenko/AP
The Frontier AI Taskforce, set up by the UK in June in preparation for this week’s AI Safety Summit, is expected to become a permanent fixture as the UK aims to take a leading role in future AI policy. UK Prime Minister Rishi Sunak today formally announced the launch of the AI Safety Institute, a “global hub based in the UK tasked with testing the safety of emerging types of AI”.
The institute was informally announced last week ahead of this week’s summit. This time, the government confirmed that it will be led by Ian Hogarth, the investor, founder and engineer who also chaired the taskforce, and that Yoshua Bengio, one of the most prominent figures in the AI field, will lead the creation of its first report.
It’s unclear how much money the government will put into the AI Safety Institute, or whether industry players will pick up some of the costs. The institute, which falls under the Department for Science, Innovation and Technology, is described as “supported by major AI companies”, but that may refer to endorsement rather than financial backing. We have reached out to DSIT and will update as soon as we know more.
The news coincided with yesterday’s announcement of a new agreement, the Bletchley Declaration, signed by all countries participating in the summit, which pledges joint testing and other commitments around the risk assessment of “frontier AI” technologies such as large language models.
“Until now, the only people testing the safety of new AI models were the companies developing them,” Sunak told journalists this evening. Citing efforts by other countries, the United Nations and the G7 to address AI, he said the plan is to “collaborate to test the safety of new AI models before they are released”.
Admittedly, all of this is still in its early stages. The UK has so far resisted moves to regulate AI technologies, at both the platform level and the level of specific applications, and some think the very idea of quantifying safety and risk is meaningless.
Mr Sunak argued it was too early to regulate.
“Technology is developing at such a fast pace that the government needs to make sure we can keep up,” Sunak said, responding to accusations that he was focusing too much on big ideas and too little on legislation. “Before we make things mandatory and legislate, we need to know exactly what we’re legislating for.”
Transparency may be an avowed goal of many long-term efforts around this brave new world of technology, but today’s series of meetings at Bletchley, on the second day of the summit, was far from that ideal.
In addition to bilateral talks with European Commission President Ursula von der Leyen and United Nations Secretary-General António Guterres, today’s summit focused on two plenary sessions. These were closed to all but a small pool of journalists allowed to watch as people gathered in the room, but attendees included the CEOs of DeepMind, OpenAI, Anthropic, Inflection AI, Salesforce and Mistral, as well as the president of Microsoft and the head of AWS. Among those representing governments were Sunak, US Vice President Kamala Harris, Italy’s Giorgia Meloni and France’s Finance Minister Bruno Le Maire.
Remarkably, although China was a much-touted guest on the first day, it did not appear at the closed plenary session on the second day.
Elon Musk, owner of X (formerly Twitter), also appeared to be absent from today’s sessions. Sunak is scheduled to hold a fireside chat with Musk on the latter’s social platform this evening; notably, it is not expected to be broadcast live.