The UK government has renewed its demand for access to customer data, intensifying its dispute with Apple by again requesting a backdoor into the company’s cloud storage services.
Previously, the Home Office sought access to data protected by Apple’s Advanced Data Protection (ADP) service uploaded by users globally, leading to tensions with the White House.
On Wednesday, the Financial Times reported that the government has issued a new access order, known as a technical capability notice (TCN), which aims to gain access to the encrypted cloud backups of UK citizens.
A Home Office spokesperson said the department does not comment on operational matters, including “confirming or denying the existence of such notices.” The spokesperson added: “We will always take all necessary actions at the national level to ensure the safety of British citizens.”
In February, Apple withdrew ADP for new UK users, advising that existing users would need to deactivate security features in the future. Messaging services such as iMessage and FaceTime continue to be end-to-end encrypted by default.
Tulsi Gabbard, the US director of national intelligence, said in August that the UK had backed down from insisting on access to US customer data. Donald Trump had likened the demand to “something that you hear about with China.”
While Apple did not directly address the FT report, it expressed regret over its inability to provide ADP, an optional extra layer of security, to UK customers, stating it would “never” build backdoors into its products.
“Apple remains dedicated to delivering the highest level of security for personal data, and we hope to achieve this in the UK in the future. As we have said many times before, we have never created a backdoor or a master key for any product or service, and we never will.”
Apple has challenged the initial TCN at the Investigatory Powers Tribunal, which examines whether the UK’s intelligence agencies and authorities have acted unlawfully. The Home Office attempted to keep the case’s details confidential, but a ruling in April confirmed the existence of Apple’s appeal, putting some information into the public domain for the first time.
However, the specifics of the TCN remain undisclosed, and recipients of such notices are prohibited from revealing their existence under the Investigatory Powers Act. The FT reports that the original TCN was “not limited to” data stored under ADP, suggesting the UK government sought access to core, widely used iCloud services.
The ADP service employs end-to-end encryption, ensuring that only account holders can decrypt files like documents and photos, leaving no one else, including Apple, with that capability.
Privacy International, which brought a legal challenge against the first TCN, said the new order “may pose as significant a threat as the first one.” It noted that if Apple were compelled to compromise end-to-end encryption in the UK, it would undermine the entire system and create vulnerabilities affecting all users.
“Such vulnerabilities could be exploited by hostile states, criminals, and other malevolent entities across the globe,” the organization stated.
On Thursday, the U.S. government filed a lawsuit against Uber, alleging that the ride-sharing service has breached federal laws by discriminating against passengers with disabilities.
The complaint, submitted in federal court in San Francisco, claims that Uber drivers frequently refuse to transport disabled riders, including those accompanied by service animals or using wheelchairs.
Additionally, the department stated that Uber and its drivers unlawfully impose cleaning fees for service animals on riders denied service and also charge cancellation fees.
Some drivers have reportedly mocked riders’ legitimate requests, humiliated people with disabilities, or refused to let passengers with mobility challenges sit in the front seat.
According to the Justice Department, “Uber’s discriminatory actions have inflicted significant financial, emotional, and physical harm on individuals with disabilities,” violating the Americans with Disabilities Act.
In response, Uber stated that it disputes the allegations and is dedicated to enhancing access and the overall experience for riders with disabilities.
Uber further asserted that riders using guide dogs or requiring other assistance “deserve a safe, respectful, and welcoming experience with Uber. Full stop.”
The complaint outlines 17 instances of alleged misconduct involving Uber.
One instance involves J.E., a seven-year-old amputee from the Bronx, New York, who was reportedly refused a ride home from his brother’s birthday party by an Uber driver because of his wheelchair.
Another case highlights Jason Ludwig, a Gulf War veteran with a service dog, who was denied a ride to Norfolk Airport in Virginia, causing him to miss his flight and return to Yarmouth, Massachusetts, after 16 hours of travel.
Jeff Clark, a third rider from Mount Laurel, New Jersey, claims that four drivers canceled their ride in Philadelphia within 17 minutes.
The lawsuit aims for an injunction to prevent further violations of the ADA, along with demands for improvements in Uber’s practices and training, financial compensation, and civil penalties.
A spokesperson for the Department of Justice was not immediately available for further comment.
The government is under pressure to explain why it has not yet acted on all of the 2023 review’s recommendations. Its findings concern breaches affecting Afghans who worked alongside the British military, victims of child sexual abuse, and 6,000 disability benefits claimants.
On Thursday, ministers finally published an information security review. The move followed 2023 leaks including one involving the personal data of approximately 10,000 officers and staff of Northern Ireland’s police service.
The Cabinet Office’s review of 11 public sector data breaches, affecting bodies including HMRC, the Metropolitan Police, the benefits system and the MoD, identified three overarching themes:
- insufficient control over the downloading and aggregation of sensitive data
- disclosure of sensitive information through “wrong recipient” emails and misuse of BCC
- hidden personal data left in spreadsheets prepared for release.
The review, completed 22 months ago, was published just a month after the leak of a database containing details of 18,700 Afghans came to light. Its release was welcomed by Chi Onwurah, chair of the Science, Innovation and Technology Committee. However, she noted that the breaches had instilled fear among Afghans concerned for their safety under the Taliban and left others wary of the UK government, which had promised relocation to thousands of Afghans under a confidential scheme.
The government said it has acted on 12 of the 14 recommendations aimed at enhancing data security. Onwurah stated: “There are still questions that the government must address regarding the review. Why have only 12 of the 14 recommendations been implemented?”
“For the government to leverage technology to boost the economy and fulfill its aspirations for public sector transformation, it must earn its citizens’ trust in safeguarding their data.”
Information Commissioner John Edwards urged the government to “encourage the broader public sector to expedite the organisation of its practices” in securing data across Whitehall and beyond.
He emphasized to Pat McFadden, the Cabinet Office minister, on Thursday: “It is imperative that the government fully implements the recommendations of the information security review.”
It remains unclear which of the 14 recommendations are still pending. The full list includes collaborating with the National Cyber Security Centre to disseminate existing guidance on the technical management of products and services handling “official” information, the marking of “official” information, launching a “behavioral impact communication campaign” to combat ongoing deficiencies in information handling, and a “review of sanctions related to negligence.”
McFadden and Peter Kyle, the secretary of state for science, innovation and technology, set this out in a letter to Onwurah on Thursday.
A spokesperson for the government stated: “This review concluded in 2023 under the previous administration.
“Safeguarding national security, particularly government data security, remains one of our top priorities. Since taking office, we have introduced plans to enhance inter-sector security guidance, update enforcement training for civil servants, and improve the digital infrastructure throughout the public sector, aligning with the shift towards modern digital governance.”
Donald Trump and the commerce secretary, Howard Lutnick, have announced that the US government has taken a groundbreaking 10% stake in Intel through a deal with the struggling chipmaker, marking another significant White House intervention in corporate America.
Lutnick stated on X: “Big News: The United States now owns 10% of Intel, one of our nation’s leading technology firms. We extend our gratitude to Intel CEO @Lipbutan1 for negotiating an agreement that is fair to Americans.”
Trump met Intel’s chief executive, Lip-Bu Tan, on Friday and posed for a photo alongside Lutnick. The deal followed the US president’s demand that Tan resign over his ties to Chinese companies, and a subsequent meeting between Tan and Trump earlier this month.
“He came to us wanting to keep his job, and ultimately committed $10 billion to the US, so we secured $10 billion,” Trump said on Friday.
Although Trump did not explain how the $10 billion sum was derived, it roughly corresponds to the financial assistance Intel receives from the government under the Chips and Science Act to build US chip manufacturing facilities.
The Intel stake is the latest in a series of extraordinary deals brokered by the Trump administration, including an arrangement allowing the AI chip giant Nvidia to sell its H20 chips to China; AMD has pursued a similar transaction.
Additionally, the Department of Defense is poised to become the principal shareholder in a small mining company to boost production of rare-earth magnets, and the US government negotiated specific veto rights and a “golden share” as part of the deal enabling Nippon Steel to acquire US Steel.
The extensive range of US government interventions in corporate affairs is raising concerns among critics who argue that Trump’s measures will establish a new category of corporate risk.
The deal follows a $2 billion capital infusion from SoftBank Group, a significant endorsement for the troubled US chipmaker as it attempts a turnaround. Daniel Morgan, senior portfolio manager at Synovus Trust, said Intel’s challenges extend beyond what the financial boosts from SoftBank or the government can address.
“Without government backing and strong financial allies, it’s tough for Intel’s foundry unit to generate enough capital to keep expanding fabs at a reasonable pace,” he stated. “They need to catch up with TSMC [Taiwan Semiconductor Manufacturing Company] to be technically competitive.”
The 10% stake is valued at approximately $10 billion at the current stock price. Lutnick noted this week that these shares do not confer voting rights, meaning the US government cannot dictate the company’s operational decisions.
Federal backing could give Intel more leeway to revitalize its struggling foundry business, analysts observe, though it still faces weaknesses in its product roadmap and challenges in attracting customers to its new factories.
Tan, who took over as Intel’s chief executive in March, has the responsibility of reviving the iconic American chipmaker, which reported a loss of $18.8 billion in 2024, its first since 1986.
When Ming* discovered a concealed camera in her bedroom, she initially hoped for a benign explanation, suspecting her boyfriend might have set it up to capture memories of their “happy life” together. That hope quickly morphed into fear as she realized her boyfriend had been secretly taking sexually exploitative photos of her and her female friends, as well as other women in various locations. He had even used AI technology to create pornographic images of them.
When Ming confronted him, he begged for forgiveness but became angry when she refused to reconcile, she told the Chinese news outlet Jimu News.
Ming is not alone; many women in China have fallen victim to voyeuristic filming in both private and public spaces, including restrooms. Such images are often shared or sold online without consent. Sexually explicit photos, frequently captured via pinhole cameras hidden in everyday objects, are disseminated in large online groups.
This scandal has stirred unrest in China, raising concerns about the government’s capability and willingness to address such misconduct.
A notable group on Telegram, an encrypted messaging app, is the “Maskpark Tree Hole Forum,” which reportedly boasted over 100,000 members, mostly male.
“The Mask Park incident highlights the extreme vulnerability of Chinese women in the digital realm,” stated Li Maizi, a prominent Chinese feminist based in New York, to the Guardian.
“What’s more disturbing is the frequency of perpetrators who are known to their victims: committing sexual violence against partners, boyfriends, and even minors.”
The scandal ignited outrage on Chinese social media, stirring discussions about the difficulty of combating online harassment in the country. While Chinese regulators are equipped to impose stricter measures against online sexual harassment and abuse, their current focus appears to be suppressing politically sensitive information, according to Eric Liu, a former content moderator for Chinese social media platforms and now an editor at the US-based China Digital Times.
Since the scandal emerged, Liu has observed “widespread” censorship of the MaskPark incident on the Chinese internet. Posts with the potential for social impact, especially those related to feminism, are frequently censored.
“If the Chinese government had the will, they could undoubtedly shut down the group,” Liu noted. “The scale of [MaskPark] is significant. Cases of this magnitude have not gone unchecked in recent years.”
Nevertheless, Liu said he is not surprised. “Such content has always existed on the Chinese internet.”
In China, individuals found guilty of disseminating pornographic material can face up to two years in prison, while those who capture images without consent may be detained for up to ten days and fined. The country also has laws designed to protect against sexual harassment, domestic violence, and cyberbullying.
However, advocates argue that the existing legal framework falls short. Victims often find themselves needing to gather evidence to substantiate their claims, as explained by Xirui*, a Beijing-based lawyer specializing in gender-based violence cases.
“Certain elements must be met for an action to be classified as a crime, such as a specific number of clicks and subjective intent,” Xirui elaborated.
“Additionally, there’s a limitation on public safety lawsuits where the statute of limitations is only six months, after which the police typically will not pursue the case.”
The Guardian contacted China’s Foreign Ministry for a statement.
Beyond legal constraints, victims of sexual offenses often grapple with shame, which hinders many from coming forward.
“There have been similar cases where landlords set up cameras to spy on female tenants. Typically, these situations are treated as privacy violations, which may lead to controlled detention, while victims seek civil compensation,” explained Xirui.
To address these issues, the government could strengthen specialized laws, enhance gender-based training for law enforcement personnel, and encourage courts to provide guidance with examples of pertinent cases, as recommended by legal experts.
For Li, the recent occurrences reflect a pervasive tolerance for and lack of effective law enforcement regarding these issues in China. Instead of prioritizing the fight against sexist and abusive content online, authorities seem more focused on detaining female writers involved in homoerotic fiction and censoring victims of digital abuse.
“The rise of deepfake technology and the swift online spread of covertly filmed content have made women’s bodies digitally accessible on an unparalleled scale,” stated Li. “If the authorities truly wished to address these crimes, it would be entirely feasible to track and prosecute them, provided the Chinese government invested the necessary resources.”
*Name changed
Additional research by Lillian Yang and Jason Tang Lu
Sam Altman, at the helm of one of the world’s leading artificial intelligence firms, has signed an agreement with the UK government to explore the use of sophisticated AI models in sectors including justice, security, and education.
OpenAI, the maker of the ChatGPT suite of language models, is valued at $300bn (£220bn). On Monday, its CEO signed a memorandum of understanding with Peter Kyle, the secretary of state for science, innovation and technology.
This agreement closely follows a similar pact between the UK government and OpenAI’s competitor, Google, a prominent technology company from the U.S.
Under the latest agreement, OpenAI and the government have committed to “collaborate in identifying avenues for the deployment of AI models throughout government,” aiming to “enhance civil servants’ efficiency” and “assist citizens in navigating public services more efficiently.”
They plan to co-develop AI solutions that address “the UK’s toughest challenges, including justice, defense, security, and educational technology,” fostering a partnership that “boosts public interaction with AI technology.”
Altman has previously suggested that AI labs could this year reach a milestone known as artificial general intelligence, meaning performance matching human-level proficiency across a wide range of tasks.
Nonetheless, public sentiment in Britain is split over the risks and benefits of these fast-advancing technologies. An Ipsos survey found that 31% of respondents were excited about the potential, although they harbored some concerns, while 30% were predominantly worried about the risks but somewhat intrigued by the possibilities.
Kyle remarked, “AI is crucial for driving the transformation we need to see nationwide. This involves revitalizing the NHS, eliminating barriers to opportunities, and stimulating economic growth.”
He emphasized that none of this progress could be attained without collaboration with a company like OpenAI, underscoring that the partnership would “equip the UK with influence over the evolution of this groundbreaking technology.”
Altman stated: “The UK has a rich legacy of scientific innovation, and its government was among the pioneers in recognizing the potential of AI through its AI Opportunity Action Plan. It’s time to actualize the plan’s objectives by transforming ambition into action and fostering prosperity for all.”
OpenAI plans to broaden its operations in the UK beyond its current workforce of over 100 employees.
In addition, as part of an agreement with Google disclosed earlier this month, the Department for Science, Innovation and Technology announced that Google DeepMind, the AI division led by Nobel laureate Demis Hassabis, will “collaborate with government tech experts to facilitate the adoption and dissemination of emerging technologies,” promoting advances in scientific research.
OpenAI already provides technology that powers AI chatbots, enabling small businesses to more easily obtain guidance and support from government websites. This technology is utilized in tools like the Whitehall AI assistant, designed to expedite the processes for civil servants.
AI can streamline government paperwork, yet significant risks exist
Brett Hondow / Alamy
A number of nations are exploring how artificial intelligence might assist with various tasks, ranging from tax processing to decisions about welfare benefits. Nonetheless, research indicates that citizens are not as optimistic as their governments, potentially jeopardizing democratic integrity.
“Focusing exclusively on immediate efficiency and appealing technologies could provoke public backlash and lead to a long-term erosion of trust and legitimacy in democratic systems,” states Alexander Utzke, at Ludwig Maximilian University in Munich, Germany.
Utzke and his team surveyed around 1,200 individuals in the UK to gauge their perceptions regarding whether human or AI management was preferable for government functions. These scenarios included handling tax returns, making welfare application decisions, and assessing whether a defendant should be granted bail.
Participants were divided; some learned only about AI’s potential to enhance governmental efficiency, while others were informed about both the advantages and the associated risks. The risks highlighted included the challenges in discerning how AI makes decisions, an increasing governmental reliance on AI that may be detrimental in the long run, and the absence of a straightforward method for citizens to challenge or modify AI determinations.
When participants became aware of these AI-related risks, there was a marked decline in their trust towards the government and an increased feeling of losing control. For instance, the percentage of those who felt government democratic control was diminishing rose from 45% to over 81% when scenarios depicted increasing governmental dependence on AI for specific functions.
After learning about the risks, the percentage of individuals expressing skepticism regarding government use of AI surged significantly. It jumped from under 20% in the baseline scenario to over 65% when participants were informed of both the benefits and risks of AI in the public sector.
Despite these findings, democratic governments maintain that AI can be used responsibly in ways that uphold public trust, according to Hannah Quay-de la Vallee of the Center for Democracy and Technology in Washington DC. However, she notes that there have been few successful applications of AI in governance to date, and several observed failures, which can have serious consequences.
For instance, attempts by several US states to automate the processing of benefits claims have resulted in tens of thousands of people being wrongly accused of fraud; some of those affected went bankrupt or lost their homes. “Mistakes made by the government can have significant, long-lasting repercussions,” warns Quay-de la Vallee.
The vapor detector can identify traces of fentanyl and other substances in the air
Elizabeth Denis/Pacific Northwest National Laboratory
The US Customs and Border Protection agency is evaluating technology capable of detecting illegal substances in the air without any physical contact. The device aims to quickly screen items at the border to combat trafficking of drugs like fentanyl, a major contributor to the US opioid crisis.
Detecting drugs and explosive compounds is challenging because they release relatively few molecules into the air. To tackle this, the US Pacific Northwest National Laboratory (PNNL) has spent over a decade developing an advanced system known as VaporID, which can accurately identify certain substances from 0.6 to 2.4 meters away, with sensitivity as fine as a quarter of a part per trillion. That level of precision is akin to locating a single coin in a stack of pennies 17 million times taller than Mount Everest.
The scientists improved sensitivity by giving molecules more time to undergo the detectable chemical reactions that arise from random collisions with other molecules. While most devices used to identify unknown substances let molecules react for only a few milliseconds, Robert Ewing at PNNL notes: “We created an atmospheric flow tube. This extends the reaction time to 2 to 3 seconds, which boosts sensitivity by three orders of magnitude.”
The technology has been integrated into a compact, microwave-sized device weighing 18 kilograms. Developed by California-based company Bayspec, this miniaturized machine is still less sensitive than the larger, fridge-sized version used at the PNNL lab. However, Bayspec’s CEO, William Yang, claims it is “more accurate and sensitive than a dog.”
In October 2024, researchers from Bayspec and PNNL tested the portable device at a Customs and Border Protection (CBP) facility in Nogales, Arizona. During separate trials, researchers swiped the surfaces of seized tablets and heated the swabs to generate steam for detection. “Both techniques yielded strong and reliable results,” says Christian Thoma of Bayspec.
The prototype is still under evaluation and requires further scientific data analysis, according to a CBP spokesperson.
Alex Krotulski of the Center for Forensic Science Research and Education, a nonprofit in Pennsylvania, comments: “We’ve encountered many devices that promise too much, so we’re cautious until they demonstrate efficacy through extensive research and assessments.”
There are already portable methods, such as X-rays, for uncovering hidden drugs. Richard Crocombe, an independent consultant in Massachusetts, considers the new tool “another valuable addition to the arsenal,” but cautions that it “doesn’t fulfill all requirements.” A CBP spokesperson acknowledged that while the device could expedite drug testing in field labs, its results may still require analysis by trained chemists.
These screening methods are also prone to false positives; “drug residues can be quite ubiquitous,” says Joseph Palamar at New York University. One study indicated that most US banknotes carry drug traces. “If you happen to be near someone using fentanyl, the device could react to trace amounts on their clothing or shoes. This raises concerns about innocent individuals being wrongly detained,” warns Chelsea Shover at UCLA.
Preventing drugs from entering the country is just one piece of the larger strategy needed to address the opioid crisis, according to Shover. It also calls for robust public health agencies, better access to healthcare, and comprehensive treatment options, resources she says are currently being cut under the Trump administration. “To save lives, we need evidence-based, effective treatments that are easier to access than illegal drugs,” Shover concludes.
Australians could soon be able to download apps from outside the Apple App Store and avoid surcharges on in-app purchases on iPhones under a federal government proposal. However, tech companies have warned that competition regulations similar to those in the EU might jeopardize security and adversely affect competition.
Currently, Australian users can’t subscribe to services like Netflix or Spotify through the iOS app. Additionally, Google imposes a premium for YouTube subscriptions via the App Store, while Amazon does not permit Kindle users to buy e-books through the app.
The reason for this is that Apple imposes a fee of up to 30% on in-app purchases, significantly impacting high-grossing apps. Due to Apple’s policies, companies are restricted from guiding customers on alternative purchase methods.
In released papers last November, the government proposed to “designate” digital platforms like the Apple App Store.
This would compel these platforms to meet obligations aimed at mitigating what the government perceives as anti-competitive practices.
The paper cites Apple’s requirement that developers use its in-app payment system as an example of the conduct regulators could target. Designation would also allow users to download apps from outside the official app store, a process known as sideloading.
In response to the proposal, Apple cautioned that the government should refrain from adopting the EU’s Digital Markets Act (DMA) as a “blueprint” for its strategy.
Apple stated: “The DMA demands adjustments to Apple’s ecosystem that may elevate privacy and security threats to users, create opportunities for malware and fraud, and expose users to illegal or harmful content.”
The company asserted that the 30% fee applies only to the highest-grossing apps, emphasizing that about 90% of transactions on iOS apps do not incur Apple’s cut. Many developers reported being charged a lower fee of 15%.
Apple has also expressed concerns about sideloading, highlighting security issues that could arise if users install apps without any vetting process. In the EU, the company says, such apps have included explicit content and tools for copyright violations.
Sideloading is already how users install software on MacBooks and other conventional computers, and the Android platform likewise permits sideloaded apps and third-party transactions outside the Google Play Store.
Apple has also indicated that the DMA is responsible for delaying the rollout of its AI features.
Foad Fadaghi, managing director and principal analyst at Telsyte, mentioned that while opening the Apple platform could benefit some users, the majority are unlikely to alter their usage of the iPhone.
“Users value the security and privacy of Apple devices. In many cases, they will keep the locked-down mode as the default,” he noted.
Australia isn’t isolated in this regard; Apple faces restrictions and legal challenges over its App Store controls in Asia, Europe, and the US. The company complies with local regulations where required but has resisted pressure to apply those changes to its App Store practices globally, although it previously modified its hardware worldwide to comply with EU regulations mandating a USB-C connector.
The government has yet to announce the next steps in the process, and the Treasury has not yet released submissions to the paper.
The federal court ruling regarding Epic Games’ lawsuit against Google concerning App Store practices is still pending nearly a year after the hearing concluded.
Defiant peers have mounted a significant challenge to the government, insisting that artists receive copyright protections against artificial intelligence companies or the legislation risks falling.
The government has suffered its fifth defeat in the House of Lords over a controversial plan that would permit AI companies to train their models on copyrighted material.
By 221 votes to 116 on Wednesday, peers insisted on amendments that would require AI companies to be transparent about the material used to train their models.
Speaking at an awards event after the vote, Elton John said copyright protection was an “existential issue” for artists and called on the government to “do the right thing.”
He remarked: “We will not let the government forget their promise to support the creative industry. We will not retreat, and we will not go quietly. This is just the beginning.”
Wednesday night’s vote highlights the ongoing conflict between the Commons and the Lords over a data bill utilized by campaigners to challenge the government’s proposed copyright reforms.
Leading the opposition to the Lords’ changes is crossbench peer and film director Beeban Kidron, whose amendments consistently receive support from the upper chamber.
The data bill faces the likelihood of being shelved unless the Commons agrees to Kidron’s amendments or presents alternative solutions.
Maggie Jones, the minister for the digital economy and online safety, urged peers to vote against the Kidron amendment after the government proposed last-minute concessions to avert another defeat.
Before the vote, Jones said peers “must decide whether to jeopardize the entire bill” and claimed that voting for Kidron’s amendment would be “unprecedented”: an attempt to block a bill that does not undermine copyright law and that addresses important issues such as combating sexually explicit deepfake images.
Kidron told peers: “This is the last chance to urge the government to implement meaningful solutions,” pressing the minister to take solid steps ensuring AI companies adhere to copyright regulations.
“It is unfair and irrational for the creative industry to suffer at the hands of those who take their jobs and assets. It’s not neutral.”
“We have repeatedly asked both houses: What is the government doing to stop creative work from being stolen? There has been no response.”
Several peers criticized the notion that the Lords’ actions were unprecedented, arguing that the government itself is breaking precedent by refusing to compromise. Tim Clement-Jones, a Liberal Democrat spokesman for the digital economy, voiced strong support for Kidron’s amendments.
Beeban Kidron asked why the government was neglecting the UK’s interests by handing over the wealth and labor of the country. Photo: Curlcoat/Getty
The Lords’ amendments put the data bill into a state of “double insistence,” in which the Commons and the Lords each refuse to accept the other’s position. In that circumstance, the bill falls unless ministers accept the rebel revisions or offer alternative changes through parliamentary procedure. Although the failure of a bill is uncommon, it has happened before, notably with the European Parliamentary Elections Bill in the 1997-98 session.
Under parliamentary convention, the Commons holds the stronger position as the elected House, and in rare situations where the Lords refuse to concede, ministers can use the Parliament Acts to pass the bill in the following session, which could significantly delay the legislation.
As a concession to the peers on Tuesday night, the government pledged to release additional technical reports on the future of AI and copyright regulations within nine months, rather than the previously proposed twelve.
“Many peers have said that they do not feel heard during ping-pong,” Jones noted in her letter.
Jones pointed out that by updating the Data Protection Act, the data bill is projected to yield £10 billion in economic benefits, enhancing online safety and strengthening the authority to require social media companies to retain data following a child’s death.
Kidron asserted: “It would be wise for the government to accept the amendment or propose something meaningful in its place. They have failed to listen to the Lords, to the creative sector, and even to their own supporters.”
Under the proposed government regulations, AI companies would be authorized to train their models using copyrighted works unless the owners specifically opt out. This plan has garnered heavy criticism from creators and publishers, including renowned artists such as Paul McCartney and Tom Stoppard.
Technology secretary Peter Kyle has expressed regret that the consultation on changes to copyright law presented the opt-out system as its “preferred option,” but there is thought to be resistance within Downing Street to making further concessions.
Sir Elton John called the UK government “absolute losers” over its proposal that would enable tech firms to utilize copyrighted material without authorization.
The renowned singer-songwriter described the alteration of copyright laws in favor of artificial intelligence companies as a “crime.”
In a Sunday interview on BBC One’s Laura Kuenssberg programme, John said the government was “robbing young people of their legacy and their income,” adding: “I consider it a criminal act. The government are absolute losers, and I’m extremely upset about it.”
John referred to technology secretary Peter Kyle as “a little idiot,” stating that he would take legal action against the minister if the government does not revise its copyright strategy. Kyle has recently faced criticism for being too close to Big Tech, following reports of increased meetings with companies including Google, Amazon, Apple, and Meta since Labour’s election victory last July.
His comments came before peers voted on a proposal from crossbench peer Baroness Kidron that would require AI companies to disclose their use of copyrighted material.
He mentioned a similar amendment passed last week, which the government is likely to strip out in the Commons, a parliamentary tussle that could jeopardize the data bill.
“I feel incredibly betrayed. The House of Lords voted two to one in our favour. Yet the government appears to think, ‘Well, old man… we can manage it as we wish,’” John said.
The government is currently reviewing proposals that would permit AI companies to train their models (a technology that underpins products like chatbots) using copyrighted work without obtaining permission. A source close to Kyle indicated that this option is no longer favored in consultations, but it remains under consideration.
Alternative options include maintaining the status quo, requiring AI companies to acquire licenses for using copyrighted content, or allowing AI companies to exploit copyrighted works without creative professionals having a say.
A government spokesman said: “We will not entertain copyright modifications unless we are fully assured they benefit creators.” He added that the government’s recent commitment to an economic impact assessment of the proposal would investigate “a broad array of issues and options across all aspects of the discussion.”
For the first time, AI tools are being utilized to evaluate public feedback on government consultations, with plans for broader adoption to help conserve money and staff resources.
The tool, called “Consult,” was initially implemented by the Scottish government to gather insights on regulating non-surgical cosmetic procedures like lip fillers.
According to the UK government, this tool is employed to analyze responses and deliver results comparable to human-generated outputs, with ongoing development aimed at reviewing additional consultations.
It examined over 2,000 responses while highlighting key themes, which were subsequently verified and enhanced by experts from the Scottish government.
The government has developed the consultation tool as part of a new suite of AI technologies known as “Humphrey.” They assert it will “accelerate operations in Whitehall and decrease consulting expenditures.”
Officials claim that, through the 500 consultations conducted each year, this innovative tool could save UK taxpayers £20 million annually, freeing up approximately 75,000 hours for other tasks.
Michael Rovatsos, a professor of artificial intelligence at the University of Edinburgh, notes that while the tool’s benefits are significant, the potential for AI bias should not be disregarded.
“The intention is for humans to always oversee the process, but in practice, people may not have the time to verify every detail, leading to bias creeping in,” he stated.
Rovatsos also expressed concerns that domestic and international “bad actors” could potentially compromise AI integrity.
“It’s essential to invest in ensuring our systems are secure and effective, which requires significant resources,” he remarked.
“Maximizing benefits while minimizing harm demands more initial investment and training than is typically expected. Ministers and civil servants might see this merely as a cost-saving quick fix, but it is crucial and complex.”
The government asserts that the consultation tool operates 1,000 times faster than humans and is 400 times less expensive, with conclusions “remarkably similar” to those of experts, albeit with less detail.
Discussing the launch of the tool, technology secretary Peter Kyle claimed it would save “millions” for taxpayers.
“There’s no reason to spend time on tasks that AI can perform more quickly and effectively, let alone waste taxpayer money contracting out such work,” he said.
“With promising outcomes, Humphrey helps lower governance costs and efficiently compiles and analyzes feedback from both experts and the public regarding vital issues.”
“The Scottish government has made a courageous first move, and the tool will soon be rolled out across their own department and others within Whitehall.”
While there is no set launch date, with the tool still pending governmental approval, deployment across government departments is anticipated by the end of 2025.
The government has suffered another defeat in the House of Lords over proposals that would permit artificial intelligence firms to use copyrighted material without authorization.
An amendment to the data bill, which required AI companies to specify which copyrighted content is used in their models, received support from peers despite government resistance.
This marks the second time peers have backed an amendment requiring tech firms to disclose whether they have used copyrighted material.
The vote took place shortly after a coalition of artists and organizations, including Paul McCartney, Jeanette Winterson, Dua Lipa, and the Royal Shakespeare Company, urged the Prime Minister to “not sacrifice our work for the benefit of a few powerful foreign tech companies.”
The amendment, tabled by crossbench peer Baroness Kidron, passed by 272 votes to 125.
The bill now returns to the House of Commons. Should the government strip out Kidron’s amendments, it will set up yet another confrontation with the Lords next week.
Baroness Kidron stated: “We aim to refute the idea that those opposing government initiatives are against technology. Creators acknowledge the creative and economic benefits of AI, but we dispute the notion that AI should be developed for free using works that were appropriated.”
“My Lords, this poses a substantial threat to the British economy, impacting sectors worth £120 billion. The UK thrives in industries central to our industrial strategy and significant cultural contributions.”
The government’s copyright proposals are under separate review this year, but opponents are using the data bill as a platform to voice their objections.
The primary government proposal would allow AI companies to use copyrighted works in model development without prior permission, unless copyright holders opt out. Critics argue that such an opt-out scheme is neither practical nor feasible.
Nevertheless, the government contends that the existing framework hinders both the creative and technical sectors and necessitates legislative resolutions. They have already made one concession by agreeing to an economic impact assessment of their proposals.
Technology secretary Peter Kyle said this month that the “opt-out” scenario is no longer his favored path, and that various alternatives are being evaluated.
A spokesperson from the Department of Science, Innovation, and Technology stated that the government would not rush into copyright decisions or introduce relevant legislation hastily.
According to court documents filed on Monday in a lawsuit over the deletions, the Agriculture Department plans to reinstate climate change information that was removed from its website when President Trump took office.
The omitted information encompassed pages detailing federal funding and loans, forest conservation, and rural clean energy initiatives. This also included sections from the U.S. Forest Service and the Natural Resources Conservation Service, featuring climate risk viewers with comprehensive maps illustrating how climate change affects national forests and grasslands.
The February lawsuit indicated that farmers’ access to pivotal information was hindered, affecting their ability to make timely decisions amid business risks tied to climate change, such as heat waves, droughts, floods, and wildfires.
The lawsuit was filed by the Northeast Organic Farming Association of New York alongside two environmental organizations, the Natural Resources Defense Council and the Environmental Working Group.
The plaintiffs sought a court order requiring the department to restore the deleted pages; on Monday, the government committed to doing so.
Jay Clayton, the U.S. attorney for the Southern District of New York, who represents the Agriculture Department in the suit, informed Judge Margaret M. Garnett that the department has begun restoring the pages and interactive tools highlighted in the complaint, and that it expects to substantially complete the restoration in about two weeks.
Clayton requested a postponement of the hearing set for May 21, suggesting a report on the restoration progress be submitted in three weeks, and mentioned he is working on determining “the appropriate next steps in this lawsuit.”
“We are pleased that the USDA has recognized that its unlawful removal of climate change-related information was detrimental to farmers and communities nationwide,” stated Jeffrey Stein, associate attorney at Earthjustice, an environmental law nonprofit that represents the plaintiffs alongside the Knight First Amendment Institute at Columbia University.
Sources familiar with the matter revealed that the Federal Railroad Administration (FRA) has approached a tunneling firm established by Elon Musk.
FRA officials met with employees of the Boring Company to discuss cost assessments and progress on the Frederick Douglass Tunnel Program, a new tunnel intended to improve the heavily trafficked Amtrak route through Baltimore. The project’s development cost was initially projected at $6 billion, but estimates have since surged to $8.5 billion.
A Department of Transportation official who oversees the FRA met with Boring Company staff last month to discuss whether the firm could identify ways to build the tunnels more cheaply and efficiently, according to two people familiar with the discussions.
Nathaniel Sizemore, a spokesperson for the Department of Transportation, confirmed the involvement of the Boring Company among various entities under consideration for a new engineering contract, but he withheld the names of other firms.
These discussions have sparked concerns regarding Musk’s potential conflict of interest as he manages his business interests while simultaneously advising President Trump. Musk oversees at least six companies, including the electric car manufacturer Tesla and the aerospace company SpaceX, while also aiming to boost the efficiency of government operations, which has resulted in reduced employment and resources within federal agencies regulating his ventures.
In various instances, conflicts of interest have surfaced. Trump showcased a Tesla on the White House lawn in March, even as federal agencies push for broader adoption of SpaceX’s Starlink Satellite Internet Service.
Last month, Musk mentioned reducing his time in Washington amid criticisms that he was sidelining his responsibilities at Tesla.
The projected tunnel cost has skyrocketed by $2.5 billion, and Amtrak has yet to devise strategies for cost containment, the Department of Transportation indicated in a statement.
“The department recognizes the significance of engaging with multiple stakeholders in the infrastructure engineering domain to realign the project,” Sizemore stated.
Amtrak has not provided immediate comments. Neither the Boring Company nor Musk responded to inquiries for statements.
The Frederick Douglass Tunnel is intended to replace the 152-year-old Baltimore and Potomac Tunnel, a 1.4-mile stretch of Amtrak’s Northeast Corridor, in what has been described as “the largest infrastructure initiative” Amtrak has undertaken. In a report last year, Amtrak’s Office of Inspector General also raised concerns that costs had ballooned and deadlines were slipping; the tunnel was originally scheduled for completion by 2035.
Last year, Amtrak awarded the construction contract to a joint venture of two firms, Kiewit and JF Shea. The joint venture did not immediately provide comment.
Amtrak has previously faced scrutiny from Republican figures, including Senator Ted Cruz of Texas and now-Vice President JD Vance, who criticized it for awarding federal funding to projects favoring “Northeastern states over others.”
Musk has previously criticized Amtrak and suggested prioritizing privatization of federally-owned railways.
“If you’re coming from another country, do not rely on our national railway,” Musk said of Amtrak during a March discussion with bankers. “It leaves a negative impression of America.”
Musk and his companies have longstanding business before the Department of Transportation. Following a deadly collision between an Army helicopter and a commercial jet in January, Transportation Secretary Sean Duffy said SpaceX staff would visit the Federal Aviation Administration’s Air Traffic Control Command Center in Virginia the following month to offer safety proposals.
Musk is advocating for the FAA to terminate its substantial air traffic control agreement with Verizon in favor of the Starlink system.
Throughout the years, Musk has championed various transportation innovations, from Tesla electric vehicles to SpaceX rockets, hyperloops, and vacuum tubes designed for high-speed transit of people and goods. The Boring Company, which has raised over $900 million in venture capital, has yet to realize most of its proposed plans in the U.S.
In a 2017 tweet, Musk claimed he would transport passengers from New York to the capital in 30 minutes, stating he had secured “verbal government approval” to construct an underground hyperloop connecting New York City, Philadelphia, Baltimore, and Washington.
Two years later, the Boring Company proposed a plan to the Department of Transportation to create a 35-mile underground vehicle loop between Baltimore and Washington, promising completion within two years. However, by 2021, the project was removed from the company’s site and appears inactive now.
Steve Davis, the leader of the Boring Company, has worked alongside Musk on the Trump administration’s government-efficiency initiatives. A trusted associate of Musk’s, Davis was appointed to lead the tunneling firm in 2018 to execute Musk’s vision for low-cost tunneling.
Musk has at times expressed dissatisfaction with the pace of the Boring Company’s projects. In a recent Fox News interview, Davis characterized his government-efficiency work as an effort to avert national bankruptcy and emphasized his commitment to assisting Musk.
Davis did not respond to a request for commentary.
The National Oceanic and Atmospheric Administration announced on Thursday that it will stop tracking the nation’s most costly disasters, those inflicting damages of at least $1 billion.
This decision means insurance firms, researchers, and policymakers will lack crucial data for understanding trends in significant disasters like hurricanes, droughts, and wildfires, which have become more frequent in recent years. While not all disasters stem from climate change, such events are intensifying as global temperatures rise.
This latest move marks another step by the Trump administration to restrict or eliminate climate research. Recently, the administration has dismissed the contributors to the country’s largest climate study, proposed cuts to climate-related grants for national parks, and unveiled a budget that would significantly reduce climate science funding at the U.S. Geological Survey, the Department of Energy, and the Department of Defense.
Researchers and lawmakers expressed their disapproval of this decision on Thursday.
Jesse M. Keenan, an associate professor and director of climate change and urbanism at Tulane University in New Orleans, stated that halting data collection will hinder federal and state governments in making informed budgetary and infrastructure investment decisions.
“It’s illogical,” he remarked. Without a comprehensive database, “the U.S. government will be blind to the financial impacts of extreme weather and climate change.”
In comments on Bluesky, Senator Ed Markey, a Democrat from Massachusetts, described this move as “anti-science, anti-safety, and anti-American.”
Virginia Iglesias, a climate researcher at the University of Colorado, emphasized that few organizations can replicate the unique information provided by this database. “This represents one of the most consistent and trustworthy records of climate-related economic losses in the nation,” she said. “The database’s strength lies in its reliability.”
The so-called billion-dollar disasters, those whose costs run to ten figures, are on the rise. In the 1980s, there were on average three such events annually, adjusted for inflation. Between 2020 and 2024, the average rose to 23 per year.
Since 1980, the U.S. has experienced at least 403 of these incidents. Last year, there were 27, the second-highest annual total after the record 28 events of 2023.
Last year’s incidents included Hurricanes Helene and Milton, which together caused approximately $113 billion in damages and more than 250 deaths, many of them in the Carolinas. Additionally, drought conditions that year caused around $3 billion in damages and claimed more than 100 lives nationwide.
NOAA’s National Centers for Environmental Information plans to stop tracking these billion-dollar disasters, citing evolving priorities, statutory mandates, and staffing changes, according to an email from the agency.
When asked whether NOAA or another branch of the federal agency would continue to publicly report data on such disasters, the agency did not respond. The communication indicated that archived data from 1980 to 2024 would be available, but incidences from 2025, such as the recent wildfires in Los Angeles, will not be monitored or published.
“We can’t address problems that we don’t measure,” noted Erin Sikorsky, director of the Center for Climate and Security. “Without information regarding the costs of these disasters, Americans and Congress will remain unaware of the risks posed by climate change to our nation.”
Sikorsky highlighted that other agencies may struggle to replicate this data collection as it involves proprietary insurance information that companies are reluctant to share. “It’s a remarkably unique contribution.”
Following Elon Musk’s exit from his role overseeing the so-called Department of Government Efficiency (Doge), many governance analysts are voicing concerns that Doge failed to improve the quality of the services the government provides to Americans.
“Across the board, we have seen a decline in the quality of several government services,” said Donald Moynihan, a professor of public policy at the University of Michigan.
Musk, the world’s richest individual, was appointed by Donald Trump in January to lead the efficiency initiative as a “special government employee,” a status that limits service to 130 days, and has since stepped back amid challenges at his own businesses.
While Musk claims that Doge saved $150 billion during his tenure, many budget analysts have questioned those figures. Musk has repeatedly been accused of exaggerated and false claims regarding savings, which in any case represent just a fraction of his intended $1 trillion in cuts.
Moynihan and other experts lament that Musk and Doge predominantly focus on the interests of business leaders aiming to maximize profits, rather than adopting a holistic strategy to enhance service efficiency.
Martha Gimbel, executive director of the Budget Lab at Yale, emphasized Musk’s apparent disinterest in service improvement: “There doesn’t seem to be a comprehensive plan to identify areas where government services can genuinely improve. Enhancing these services requires time, investment, and a commitment to building effective solutions.”
When asked whether Musk and Doge had improved government services, Gimbel burst into laughter. “Absolutely not,” she remarked. “There’s undeniably a decline in government services.”
Public policy analysts and citizens highlight numerous ways in which the Doge reductions have worsened government services, including longer appointment waits at veterans’ hospitals, extended holding times when calling the IRS, and increased wait times at Social Security offices. The departure of numerous experienced Social Security staff has resulted in much less assistance for welfare inquiries.
During a White House press conference on May 1, Musk defended Doge’s contributions: “I believe we have been effective overall. It may not be as effective as I had hoped, and we could achieve more,” Musk stated. “However, we’ve made advancements.”
Musk conceded that his $1 trillion goal proved more challenging than anticipated. “It’s really a question of how much pain the Cabinet and Congress are willing to take,” he remarked. “We can accomplish this, but we must address numerous complaints.”
The White House has not responded to inquiries regarding the decline in certain government services or how Doge has improved them.
Gimbel cautioned that many Americans may not realize the impending decline in government services as tens of thousands of ordered job eliminations unfold. “It’s certainly going to worsen,” she noted. For instance, the government is set to reduce 80,000 positions within the Veterans Affairs Department.
Numerous public policy experts believe Trump and Musk are greatly exaggerating claims of rampant waste, fraud, and abuse within the government, although Gimbel acknowledged that inefficiencies do exist. “There’s definitely room for improvement, and we can pursue it,” she stated. “Government officials are aware of where these inefficiencies lie. Much modernization of technology is needed. Yet, Doge seems uninterested in pursuing these concerns, as well as issues with Medicare and Medicaid over-expenditures.”
Max Stier, president of the Partnership for Public Service, a nonprofit research organization, criticized the approach taken by Musk and Doge, likening it to that of business executives, such as Jack Welch, known for prioritizing cost-cutting over understanding organizational intricacies. Stier lamented that Musk and his team made abrupt cuts without adequately comprehending the roles and responsibilities of those affected.
“Jack Welch would disapprove of the approach Doge has taken,” Stier remarked. “It’s not solely about saving costs; it disrupts organizational capabilities. Welch never let go of staff without understanding how the organization functions and the competencies of those laid off.”
Stier noted that, despite Musk’s assertion that Doge was meant to cut costs and improve how government is organized, “it’s difficult to find a rational basis for the decisions being implemented. Americans certainly witness no improvements.”
“We are compromising the government’s capabilities,” he continued. “It’s evident that people are being let go aggressively, disrupting government services without any comprehension of the outcomes and results. It’s broken. That is not how things are done in Silicon Valley.”
The claim of $150 billion in savings attributed to Musk appears to be a substantial overestimation, as it disregards significant costs associated with the Doge initiative, Stier argued. His group has indicated that due to layoffs, reemployment, retirement benefits, paid leave, and decreased productivity linked to over 100,000 workers, taxpayers are likely to incur $135 billion this year. Several public policy experts believe increased wait times and frustration should also count against the purported $150 billion in savings from Doge reductions.
Moynihan stated that Musk’s vision fundamentally misunderstands the role of government efficiency. “His perspective suggests that government officials are incapable of delivering value,” Moynihan commented. “Consequently, the notion of tools to enhance government services is completely foreign to Musk.”
“It appears he thinks civil servants lack competence, so there’s no harm in cutting their positions,” Moynihan added. “This perspective fails to recognize the importance of public services, their existence, and the benefits they provide to society.”
Liz Shuler, president of the AFL-CIO, the leading U.S. labor federation, remarked that Doge’s cuts adversely affect workers. She referenced the rapid reductions at the National Institute for Occupational Safety and Health, noting that the agency plays a crucial role in ensuring the safety of personal protective equipment for firefighters.
“Doge essentially cuts line items from a spreadsheet, which has real-life implications for real people,” Shuler said. “Federal workers have been treated with blatant indifference, exhibiting nothing but dehumanization and humiliation.”
Gimbel of Yale Budget Lab cautioned about another significant flaw in Doge’s cuts. “One of the government’s responsibilities is to mitigate risks,” she stated. “Ensuring food safety is one such example. Government inspectors help prevent threats like Listeria or Salmonella. Reducing the number of food inspectors won’t lead to immediate increases in illnesses, but it may enhance the chances of outbreaks like Listeria and Salmonella in the ensuing years.”
Keir Starmer, the British Prime Minister, aims to establish the UK as a leader in artificial intelligence.
PA Images/Alamy
Numerous civil servants within the UK government, including those supporting Prime Minister Keir Starmer, are using AI chatbots to assist with their duties, New Scientist can reveal. Officials will not say whether the Prime Minister is receiving AI-generated advice, how civil servants are addressing the risks of inaccurate or biased AI outputs, or how the tools are being used. Experts express concern over this lack of transparency and its implications for the reliability of governmental information.
After obtaining what were believed to be the world’s first ChatGPT logs released under the Freedom of Information (FOI) Act, New Scientist asked 20 government departments for records of their interactions with Redbox. Redbox is a generative AI tool being trialed among UK government employees, enabling users to analyze government documents and generate initial drafts of briefings. According to one of the developers involved, in early tests a civil servant consolidated 50 documents in mere seconds, a task that would typically take a day.
All contacted departments either said they do not use Redbox or declined to provide a record of interactions, in some cases deeming the request “vexatious.” This is a formal term used in responses to FOI requests, defined by the Information Commissioner’s Office as a request likely to cause disproportionate distress, disruption, or irritation.
However, two departments divulged information about Redbox’s usage. The Cabinet Office, which supports the Prime Minister, reported that 3,000 of its people had engaged in 30,000 chats with Redbox, but said that reviewing those exchanges and redacting sensitive information would take more than a year before any content could be released under FOI regulations. The Department for Business and Trade acknowledged retaining “over 13,000 prompts and responses,” which would likewise require review before release.
Both departments were contacted for additional inquiries about Redbox use. The Department of Science, Innovation and Technology (DSIT), which oversees these tools, declined to respond to specific questions about whether the Prime Minister or other ministers received AI-generated advice.
A DSIT representative told New Scientist that civil servants’ time should not be wasted on tasks AI can perform more quickly. They added that Redbox is integrated into Whitehall to help civil servants use AI safely and effectively, simplifying document summarization and agenda drafting.
Nonetheless, some experts raise concerns regarding the use of generative AI tools. Large language models are known to have significant challenges related to bias and accuracy, making it hard to ensure Redbox delivers trustworthy information. DSIT did not clarify how Redbox users could mitigate those risks.
“My concern is that the government exists to serve the public, and part of its mandate is providing transparency regarding decision-making processes,” asserts Catherine Flick from Staffordshire University.
Due to the “black box” nature of generative AI tools, Flick emphasizes the difficulty of evaluating or understanding how a specific output is produced, especially if certain aspects of a document are emphasized over others. When governments withhold such information, they diminish transparency further, she argues.
This lack of transparency also extends to a third government department, the Treasury. In response to the FOI request, it said its staff cannot access Redbox, but noted that "GPT tools are available within HM [His Majesty's] Treasury" without a log of interactions being kept. The specific GPT tool referenced remains unidentified; while ChatGPT is the best known, other large language models also carry the GPT label. This suggests the Treasury is using AI tools without keeping a comprehensive record of their usage, a point on which New Scientist sought clarification.
“If prompts aren’t documented, it’s challenging to replicate the decision-making process,” Flick adds.
John Baines of UK law firm Mishcon de Reya remarked that it is unusual for such information to go unrecorded. "It's surprising that the government claims it cannot retrieve the prompts used in the internal GPT system." While courts have ruled that public bodies aren't required to retain records ahead of archiving, "good data governance implies that retaining records is crucial, particularly when they may influence policy development or communication," he explains.
However, data protection specialist Tim Turner believes the Treasury is justified in not retaining AI prompts under the FOI Act. “This is permissible unless specific legal or employee regulations determine otherwise,” he states.
Disinformation is particularly prevalent on social media platforms. Stefani Reynolds/AFP via Getty Images
The National Science Foundation (NSF) has terminated a government research grant aimed at examining misinformation and disinformation. The decision comes amid a surge of propaganda and deceit proliferated by the latest AI technologies, and as tech companies scale back their content moderation efforts and disband fact-checking teams.
The grant was cancelled on April 18, as stated by the NSF in a public announcement. The statement asserts that the agency no longer backs research on misinformation or disinformation, citing potential conflicts with constitutionally protected free speech rights...
During the warm Antarctic season, a sleek Norwegian passenger ship, MS Fridtjof Nansen, departs regularly from Argentina and heads south across the turbulent Drake Passage to the Antarctic Peninsula. The cruise hosts wealthy adventurers and bucket-listers, and, increasingly, polar scientists seeking to collect data as public funding for Antarctic research dries up under the Trump administration.
The National Science Foundation is one of the world's largest funders of scientific research, with an annual budget of approximately $9 billion that supports most of the research in the US Antarctic. Over the past few months, the Trump administration has ordered agencies to cut ever deeper, leaving scientists wondering how they will keep studying everything from melting glaciers and ice sheets to the effects of pollution from power plants and wildfires.
On Thursday, National Science Foundation director Sethuraman Panchanathan resigned after the White House directed him to cut the agency's budget and staff by more than half, according to an exclusive report from Science.
Panchanathan's resignation follows earlier orders from Elon Musk's Department of Government Efficiency (DOGE) to freeze funding for all new NSF research grants, and DOGE's announcement last week that it had cut over $200 million in research grants made by the agency.
Some experts are concerned that the Trump administration's continued cuts to the National Science Foundation could spell the end of US research in Antarctica.
Leopard seals along the Antarctic Peninsula. Chase Cain / NBC News
James Burns, co-founder of the Antarctic and Southern Ocean Coalition, an international alliance of environmental and non-governmental organisations focused on Antarctic conservation and research, says the National Science Foundation has become a dirty word within much of the Trump administration. "For whatever reason, there's so much left to learn in Antarctica, and that's not good on many levels for us."
Antarctica-based research has already been in decline for several years; decades of robust fieldwork were disrupted by Covid-19 restrictions and never fully recovered. Now, research on the world's southernmost continent faces several more years under Trump's slash-and-burn policies.
Aboard the Fridtjof Nansen and its sister ship, MS Roald Amundsen, however, polar scientists have reliable funding for their research. HX Expeditions, which operates the two Antarctic ships, hosts researchers from institutions such as Western Washington University, the University of California, Santa Cruz, and the National Snow and Ice Data Center. Their room and board are covered by ticket purchases from tourists sailing to Antarctica on a once-in-a-lifetime trip.
"If we couldn't get paying customers allowing our ships to go south, we couldn't support the research we are helping with," said Verena Meraldi, chief scientist at HX Expeditions. "It's not easy [to get there]. There are not many flights coming down here, and fewer research vessels."
Gentoo penguins along the Antarctic Peninsula. Chase Cain / NBC News
Tourists traveling with HX Expeditions are part of a booming ecotourism industry, focused on experiencing nature while helping to preserve local areas. The number of visitors to Antarctica has grown from about 8,000 per year in the 1990s to over 120,000 per year, according to the International Association of Antarctica Tour Operators. By 2035, the ecotourism market is projected to grow to over $550 billion. On a late-March expedition to the Antarctic Peninsula, MS Fridtjof Nansen was home to over 400 ecotourists and several researchers, including Freya Alldred, a doctoral student at Durham University in the UK.
Alldred came equipped with sterilized bags to collect samples of seaweed growing in Antarctic waters, as well as snow algae. She studies how climate change affects the carbon content of these Antarctic species, and the cruise provided a unique opportunity to collect new samples.
"We've been to places with no research station anywhere nearby," says Alldred. "If I'd gone to a British base in the Antarctic, I could only have sampled within my area. Here I have gone to five different sites throughout the peninsula that may not have been previously studied."
The ship housed scientists and ecotourists side by side, giving scientists the unusual opportunity to explain their work directly to non-scientists through interactive sessions in an onboard lab. For ten days, enthusiastic passengers attended lectures from the resident researchers, ate with them in the ship's restaurant, and shared their first steps onto the vast polar desert of Antarctica.
"It's incredible to share these experiences with people, to explain why we do research and what kinds of questions we answer, and for them to see it firsthand," said Chloe Lou, a researcher working with the California Ocean Alliance to study the impact of tourist boats on Antarctic whales. "It fuels my passion for my work."
The UK’s attempt to make details of its legal battle with Apple public has been unsuccessful.
The Investigatory Powers Tribunal, which investigates claims of unlawful action by the intelligence services, on Monday rejected a request from the Home Office to keep the bare details of the case confidential.
The tribunal's presiding judges, Singh and Johnson, disclosed some aspects of the case for the first time on Monday.
They confirmed that the case involves Apple challenging the Home Office regarding a technical capability notice under the Investigatory Powers Act.
The Home Office argued that revealing the existence of the claim and the identities involved would jeopardize national security.
The judges stated, "We do not believe that disclosing specific details of the case would harm public interest or endanger national security."
Reports from The Guardian and other media outlets claimed that the Home Office issued a Technical Capability Notice to Apple, seeking access to Apple’s advanced data protection services.
Apple has stated it will not comply with the notice, refusing to create a “backdoor” in its products or services.
Judges Singh and Johnson noted that neither Apple nor the Home Office has confirmed or denied the existence of the Technical Capability Notice or the accuracy of media reports on its contents.
The judges added, "This ruling should not be taken as confirmation of the accuracy or inaccuracy of media reports. Details about the Technical Capability Notice remain undisclosed."
A journalist was denied access to a hearing related to the case last month.
Various media organizations requested the court to confirm the participants and the public nature of the hearing on March 14th.
Neither journalists nor legal representatives were allowed at the hearing, with the identities of the involved parties remaining anonymous beforehand.
The judges mentioned the potential for future hearings to have public elements without restrictions, but the current stage of the process does not allow it.
Recipients of Technical Capability Notices cannot reveal the order unless authorized by the Home Secretary, and hearings should only be private if absolutely necessary, as per the rule on the court’s website.
Ross McKenzie, a data protection partner at Addleshaw Goddard law firm, stated that despite the ruling, it is unlikely that detailed information regarding the Home Office’s case for accessing Apple user data will be disclosed.
A Home Office spokesperson declined to comment on the legal proceedings but emphasised the importance of investigatory powers in preventing serious threats against the UK.
Apple chose not to provide a comment on the matter.
Daisy Greenwell had long felt that giving her eldest child a smartphone was inevitable. But by early last year, when her daughter was eight, the idea filled her with dread. When she spoke to other parents, "everyone said, 'Yes, it's a nightmare, but there's no choice,'" recalls Greenwell, 41.
She decided to test that assumption. Her friend Clare Fernyhough shared her concerns about the addictive quality of smartphones and the impact of social media on mental health, so the two created a WhatsApp group to develop a strategy. Then Greenwell, who lives in Suffolk, in the countryside of eastern England, posted her thoughts on Instagram.
"What if we could switch the social norm of giving your child a smartphone at 11, in our school, our town, our country?" she wrote. "What if we could hold off until they were 14, or 16?" She added a link to the WhatsApp group.
The post went viral. Within 24 hours, the group was oversubscribed with parents wanting to participate. Today, more than 124,000 parents of children at UK schools have signed a pact created by Smartphone Free Childhood, a charity founded by Greenwell, her husband Joe Ryrie and Fernyhough: "I will act in the best interests of my kids and our community and wait until at least the end of Year 9 before getting them a smartphone." (Year 9 is the equivalent of eighth grade in America.)
The movement aligns with a broader change in British attitudes as evidence mounts of the harm caused by smartphone addiction and algorithm-driven social media. In one survey last year, a majority of respondents (69%) felt that social media negatively affects children under the age of 15.
Meanwhile, police and intelligence services have warned about the torrent of extreme and violent content reaching children online. This is the trend examined in the hit television show Adolescence, in which a schoolboy is accused of murder after being exposed to online misogyny. It has become Britain's most-watched show, and on Monday Prime Minister Keir Starmer met its creators at Downing Street, telling them he had watched it with his son and daughter. But he also said, "This is not a challenge politicians can simply legislate for."
During a flurry of executive orders signed by President Trump, significant changes were made affecting the content on government web pages and public access to data related to climate change, the environment, energy, and public health.
In the past two months, hundreds of terabytes of data have been removed from government websites, raising concerns about potential deletions. While the underlying data still exists, tools for public and researcher access have been taken down.
Now, hundreds of volunteers are actively recreating digital tools to gather and download as much government data as possible, making it readily available to the public.
Volunteers working on the Public Environmental Data Partners project have already recovered over 100 datasets that were removed from government sites, and aim to archive a growing list of some 300 datasets.
Efforts to download climate, environmental, energy, and public health data began in 2017 amidst fears about its future under a president who dismissed climate change as a hoax. Federal information has since disappeared, prompting a new response.
Environmental scientist Gretchen Gehrke emphasised the importance of resilient public information in the digital age, expressing concern over the removal of vital data-access tools. Data like the climate measurements collected by NOAA is crucial for many parties, yet efforts to restrict public access continue.
The technology director at the Center for Environmental Policy Innovation highlighted the removal of public access and emphasized the taxpayer-funded nature of these tools.
Requests for two essential data tools, Climate and Economic Justice Screening Tool (CEJST) and Environmental Justice Screening Tool (EJScreen), have been frequent. These tools, crucial for addressing environmental justice and climate change issues, were removed from access.
The removal of these tools has hindered efforts to address structural racism and the disproportionate impacts on communities of color, Gehrke said.
Elon Musk's X has alleged that India's IT ministry unlawfully expanded its censorship authority, making it easier to remove online content and allowing "countless" government officials to issue such orders.
The lawsuit and its accusations mark an escalation of the ongoing legal dispute over content-takedown orders between X, which has been instructed by New Delhi to remove content, and the government of Indian Prime Minister Narendra Modi. It comes as Musk prepares to launch Starlink and Tesla in India.
In a court filing dated March 5, X argues that India's IT ministry is using a government website, launched by the home affairs ministry last year, to issue content-blocking orders and to compel social media companies to join the site. According to X, this process bypasses stringent Indian legal safeguards on content removal, which require that an order be issued only in cases of harm to sovereignty or public order and be subject to strict oversight by top officials.
India's IT ministry referred a request for comment to the home affairs ministry, which did not respond.
X's filing contends that the government website establishes an "unacceptable parallel mechanism" that would lead to "unchecked censorship of Indian information".
X's court documents have not been publicly released and were first reported by the media on Thursday. The case was briefly heard earlier this week by a judge at the high court in the southern state of Karnataka, but no final decision was reached. The next hearing is scheduled for March 27th.
In 2021, X, then known as Twitter, clashed with the Indian government after defying a legal order to block certain tweets related to farmers' protests against government policies. X eventually complied under public pressure, but its legal challenge remains ongoing in Indian courts.
According to a new report, Australian government agencies could potentially be customers of military-grade spyware from Israeli company Paragon Solutions.
Earlier this year, Meta disclosed that over 90 individuals, including journalists, were targeted on WhatsApp using this software, although it remains uncertain if Australians were among the targets.
In reports released by Citizen Lab on Wednesday, two Australian IP addresses were identified as potential users of Paragon’s spyware tools. Citizen Lab managed to map out Paragon’s server infrastructure based on tips they received.
The spyware allows access to messaging apps on users’ devices and is exclusively sold to governments worldwide, not to private entities.
The Australian domains mentioned in the report have no history of previous ownership, according to WHOIS domain searches. They could be in use by federal or state agencies, although sources indicate that Paragon Solutions has no link to the Department of Home Affairs or the Australian Signals Directorate.
When questioned about Australian customers or the targeting of Australians, Paragon did not provide direct answers to these queries.
John Fleming, the company's executive chairman, said: "Paragon's ultimate goal is to aid national security and law enforcement in combating serious crimes and terrorism within the boundaries of the law, while also considering privacy implications." He added that the company ensures its customers operate within legal frameworks and enforces strict rules against misuse.
A recent report from Citizen Lab followed Meta’s announcement in January that journalists and civil society members were targeted on WhatsApp using spyware owned by Paragon Solutions.
Meta sent a cease and desist letter to Paragon and explored legal actions against them after the incident.
Meta declined to comment further when asked if Australians were among the targets.
Italian investigative journalist Francesco Cancellato, who had uncovered young fascists within the far-right party of Italian Prime Minister Giorgia Meloni, received an alert from WhatsApp that he had been targeted.
Following this revelation, Paragon Solutions terminated its contract with Italy. Meloni’s office denied any involvement by the national intelligence agency or government in alleged violations against journalists and activists.
Citizen Lab, headquartered at the University of Toronto, specializes in research on cyber and surveillance technologies.
Federal employees at a little-known office dedicated to technology and consulting services were at work on the afternoon of February 3rd when Elon Musk first tweeted about their agency.
The world's wealthiest man was responding to a tweet from a right-wing activist who falsely claimed that 18F, a unit within the General Services Administration (GSA), was a far-left cell inside the government. The activist accused 18F of creating a program that put bureaucrats in charge of preparing people's tax returns. It was one of several false claims about the office circulating on X, the social media platform Musk owns and on which he spends much of his day.
Musk's tweet quickly sparked widespread confusion at 18F. Far from being a radical leftist cabal, the office is tasked with partnering with government agencies to consult on and develop software solutions. Former and current GSA employees described 18F as a workforce focused on providing technology services and improving efficiency within the bureaucracy: precisely what Musk's so-called Department of Government Efficiency (DOGE) is supposed to do.
When Musk insisted it had been deleted, partner agencies were already working with the office and hoping for its help on civic technology projects key to updating their operations. Would they still get that help? What did "delete" mean? What would happen to the technical tools 18F was building? According to three former workers, staff at the sub-agency could not get a definitive answer from the new Musk-allied leadership and were unsure what to tell the other institutions.
The confusion lasted for several weeks. On Saturday, March 1st, 18F staff received an email around 1 am informing them that they would all be fired and the office closed "in an explicit direction from the top level of both the administration and the GSA leadership."
The 18F episode fits a common pattern in which Musk ingests and amplifies misinformation online. It is also a window into the influence of right-wing media and activists on Musk as he attacks and disbands the parts of government that he believes do not fit his ideological worldview.
A week after 18F was cut, the recently appointed director of GSA's Technology Transformation Services, which oversaw 18F, held a meeting explaining the decision. Thomas Shedd, a 28-year-old former Tesla software engineer and Musk ally who had sent the mass-layoff email, told staff that 18F had been shut down because employees' hourly rates were too high and external consultants would be cheaper. Shedd did not respond directly to requests for comment on this article.
"After a thorough review of 18F, GSA leadership has determined that the business unit is not in line with the president's executive orders, following consent from the administration and all OPM guidelines," the agency said in a statement.
That explanation, according to former staff, not only misunderstands how 18F and its cost structure operate, but also ignores how often the office saved agencies money by advising them on costly and unnecessary contracts with private vendors. Instead, former employees and current GSA staff believe the layoffs were politically motivated.
"The only reason I can see for 18F being chosen for elimination ahead of other offices is to make Elon Musk happy," said a GSA employee who spoke anonymously out of fear of retaliation.
Misleading tweets sealed the fate of workers dedicated to government efficiency
18F worked with various government agencies to create services that are popular but little known to the public. The group quietly helped build dozens of services each year for various agencies, including the IRS's Direct File free tax-return system. Many 18F software projects, such as streamlining government weather websites for easy use in the event of natural disasters, had the clear intention of making government services more efficient and reducing taxpayer costs.
When Musk claimed he had "deleted" 18F, he was retweeting a February 3rd post from right-wing activist Alex Lorusso, a producer for conservative media influencer Benny Johnson who is prominent on X and courted by Donald Trump's administration. Lorusso works as a paid consultant for Musk's super PAC and is also a fan: pinned to the top of his X profile, so that other posts can't push it out of sight, is a 2023 photo of himself laughing with Musk.
Lorusso's post claimed that 18F was "in charge of preparing people's tax returns" and suggested it was "a far left government wide computer office." His claims about 18F were later corrected by other X users in Community Notes, which explained that the office instead helped build a service, set to be extended nationwide, that lets Americans file their taxes online for free.
Like Musk, Lorusso was amplifying another conservative media figure on X. Luke Rosiak, a writer for the conservative news site The Daily Wire, had posted a long thread on January 31st attacking 18F, framing the technology consulting unit as "a far left agency" where transgender and queer staff hire each other. The thread included profiles of former 18F employees using "they/them" pronouns in their bios, as well as images of employee crowdfunding campaigns for gender-affirming healthcare. It also drew on an article Rosiak published about the GSA and 18F in 2023 that focused on the agency's diversity initiatives. The first post in Rosiak's thread received over 13.5 million views and was retweeted by Musk.
According to a former employee, Rosiak's attack on 18F included a misleading claim: the Daily Wire writer asserted that 18F had refused, for reasons of "racial equity", to add facial recognition software to the government's Login.gov website, thereby putting Americans' security at risk. The claim conflated several different parts of the GSA and misinterpreted the security issues with facial recognition, the former employee said, blaming 18F for leadership decisions related to completely different business units.
The GSA did face a legitimate scandal when former Technology Transformation Services director Dave Zvenyach misrepresented the level of security Login.gov provided, but Login.gov was an entity independent of 18F and shared no staff with the office. And facial recognition software is well known for failing to recognise non-white faces, the former 18F employee said, so using it as an identity verification tool creates security issues for users; that is what prompted racial equity testing of facial recognition technology.
"I think it's impossible for them to imagine people putting their partisanship aside while they work for the government," the former 18F employee said of the conservative vitriol against 18F.
In response to requests for comment on the thread's claims, a Daily Wire spokesperson said Rosiak's reporting on 18F speaks for itself.
Following the mass layoff, some former 18F staff set up a website to counter the right-wing narrative that their group was a partisan force within the government, highlighting instead the many projects it completed. Others warned that their group was an early sign of how DOGE and the Trump administration will target other agencies on ideological grounds rather than on what they actually do.
"We lived through evidence of how wrong this administration's narrative was," Lindsay Young, former executive director of 18F, said in a LinkedIn post. "It targeted us."
The Trump administration wants to use AI to streamline the US government and increase efficiency
Greggory Disalvo/Getty Images
What is artificial intelligence? It is a question scientists have wrestled with since the 1950s, when Alan Turing asked, "Can machines think?" With large language models (LLMs) like ChatGPT being unleashed on the world, finding the answer is more pressing than ever before.
Although their use is already widespread, the social norms around these new AI tools are still evolving rapidly. Should students use them to write essays? Will they replace your therapist? And can they turbocharge the government?
That last question is being asked in both the US and the UK. Under the new Trump administration, Elon Musk's Department of Government Efficiency (DOGE) task force is eliminating federal workers and deploying a chatbot, GSAi, to those left behind. Meanwhile, British Prime Minister Keir Starmer has described AI as a huge opportunity that could help rebuild the nation.
Certainly there are government jobs that could benefit from automation, but are LLMs a suitable tool for the job? Part of the problem is that nobody agrees on what they actually are. This was aptly demonstrated this week when New Scientist, using the Freedom of Information (FOI) Act, acquired the ChatGPT interactions of Peter Kyle, secretary of state for science, innovation and technology. Politicians, data privacy experts, journalists, and indeed we ourselves were amazed that the request was granted.
The release of the records suggests the UK government considers ChatGPT use to be similar to a minister's conversations with civil servants via email or WhatsApp, both of which are subject to the FOI Act. Kyle's interactions with ChatGPT show no strong reliance on AI to form serious policy; one of his questions was about which podcasts he should appear on. However, the fact that the FOI request was granted suggests that parts of government believe that conversing with an AI is akin to conversing with a human.
As New Scientist has regularly reported, LLMs are not intelligent in any meaningful sense, and are just as liable to spit out convincing-sounding inaccuracies as to provide useful advice. Furthermore, their answers reflect the biases inherent in the information they have ingested.
In fact, many AI scientists increasingly take the view that LLMs are not the route to the lofty goal of artificial general intelligence (AGI): AI that can match or surpass anything a human can do. For example, in a recent survey of AI researchers, around 76 per cent of respondents said it was “unlikely” or “very unlikely” that current approaches will succeed in achieving AGI.
Instead, perhaps we need to think about these AIs in a new way. Writing in the journal Science this week, a team of AI researchers argued that they “should not be seen primarily as intelligent agents, but as a new kind of cultural and social technology, allowing humans to access information accumulated by other humans”. The researchers compare LLMs to “past technologies such as writing, printing, markets, bureaucracy, and representative democracy” that changed the way information is accessed and processed.
Seen this way, the answers to many of those questions become clearer. Can the government use LLMs to increase efficiency? Almost certainly, but only when they are used by people who understand their strengths and limitations. Should interactions with chatbots be subject to the Freedom of Information Act? Perhaps the existing carve-outs designed to give ministers a “safe space” for internal deliberation should apply. And, as Turing asked, can machines think? No. Not yet.
Tesla, led by Elon Musk, has warned of the potential repercussions of Donald Trump’s trade war, cautioning that retaliatory tariffs could harm not only the electric carmaker but other American automakers as well.
In a letter to US trade representative Jamieson Greer, Tesla emphasized the importance of considering the broader impacts of trade actions on American businesses. They stressed the need for fair trade practices that do not inadvertently harm US companies.
Tesla urged the US Trade Representative (USTR) office to carefully evaluate the downstream effects of proposed actions to address unfair trade practices. They highlighted the disproportionate impact that US exporters often face when other countries respond to trade actions taken by the US.
The company, which has been a supporter of Trump, expressed concern that countries targeted by US trade actions could respond with tariffs on electric vehicles and parts imported from the US, citing past trade disputes that led to increased tariffs on vehicles and parts manufactured globally.
As Tesla continues to navigate the challenges of trade policies, they emphasized the importance of considering implementation timelines and taking a step-by-step approach to allow US companies to prepare and adapt accordingly.
Meanwhile, German automaker BMW reported a decline in net profit due to trade tariffs. They highlighted the impact of US trade actions on their business performance and reiterated the challenges posed by a competitive global environment.
BMW’s forecast takes into account various tariffs, including those on steel and aluminum. The company faces challenges in China, where local EV manufacturers are gaining market share, leading to a decline in BMW and Mini sales.
Despite these obstacles, BMW remains committed to navigating the complexities of trade and geopolitical developments to maintain business resilience and performance.
The Guardian has learned that Apple’s appeal against the UK government’s demand for access to its customers’ most highly encrypted data will be heard at a secret high court hearing.
The appeal, scheduled for Friday, will be heard by the Investigatory Powers Tribunal, an independent court with the authority to investigate claims of unlawful action by the UK’s intelligence services.
It challenges a notice issued by the Home Office in February under the Investigatory Powers Act, which compels companies to provide information to law enforcement.
The Home Office is seeking the ability to access users’ encrypted data in cases of national security threats. Currently, even Apple does not have access to data protected by Advanced Data Protection (ADP) programs.
ADP allows iCloud users to safeguard photos, memos, and other data with end-to-end encryption, ensuring that only users can access it. Messaging services like iMessage and FaceTime maintain default end-to-end encryption.
Apple has argued that removing the tool would make users vulnerable to data breaches and jeopardise customer privacy. Creating a “back door” would give Apple access to users’ data, which it could then be compelled to share with law enforcement agencies.
Last week, Computer Weekly reported that Apple plans to challenge the secret order.
The tribunal took the unusual step of announcing that a closed hearing would be held before its president, Lord Justice Rabinder Singh, on March 14.
The court listing does not mention Apple or the government, nor does it disclose whether the case involves either party.
The hearing will be held privately due to security concerns, but media outlets like Computer Weekly argue that it is a matter of public interest and should be conducted in open court as details have already been leaked.
News organizations, including the Guardian, and civil society groups are supporting Computer Weekly in their petition.
In a statement in February, Apple expressed disappointment at the situation. They cited increasing data breaches and threats to customer privacy as the reason for ceasing to offer advanced data protection in the UK.
A spokesperson emphasized the urgency of enhanced security with end-to-end encryption in cloud storage and reiterated Apple’s commitment to user data security.
“As we have stated many times before, we have never created backdoors or master keys for our products or services,” the spokesperson said.
Both Apple and the Home Office declined to comment on the upcoming hearing, and the Guardian reached out to the court for more information.
A bill seeking to ban addictive smartphone algorithms targeting young teenagers has been watered down after opposition from science secretary Peter Kyle and education secretary Bridget Phillipson.
The safer phones bill, introduced by the Labour MP Josh MacAlister, is set to be debated in the Commons on Friday. Despite support from MPs across parties and child protection charities, the government has opted to investigate the issue further rather than implement immediate changes.
Government sources indicate that the watered-down proposal will be accepted, as the original bill put forward by MacAlister did not have ministerial support.
The government believes more time is needed to assess the impact of mobile phones on teenagers and to evaluate emerging technologies that can control the content produced by phone companies.
Peter Kyle opposed the stronger bill, which some campaigners had hoped would become a second online safety act.
Although not fundamentally against government intervention on this issue, a source close to Kyle mentioned that the work is still in its early stages.
The original proposal included requirements for social media companies to exclude young teens from their algorithms and limit addictive content for those under 16. However, these measures were removed from the final bill.
Another measure to ban mobile phones in schools was also dropped after objections from Bridget Phillipson, who believes schools should self-regulate. There are uncertainties regarding potential penalties for violations.
Health secretary Wes Streeting has been vocal about addressing the issue of addictive smartphones, publicly supporting MacAlister’s bill.
The revised private member’s bill instructs the chief medical officer, Chris Whitty, to investigate the health impacts of smartphone use.
MacAlister hopes that the bill will prompt the government to take addictive smartphone use among children more seriously, rather than focusing only on harmful or illegal content.
If ministers commit to adopting the new measures as anticipated, MacAlister will not push the bill to a vote.
The government has pledged to “publish a research plan on the impact of social media use on children” and seek advice from the UK’s chief medical officer on parents’ management of their children’s smartphone and social media usage.
Polls indicate strong public support for measures restricting young people’s use of social media, with a majority favoring a ban on social media for those under 16.
Under tech billionaire Elon Musk, the DOGE task force is cutting jobs across the US government
AFP via Getty Images
The US Department of Government Efficiency (DOGE) task force has shut down 18F, a group of in-house technology experts focused on improving the efficiency of the US government. 18F consulted with other government agencies on adopting cost-effective technology and built digital services for tasks such as applying for a passport or filing taxes online.
Initiatives such as 18F and the US Digital Service (USDS), another government unit of technology consultants, built “a wealth of professional networks, fixers and dreamers who can modernize government services”, says Daniel Castro at the Information Technology and Innovation Foundation, a Washington DC-based think tank.
He says the rapid elimination of 18F could stall US government projects, and he is sceptical that DOGE is the right organisation to replace USDS or 18F in helping the US government use technology efficiently. “You don’t hire a demolition crew to build a skyscraper,” says Castro.
The US government typically spends over $100 billion on IT services each year, but these expensive technology investments often fail to deliver what was promised, according to the US Government Accountability Office. 18F helped avoid such waste by consulting with federal and state agencies adopting technology solutions and determining which companies could deliver on time and on budget, says Dan Hon, a government digital services and technology expert.
Three former 18F employees, who requested anonymity, described their recently cut projects to New Scientist. One had helped digitise the application system that makes it easier for states to access federal Medicaid funds, which provide health insurance to 70 million Americans, including 40 per cent of all children and 60 per cent of all nursing home residents.
Another former employee had collaborated with the US Department of the Interior on an interactive website that tracks environmental damage caused by the release of petroleum and other harmful substances. Such data helped ensure that those responsible for the damage, not taxpayers, paid to clean it up, they said.
18F members also helped the National Weather Service update its forecast website to make it more user-friendly. The 18F team worked with USDS as well to develop the free Direct File program, which lets people in participating states file their taxes directly with the Internal Revenue Service instead of buying tax preparation software or hiring an accountant. The government estimated that more than 30 million taxpayers across 25 states would be eligible for the service in 2025.
The future of these projects is now uncertain. Since President Donald Trump began his second term in January 2025, he has renamed and remade the USDS as DOGE, nominally led by government official Amy Gleason but in practice directed by tech billionaire Elon Musk. Many former US Digital Service members have since been fired or resigned.
Musk took aim at 18F early in Trump’s second administration, but former 18F employees say they received no official “reduction in force” notification closing the organisation until February 28. Approximately 85 18F members were directly affected by the layoffs, while three more had previously accepted buyout offers.
The combination of 18F’s elimination and the layoffs and resignations from the former USDS team means no team is left with a government-wide mission to develop and build technology, say former 18F employees. A spokesperson for the General Services Administration (GSA), the US government organisation that provides operational support to federal agencies, said: “GSA will continue to support the administration’s willingness to adopt best-in-class technologies to accelerate digital transformation and modernize IT infrastructure.”
A post from Elon Musk on Saturday afternoon said federal employees would receive an email asking them to list five things they had done the previous week. The request was expected to reach the inboxes of 2.3 million federal employees, sparking discussion within a secret network of government workers and contractors, who began communicating through an encrypted app to coordinate their responses.
Employees on a four-day, ten-hour schedule would not see the email until Tuesday, missing the deadline for responses. Some added a humorous touch, with one worker joking: “Bonus points to those who say they spent government funds on hookers and blow.”
After quickly deliberating, the network agreed on a response strategy: splitting the oath sworn by federal employees into five bullet points, to be sent back via email. The first read: “I supported and defended the US Constitution against all enemies, foreign and domestic.”
Another read: “I have borne true faith and allegiance to the same.” According to veteran contractor Lynn Stahl, these efforts aimed to expose harmful policies, defend public institutions, and provide citizens with necessary information and support.
Identifying themselves as #Altgov, the network gained visibility through multiple social media accounts, most adopting the names or initials of federal agencies. Their goal was to shed light on the chaos caused by the administration and to combat misinformation.
With around 40 accounts and growing followership, #Altgov engaged in subgroups for information sharing and strategy development using the encrypted messaging app, Wire.
A post from #ALTGOV explaining the Centers for Disease Control and Prevention. Photo: alt cdc (they/them)/bluesky
The origin of #Altgov dates back to the first Trump administration, with notable accounts like “Alt National Park Service” gaining traction on Twitter. The network evolved to serve the public by coordinating relief efforts and distributing resources during crises.
Transitioning their presence to Bluesky, #Altgov continued their mission to provide value where the government fell short. They expanded their reach by forming new accounts dedicated to specific agencies, like #Altgov FEMA, which focused on disaster response.
Federal employees who joined #Altgov expressed a sense of duty and a desire for transparency in government actions. By uncovering misinformation and providing accurate information, they aimed to empower citizens and hold institutions accountable.
Since declaring his support for Donald Trump last July and then spending more than $250 million on his re-election effort, Elon Musk has rapidly gained political influence and placed himself at the centre of the new administration. Now, with the backing of the president himself, Musk has begun to use that power, making decisions that could affect the health of millions of people, gaining access to highly sensitive personal data, and attacking those who oppose him. Musk, the world’s richest man and an unelected official, has gained a startling level of control over the federal government.
Over the weekend, workers from Musk’s “department of government efficiency” (DOGE) clashed with civil servants in a series of heated confrontations over demands for unfettered access to major US government agencies. When the dust settled, several senior officials who had opposed the takeover had been pushed out, and Musk’s allies were in control.
Musk, with Trump’s backing, is now working to shut down the US Agency for International Development (USAID). On Sunday he boasted of feeding USAID “into the wood chipper”. He has also targeted several other agencies in an aggressive attempt to purge and remake the federal government along ideological lines, while avoiding congressional or judicial oversight.
Many of Musk’s actions have been carried out without notice or transparency, leaving the thousands of people employed by agencies such as USAID in limbo. Humanitarian organisations that depend on US funding have halted operations and laid off staff, while government workers have been locked out of their offices. He operates DOGE as an unofficial government department without a congressionally approved mission, and his status as a “special government employee” lets him sidestep financial disclosure and the public vetting process.
USAID employees protest outside the agency’s headquarters in Washington on Monday. Photo: Kevin Dietsch/Getty Images
British citizens will soon have the option to store their passport digitally on their phone, along with their driving license, Universal Credit account, marriage certificate, and birth certificate.
These plans were revealed by Peter Kyle, Secretary of State for Science, Innovation, and Technology, as part of a new smartphone app to streamline interactions with government services. This move aims to eliminate the need for physical government letters and long wait times for basic appointments.
Initially, people will be able to access their driver’s licenses and veterans cards with the new digital wallet starting in June. The government’s digital service will later expand to include accounts related to student loans, car tax, benefits, childcare, and local councils.
Mr. Kyle mentioned that his department is collaborating with the Home Office to authorize a digital passport version. While physical copies will still be valid, their use for crossing borders will depend on other countries’ border systems.
An example of a digital driving licence page stored in a smartphone wallet in the Gov.uk app, due to be released this summer. Photo: Department for Science, Innovation and Technology/PA
Kyle stated: “We are closely monitoring international standards, and as those standards become clearer, governments will naturally want to benefit from them as much as possible.”
The digital wallet, similar to Apple and Google wallets, will be linked to a person’s ID to verify their identity. This will enable instant sharing of necessary certificates and benefit claims with ease. However, there are no immediate plans to use it for proving immigration status.
In case of a lost phone, a recovery system is in place to prevent loss of the digital wallet. Kyle reassured users about data breaches, mentioning that the app’s design complies with existing data laws.
“We are revolutionizing the interaction between citizens and the state,” said Kyle during a launch event in east London, drawing inspiration from Silicon Valley product launches.
He added that individuals under 18, accustomed to smartphones, would view current government and paper-based systems as outdated.
“Moving government services online doesn’t mean leaving behind those without internet access,” he emphasized. “Easier online access allows us to enhance public services and focus human resources where necessary, ensuring better service for all.”
The technology has been developed over the six months since the Labour party took office and includes modern smartphone security features such as facial recognition checks.
Kate Mosse and Richard Osman have criticised Labour’s proposal to grant artificial intelligence companies wide-ranging freedom to mine artworks for data, warning that it could stifle growth in the creative sector and amounts to theft.
The best-selling authors spoke out after Keir Starmer backed a national drive to establish Britain as an “AI superpower”, endorsing a 50-point action plan that includes changes to how technology companies can use copyrighted content and data to train their models.
There is ongoing debate among ministers regarding whether to permit major technology companies to gather substantial amounts of books, music, and other creative works unless copyright owners actively opt out.
This move is aimed at accelerating growth for AI companies in the UK, as training AI models necessitates substantial amounts of data. Technology companies argue that existing copyright laws create uncertainty and pose a risk to development speed.
However, creators advocate for AI companies to pay for the use of their work, expressing disappointment when the Prime Minister endorsed the proposal. The EU is also pushing for a similar system requiring copyright holders to opt out of data mining processes.
The AI Creative Rights Alliance, comprising various trade bodies, criticized Starmer’s stance as “deeply troubling” and called for the preservation of the current copyright system. They urged ministers to consider their concerns.
Renowned artists like Paul McCartney, Kate Bush, Stephen Fry, and Hugh Bonneville have raised concerns about AI potentially threatening their livelihoods. A petition warns against the unauthorized use of creative works for AI training.
Mosse emphasized the importance of using AI responsibly without compromising the creative industries’ growth potential, while Osman stressed the necessity of seeking permission and paying fees for using copyrighted works to prevent theft.
The government’s AI action plan, formulated by venture capitalist Matt Clifford, calls for reforming the UK’s text and data mining regulations to align with the EU’s standards, highlighting the need for competitive policies.
The government’s response to the action plan emphasizes the goal of creating a competitive copyright regime supportive of both the AI sector and creative industries. Starmer expressed his support for the recommendations.
Various industry representatives, including Jo Twist of the BPI, which represents the British recorded music industry, advocate a balanced approach that fosters growth in both the creative and AI sectors without undermining Britain’s creative prowess.
Critics argue that AI companies should not be allowed to exploit creative works for profit without permission or compensation. The ongoing consultation on copyright policies aims to establish a framework benefiting both sectors.
U.S. financial regulators have accused Elon Musk of cheating other shareholders by failing to disclose his ownership of Twitter shares and then buying the company’s stock at artificially low prices.
The Securities and Exchange Commission (SEC) filed a lawsuit against Musk in federal court in Washington, D.C., accusing him of securities violations. The complaint states that Musk failed to disclose his 5% stake in the company in a timely manner and profited from stock purchased after the deadline for filing an ownership disclosure, paying less for those shares than he otherwise would have.
Musk purchased Twitter for $44 billion in 2022 and later rebranded the company as X. Before the purchase, he built up a stake of more than 5% in the company, which normally triggers a requirement for public disclosure. The SEC claims that Musk disclosed his Twitter stake 11 days after the reporting deadline.
Musk’s lawyer, Alex Spiro, stated in an email that the SEC’s lawsuit is baseless, claiming that Musk did nothing wrong. This is not the first time Musk has been investigated by the SEC for his involvement with Twitter.
The SEC alleges that by delaying the disclosure of his ownership, Musk was able to spend over $500 million on additional shares at artificially low prices.
When Musk did disclose his ownership to the SEC, 11 days late, he revealed he had acquired more than 9% of Twitter’s stock. The SEC noted that Twitter’s share price rose by over 27% that day.
Faculty AI, a company that has worked closely with the UK government on artificial intelligence safety, the NHS, and education, is also developing AI for military drones.
A defence industry partner notes that Faculty AI has experience developing and deploying AI models on UAVs (unmanned aerial vehicles).
Faculty is one of the most active companies offering AI services in the UK. Unlike companies such as OpenAI and DeepMind, it does not develop its own models, focusing instead on reselling models, including OpenAI’s, and on consulting about their use in government and industry.
The company gained recognition in the UK for their work on data analysis during the Vote Leave campaign before the Brexit vote. This led to their involvement in government projects during the pandemic, with their CEO Mark Warner participating in meetings of the government’s scientific advisory committee.
Faculty has been testing AI models for the UK government’s AI Safety Institute (AISI), which was established in 2023 under former prime minister Rishi Sunak.
Governments worldwide are racing to understand the safety implications of AI, particularly in the context of military applications such as equipping drones with AI for various purposes.
In a press release, British startup Hadean announced a partnership with Faculty AI to explore AI capabilities in defense, including subject identification, object movement tracking, and autonomous swarming.
Faculty’s work with Hadean does not involve weapons targeting, according to their statements. They emphasise their expertise in AI safety and the ethical application of AI technologies.
The company collaborates with AISI and government agencies on various projects, including investigating the use of large language models for identifying undesirable conduct.
The Faculty, led by Chief Executive Mark Warner, continues to work closely with AISI. Photo: Al Tronto/Faculty AI
Faculty has used OpenAI models such as ChatGPT in its projects. Concerns have been raised about its work with AISI and possible conflicts of interest.
The company stresses its commitment to AI safety and ethical deployment of AI technologies across various sectors, including defense.
They have secured contracts with multiple government departments, including the NHS, Department of Health and Social Care, Department for Education, and Department for Culture, Media and Sport, generating significant income.
Experts caution about the responsibility of technology companies in AI development and the importance of avoiding conflicts of interest in projects like AISI.
The Ministry of Science, Innovation, and Technology has not provided specific details on commercial contracts with the company.
The Chinese government has responded to allegations linking Chinese government-supported attackers to the recent cyber breach at the U.S. Treasury Department, dismissing the accusations as “baseless.”
The breach was carried out through a third-party cybersecurity service provider, according to a letter from the Treasury to lawmakers. The hackers were able to access keys used by vendors to bypass certain parts of the system.
The Treasury Department confirmed that the incident took place earlier in the month, allowing the attackers to remotely access the workstation and obtain some unclassified documents.
China refuted the claims on Tuesday, stating that it opposes all forms of hacker attacks and especially rejects the propagation of false information for political motives.
Speaking on behalf of the Foreign Ministry, Mao Ning said, “We have consistently refuted these unfounded accusations without supporting evidence.”
The Treasury Department reported the breach to the U.S. Cybersecurity and Infrastructure Security Agency after being informed by the third-party provider and is collaborating with law enforcement to assess the situation.
A department spokesperson stated, “The compromised services have been disabled, and there is no indication that the attackers continued to infiltrate Treasury systems or data.”
In a letter to the Senate Banking Committee leadership, the Treasury Department stated, “Based on available evidence, this incident appears to be the work of a Chinese state-sponsored Advanced Persistent Threat (APT) actor.”
APT refers to a cyber attack where an intruder gains unauthorized access to a target and remains undetected for an extended period.
The department did not disclose the extent of the breach but promised to provide further details in a subsequent report.
“The Treasury Department treats any threat to our nation’s systems and data with utmost seriousness,” the spokesperson emphasized.
Several countries, including the United States, have expressed concerns about Chinese government-supported hacking campaigns targeting their governments, militaries, and enterprises.
While the Chinese government has denied the allegations, it has previously stated that it opposes and cracks down on all forms of cyber attacks.
In September, the U.S. Department of Justice announced the neutralization of a global cyber attack network affecting 200,000 devices, allegedly operated by Chinese government-backed hackers.
In February, U.S. authorities revealed the dismantling of a hacking network called Volt Typhoon that targeted critical public infrastructure at China’s direction.
In 2023, Microsoft disclosed that China-based hackers had infiltrated email accounts at numerous U.S. government agencies in search of intelligence information.
The hacker group “Storm-0558” breached the email accounts of around 25 organizations and government agencies, including the State Department and Commerce Secretary Gina Raimondo.
On November 13, 2024, four witnesses appeared before a joint subcommittee of the US House Committee on Oversight and Accountability for a hearing on so-called “unidentified anomalous phenomena” (UAP). The term is a necessary rebranding of “UFO”: people who uttered those three letters in the past were rarely seen as trustworthy, let alone worthy of testifying before the US government.
The four witnesses were Rear Adm. Tim Gallaudet, former commander of the U.S. Navy’s Meteorological and Marine Command; Luis Elizondo, former director of the Defense Department’s Advanced Aerospace Threat Identification Program; investigative journalist Michael Shellenberger; and former NASA deputy administrator Michael Gold.
All four submitted written testimony before the hearing. Shellenberger also submitted what he said was an original document: an anonymous whistleblower report on a program called “Immaculate Constellation”, allegedly an “unacknowledged special access program” for top-level monitoring of UAP-related activities.
The document referred to an extensive database of high-quality evidence collected over several decades, all of which had previously evaded democratic oversight by Congress and most executive branches.
At an earlier hearing, held on July 26, 2023, former U.S. Navy pilots testified about events such as the encounter with the famous “Tic Tac” object, captured in 2004 FLIR (forward-looking infrared) video from the USS Nimitz, and the GoFast and Gimbal videos from a 2015 USS Roosevelt incident.
Previous reports of UAP/UFO sightings date back to the 1940s, and some even centuries earlier. There also seemed to be waves of UFO sightings.
Suspicions of a government cover-up have circulated since the Roswell incident in 1947, but the latest surge of interest in government secrecy was sparked by a 2017 New York Times article about the Department of Defense’s alleged UAP program.
This has led to bipartisan interest in Congress in uncovering the extent to which the U.S. government and intelligence community have covered up sightings, with lawmakers promising transparency to the American people.
So far, it is safe to say that these attempts at transparency have largely failed. Witnesses have refused to disclose classified material that would violate their confidentiality oaths, and the government’s refusal to declassify the material (or even acknowledge its existence) has created obstacles that hamper any full discussion of UAP disclosure.
The hearing on November 13, 2024 was no exception. Chairwoman Nancy Mace opened by saying she had no intention of “naming names”. She also said there were people who had tried to influence her not to hold the hearing.
Nancy Mace speaks at a House Oversight and Accountability Committee hearing in Washington, D.C. – Photo courtesy of Getty
Every witness except Gold was either unable or unwilling to address certain questions in open session, or could address them only partially. (Shellenberger said this was to protect his journalistic sources.) They also reported being threatened, or outright intimidated, against disclosing classified material.
Close encounters
If Mr. Gold had confidential information, he never disclosed it. He simply, and rightly, emphasized the need for independent scientific and academically rigorous investigation of the phenomenon.
That did not stop the other witnesses from claiming knowledge of crash-retrieval programs and of encounters with underwater UAP and USOs (unidentified submersible objects). They also implied that personnel had been treated for injuries sustained through contact with UAP, and that humanity is already dealing with non-human intelligence (NHI). If true, such information would fundamentally change our view of our place in the universe. It also suggests that a great deal of sensitive material remains hidden away.
Witnesses are permitted to speak, to an extent, about matters usually treated as official secrets, but are barred from releasing the classified material that would support their claims. This means we can never really know whether what they say is true.
Two senior officers of the Eighth U.S. Air Force identified metal fragments found by a farmer near Roswell, New Mexico, as debris from a weather balloon. The alleged crash of an alien spacecraft there in 1947 is the basis of the Roswell incident.
Their testimony therefore always leaves ample room for doubt. The witnesses may all be sincere in their beliefs, and may even have access to relevant evidence, but it is precisely this personal filtering that inevitably opens them up to dismissal and, at worst, ridicule. As in many previous hearings, they presented only what they had been told: in legal parlance, "hearsay", a class of evidence that legal systems around the world treat as questionable.
This makes it easy for so-called debunkers to point out that the evidence is forever promised for imminent release but never actually materialises.
And unfortunately, not all of the witnesses who appeared before Congress on November 13 have impeccable reputations for due diligence and fact-checking.
His response, when the mistake was pointed out, was to congratulate those who had spotted it and to say that he is always glad to see false evidence removed from the serious UAP record.
This "non-human" alien corpse was presented to Mexican politicians in 2023. Experts around the world have dismissed it as a hoax. – Photo credit: Getty
Regardless, former U.S. officials should reconsider their blind allegiance to secrecy and ask whether there is really any benefit in complying with the government's demands for silence. Their current reluctance to disclose information only abets the U.S. government's obstruction of democratic oversight.
If the witnesses' claims are true, this knowledge should be shared with the world, not hoarded by one country's government.
Battle of words
The question of whether we are alone, in space or even here on Earth, should not by definition be a national security issue. This myopic view, currently held by national intelligence agencies, is no sound basis for future policy.
It seems inevitable that at some point someone with access to classified material will have to make the drastic decision to release or publish it, put their name to it, and face the consequences.
In fact, the threat of legal consequences would lend credibility both to the whistleblower's character and to their testimony. Why risk prosecution for publishing classified documents unless you were certain they were genuine?
Even if the truth did become public, it might have little real impact. To steal Jack Ryan's line from Clear and Present Danger: once a bomb has gone off, there is no use trying to defuse it.
Until then, the meaningless spectacle of unchecked hearsay testimony will be repeated on the floor of Congress. It would probably be better to ignore witnesses' protests that no real information can be revealed, and to actively curb the spread of unverifiable claims, than to remain in a perpetual limbo of alien gossip and innuendo. Their continued silence will ultimately do great damage to whatever truth lies behind UAP.
No department in Whitehall has registered its use of artificial intelligence systems since the government announced that doing so would be made compulsory, prompting warnings that the public sector is "operating blind" in deploying algorithmic technologies that will affect millions of lives. AI is already being used by government to inform decisions on everything from benefit payments to immigration enforcement, and records show public bodies have awarded dozens of contracts for AI and algorithmic services. A contract for facial recognition software worth up to £20m was put out to tender last week by a police procurement agency set up by the Home Office, reigniting concerns about "mass biometric surveillance".
However, details of only nine algorithmic systems have been submitted to the public register so far, and none of the AI programmes used in the welfare system, by the Home Office, or by the police appear on it. The lack of disclosure comes despite the government announcing in February this year that use of the AI register would be a requirement for all government departments.
Experts warned of the potential harms of deploying AI systems uncritically, citing high-profile examples of IT systems failing to work as intended, such as the Post Office's Horizon software. The use of AI within Whitehall ranges from Microsoft's Copilot system to automated fraud and error checking in the benefits system, and the lack of transparency around the government's use of algorithms has raised concerns among privacy campaigners and experts in the field.
Since the end of 2022, only three algorithms have been recorded in the national registry. These include systems used by the Cabinet Office and AI-powered cameras analyzing pedestrian crossings in Cambridge. A system that analyzes patient reviews of NHS services is also included. Despite the slow progress in registering AI systems, public agencies have signed 164 contracts referencing AI since February. Technology companies like Microsoft and Meta are actively promoting their AI systems to government agencies.
The Department for Work and Pensions and the Home Office are already leveraging AI for various purposes, from fraud detection to decision-making processes. Police forces are using AI-powered facial recognition software to track criminal suspects, while NHS England has signed a deal with Palantir to build a new data platform. In addition, AI chatbots are being trialed to assist people in navigating government websites and assist civil servants in accessing secure government documents quickly.