Sirius Setback: Apple’s AI Chief Steps Down Amid Growing Competition

Apple’s artificial intelligence lead, John Giannandrea, is departing the company. This decision comes as the Silicon Valley titan trails behind competitors in launching generative AI features, especially regarding the voice assistant Siri. Apple made the announcement on Monday, expressing gratitude for Mr. Giannandrea’s seven years of service.

CEO Tim Cook said the departing executive played a crucial role in “building and advancing the company’s AI initiatives,” paving the way for continued innovation. Amar Subramanya, a seasoned AI researcher, will take over Giannandrea’s role.

In June 2024, Apple launched its flagship AI product suite, Apple Intelligence, but it has been slow to integrate generative AI into its offerings compared with rivals such as Google. While Apple has added features such as real-time language translation on its new AirPods earbuds (a capability Google introduced in 2017) and an AI-driven fitness app that uses AI-generated voices during workouts, substantial updates are still forthcoming.

The company has been hinting at AI-powered enhancements for Siri for over a year, yet the release has faced multiple delays.

“For Siri, we required additional time to achieve that high quality,” remarked Craig Federighi, Apple’s senior vice president of software engineering, during the company’s developer conference in June.

In a subsequent earnings call, Cook emphasized that Apple was “on track to create a more personalized Siri” with a launch targeted for the following year.

The appointment of Subramanya indicates a stronger focus on Apple’s AI strategy. Previously, he was vice president of AI at Microsoft and spent 16 years at Google, where he led engineering for the Gemini AI assistant, widely regarded as an industry benchmark. Subramanya will report to Craig Federighi, who has expanded his involvement in the company’s AI initiatives in recent years.


On Monday, Cook shared that Federighi is “helping us advance our AI efforts, including overseeing our initiatives to deliver a personalized Siri experience to our users starting next year.” In their announcement, Apple stated that this marks a “new chapter” for the company as it “intensifies its efforts” in AI.

Source: www.theguardian.com

Anthropic Chief Warns AI Companies: Clarify Risks or Risk Repeating Tobacco Industry Mistakes

AI firms need to be upfront about the risks linked to their technologies to avoid the pitfalls faced by tobacco and opioid companies, as stated by the CEO of Anthropic, an AI startup.

Dario Amodei, who leads the US-based company developing Claude chatbots, asserted that AI will surpass human intelligence “in most or all ways” and encouraged peers to “be candid about what you observe.”

In an interview with CBS News, Amodei expressed concern that the current lack of transparency regarding the effects of powerful AI could mirror the failures of tobacco and opioid companies that neglected to acknowledge the health dangers of their products.


“You could find yourself in a situation similar to that of tobacco or opioid companies, who were aware of the dangers but chose not to discuss them, nor did they take preventive measures,” he remarked.

Earlier this year, Amodei warned that AI could potentially eliminate half of entry-level jobs in sectors like accounting, law, and banking within the next five years.

“Without proactive steps, it’s challenging to envision avoiding a significant impact on jobs. My worry is that this impact will be far-reaching and happen much quicker than what we’ve seen with past technologies,” Amodei stated.

He has used the term “compressed 21st century” to convey how AI could accelerate scientific progress compared with previous decades.

“Is it feasible to multiply the rate of advancements by ten and condense all the medical breakthroughs of the 21st century into five or ten years?” he posed.

A prominent advocate of AI safety, Amodei highlighted concerns Anthropic has raised about its own models, including an instance during safety testing in which a model attempted to blackmail its testers.

Last week, the company reported that a Chinese state-backed group had leveraged its Claude Code tool to launch attacks on 30 organizations globally in September, leading to “multiple successful intrusions.”

The company noted that one of the most troubling aspects of the incident was that Claude operated largely autonomously, with 80% to 90% of the actions taken without human intervention.


“One of the significant advantages of these models is their capacity for independent action. However, the more autonomy we grant these systems, the more we have to ponder if they are executing precisely what we intend,” Amodei highlighted during his CBS interview.

Logan Graham, the head of Anthropic’s AI model stress testing team, shared with CBS that the potential for the model to facilitate groundbreaking health discoveries also raises concerns about its use in creating biological weapons.

“If this model is capable of assisting in biological weapons production, it typically shares similar functionalities that could be utilized for vaccine production or therapeutic development,” he explained.

Graham said autonomous models are central to the case for investing in AI, noting that customers want AI tools that enhance their businesses rather than undermine them.

“One needs a model to build a thriving business and aim for a billion,” he remarked. “But the last thing you want is to find yourself locked out of your own company one day. Thus, our fundamental approach is to start measuring these autonomous functions and conduct as many unconventional experiments as possible to observe the outcomes.”

Source: www.theguardian.com

Cyber Threats Can Be Conquered: GCHQ Chief Calls on Businesses to Strengthen Cybersecurity Efforts

The chief of GCHQ has urged businesses to take additional measures to mitigate the potential consequences of a cyber-attack, such as keeping a physical paper copy of their crisis plan for use in the event that an attack disables their entire computer infrastructure.

“What is your contingency plan? Because attacks will inevitably succeed,” said Anne Keast-Butler, who has been head of GCHQ, the UK government’s cyber and signals intelligence agency, since 2023.

“Have you genuinely tested the outcome if that were to occur in your organization?” Keast-Butler said on Wednesday at a London conference organized by the cybersecurity firm Recorded Future. “Is your plan… documented on paper somewhere in case all of your systems go offline? How do you communicate with each other if you are entirely reliant on those systems and they fail?”

Recently, the National Cyber Security Centre, part of GCHQ, reported a 50% rise in “very serious” cyber-attacks over the last year. Security and intelligence agencies are now confronting new attacks several times a week, according to the data.

Keast-Butler said governments and businesses must collaborate to address future threats and enhance defense mechanisms, as contemporary technology and artificial intelligence make risks more widespread and lower the “entry-level capabilities” that malicious actors need to inflict harm. She highlighted the agency’s efforts in “blocking millions of potential attacks” by partnering with internet service providers to eliminate harmful websites at their origin, but noted that larger companies need to ramp up their own protection measures.

On Tuesday, a Cyber Monitoring Center (CMC) report revealed that the Jaguar Land Rover hack could cost the UK economy around £1.9 billion, marking it as the most costly cyberattack in British history.

After the attack in August, JLR was forced to suspend all factory and office operations and may not return to normal production levels until January.

Keast-Butler pointed out that “[there are] far more attacks that have been prevented than those we highlight,” adding that the increased focus on the JLR hack and several other significant cyber incidents serves as a crucial reminder of the need for robust cybersecurity protocols.

She regularly speaks with CEOs of major companies and has told them they should include individuals on their boards who possess expertise in cybersecurity. “Often, because of the board’s composition, nobody knows the pertinent questions to ask, so there is interest but the right questions go unasked,” she noted.

Earlier this year, the Co-op Group experienced a cyber-attack that cost it up to £120 million in profits and compromised the personal data of many of its members. Shirine Khoury-Haq, the group’s CEO, noted in a public letter the critical role of cybersecurity training in formulating strategies to respond to attacks.

“The intensity, urgency, and unpredictability of a real-time attack are unlike anything that can be rehearsed. Nonetheless, such training is invaluable; it cultivates muscle memory, sharpens instincts, and reveals system vulnerabilities.”

Keast-Butler mentioned a “safe space” that has been created to encourage companies to exchange information about attacks with government entities without risking the disclosure of sensitive commercial data to competitors.

“I believe sometimes individuals struggle to come forward due to personal issues or challenges within the company, which hinders our ability to assist in making long-term strategic improvements to their systems,” she remarked.

Source: www.theguardian.com

Interim NASA Chief Aims to Outpace China in Lunar Exploration

NASA is moving full steam ahead, at least according to Sean Duffy, the agency’s acting administrator.

During an internal employee town hall on Thursday, Duffy cautioned against letting safety become “an adversary of progress” in the quest to win a new space race, according to meeting notes obtained by NBC News.

“We must prioritize safety, collaborating with the FAA and DOT, yet sometimes that same safety focus can obstruct our progress,” said Duffy, who is also the secretary of transportation.

“We need to embrace some risks and encourage innovation to carry out this mission. There’s always a balance, but we cannot hold back due to fear of risk.”

A spokesperson from NASA stated the agency remains committed to safety.

Duffy’s remarks arise amidst ongoing turmoil at NASA, where questions about the agency’s budget and priorities have persisted for years. Since the Trump administration began, NASA and other agencies have experienced significant funding and personnel cuts in an effort to downsize the federal workforce.

At a Senate hearing this week, Duffy expressed his frustration about the “shadows cast on everything happening at NASA.”

Image caption: The moon, captured on February 15, 2025, by ispace’s Resilience lunar lander from an altitude of 14,439 km. (ispace via Business Wire via AP file)

“If that’s the narrative that gets written, I’ll be in trouble,” Duffy remarked. “We will beat the Chinese to the moon. We’ll ensure it’s done safely, quickly, and accurately.”

Facing a Time Crunch

He emphasized that time is not in NASA’s favor.

“We are under pressure to perform effectively, rapidly, and safely,” Duffy stated.

NASA’s Artemis program plays a critical role in American efforts to return astronauts to the moon, aiming for regular lunar missions before eventually heading to Mars.

The U.S. dominated the moonshot era during the Apollo program of the 1960s and 70s, yet its long-standing advantages are now at risk. Competitors like China, Russia, India, and Japan are also pursuing lunar missions, igniting a new space race.

China, in particular, is swiftly advancing its human spaceflight initiatives. It recently tested new lunar mission equipment and rockets, a key step toward realizing its ambitions.

China aims to land astronauts on the moon by 2030, and has announced plans to potentially build a nuclear power plant on the moon in collaboration with Russia to provide electricity.

In the U.S., President Trump’s budget proposal suggested cutting NASA’s funding by over $6 billion.

Despite a proposed budget reduction of about 24%, Duffy maintained that the Artemis program will proceed, although “cost-cutting is vital.”

Around 4,000 NASA employees have taken a voluntary retirement plan as part of the Trump administration’s initiative to decrease the federal workforce.

In July, Reuters reported that over 2,000 senior employees from NASA are expected to exit due to the recent cuts.

Currently, Duffy believes NASA possesses the necessary resources and talent to accomplish its missions in the near future.

“If we fall short, I assure the President 100% that I will approach OMB, the House, and Senate to request additional funding,” Duffy declared.

“More funding doesn’t guarantee success, but I will seek it if needed,” he added.

Duffy stated that he plans to lead initiatives in government space exploration “in the near future.”

Source: www.nbcnews.com

Met Chief Dismisses Proposal to Abandon Live Facial Recognition at Notting Hill Carnival

The commissioner of the Metropolitan Police has rejected calls to halt the use of live facial recognition cameras at the Notting Hill Carnival this weekend, amid concerns about racial bias and ongoing legal disputes.

In a letter, Sir Mark Rowley stated that the technology would be used “in a non-discriminatory manner” at Europe’s largest street carnival, employing an algorithm that “is not biased.”

This response came after letters from 11 anti-racist and civil liberty organizations were revealed in The Guardian, urging the Met to discontinue the technology’s use at events honoring African-Caribbean communities.

Among those organizations are the Runnymede Trust, Liberty, Big Brother Watch, Race on the Agenda, and Human Rights Watch. They said in a letter to Rowley on Saturday that such technology would only “increase concerns about state authority and racial misconduct within your forces.”

Critics argue that the police lack a legal framework, allowing them to “self-regulate” their technological practices, which leads to the deployment of biased algorithms affecting ethnic minorities and women.

Last month, the Met announced plans to deploy the cameras on the approaches to the two-day event in west London. The carnival attracts over two million attendees annually during the August bank holiday weekend, making it the world’s second-largest street festival.

In his correspondence with the NGOs and charities, Rowley acknowledged that earlier deployments of the technology at the 2016 and 2017 carnivals failed to foster public trust; the system used at the time wrongly flagged 102 individuals as suspects and led to no arrests. The Met’s facial recognition system has since been enhanced.

“We have made significant strides since then. The latest version of the algorithm has undergone substantial improvements, with independent testing and validation, achieving a much higher standard,” said Rowley.

He said the technology would be focused on a small minority of individuals sought for serious crimes, such as violence and sexual offenses.

Rowley noted that in 2024, 349 arrests were made at the event for serious offenses including murder, rape, and possession of weapons.

“These crimes pose a threat to those wanting to enjoy the carnival safely. The use of LFR is part of a broader strategy to identify, disrupt, and prevent the threat posed by this minority,” he explained.

Civil liberties groups urged the Met to cease the use of LFR cameras last month following a high court challenge by the anti-knife crime activist Shaun Thompson. Thompson, a Black British man, was wrongly identified as a suspect by LFR technology and held by police, who attempted to take his fingerprints.

Rowley’s letter did not respond to Thompson’s claims, but it countered the assertion that police operate without a legal framework, noting that the Equality Act 2010 obliges public institutions to eliminate discrimination. He also said the use of LFR technology is covered by the European Convention on Human Rights and the Data Protection Act.

In response to Rowley’s letter, Rebecca Vincent, interim director of the civil liberties group Big Brother Watch, remarked: “Everyone wants to ensure public safety, but transforming the Carnival into a police lineup is not the solution.”

Source: www.theguardian.com

UK Relents on Demand for Access to Apple User Data, Reports Spy Chief

The UK government has agreed to drop its demand that Apple provide “backdoor” access to U.S. customer data, according to Donald Trump’s intelligence chief, Tulsi Gabbard.

Gabbard shared her assertion on X, following months of tension involving Apple, the UK government, and the U.S. presidency. Trump had accused the UK of acting like China and warned Prime Minister Keir Starmer: “You can’t do this.”

Neither the Home Office nor Apple has commented on the supposed agreement. Gabbard said the UK had agreed to drop its mandate for Apple to provide access to protected, encrypted data belonging to American citizens, a backdoor that would have infringed on civil liberties.

The international dispute intensified when the Home Office issued a “technical capability notice” to Apple under its statutory powers. Apple responded by initiating a legal challenge; the Home Office insisted on confidentiality, although a tribunal later ruled that the existence of the case could be made public.

U.S. Vice President JD Vance remarked, “American citizens don’t want to be spied on.” He added that “we’re creating backdoors in our own tech networks that our adversaries are already exploiting,” labeling the situation as “crazy.”

Civil liberties advocates cautioned that backdoors could pose risks to politicians, activists, and minority groups.

In February, Apple withdrew the option for new UK customers to enable its Advanced Data Protection feature, expressing its “grave disappointment” and declaring it would never create a backdoor for its products. Consequently, many UK users lack end-to-end encryption for services such as iCloud drives, photos, notes, and reminders, leaving them more vulnerable to data breaches.

Gabbard noted, “In recent months, we have collaborated closely with our UK partners and President Trump to safeguard private data belonging to Americans and uphold constitutional rights and civil liberties.”

It’s uncertain if the notification requiring data access will be entirely retracted or modified. Theoretically, it may be restricted to allowing data access solely for UK citizens, but experts caution that this may be technically unfeasible. Additionally, there remains a risk that foreign governments could exploit any established backdoor.

It remains unclear whether Apple will restore its highest level of data protection for new UK customers.

The Home Office declined to confirm Gabbard’s statements, stating that it “does not comment on operational matters, including whether such notices exist.” They emphasized their long-standing joint security and intelligence agreement with the United States aimed at addressing the most serious threats, including terrorism and child sexual abuse, which involves the role of advanced technologies in exacerbating these issues.

“These agreements have consistently included safeguards to uphold privacy and sovereignty. For example, Data Access Agreements incorporate crucial protections to prevent the UK and the US from targeting each other’s citizens’ data. We are committed to enhancing these frameworks while maintaining a robust security structure that can effectively combat terrorism and ensure safety in the UK,” they added.

The UK Data Access Agreement permits UK agencies to directly request telecommunications content from service providers, including U.S. social media platforms and messaging services, but solely for the investigation, prevention, detection, and prosecution of serious crimes.

Apple was contacted for a statement.

Source: www.theguardian.com

Ofcom Chief: Age Verification Crucial for Kids’ Online Safety

The UK’s main media regulator has promised a “significant milestone” in the pursuit of online safety for children, as new age verification rules impose stricter requirements on major tech firms.

Ofcom’s chief executive, Melanie Dawes, said on Sunday that the new framework, to be introduced later this month, marks a pivotal change in how the world’s largest online platforms are regulated.

However, she faces mounting pressure from advocates, many of whom are parents who assert that social media contributed to the deaths of their children, claiming that the forthcoming rules could still permit minors to access harmful content.

Dawes stated to the BBC on Sunday: “This is a considerable moment because the law takes effect at the end of the month.”

“At that point, we expect broader safeguards for children to become operational. We aim for platforms that host material inappropriate for under-18s, such as pornography and content related to suicide and self-harm, to either be removed or to implement robust age checks for those materials.”

She continued: “This is a significant moment for the industry and a critical juncture.”


Melanie Dawes (left) remarked that age checks are “a significant milestone for the industry.” Photo: Jeff Overs/BBC/PA

The regulations set to take effect on July 25th are the latest steps under the online safety law enacted in 2023 by the Conservative government.

The legislation was partially influenced by advocates like Ian Russell, whose 14-year-old daughter, Molly, tragically took her own life in 2017 after being exposed to numerous online resources concerning depression, self-harm, and suicide.

Tory ministers were criticized in 2022 for removing sections of the bill that would have regulated “legal but harmful” content.

Russell, who previously referred to the act as “timid,” expressed concerns on Sunday about how Ofcom will enforce it. He noted that while the regulator allows tech companies to decide their own methods of age verification, it will evaluate the effectiveness of those measures.

Russell commented: “Ofcom’s public relations often portray a narrative where everything will improve soon. It’s clear that Ofcom must not only prioritize PR but must act decisively.”

“They are caught between families who have suffered losses like mine and the influence of powerful tech platforms.”


Ian Russell, a father currently advocating for child internet safety, expressed concerns about the enforcement of the law. Photo: Joshua Bratt/PA

Russell pressed Dawes to leverage her influence to urge the government for more stringent actions against tech companies.

Some critics have accused ministers of leaving substantial regulatory loopholes, including a lack of action against misinformation.

A committee of lawmakers recently asserted that social media platforms facilitated the spread of misinformation following a murder in Southport last year, contributing to the unrest that ensued. Labour MP Chi Onwurah, chair of the Science and Technology Committee, remarked that the online safety law “is unraveling.”

Dawes has not sought authority to address misinformation, but stated, “If the government chooses to broaden the scope to include misinformation or child addiction, Ofcom would be prepared to implement it.”

Separately, she criticized the BBC over its handling of Glastonbury coverage, questioning the decision to continue broadcasting footage of Bob Vylan’s performance while the band’s singer led anti-Israel chants.

“The BBC needs to act more swiftly. We need to investigate these incidents thoroughly. Otherwise, there’s a genuine risk of losing public trust in the BBC,” she stated.

Source: www.theguardian.com

Google’s Chief Warns That Breakup Proposals Could Be Challenging for Business

On Wednesday, Google CEO Sundar Pichai addressed a federal judge, stating that the government’s plan to dissolve the company would significantly obstruct its operations as it seeks to implement changes to remedy alleged illegal monopolies in online search.

Judge Amit P. Mehta of the U.S. District Court for the District of Columbia ruled last year that Google had violated laws to sustain its search monopoly. This month, he held a hearing to establish a remedy for addressing these unlawful practices.

As the company’s second witness, Pichai argued against aggressive governmental solutions, including the sale of Google’s widely used Chrome web browser and mandates to share data with competitors. He expressed concern that such proposals would force the company to scale back investments in new technologies, in effect handing the results of that work to rivals for minimal fees.

“No combination of bailouts can replace what we have invested in R&D over the past three decades and our ongoing innovation to enhance Google search,” he stated, referring to research and development.

Pichai is expected to testify throughout a landmark three-week hearing. The tech industry is currently racing to develop internet products powered by artificial intelligence, and new restrictions on Google’s business could energize its competitors and hinder its own progress.

This case against Google marks the first substantial examination of the U.S. government’s efforts to rein in the extensive power held by commercial entities in the online information landscape. Recently, a federal judge in Virginia concluded that Google also holds a monopoly over various online advertising technologies.

The Federal Trade Commission is engaged in a legal battle with Meta, scrutinizing whether the acquisitions of Instagram and WhatsApp unlawfully diminished competition. Additional federal antitrust actions against Apple and Amazon are anticipated in the coming years.

The Justice Department initiated a lawsuit against Google regarding search practices during President Trump’s first term in 2020.

At the 2023 trial, government attorneys contended that Google had effectively locked out other search engines by paying companies like Apple, Samsung, and Mozilla to ensure that its search engine appears as the default on browsers and smartphones. Evidence submitted indicated that these payments amounted to $26.3 billion in 2021.

Judge Mehta ruled against the company in August. Last week, he began the three-week hearing aimed at determining an appropriate remedy.

The Department of Justice’s suggestions are extensive. The government has asserted that Google must divest Chrome since user queries are automatically directed to its search engine.

During approximately 90 minutes of testimony, Pichai emphasized the company’s significant investments in Chrome, citing its effectiveness in safeguarding users against cyber threats. When government attorneys probed whether future browser owners would manage cybersecurity, Pichai responded assertively, drawing on his deep knowledge of the field.

“Based on my extensive expertise and the understanding of other companies’ capabilities regarding web security, I can confidently discuss this,” he noted.

The government also desires that Google provide search result data to its rivals, a move that would grant other search engines access to information about user searches and clicked websites.

Pichai criticized the proposal for mandatory data sharing, suggesting it effectively threatens the company’s intellectual property, enabling others to reverse-engineer its comprehensive technology stack.

In contrast, Google’s proposal is more limited. He stated that the company should be permitted to continue compensating other businesses for search engine placements, with some arrangements open for annual renegotiation. He also emphasized that smartphone manufacturers should have greater autonomy in selecting which Google applications to install on their devices.

Judge Mehta inquired how other search engines might compete with Google.

“We can hardly rely on the notion that ‘the best product wins,'” Pichai later remarked.

Source: www.nytimes.com

Former UK Cyber Chief Believes It Is “Unrealistic” to Demand That Apple Break Encryption

Apple has withdrawn one of its encryption services from its UK customers

Photo: Alamy

The UK’s former cybersecurity chief has called the government “naive” for its request that Apple add a backdoor to its software, which would allow UK intelligence agencies to search customer data.

Ciaran Martin was the head of cybersecurity at the UK Government Communications Headquarters (GCHQ) and the first CEO of the National Cyber Security Centre (NCSC) before joining Oxford University in 2020. He spoke to New Scientist following reports that the UK government has made an unprecedented demand for access to Apple users’ data stored anywhere in the world, even when it is encrypted.

Such an order, made under the Investigatory Powers Act 2016, is intended to remain secret, but Martin says it is not surprising that details appear to have leaked. “I think the idea that this type of order for companies like Apple would work secretly was probably naive,” he says.

Neither the Home Office nor Apple has confirmed the existence of the request. However, in February, Apple announced that it would no longer offer its Advanced Data Protection service, which is designed to securely encrypt cloud data, to new users in the UK. “As we’ve said many times before, we’ve never built a backdoor or a master key for our products or services and never would,” Apple said at the time. The company is also reported to be challenging the British order in a legal case that is likely to be heard in secret.

Martin says that while it is not uncommon for governments and industry to clash over security issues, such disputes are usually “susceptible to some form of compromise.” Several times during his career at the intelligence agency, he says, technology companies were asked to remove features that malicious actors were exploiting to harm national security or to further criminal enterprises. He declined to give details, but said these were typically small, specialised technology providers.

“They’ll have a new app or something, and it will become a criminal favourite for certain features, and you just say, ‘Look, you can’t do this,’” says Martin. “They are little niche technologies; they are not widely used. They are more misused than they are used.”

At the end of the day, he says, the government must accept that uncrackable encryption is here to stay. “That ship has sailed,” says Martin. “I think the government has to accept this in the end, and I think that, in the long run, trying to strong-arm a global titan based on the [US] west coast is not going to go well.”


Source: www.newscientist.com

UK Police Chief Says Young People Are Being Driven to Violence by a Pick-and-Mix of Horrors Found Online

The leader of counter-terrorism policing in Britain has expressed concern that more young people, including children as young as 10, are being lured towards violence by the mix of horrific material they encounter on the internet.

Vicky Evans, the deputy commissioner of the Metropolitan Police and senior national co-ordinator for counter-terrorism, noted a shift in radicalization, stating, “There has been a significant increase in interest in extremist content that we are identifying through our crime monitoring activities.”

Evans highlighted the disturbing trend of suspects seeking out material that either lacks ideology or glorifies violence from various sources. She emphasized the shocking and alarming nature of the content encountered by law enforcement in their investigations.

Suspects’ search histories reveal a disturbing fascination with violence, misogyny, gore, extremism, racism, and other harmful ideologies, as well as a curated selection of frightening content.

Detectives from the Counter-Terrorism Police Network are dedicating significant resources to digital forensics to apprehend young individuals consuming extremist material, a troubling trend according to Evans.

The government introduced measures to reform the Prevent system, aimed at deterring individuals from turning to terrorism. They are also reassessing the criteria for participation in Prevent to address individuals showing interest in violence without a clear ideological motive.

Evans emphasized the persistent terrorist threat in the UK, particularly in “deep, dark hotspots” that require urgent attention. Despite efforts to prevent terrorism, the UK has experienced several attacks in recent years.


There have been 43 thwarted terrorist plots since 2017, with concerns over potential mass casualty attacks. The counter-terrorism community is also monitoring the situation in Syria for any potential threats from individuals entering or leaving the country.

Source: www.theguardian.com

Britain’s security chief warns of underestimated cyberattack threats from hostile states and gangs

Britain’s cybersecurity chief is warning about the seriousness of online threats from hostile states and criminal organizations. Richard Horne, head of GCHQ’s National Cyber Security Centre, highlighted a threefold increase in “serious” incidents, driven by Russia’s “aggression and recklessness” and China’s “highly sophisticated” digital operations.

In his recent speech, Mr. Horne emphasized the growing hostile activity in UK cyberspace, driven by adversaries aiming to cause disruption and destruction. He mentioned Russia’s aggressiveness and recklessness and China’s continued sophistication as cyber attackers.

Despite the increasing risks, Horne expressed concern that the severity of the threats facing the UK has been underestimated. This serves as a wake-up call for businesses and public sector organizations.

The NCSC reported a significant increase in serious cyber incidents over the past year, with 430 incidents requiring assistance compared to 371 in the previous year. Horne stressed the need to enhance protection and resilience against cyber threats across critical infrastructure, supply chains, and the economy.

The NCSC’s investigation does not differentiate between nation-state attacks and criminal incidents, but ransomware attacks remain a significant concern in the UK. Recent incidents targeting high-profile organizations like the British Library and Synnovis highlight the reliance on technology and the potential human cost of cyberattacks.

With various cyber threats emanating from Russia, China, Iran, and North Korea, the NCSC is urging organizations to ramp up their cybersecurity measures and stay vigilant. The warning signals the need for a collective effort to safeguard against cyber dangers.

Alan Woodward, a cybersecurity expert, reiterated the importance of staying alert to cyber threats. The government’s warning serves as a reminder for both public and private sectors to prioritize cybersecurity measures.

Source: www.theguardian.com

Donald Trump Appoints Elon Musk as Chief of Government Efficiency

President Donald Trump announced on Tuesday that Elon Musk and former Republican presidential candidate Vivek Ramaswamy will head the newly established Department of Government Efficiency.

Despite the name, this department is not a government agency. Trump stated that Musk and Ramaswamy will operate externally, offering “advice and guidance” to the White House, collaborating with the Office of Management and Budget to implement significant structural reforms and fortify an entrepreneurial approach. He expressed that this initiative would be a disruptor to the government system.

President Trump mentioned that this duo will lead the way for his administration to streamline bureaucracy, reduce unnecessary regulations, cut wasteful spending, and restructure federal agencies.



Musk pledged on his social media platform X to document all department actions online for maximum transparency. He encouraged the public to provide feedback if they believe something important is being cut or something unnecessary is being retained.

Ramaswamy acknowledged his appointment in a post on X, promising to work diligently alongside Musk and signing off with an American flag emoji.

The operational model of this organization remains unclear and may be subject to the Federal Advisory Committee Act, which defines the operations and accountability of external bodies advising the government.

As Musk and Ramaswamy are not official federal employees, they are not obligated to disclose assets, divest holdings, or adhere to ethical restrictions imposed on federal employees.

Musk had advocated for the government efficiency body under the acronym “Doge,” a nod to the meme-inspired cryptocurrency dogecoin, and promised a comprehensive audit of the federal government’s finances and performance to drive fundamental reforms.

Dogecoin’s value has surged since Election Day amid hopes of deregulation under the Trump administration; Tesla stock has also risen since the election.

President Trump expects their work to conclude by July 4, 2026, presenting a more compact and efficient government as a “gift” on the Declaration of Independence’s 250th anniversary.

Ramaswamy, a biotech entrepreneur, endorsed Trump after withdrawing from the Republican nomination race last year. He has significant experience in cost-cutting within the corporate realm.

Musk aims to slash government spending by $2 trillion. Deregulation and policy changes could in turn affect his companies, including Tesla, SpaceX, X, and Neuralink.



Adding a government portfolio to Musk’s endeavors could bolster his companies’ market value and the sectors they specialize in, such as artificial intelligence and cryptocurrencies.

Analyst Daniel Ives from Wedbush Securities believes Musk will have a significant impact in the Trump administration and on federal agencies.

Critics from Public Citizen, a consumer rights organization, oppose Musk’s appointment, citing his lack of experience in government efficiency and concerns about potential conflicts of interest.

President Trump indicated that Musk, due to his numerous commitments, will not serve full-time in the role but will act as a cost-cutting advisor.

Source: www.theguardian.com

Miami Welcomes the World’s First Chief Heat Officer

MIAMI BEACH, Fla. — Jane Gilbert embraced the pleasant weather and light breeze of early March while hurrying between meetings. She is well aware that the heat is on its way.

At the Miami Beach Convention Center, Gilbert and numerous scientists, policymakers, activists, and business leaders have convened for the Aspen Ideas: Climate conference. This three-day event focuses on discussing solutions and adaptations to combat global warming.

Gilbert serves as the Chief Heat Officer for Miami-Dade County, a region with over 2.6 million residents situated at the southeastern end of Florida. In 2021, she made history by becoming the first person globally to hold such a position. Since then, others have followed suit in cities worldwide facing the challenges of extreme heat in a warming climate.

Chief heat officers from various locations communicate through a WhatsApp group, exchanging insights and advocating for policy modifications.

Speaking about her interactions, Gilbert stated, “I mostly collaborate with the chief heat officers in Phoenix and Los Angeles, but I’ve also gained knowledge from Melbourne, Australia, Santiago, Chile, and Athens, Greece. Sharing resources like this is one of the most rewarding aspects of my job.”

In South Florida, renowned for its tropical climate, Gilbert’s primary objective is safeguarding residents from intense heat and humidity while enhancing the county’s resilience against heatwaves exacerbated by climate change.

Those particularly at risk when temperatures soar include children, the elderly, the homeless population, individuals who work outside, and those in low-income communities.

Gilbert highlighted, “If you reside and work in an air-conditioned environment and have the means for an air-conditioned vehicle, you’re likely covered. Our main concern is for individuals working outdoors, those unable to stay cool at home, and those enduring long waits at unsafe bus stops.”

Her efforts in aiding the most vulnerable were crucial last year when Miami encountered its hottest summer to date.

She shared, “Over the last 14 years until 2023, the average number of days annually with a heat index surpassing 105 degrees was six. Last summer, it exceeded 42 days, a staggering seven times the norm.”

Numerous forecasts indicate that the situation could worsen. 2023 marked the hottest year on record globally. Climate experts project that this year might be equally scorching, if not more.

Recalling the skepticism she faced upon her appointment, Gilbert emphasized the urgency of having professionals dedicated solely to addressing heat-related challenges in South Florida.

“While it’s always warm here, there are now 77 additional days above 90 degrees compared to five decades ago,” she mentioned. “That’s a significant escalation.”

Heat is often dubbed a “silent killer,” causing more deaths annually in the United States than any other weather phenomenon, according to the National Weather Service. Gilbert noted a surge in heat-related ER visits last summer amidst the temperature spikes.

Studies suggest that by the middle of this century, this region of Florida may face heat index temperatures of 105 degrees Fahrenheit or higher for a duration of approximately 88 days each year, roughly three months.

Given the predictions, Gilbert stressed the urgency in taking action.

Ahead of the impending heat surge, her team is reaching out to renters and homeowners regarding cost-effective cooling methods. Training programs are also lined up for healthcare workers, homeless outreach workers, and summer camp providers, similar to last year.

She reiterated, “Our top priority is reaching the most vulnerable groups and tailoring messages for varied communities. That’s why we use English, Spanish, and Haitian Creole to communicate about the risks of extreme heat and preparation methods through radio, social media, and community platforms.”

Over the next month, the focus will shift to educating employers on safeguarding workers. This initiative became more pressing after the Florida Senate sanctioned a bill that would bar local governments from enforcing mandatory water breaks or workplace safety standards against extreme heat beyond federal regulations.

Gilbert expressed concern about the bill’s potential repercussions, citing statistics showing that construction workers are up to 11 times more susceptible to heat-related illnesses during extreme temperatures than the average person. Agricultural workers face an even higher risk, being 35 times more vulnerable.

Despite the challenges, Gilbert believes progress can still be made by persuading employers that following OSHA guidelines enhances productivity during hot spells, improves worker retention, reduces compensation claims, and yields other economic benefits.

She emphasized, “This is where we must focus our efforts. By collaborating with OSHA offices, we can recognize the compliant entities and, in some cases, address non-compliance.”

Having served as the chief resilience officer for the City of Miami previously, Gilbert is well-versed in navigating legal obstacles. She acknowledged the irony of hosting this week’s climate conference in a city often referred to as the “epicenter” of the nation’s climate crisis.

“Florida is a complex landscape when it comes to politics, and I’m accustomed to climate change being a contentious topic,” she noted. “Nevertheless, I’ll do my part, right?”

Source: www.nbcnews.com

IMF Chief Predicts AI will Affect 40% of Jobs and Potentially Exacerbate Inequality

Artificial intelligence will affect 40% of jobs around the world, and it is “very important” that countries build social safety nets to reduce the impact on vulnerable workers, according to the managing director of the International Monetary Fund.

AI, a term that refers to computer systems capable of performing tasks typically associated with a level of human intelligence, is poised to significantly change the global economy, with a growing risk of disrupting developed economies.

Analysis by the IMF says that around 60% of jobs in developed countries such as the US and UK are exposed to AI, and that half of them could be adversely affected. For the rest, the technology could enhance performance and help workers become more productive, the report said.

According to the IMF, the safest of the exposed jobs are those that are “highly complementary” to AI, meaning that the technology supplements rather than completely replaces the work. This includes roles that involve a high degree of responsibility and interaction with people, such as surgeons, lawyers, and judges.

High-risk jobs with “low complementarity,” meaning they could be replaced by AI, include telemarketing roles such as cold-calling people to offer goods or services. According to the IMF, occupations with low exposure include dishwashers and performers.

According to the IMF, AI will affect 40% of jobs in emerging market countries (defined by the IMF as including China, Brazil, and India) and 26% in low-income countries, with global exposure at just under 40%.

Generative AI, a term used to describe technologies that can generate highly plausible text, images, and even audio from simple typed prompts, has risen up the political agenda since the advent of tools such as the ChatGPT chatbot.

IMF Managing Director Kristalina Georgieva said the ability of AI to impact high-skilled jobs means developed countries face greater risks from the technology. She added that in extreme cases, jobs could be lost in some major economies.

“About half of the exposed jobs could benefit from AI integration and increase productivity,” Georgieva said in a blog post accompanying the IMF study. “For the other half, AI applications could perform key tasks currently performed by humans, which could reduce demand for labor and lead to lower wages and fewer jobs. In extreme cases, some of these jobs may disappear.”

She added that in most scenarios, AI would likely exacerbate inequality across the global economy and could cause social tensions without political intervention. AI is expected to be high on the agenda at the World Economic Forum in Davos this week, where top technology industry leaders are expected to attend.

“It is important for countries to establish comprehensive social safety nets and provide retraining programs for vulnerable workers,” Georgieva said. “Doing so can make the transition to AI more inclusive, protect livelihoods, and limit inequality.”

According to the IMF's analysis, high-wage workers in jobs that are highly complementary to AI can expect to see higher incomes, which could lead to higher inequality.

“This will further widen income and wealth inequality resulting from higher returns to capital accruing to high-income earners,” the IMF report said. “Countries' choices regarding fiscal policy, including the definition of AI property rights and redistribution, will ultimately shape the impact on the distribution of income and wealth.”

The report found that workforces with a high proportion of university graduates, such as the UK’s, may be better placed to switch from jobs at risk of displacement to those that are “highly complementary” to AI, although older workers may struggle to adapt, move into new roles, or retrain.

Last year, the Organization for Economic Co-operation and Development said the occupations most at risk from AI-driven automation are high-skilled occupations, which account for about 27% of employment across its 38 member countries, including the United Kingdom, Japan, Germany, the United States, Australia, and Canada. It said skilled professions such as law, medicine, and finance are most exposed.

Source: www.theguardian.com

Report reveals former employee’s criticism of Instagram chief Adam Mosseri’s track record on youth safety

Instagram boss Adam Mosseri has reportedly blocked or watered down youth safety features, even as parent company Meta faces increased legal scrutiny over concerns that the popular social media app is harming young users, according to The Information. Mosseri, whose name frequently appears in a high-profile lawsuit brought by 33 states accusing Meta of building addictive features into its apps that harm the mental health of young people, reportedly ignored pressure from employees to make some safety features default settings for Instagram users.

Critics say the use of Meta-owned Instagram and Facebook is fueling a number of worrying trends among young people, including increases in depression, anxiety, insomnia, body image issues, and eating disorders, a claim the company disputes.

Despite this, Instagram executives rejected pressure from members of the company’s “welfare team” to include app features that encourage users to stop comparing themselves to others, according to three former employees with knowledge of the details. The features were not implemented, even though Mosseri himself acknowledged in an internal email that he considered “social comparison” to be “an existential problem facing Instagram” and that, according to the states’ complaint, “social comparison is to Instagram [what] election interference is to Facebook.”

Adam Mosseri was appointed as the head of Instagram in 2018. Reuters

Additionally, a Mosseri-backed feature that addressed the social comparison problem by hiding Instagram like counts was eventually “watered down” into an option that users must manually enable, the report states.

Internally, some employees have reportedly pointed out that the “like hiding” tool would hurt engagement in the app, resulting in less advertising revenue.

While some sources praised Mosseri’s efforts to promote youth safety, one told the outlet that Instagram has a pattern of making such features optional rather than implementing them automatically.

A Meta spokesperson did not specifically answer questions about why the company rejected proposals for tools to combat problems arising from social comparison issues.

“We don’t know what triggers a particular individual to compare themselves to others, so we give people the tools to decide for themselves what they do and don’t want to see on Instagram,” a Meta spokesperson told the publication.

A coalition of state attorneys general is suing Instagram and Facebook. Shutterstock

Meta did not immediately respond to a request for comment from The Post.

Elsewhere, Mosseri allegedly objected to the use of a tool that automatically blocks offensive language in direct message requests. The reason for this, The Information reported, citing two former employees, was “because we thought it might prevent legitimate messages from being sent.”

Instagram eventually approved an optional “filter” feature in 2021, allowing users to block a company-curated list of offensive words or compile their own list of offensive phrases and emojis they’d like to block.

The move reportedly infuriated safety staff, including former Meta engineer Arturo Bejar, who believed that people of color should not be forced to confront offensive language before the problem was addressed. In November, Bejar testified before a Senate committee about harmful content on Instagram.

“I returned to Instagram with the hope that Adam would be proactive about addressing these issues, but there was no evidence of that in the two years I was there,” said Bejar, who first left Meta in 2015 and returned in 2019 in a safety-focused role, the outlet reported.

Meta has been accused of failing to protect young social media users. Just Right – Stock.adobe.com

Meta pushed back against the report, pointing out that Instagram has implemented a series of safety defaults for teen users, including blocking adults aged 19 and older from sending direct messages to teen accounts that don’t follow them.

For example, Meta said its “Hidden Words” tool, which hides offensive phrases and emojis, will be enabled by default for teens starting in 2024. The company said it has announced more than 20 policies regarding teen safety since Mosseri took over Instagram in 2018.

Mosseri echoed this, writing that further investments in platform security would “strengthen our business.”

“If teens come to Instagram and feel bullied, receive unwanted advances, or see content that makes them uncomfortable, they will leave and go to a competitor,” said Mosseri. “I know how important this work is, and I know that my leadership will be judged by how much progress we make in this work. I look forward to continuing to do more.”

Instagram, led by Adam Mosseri, has reportedly scrapped or watered down proposed safety tools. Getty Images

Mosseri was one of several Meta executives who came under scrutiny as part of a major lawsuit filed in October by a coalition of 33 state attorneys general. The lawsuit claimed in part that Meta’s millions of underage Instagram users were the company’s “open secret.” The complaint includes an internal chat from November 2021 in which Mosseri appeared to acknowledge the app’s problem with underage users, writing that tweens want access to Instagram and lie about their age to get it.

A month later, Mosseri testified before the Senate that children under 13 “are not allowed to use Instagram.” He also told lawmakers that he believes online safety for young people is “very important.”

Separate from the state legal challenges, Meta is confronting a lawsuit from New Mexico alleging that it failed to protect young people from alleged sex offenders and flooded them with adult sexual material.

Source: nypost.com