Alan Turing Institute Unveils Initiative to Safeguard Britain Against Cyber Attacks

Britain’s foremost AI institute has announced a new initiative to safeguard the nation from cyber-attacks targeting essential services such as energy, transportation, and utilities. The announcement follows the resignation of its chief executive, who stepped down amid pressure from government officials over allegations of a toxic workplace culture.

On Tuesday, the Alan Turing Institute revealed that it will “launch a program of science and innovation focused on shielding the UK from hostile threats.” This initiative is part of a broader reorganization following the resignation of CEO Jean Innes last month, which came after staff discontent and a government-ordered strategic review of the institution.

This mission arises from escalating worries about online disruptions and the UK’s susceptibility to cyberattacks, particularly in light of recent incidents that impacted Amazon’s cloud operations globally, along with cyberattacks that disrupted production at Jaguar Land Rover’s facility and influenced the supply chains of Marks & Spencer and Co-op.

Bryce Crawford, the former leader of the UK Air and Space Warfare Center, is expected to deliver a report next month addressing how government-supported research institutes can “enhance the scale of the government’s AI goals in defense, national security, and intelligence.”

Chairman Doug Garr, a former president of Amazon UK, disclosed that 78 research initiatives at the 440-person institute have been shut down, transferred, or completed because they did not align with the new direction.

The institute has experienced significant internal conflict since last year as staff opposed the proposed changes, leading to a group of employees submitting a whistleblower complaint to the Charity Commission.

In a BBC interview, Garr stated that the allegations from the whistleblower were “independently investigated” by an external entity and deemed “without merit.”

Named after the mathematical pioneer who played a crucial role in decoding the Enigma machine during World War II, the institute is associated with key concepts of AI and is also known for the Turing Test, which evaluates whether computers can demonstrate human-like intelligence.

The institute will additionally emphasize applying AI to environmental and health challenges. Leveraging rapidly evolving technology, it aims to create faster and more precise methods to forecast shifts in weather, oceans, and sea ice, aiding UK government endeavors to enhance the readiness of emergency responders. Furthermore, it seeks “measurable reductions in emissions across transportation networks, manufacturing processes, and critical infrastructure.”

In the health sector, it will prioritize the creation of a digital twin of the human heart, pushing forward in AI-enabled personalized medicine to potentially enhance medical interventions and improve outcomes for patients with severe heart conditions.

Source: www.theguardian.com

White House Funding Cuts Endanger AI Weather Forecasting Institute

Funding for a $20 million artificial intelligence lab aimed at enhancing weather forecasts has been halted by the Trump administration. This decision threatens both the pipeline of scientists and the nation’s capability to evaluate the effects of hurricanes and other weather-related disasters.

According to Amy McGovern, the director of AI2ES (the AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography), the National Science Foundation (NSF) informed the institute last month that it would not renew its five-year grant.

McGovern, a professor of meteorology and computer science at the University of Oklahoma, said that without private funding the institute may have to close its doors next year.

AI2ES collaborates with various universities to integrate AI into weather forecasting while evaluating its reliability.

The move to shut down AI2ES comes as the Trump administration invests heavily in AI and accelerates the build-out of data centers. The administration’s own AI plan calls for AI vocational training programs and specialized AI labs across a range of scientific fields.

In July, the administration unveiled an ambitious plan to achieve “global dominance” in artificial intelligence, emphasizing both innovation and its implementation—key areas of focus for AI2ES.

Alan Gerald, a former director at the National Oceanic and Atmospheric Administration’s National Severe Storms Laboratory, described the cut as “dissonance” in light of this push to advance the technology.

The White House has not responded to requests for comments regarding this matter.

The institute was established in 2020, during the first Trump administration, as part of the NSF’s network of AI research institutes, and has received around $20 million in funding over the past five years. An NSF spokesperson, Michael England, said the agency holds the institute’s groundbreaking work in high regard.

England said the NSF is fully committed to advancing artificial intelligence research through the National AI Research Institutes program, which he called a pivotal part of the administration’s strategy to reinforce US leadership in transformative AI.

NSF and its collaborating partners have funded a network of 29 AI institutes. AI2ES was one of five institutes up for renewal through the NSF this year; three have been renewed, while the status of the fourth remains pending, according to McGovern.

The Trump administration has proposed a 55% budget cut for the NSF, though Congress has not yet passed a budget. Senate and House appropriations bills have diverged from the administration’s proposals, suggesting smaller cuts to scientific agencies such as the NSF.

“We were an AI lab, so we believed we were secure, given our alignment with the president’s priorities,” McGovern noted.

The Trump administration’s AI plan aims for NSF and other organizations to expose K-12 students to AI careers, develop industry-driven training programs to generate AI jobs, and bolster workforce initiatives to enhance the nation’s AI talent pool.

“They desire a more robust AI-trained workforce. We were doing a significant amount of work,” McGovern emphasized.

She expressed concern that private AI firms are “poaching talent constantly,” as the institute funds around 70 positions each year at various universities, creating a talent pipeline. Among the institute’s achievements are over 130 academic publications and the development of AI tools used by the government today.

The center helped create AI tools that predict weather events that endanger sea turtles near Corpus Christi, Texas, leaving the animals vulnerable to passing vessels.

Additionally, the institute developed an application that lets forecasters “see” inside hurricanes even when no polar-orbiting satellite with a cloud-penetrating microwave sensor is overhead. The tool takes data from Earth-observing satellites that cannot see through clouds and simulates the storm’s internal structure.

The center is also investigating how forecasters evaluate the reliability of AI tools developed by private companies, including Google.

“We have social scientists who engage with end-users to comprehend their trust in AI, their reservations, and what improvements are necessary,” remarked McGovern.

According to Gerald, if the center were to shut down, it wouldn’t adversely affect current weather forecasting but could limit innovation and place the nation at a disadvantage.

“Many other countries are investing heavily in AI-related weather research, China among them,” Gerald concluded, warning that the US risks falling behind the many nations committed to improving weather forecasting.

Source: www.nbcnews.com

Alan Turing Institute in the UK Begins Consultation on Potential Layoffs Amid Strategy Shift

Britain’s national institute for artificial intelligence and data science has begun a consultation process that could lead to redundancies among its roughly 440 staff.

In a memo sent to staff this month, the Alan Turing Institute announced an update on its new strategy, which involves focusing on a smaller number of projects.

Addressed to “affected employees,” the memo said the government-backed institute may have to reduce its workforce. Unofficial estimates suggest it was sent to about 140 people.

The institute collaborates with universities, private companies, and government agencies on 111 active projects. An internal document states that they will need to scale back their involvement in some projects.

Last year, the institute introduced a new strategy called “Turing 2.0,” with a focus on health, environment, defense, and security. However, due to lower core funding, they are considering restructuring and potentially closing certain projects.

The institute is assessing which projects align with the new strategy, a process that could lead to staff reductions. It says it aims to minimize layoffs and will involve employee representatives in the decision-making process.

Dr. Jean Innes, the institute’s CEO, said it is entering an ambitious new phase in which it will use technology to address societal challenges.

Named after the renowned mathematician, the institute was initially focused on data science before including AI in its mission in 2017. Its objectives include conducting top-notch research to tackle global issues and fostering informed discussions about AI.

With government announcements on technology expected soon, the institute is gearing up for further change. These include an “AI action plan” led by the technology entrepreneur Matt Clifford, focused on economic growth and improving public services.

Additionally, there are plans to establish a legally binding AI model-testing agreement with tech companies, to separate the UK AI Safety Institute from the Turing Institute, and to introduce a consultation on the proposed AI bill.

Source: www.theguardian.com

Oxford’s Future of Humanity Institute: Examining a Controversial Legacy of Eugenics in Technology

A few weeks ago, it was quietly announced that the Future of Humanity Institute, a famous interdisciplinary research center in Oxford, no longer has a future. It closed without warning on April 16th. Initially, its website contained only a short statement that it had been closed and that research could continue elsewhere within or outside the university.

The institute, dedicated to the study of humanity’s existential risks, was founded in 2005 by Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academia. Many high-tech billionaires praised the institute, especially in Silicon Valley, and provided financial support.

Mr. Bostrom is perhaps best known for his 2014 bestselling book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also became widely known for his 2003 academic paper “Are You Living in a Computer Simulation?”. The paper argues that humans are likely, in time, to develop the ability to create simulations indistinguishable from reality, and that if so, such simulations may already have been created and we may be living in one.

I interviewed Bostrom more than a decade ago, and he had one of those elusive and rather abstract personalities that perhaps lends credence to simulation theory. He was pale, had a reputation for working all night, and seemed like the type of person who didn’t go out much. The institute itself appears to have been aware of this social shortcoming. Its final report, a lengthy document written by FHI researcher Anders Sandberg, states:

“We have not invested enough in the politics and socialization of the university to build long-term, stable relationships with faculty…When epistemology and communication practices become too disconnected, misunderstandings flourish.”

Nick Bostrom: “Proudly provocative on paper, cautious and defensive in person.” Photo: Washington Post/Getty Images

Like Sandberg, Bostrom is an advocate of transhumanism, the belief in using advanced technology to improve longevity and cognitive abilities, and is said …

Source: www.theguardian.com

Rishi Sunak Commends AI Safety Institute at Bletchley, Though Regulation is Delayed

The Frontier AI Taskforce, set up by the UK in June in preparation for this week’s AI Safety Summit, is expected to become a permanent fixture as the UK aims to take a leading role in future AI policy. UK Prime Minister Rishi Sunak today formally announced the launch of the AI Safety Institute, a “global hub based in the UK tasked with testing the safety of emerging types of AI”.

The institute was informally announced last week ahead of this week’s summit. The government has now confirmed that it will be chaired by Ian Hogarth, the investor, founder and engineer who also chaired the taskforce, and that Yoshua Bengio, one of the most prominent figures in the AI field, will lead the creation of its first report.

It’s unclear how much money the government will put into the AI Safety Institute, or whether industry players will pick up some of the costs. The institute, which falls under the Department for Science, Innovation and Technology (DSIT), is described as “supported by major AI companies,” but this may refer to endorsement rather than financial backing. We have reached out to DSIT and will update as soon as we know more.

The news coincided with yesterday’s announcement of a new agreement, the Bletchley Declaration, signed by all countries participating in the summit. Signatories pledge to jointly undertake testing and other commitments related to risk assessment of “frontier AI” technologies, such as large language models.

“Until now, the only people testing the safety of new AI models were the companies developing them,” Sunak said in a meeting with journalists this evening. Citing efforts by other countries, the United Nations and the G7 to address AI, he said the plan is to “collaborate to test the safety of new AI models before they are released.”

Admittedly, all of this is still in its early stages. The UK has so far resisted moves to regulate AI technologies, at both the platform level and the more specific application level, and the idea of quantifying safety and risk has stalled, with some dismissing it as meaningless.

Mr Sunak argued it was too early to regulate.

“Technology is developing at such a fast pace that the government needs to make sure we can keep up,” Sunak said, responding to accusations that he was focusing too much on big ideas and too little on legislation. “Before we make things mandatory and legislate, we need to know exactly what we’re legislating for.”

Transparency appears to be a clear goal of many long-term efforts around this brave new world of technology, but today’s series of meetings at Bletchley, on the second day of the summit, was far from that ideal.

In addition to bilateral talks with European Commission President Ursula von der Leyen and United Nations Secretary-General António Guterres, today’s summit centered on two plenary sessions. The sessions were not accessible to journalists beyond a small pool watching as attendees gathered in the room, but those present included the CEOs of DeepMind, OpenAI, Anthropic, Inflection AI, Salesforce and Mistral, as well as the president of Microsoft and the head of AWS. Governments were represented by Sunak, US Vice President Kamala Harris, Italy’s Giorgia Meloni and France’s finance minister, Bruno Le Maire.

Remarkably, although China was a much-touted guest on the first day, it did not appear at the closed plenary session on the second day.

Elon Musk, owner of the AI company xAI and of X (formerly Twitter), also appeared to be absent from today’s sessions. Sunak is scheduled to hold a fireside chat with Musk on Musk’s social platform this evening; interestingly, it is not expected to be broadcast live.

Source: techcrunch.com