The Frontier AI Taskforce, set up by the UK in June in preparation for this week’s AI Safety Summit, is expected to become a permanent fixture as the UK aims to take a leading role in future AI policy. UK Prime Minister Rishi Sunak today formally announced the launch of the AI Safety Institute, a “global hub based in the UK tasked with testing the safety of emerging types of AI”.
The institute was informally announced last week ahead of this week’s summit. This time, the government confirmed that it will be chaired by Ian Hogarth, the investor, founder and engineer who also chaired the taskforce, and that Yoshua Bengio, one of the most prominent figures in the AI field, will lead its first report.
It’s unclear how much money the government will put into the AI Safety Institute, or whether industry players will pick up some of the costs. The institute, which falls under the Department for Science, Innovation and Technology, is described as “supported by leading AI companies,” but this may refer to endorsement rather than financial backing. We have reached out to DSIT and will update as soon as we know more.
The news coincided with yesterday’s announcement of a new agreement, the Bletchley Declaration. Signed by all countries participating in the summit, the declaration pledges joint testing and other commitments related to the risk assessment of ‘frontier AI’ technologies, such as large language models.
“Until now, the only people testing the safety of new AI models were the companies developing them,” Sunak said in a meeting with journalists this evening. Citing efforts by other countries, the United Nations and the G7 to address AI, he said the plan is to “collaborate to test the safety of new AI models before they are released.”
Admittedly, all of this is still at an early stage. The UK has so far resisted moves to consider how to regulate AI technologies, whether at the platform level or at the level of specific applications, and some question whether quantifying safety and risk is meaningful at all.
Mr Sunak argued it was too early to regulate.
“Technology is developing at such a fast pace that the government needs to make sure we can keep up,” Sunak said, speaking in response to accusations that he was focusing too much on big ideas and too little on legislation. “Before we make things mandatory and legislate, we need to know exactly what we’re legislating for.”
Transparency appears to be a clear goal of many longer-term efforts around this brave new world of technology, but today’s series of meetings at Bletchley, on the second day of the summit, fell far short of that ideal.
In addition to bilateral talks with European Commission President Ursula von der Leyen and United Nations Secretary-General António Guterres, today’s summit focused on two plenary sessions. These were closed to journalists apart from a small pool allowed to watch from across the room as people gathered. Attendees included the CEOs of DeepMind, OpenAI, Anthropic, Inflection AI, Salesforce and Mistral, as well as the president of Microsoft and the head of AWS. Among those representing governments were Sunak, US Vice President Kamala Harris, Italian Prime Minister Giorgia Meloni and France’s Finance Minister Bruno Le Maire.
Remarkably, although China was a much-touted guest on the first day, it did not appear at the closed plenary session on the second day.
Elon Musk, owner of X.ai and X (formerly Twitter), also appeared to be absent from today’s sessions. Sunak is scheduled to hold a fireside chat with Musk on the latter’s social platform this evening; interestingly, it is not expected to be broadcast live.
Source: techcrunch.com