Britain is spearheading a global initiative to test state-of-the-art AI models for safety concerns before they are released to the public, with regulators racing to put effective protections in place ahead of the Paris summit in six months. The UK’s AI Safety Institute, the first of its kind in the world, is collaborating with counterparts in South Korea, the US, Singapore, Japan, and France.
Regulators at the Seoul AI Summit aim to craft a modern equivalent of the Montreal Protocol, the landmark agreement that regulated CFCs to halt depletion of the ozone layer. Before that can happen, the institutes must agree on how to unify international approaches to and regulation of AI research.
At the sold-out meeting of countries, Donelan warned against complacency as AI development accelerates. The network of safety institutes, working to a tight deadline, must demonstrate mastery of cutting-edge AI testing and evaluation if it is to regulate AI models effectively.
Clark of the AI lab Anthropic highlighted the significance of establishing a functioning safety institute, which has put the UK ahead in AI safety. Donelan announced £8.5m in funding for AI safety testing, while Bennett of the Ada Lovelace Institute stressed the need for robust programmes to address societal and systemic risks.
Despite the summit’s exclusion of some key voices, industry leaders urged a mature approach to regulating AI for a sustainable future. The focus must extend beyond large-scale models to the broader AI economy, fostering a framework that benefits the majority of industry players.
Source: www.theguardian.com