The UK should prioritize setting global standards for artificial intelligence testing, rather than attempting to conduct all reviews itself, according to a company that works with the government’s AI Safety Institute.
Mark Warner, CEO of Faculty AI, a London firm that works on safety systems for chatbots such as ChatGPT, said the institute could be stretched too thin if it tried to scrutinize every AI model itself.
Last year, Rishi Sunak announced the establishment of the AI Safety Institute (AISI) ahead of a global AI safety summit. At the summit, governments including the EU, UK, US, France, and Japan agreed with large tech companies to prioritize testing of advanced AI models before and after deployment.
According to Warner, the institute’s creation underscored the UK’s leading role in AI safety. His company also works with the institute to test AI models’ compliance with safety guidelines.
Warner stressed the importance of the institute becoming a global leader in setting testing standards: “I think it’s important to set standards for the wider world rather than trying to do everything ourselves,” he said.
He expressed optimism about the institute’s potential as an international standard setter, arguing that shared standards would make AI safety work scalable and describing this as a long-term vision.
Warner cautioned against the government taking on all testing responsibilities itself, advocating instead for the development of standards that other governments and companies can adopt.
He acknowledged the challenge of testing every released model and suggested focusing on the most advanced systems.
The Financial Times reported that major AI companies are urging the UK government to expedite safety testing of AI systems. The US has also announced its own AI Safety Institute, which is taking part in the testing program outlined at the Bletchley Park summit.
The UK’s Department for Science, Innovation and Technology said governments have a part to play in testing AI models, with the UK leading global efforts through the AI Safety Institute.
Source: www.theguardian.com