The report highlighted the importance of establishing a UK system to track instances of misuse or failure of artificial intelligence, warning that without one, ministers could remain unaware of alarming AI-related incidents.
The Centre for Long-Term Resilience (CLTR) suggested that the next government should implement a mechanism to record AI-related incidents in public services and possibly create a centralized hub to compile such incidents nationwide.
CLTR emphasized that incident reporting systems, similar to those used by the Air Accidents Investigation Branch (AAIB), are essential if the UK is to use AI technology effectively.
According to a database compiled by the Organisation for Economic Co-operation and Development (OECD), news outlets have reported approximately 10,000 AI “safety incidents” since 2014. These incidents span a wide range of harms, from physical to economic and psychological, as defined by the OECD.
The OECD’s AI safety incident monitor also logs cases such as a deepfake of Labour leader Keir Starmer, incidents involving self-driving cars, and a chatbot-influenced assassination plot.
Tommy Shaffer Shane, policy manager at CLTR and the report’s author, noted the critical role incident reporting plays in managing risks in safety-critical sectors such as aviation and healthcare. Such reporting, however, is currently missing from the UK’s regulatory framework for AI.
CLTR urged the UK government to establish an incident reporting regime for AI, similar to those in aviation and healthcare, to capture incidents that may not fall under existing regulatory oversight. Labour has promised to introduce binding regulation for the companies developing the most powerful AI models.
The think tank recommended creating a government system for reporting AI incidents in public services, identifying gaps in existing AI incident reporting, and potentially establishing a pilot AI incident database.
In a joint effort with other countries and the EU, the UK pledged to cooperate on AI safety and to monitor “AI harms and safety incidents”.
CLTR stressed the importance of incident reporting in keeping the Department for Science, Innovation and Technology (DSIT) informed about emerging AI-related risks, and urged the government to prioritize learning about such harms through established reporting processes.
Source: www.theguardian.com