In response to Geoffrey Hinton’s concerns about the risks of artificial intelligence (‘The Godfather of AI’ predicts technology won’t wipe out humanity in 30 years, Dec. 27), I believe these concerns can be addressed through collaborative research involving regulators.
Current frontier AI is tested after development by a “red team” that probes for potential harms. This approach is not sufficient on its own, however. AI must be designed with safety and evaluation as top priorities from the outset, drawing on expertise from the relevant industry.
Whether or not Hinton sees AI as an intentional existential threat, it is important to proactively guard against such scenarios. Even if I do not share his assessment of the risk level, the precautionary principle demands immediate action to prevent potential harm.
Unlike traditional safety-critical systems such as aircraft, frontier AI has no physical limits on how quickly safety can be compromised once it is deployed. This necessitates regulatory intervention. Risk assessment prior to deployment is ideal, but current metrics fall short, failing to account for factors such as application scope and scale of deployment.
Regulators should have the authority to recall AI models post-deployment (with companies implementing mechanisms to restrict certain uses) and support risk assessment efforts that offer early warning signs of potential risks. Governments must shift towards post-market regulatory oversight while aiding research for pre-market regulatory frameworks, crucial if Hinton’s risk assessment holds true.
Source: www.theguardian.com