The Risks of Believing in Self-Proclaimed AI Experts

Demis Hassabis, CEO of Google DeepMind and a Nobel prizewinner for his work developing AlphaFold, the AI algorithm that predicts protein structures, appeared on 60 Minutes in April. He asserted that, with the help of AI like AlphaFold, we might see the end of all disease within the next decade.

That assertion has been met with skepticism by people who actually develop drugs and treat disease. The veteran drug-discovery chemist Derek Lowe, for instance, responded to Hassabis’ remarks by saying they made him want to quietly stare out of the window and mutter things he would rather not repeat. But you don’t need to be an expert to see the hyperbole: the notion of entirely eliminating disease within a decade is far-fetched.

Some will see Hassabis’ claim as yet another instance of tech leaders overstating their achievements to attract investors. Isn’t this reminiscent of Elon Musk’s outlandish predictions about Mars settlements, or Sam Altman’s assertions that artificial general intelligence (AGI) is just around the corner? While that cynical perspective has some merit, it may understate the underlying complexities.

Sometimes authorities simply make bold pronouncements outside their expertise (think of Stephen Hawking on AI, aliens and space travel). Hassabis, however, should know the territory: his Nobel citation highlighted the potential of AlphaFold’s predictions for developing new drugs, which fuelled buzz about imminent breakthroughs.

Similarly, Geoffrey Hinton, another 2024 Nobel laureate and formerly an AI researcher at Google, has claimed that large language models (LLMs) learn in much the same way that humans do, never mind the cries of protest from cognitive scientists, and in some cases from AI researchers too.

These examples suggest that, oddly, some AI experts mirror their own creations: capable of producing remarkable results while showing little awareness of their own limitations.

Another case is Daniel Kokotajlo, a researcher who left OpenAI over concerns about AGI and is now executive director of the AI Futures Project in California. He has stated, “We’ve caught AIs lying, and I’m sure they knew what they were saying was wrong.” Such anthropomorphic talk of knowing and intending suggests that Kokotajlo may be misreading the true nature of LLMs.

The danger of assuming such experts are always right is illustrated by Hinton’s 2016 suggestion that, because of AI, “We should stop training radiologists now.” Fortunately, radiology experts dismissed the claim, though a link has been suggested between his comments and growing unease among medical students about the future of radiology as a career. Hinton has since walked back that statement, but imagine the impact it might have had if he had already held a Nobel prize. The same goes for Hassabis’ comments about disease. The notion that AI can take care of everything breeds overconfidence about problems that demand a far more nuanced, scientifically and politically informed approach.

These “expert” predictions too often go unchallenged in the media, and I can personally attest that even some smart scientists are persuaded by them. Many government leaders, too, seem to have bought into the hype generated by tech CEOs and Silicon Valley titans. We need to start scrutinizing their proclamations with the same skepticism we apply to the outputs of LLMs.

Philip Ball is a science writer based in London. His latest book is How Life Works.

Source: www.newscientist.com