A prominent British-Canadian computer scientist often referred to as the “godfather” of artificial intelligence has raised his estimate of the likelihood of AI causing the extinction of humanity in the next 30 years, stating that the rate of technological advancement is “much faster” than anticipated.
Professor Geoffrey Hinton, the recipient of this year’s Nobel Prize in Physics for his contributions to AI, suggested that there is a “10% to 20%” probability of AI leading to human extinction within the next three decades.
Hinton had previously estimated a 10% chance that the technology could result in catastrophic outcomes for humanity.
When asked on BBC Radio 4’s Today program if he had revised his assessment of the potential AI doomsday scenario and the one in 10 likelihood of it happening, he replied, “No, it’s between 10% and 20%.”
In response to Hinton’s estimate, former chancellor Sajid Javid, who was guest editing Today, remarked, “You’re going up,” to which Hinton replied, “If anything. You know, we’ve never had to confront anything more intelligent than ourselves.”
He added, “And how many examples do you know of something more intelligent being controlled by something less intelligent? There are very few. There’s a mother and a baby; evolution put a lot of work into making that possible, but that’s about the only example I know of.”
Hinton, a professor emeritus born in London and based at the University of Toronto, emphasized that humans would appear infantile compared to the intelligence of highly advanced AI systems.
“I like to think of it like this: imagine yourself and a 3-year-old. We’ll be the 3-year-olds,” he stated.
AI can broadly be defined as computer systems that can perform tasks typically requiring human intelligence.
Last year, Hinton resigned from his position at Google to speak more candidly about the risks associated with unchecked AI development, citing concerns that “bad actors” could exploit the technology to cause harm. One of the primary worries of AI safety advocates is that the emergence of artificial general intelligence, or systems that surpass human intellect, could enable the technology to elude human control and pose an existential threat.
Asked whether the current state of AI was where he had expected it to be when he first began his research, Hinton remarked, “[We are] here now. I thought we would arrive here at some point in the future.”
He added, “Because in the current environment, most experts in this field believe that AI surpassing human intelligence will likely materialize within the next 20 years. And that’s a rather frightening notion.”
Hinton remarked that the pace of advancement was “extremely rapid, much quicker than anticipated” and advocated for government oversight of the technology.
“My concern is that the invisible hand isn’t safeguarding us. In a scenario where we simply rely on the profit motive of large corporations, we cannot ensure secure development. That’s insufficient,” he stated. “The only factor that can compel these major corporations to conduct more safety research is government regulation.”
Hinton is one of three “godfathers of AI” who were awarded the ACM A.M. Turing Award, the computer science equivalent of the Nobel Prize, for their contributions. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has downplayed the existential threat, suggesting that AI “could actually save humanity from extinction.”
Source: www.theguardian.com