Many artificial intelligence researchers see a considerable chance that the future development of superhuman AI could cause the extinction of humanity, but there is also broad disagreement and uncertainty about such risks.
That is the finding of a survey of 2,700 AI researchers who recently presented work at six top AI conferences – the largest such survey to date. The survey asked participants to share their thoughts on possible timelines for future AI milestones, as well as the good or bad societal consequences of those achievements. Almost 58 per cent of researchers said they consider there to be at least a 5 per cent chance of human extinction or other extremely bad AI-related outcomes.
“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” says Katja Grace at the Machine Intelligence Research Institute in California, an author of the paper. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.”
But there is no need to panic just yet, says Émile Torres at Case Western Reserve University in Ohio. Such surveys of AI experts “don’t have a good track record” of forecasting future AI developments, they say. A 2012 study showed that, over the long run, AI experts’ predictions were no more accurate than non-expert public opinion. The authors of the new survey likewise acknowledged that AI researchers are not experts in forecasting AI’s future trajectory.
Compared with answers to the same survey in 2022, many AI researchers predicted that AI would hit certain milestones sooner than previously thought. This coincides with the November 2022 debut of ChatGPT and Silicon Valley’s subsequent rush to broadly deploy similar AI chatbot services based on large language models.
The surveyed researchers predicted that, within the next decade, AI systems will have a 50 per cent or higher chance of successfully tackling most of 39 sample tasks, including writing a new song indistinguishable from a Taylor Swift hit and coding an entire payment-processing site from scratch. Other tasks, such as physically installing electrical wiring in a new home or solving longstanding mathematics puzzles, are expected to take longer.
The survey put a 50 per cent chance on AI outperforming humans at every task by 2047, and a 50 per cent chance on all human jobs becoming fully automatable by 2116. Those estimates are 13 years and 48 years earlier, respectively, than the figures from last year’s survey.
However, Torres says the heightened expectations around AI development could yet be disappointed. “Lots of these breakthroughs are pretty unpredictable, and it’s entirely possible that the field of AI goes through another winter,” they say, referring to past periods in which funding and corporate interest in AI dried up.
Even setting aside the risks of superhuman AI, there are more immediate worries. Large majorities of AI researchers – more than 70 per cent – described AI-driven scenarios involving deepfakes, manipulation of public opinion, engineered weapons, authoritarian control of populations and worsening economic inequality as matters of either substantial or extreme concern. Torres also highlighted the danger of AI contributing to disinformation around existential issues such as climate change, and to the deterioration of democratic governance.
“We already have the technology, here and now, that could seriously harm [the US] democracy,” says Torres. “We’ll see what happens in the 2024 election.”
Source: www.newscientist.com