If you have any interest in artificial intelligence, you will inevitably come across the concept of artificial general intelligence. Talk of AGI has grown significantly in recent years as AI has exploded into the public consciousness, driven by the success of large language models (LLMs), the type of AI that powers chatbots such as ChatGPT. The term achieved buzzword status in 2017.
The main reason for this is that AGI has become a beacon for the companies pioneering this technology. OpenAI, the creator of ChatGPT, for example, states that its mission is to “ensure that artificial general intelligence benefits all humanity.” Governments, too, are preoccupied with the opportunities AGI may present and the threats it could pose, while the media (including, of course, this magazine) report on claims that “sparks of AGI” have already been glimpsed in LLM-based systems.
Despite all this, it’s not always clear what AGI actually means. Indeed, the term is the subject of intense debate in the AI community, with some arguing it is a useful goal and others that it is a meaningless one that betrays a misunderstanding of the nature of intelligence, and of the prospects for replicating it in machines. Some dismiss it as a fantasy. “It’s not really a scientific concept,” says Melanie Mitchell at the Santa Fe Institute in New Mexico.
Human-like artificial intelligence and superintelligent AI have been a staple of science fiction for centuries. The term AGI itself, however, came into use only about 20 years ago, popularized by computer scientist Ben Goertzel and Shane Legg, a co-founder of DeepMind.
Source: www.newscientist.com