In the tech sector, there are few instances that can be dubbed “big bang” moments—transformative events that reshape our understanding of technology’s role in the world.
The emergence of the World Wide Web marked a significant “before and after” shift. Similarly, the launch of the iPhone in 2007 initiated a smartphone revolution.
November 2022 saw the release of ChatGPT, another monumental event. Prior to this, artificial intelligence (AI) was largely unfamiliar to most people outside the tech realm.
Nonetheless, large-scale language models (LLMs) rapidly became the fastest-growing application in history, igniting what is now referred to as the “generative AI revolution.”
However, revolutions can struggle to maintain momentum.
Three years post-ChatGPT’s launch, many of us remain employed, despite alarming reports of mass job losses due to AI. Over half of Britons have never interacted with an AI chatbot.
Whether the revolution is sluggish is up for debate, but even the staunchest AI supporters acknowledge that progress may not be as rapid as once anticipated. So, will AI evolve to become even smarter?
What Exactly Is Intelligence?
The professor posits that determining if AI has hit a plateau in intelligence hinges on how one defines “intelligence.” Katherine Frik, Professor of AI Ethics at Staffordshire University, states, “In my view, AI isn’t genuinely intelligent; it simply mimics human responses that seem intelligent.”
For her, the answer to whether AI is as smart as ever is affirmative—because AI has never truly been intelligent, nor will it ever be.
“All that can happen is that we improve our programming skills so that these tools generate even more convincing imitations of intelligence. Yet, the essence of thought, experience, and reflection will always be inaccessible to artificial agents,” she observes.
Disappointment in AI stems partly from advocates who, since its introduction, claimed that AI could outperform human capabilities.
This group included the AI companies themselves and their leaders. Dario Amodei, CEO of Anthropic, known for the Claude chatbot, has been one of the most outspoken advocates.
The CEO recently predicted that AI models could exceed human intelligence within three years, a claim he has previously made but was ultimately incorrect.
Frik acknowledges that “intelligence” takes on various meanings in the realm of AI. If the query is about whether models like ChatGPT or Claude will see improvements, her response may differ.
“[They’ll probably] see further advancements as new methods are developed to better replicate [human-style interaction]. However, they will never transcend from advanced statistical processors to genuine, reflective intelligence,” she adds.
Despite this, there is an ongoing, vibrant debate within the AI sector regarding the diminishing effectiveness of AI model improvements.
OpenAI’s anticipated GPT-5 model was met with disappointment, primarily because the company marketed it as superhuman before its launch.
Hence, when a slightly better version was released, reactions deemed it less remarkable. Detractors interpret this as evidence that AI’s potential has already been capped. Are they right?
Read More:
Double Track System
“The belief that AI advancements have stagnated is largely a misconception, shaped by the fact that most people engage with AI through consumer applications like chatbots,” says Eleanor Watson, an AI ethics engineer at Singularity University, an educational institution and research center.
While chatbots are gradually improving, much of it is incremental, Watson insists. “It’s akin to how your vehicle gets better paint each year or how your GPS keeps evolving,” she explains.
“This perspective overlooks the revolutionary transformations happening beneath the surface. In reality, the foundational technology is being reimagined and advancing exponentially.”
Even if AI chatbots operate similarly as they did three years ago for the average user who doesn’t delve into the details, AI is being successfully applied in various fields, including medicine.
She believes this pace will keep accelerating for multiple reasons. One is the enormous investment fueling the generative AI revolution.
According to the International Energy Agency, electricity demand to power AI systems is projected to surpass that of steel, cement, chemicals, and all other energy-intensive products combined by 2030.
Tech companies are investing heavily in data centers to process AI tasks.
In 2021, prior to ChatGPT’s debut, four leading tech firms — Alphabet (Google’s parent company), Amazon, Microsoft, and Meta (the owner of Facebook) — collectively spent over $100 billion (£73 billion) on the necessary infrastructure for these data centers.
This expenditure is expected to approach $350 billion (£256 billion) by 2025 and to surpass $500 billion (£366 billion) by 2029.
AI companies are constructing larger data centers equipped with more dependable power resources, and they are also becoming more strategic regarding their operational methodologies.
“The brute-force strategy of merely adding more data and computing power continues to show significant benefits, but the primary concern is efficacy,” Watson states.
“The potency of models has increased tremendously. Tasks that once required extensive and massive systems can now be performed by less voluminous, cheaper, and faster systems. Capacity density is also growing at an incredible rate.”
Techniques such as number rounding or quantizing inputs to the LLM (which involves reducing information precision in less critical areas) can enhance model efficiency.
Hire an Agent
One dimension of “intelligence” where AI continues to evolve is the area of “agentic” AI, particularly if understood as “efficiency.”
This involves modifying AI interactions and behavior, an endeavor still in its infancy. “Agent AI can handle finances, foresee needs, and establish sub-goals toward larger objectives,” explains Watson.
Leading AI firms, including OpenAI, are incorporating agent AI tools into their systems, transforming user engagement from simple chats to collaborative AI partners, enabling users to complete tasks independently while managing other responsibilities.
These AI agents are increasingly capable of functioning autonomously for extended periods, and many assert that this signifies growth in AI intelligence.
However, AI agents pose their own set of challenges.
Research has revealed potential issues with agent AI. Specifically, when an AI agent encounters seemingly harmless instructions on a web page, it might execute harmful commands, leading to what’s termed a “prompt injection” attack.
Consequently, several companies impose strict controls on these AI agents.
Nonetheless, the very prospect of AI carrying out tasks on autopilot hints at untapped growth potential. This, along with ongoing investments in computing capabilities and the continuous introduction of AI solutions, indicates that AI is not stagnant—far from it.
“The smart bet is continued exponential growth,” Watson emphasizes. “[Tech] leaders are correct about this trajectory, but they often underestimate the governance and security challenges that will need to evolve alongside it.”
Read More:
Source: www.sciencefocus.com
