“Technological advancements occur because they can,” OpenAI CEO Sam Altman said in a 2019 New York Times profile, knowingly paraphrasing Robert Oppenheimer, the father of the atomic bomb.
Altman encapsulates the ethos of Silicon Valley: the march of technology is relentless.
Another prevailing belief in the tech world is that the emergence of artificial general intelligence (AGI) will lead to one of two futures: a technotopia or the end of humanity.
In numerous instances, the arrival of humans has spelled the end for other species. We were smarter, more coordinated and more adaptable, and the extinctions were often unintended consequences of our ambitions. Building genuine AGI could be akin to creating a new species, one that might outsmart or outnumber us.
Altman and the heads of the top AI labs have themselves signed a statement warning that AI poses a risk of human extinction, a genuine concern echoed by numerous AI researchers and notable public figures.
Given this backdrop, one naturally wonders: should we pursue technologies that could jeopardize our existence?
A common retort is that AGI is inevitable: it is simply too appealing not to build. After all, AGI would be the ultimate technology, what Alan Turing’s colleagues described as “the last invention that man need ever make.” And if you don’t build it, someone else will, so the responsibility is out of your hands anyway.
A burgeoning ideology in Silicon Valley, effective accelerationism (e/acc), argues that AGI’s inevitability follows from the second law of thermodynamics, and that its engine is “technocapital.” The e/acc manifesto asserts: “You cannot halt this machine. Progress is a one-way street. Returning is not an option.”
For Altman and the e/acc crowd, there is something mystical in all this: the trajectory of invention is treated as an immutable law of nature. Yet that perspective overlooks the reality that technology is the product of deliberate human choices, shaped by myriad powerful forces.
However alluring AGI may be, the notion that any technology is inevitable deserves scrutiny.
Historically, new technologies have prompted resistance, and society has often succeeded in restraining their use.
Concerns over new technologies have produced real restraint before: in the 1970s, pioneering biologists imposed a voluntary moratorium on the riskiest recombinant DNA experiments.
No human has been successfully cloned, even though doing so has likely been technically possible for more than a decade, and the only scientist known to have gene-edited human embryos was sent to prison.
Nuclear energy provides steady, carbon-free power, yet fear of disaster has stifled its growth for decades.
If Altman were more familiar with the history of the Manhattan Project, he might recognize that the creation of nuclear weapons was a deeply contingent series of events, set in motion by mistaken beliefs about rival nations’ technological progress.
It is now hard to conceive of a world without nuclear weapons. Yet in a lesser-known chapter of history, Ronald Reagan and Mikhail Gorbachev came close to an agreement to dismantle all nuclear arms, a deal that foundered on Reagan’s “Star Wars” missile defense program. Even so, nuclear arsenals today stand at less than 20% of their 1986 peak.
These choices were not made in a vacuum: Reagan, once a staunch opponent of arms control, was swayed in part by the mass nuclear freeze movement of the early 1980s.
There are enormous economic incentives to keep burning fossil fuels, yet climate activism has transformed the politics of decarbonization.
In April 2019, the activist group Extinction Rebellion brought central London to a standstill, demanding that the UK commit to net-zero carbon emissions by 2025.
The UK declared a climate emergency, and Labour adopted a 2030 target for decarbonizing electricity production.
The Sierra Club’s Beyond Coal campaign, though little known, has been remarkably effective, helping to shutter more than a third of US coal plants within five years.
Partly as a result, US per-person carbon emissions are now lower than they were in 1913.
In many respects, restraining AGI could prove an easier challenge than decarbonization: 82% of global energy still comes from fossil fuels, and the world economy runs on them, whereas nobody yet depends on hypothetical future AGIs to avert disaster.
Moreover, steering the course of AI development does not require halting the systems we already have, or forgoing specialized AIs built to tackle pressing problems in medicine and climate.
It’s easy to see why many capitalists are drawn to AI: they envision a future in which they can automate away human labor (and the costs that come with it).
However, governments are not pure profit-maximizers. Economic growth matters to them, but so do employment, social stability, market concentration and, at least occasionally, democracy.
How AGI would affect these priorities remains deeply uncertain, and no government is prepared for a scenario of widespread technological unemployment.
Capitalists have often gotten what they want, particularly in recent decades, and their relentless pursuit of profit may yet override regulatory attempts to slow the pace of AI.
In a San Francisco bar in February, veterans of OpenAI’s safety team said that e/acc proponents should fear the likes of Alexandria Ocasio-Cortez and Senator Josh Hawley more than “extreme” AI safety advocates, because politicians are the ones with the power to truly disrupt the industry.
It remains genuinely uncertain whether AGI will ever be built. Yet proponents insist not only that it is coming, but that its arrival is imminent and that resistance is futile.
Whether AGI emerges in five, 20 or 100 years matters enormously, and the timeline is far more within our control than advocates care to admit. Deep down, many of them probably recognize this, which may explain the energy they spend persuading the rest of us otherwise. After all, if AGI really were inevitable, why bother convincing anyone?
The world already possessed the computational power to train GPT-2 a decade before OpenAI actually did it; what was missing was any certainty that doing so would be worth it.
Yet even now, top AI labs are failing to take the precautions that their own safety teams advocate. One OpenAI employee recently resigned after losing faith that the company would act responsibly in the run-up to AGI, citing the competitive pressures bearing down on it.
This is the “safety tax”: precautions cost time and money that labs are unwilling to spend if they want to stay competitive, so products ship faster at the expense of safety.
Governments, in contrast, are not bound by the same competitive pressures, and can require every lab to pay the safety tax at once.
Recently, certain tech entrepreneurs have claimed that regulating AI development is impossible “unless you control every line of code.” That might hold true if AGI could be whipped up on a personal laptop, but cutting-edge AI depends on vast clusters of supercomputers running chips produced by an extraordinarily concentrated industry.
Because of this, many AI safety advocates see “compute governance” as a promising approach. Governments could work with cloud-computing providers to ensure that next-generation systems are not trained without oversight. Rather than imposing draconian surveillance on everyone, thresholds could be set so that only the biggest players are covered; training a model like GPT-4 reportedly costs more than $100m per run.
Governments do have to weigh international competition and the risk of what amounts to unilateral disarmament. But international treaties could share the benefits of advanced AI systems equitably while ensuring that the race to ever-larger models does not proceed blindly.
Despite the competitive climate, collaboration among nations has occurred in surprising ways.
The Montreal Protocol successfully curbed the depletion of the ozone layer by banning chlorofluorocarbons. And much of the world has agreed to morally motivated bans on whole classes of military technology, including biological and chemical weapons, blinding laser weapons and environmental modification as a method of warfare.
In the 1960s and 1970s, many analysts feared that every state capable of developing nuclear weapons would do so. Instead, roughly three dozen nuclear programs around the world have since been abandoned, not merely through coercion but through deliberate choices reinforced by the norms of the 1968 Non-Proliferation Treaty.
When Americans are polled on whether they want superhuman AI, large majorities say no, and opposition has grown as the technology has become more prevalent. Those who declare AGI inevitable tend to dismiss this sentiment, treating the public as ignorant of its own best interests. That dismissal is part of inevitability’s appeal: it bypasses meaningful debate.
The potential risks of AGI are severe enough to jeopardize civilization itself, and stakes that high demand a collective effort to impose effective regulation.
Ultimately, technological advancements occur because people choose to make them occur. The choice is still ours.
