Ten years ago, the Oxford philosopher Nick Bostrom published Superintelligence, a book exploring how superintelligent machines might be built and what the implications of such a technology would be, one of which is that such machines, if built, would be difficult to control and might even take over the world in pursuit of their goals (which, in Bostrom's famous thought experiment, was to make paperclips).
The book was a huge hit and generated lively debate, but it also attracted a fair amount of criticism. Critics complained that it rested on an overly simplistic view of “intelligence,” that it overestimated the likelihood of superintelligent machines arriving any time soon, and that it offered no credible solutions to the problems it raised. Still, the book had the great merit of forcing people to think about possibilities that had previously been confined to academia or the fringes of science fiction.
Ten years later comes another shot at the same target. This time it is not a book but a lengthy essay titled “Situational Awareness: The Decade Ahead.” Its author is Leopold Aschenbrenner, a young man of German origin who now lives in San Francisco and moves among Silicon Valley's intellectual elite. On paper, he sounds like a Sam Bankman-Fried-type whiz kid: a math prodigy who graduated from a prestigious US university as a teenager, spent time at Oxford with the Future of Humanity Institute crowd, and worked on OpenAI's “superalignment” team (now disbanded). He has since founded an investment firm focused on artificial general intelligence (AGI) with funding from Stripe founders Patrick and John Collison, two smart guys who don't back losers.
So Aschenbrenner is clever, but he also has skin in the game. The second point is relevant, since the gist of his lengthy essay is essentially that superintelligence is coming (with AGI as a stepping stone) and the world is not yet ready for it.
The essay is divided into five sections. The first lays out the path from GPT-4 (the current state of the art) to AGI, which the author believes could arrive as early as 2027. The second traces a hypothetical path from AGI to true superintelligence. The third describes four “challenges” that superintelligent machines would pose to the world. The fourth outlines what the author calls the “project” needed to manage a world with (or dominated by) superintelligent machines. The fifth is Aschenbrenner's message to humanity in the form of three “tenets” of “AGI realism.”
In his view of how AI will progress in the near future, Aschenbrenner is fundamentally an optimistic determinist: he extrapolates from the recent past on the assumption that current trends will continue. Show him an upward curve and he will extend it. He grades LLMs (large language models) by their capabilities: GPT-2 was at “preschooler” level, GPT-3 at “elementary school student” level, and GPT-4 at “smart high school student” level, and, given the massive increase in computing power, by 2028 “models as smart as PhDs and experts will be able to work next to us as colleagues.” Why, by the way, do AI advocates always regard PhDs as the epitome of human perfection?
After 2028 comes the big leap from AGI to superintelligence. In Aschenbrenner's world, AI won't stop at human-level capabilities. “Hundreds of millions of AGIs will automate AI research, compressing a decade's worth of algorithmic progress into a year. We will rapidly evolve from human-level to superhuman AI systems. The powers and dangers of superintelligence will be dramatic.”
The third section of the essay explores what such a world might be like, focusing on four aspects of it: the unimaginable (and environmentally catastrophic) computational requirements for running it; the difficulty of maintaining security in AI labs in such a world; the problem of aligning machines with human purposes (difficult, Aschenbrenner believes, but not impossible); and the military implications of a world of superintelligent machines.
It is when he reaches this fourth topic that Aschenbrenner's analysis really begins to unravel. Like the message in a stick of Blackpool rock, the nuclear weapons analogy runs all the way through his thinking. He sees the US as being at the same stage with AI that it was just after J Robert Oppenheimer's original Trinity test in New Mexico: ahead of the USSR, but not for long. And, of course, China fills the role of the Soviet empire in this analogy.
Suddenly, superintelligence has gone from being a problem for humanity to being a US national security imperative. “The US has a lead,” he writes. “We must maintain that lead. And right now we're screwing it up. Above all, we must lock down the AI labs quickly and thoroughly, before major AGI breakthroughs leak out in the next 12 to 24 months. … Computer clusters must be built in the US, not in dictatorships offering to fund them. And US AI labs have an obligation to cooperate with intelligence agencies and the military. A US lead in AGI won't secure peace and freedom simply by building the best AI girlfriend apps. It's ugly, but we must build AI for US defense.”
All we need, then, is a new Manhattan Project and an AGI-industrial complex.
What I'm Reading
The dictator is shot
What former Eastern Bloc countries fear about Trump is an interesting New Republic piece about people who know something about life under oppression.
Normandy revisited
“80 Years Since D-Day: World War II and the ‘Great Acceleration’” is a piece by the historian Adam Tooze looking back on wartime anniversaries.
Lawful interference
“Monopoly Round-Up: The Harvey Weinstein of Antitrust” is a blog post by Matt Stoller about Joshua Wright, the lawyer who for many years had a devastating impact on US antitrust enforcement.
Source: www.theguardian.com