I’ve been trying to heed the call to prepare for the powerful AI that is coming. But is it coming soon, like in 2027 (see AI 2027), or significantly later, like in 2045 (see AI as Normal Technology)? Either way, we should prepare, but knowing whether it is coming in years or decades would make a huge difference in that preparation.
The referenced pieces each make a compelling case for the shorter or longer timeframe (another good one on the longer side is The case for multi-decade AI timelines). These are all lengthy pieces, so I thought it would be helpful for me (and maybe you, too!) to distill the core assumptions that cause them to differ so much. I think it boils down to just two:
Autonomous AI coders can advance AI much faster via recursive self-improvement.
AI 2027’s timelines forecast details the data points its authors extrapolate to argue that such recursive self-improvement will likely occur very soon, projecting roughly a 25x speedup in the pace of AI development as a result. By contrast, AI as Normal Technology is skeptical that this is even possible:
Perhaps recursive self-improvement in methods is possible, resulting in unbounded speedups in methods. But note that AI development already relies heavily on AI. It is more likely that we will continue to see a gradual increase in the role of automation in AI development than a singular, discontinuous moment when recursive self-improvement is achieved.
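Setting that skepticism aside for a moment, a quick back-of-envelope sketch helps convey what a 25x multiplier would mean in calendar time. The 25x figure is the one from AI 2027’s forecast; the baseline durations below are purely illustrative assumptions on my part.

```python
# Illustrative only: what a 25x speedup in AI development pacing would
# mean for calendar time. The 25x figure is the one cited above; the
# baseline durations are hypothetical.

speedup = 25
for baseline_years in (1, 5, 10):
    months = baseline_years * 12 / speedup
    print(f"{baseline_years} year(s) of progress at 1x -> ~{months:.1f} months at {speedup}x")
```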
I’m skeptical of the claim that fully automated AI systems improving themselves would be impossible. Still, I’m not close enough to the cutting edge of this work to know when it might occur, or how much speedup we could expect when it does. Many potential bottlenecks have been flagged: a lack of inherent AI creativity, coordination problems among thousands of AIs, connecting the AIs to the whole stack of resources they need, compute scarcity, and so on.
The aggressive timelines explain these bottlenecks away by arguing that more intelligent and capable AIs will operate in a much faster feedback loop: to the extent the bottlenecks are significant, the AIs would find ways to address and break through them relatively quickly, in months rather than years. For example, they could make software more efficient (easing compute scarcity) and iterate their way out of coordination and creativity limits. I get that, but one obstacle that seems harder to overcome is the potential for widespread societal backlash, which relates to the second assumption.
Advanced AI can control enough of the physical world to transform the global economy fast.
AI as Normal Technology contends that “the speed of diffusion [of AI through the global economy] is inherently limited by the speed at which not only individuals, but also organizations and institutions, can adapt to technology.” Its authors point out that “AI diffusion lags decades behind innovation”, such as in medical and legal contexts, and that “there are already extremely strong safety-related speed limits in highly consequential tasks [like self-driving cars, nuclear, etc.]. These limits are often enforced through regulation, such as the FDA’s supervision of medical devices, as well as newer legislation such as the EU AI Act, which puts strict requirements on high-risk AI.”
By contrast, AI 2027 paints a picture of bypassing all of that slowness and regulation by granting AI companies special physical zones in which to operate independently:
Both the US and China announce new Special Economic Zones (SEZs) for AIs to accommodate rapid buildup of a robot economy without the usual red tape.
The design of the new robots proceeds at superhuman speed. The bottleneck is physical: equipment needs to be purchased and assembled, machines and robots need to be produced and transported.
The US builds about one million cars per month. If you bought 10% of the car factories and converted them to robot factories, you might be able to make 100,000 robots per month. OpenBrain [their OpenAI equivalent in the forecasted scenario], now valued at $10 trillion, begins this process. Production of various kinds of new robots (general-purpose humanoids, autonomous vehicles, specialized assembly line equipment) are projected to reach a million units a month by mid-year [2028].
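To sanity-check that arithmetic, here’s a quick back-of-envelope sketch. The one-million-cars-per-month, 10%, and million-units-per-month figures come from the quote above; the one-robot-per-car-of-capacity conversion rate is my own simplifying assumption.

```python
# Back-of-envelope check of the robot-production arithmetic quoted above.
# Figures marked "quoted" come from the passage; the conversion rate is
# a simplifying assumption.

us_cars_per_month = 1_000_000    # quoted: rough US car production
converted_share = 0.10           # quoted: 10% of car factories converted
robots_per_car_capacity = 1.0    # assumption: one robot per car of capacity

initial_output = us_cars_per_month * converted_share * robots_per_car_capacity
print(f"Initial output: {initial_output:,.0f} robots/month")        # ~100,000

target_per_month = 1_000_000     # quoted: ~1M units/month by mid-2028
print(f"Implied further scale-up: {target_per_month / initial_output:.0f}x")  # ~10x
```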
I think the AI backlash is real and will only intensify from here, especially as jobs are displaced at societally significant levels, “dark factories” (a.k.a. lights-out manufacturing) become more widely deployed, and humanoid robots become more visible. This backlash will create political pressure (and corresponding opportunities for politicians and political parties/movements) to slow things down.
On the opposite side is the arms race with China (and potentially others) to use AI in military applications. Superhuman AI will create a significant military advantage if the gap between countries in gaining access to such runaway AI is large. I’m honestly not sure how this nets out (backlash slowdown vs. military speedup), and some of the military side may happen in secret for a while, similar to the secrecy of the Manhattan Project. However, it will ultimately be hard to hide, because truly transforming the military and the economy requires making enormous numbers of physical objects (versus the small number of nuclear bombs that came out of the Manhattan Project).
So, what happens next with these two assumptions will largely determine the timeline. This leaves me thinking we still have to take the shorter timeframes seriously and accelerate societal preparations, to the extent that is even possible.
[Image: Avengers: Age of Ultron (2015)]
Totally agreed on these -- recursive self-improvement (RSI) and human-level robotics (HLR) -- being two of the biggest unknowns about the trajectory we're on. In terms of the risk of all humans dying in our lifetimes, it seems like both RSI and HLR have to happen. But in terms of the world getting turned thoroughly upside-down, RSI is enough. And with RSI, HLR is bound to happen eventually.
And I'm not even totally sure HLR is required for some absolute disaster scenarios. Do the CEOs and presidents and prime ministers of the world ever really need to operate outside of the internet? Probably superhuman versions of them could shape the world without limit using just email and Zoom and GitHub and Slack and the like. (Imagine an AI megacorp that steadily eats the rest of the economy. Not possible now but maybe that's part of the trajectory we're on. See https://agifriday.substack.com/p/jobstealing )
On the other hand, I think there are additional assumptions to add to your list. Maybe the first is whether generative AI is about to hit a wall and peter out at a sub-human level. There are already some senses in which AI is recursively self-improving, with LLMs writing more and more code. That could accelerate and we could still hit a wall. If the current paradigm isn't going to cut it, then no amount of speedup matters.
At this exact moment it doesn't *feel* like we're hitting a wall, but I've been going back and forth on this since 2022, when I first woke up to the possibility that AGI could be years rather than decades away.