Totally agreed on these -- recursive self-improvement (RSI) and human-level robotics (HLR) -- being two of the biggest unknowns about the trajectory we're on. In terms of the risk of all humans dying in our lifetimes, it seems like both RSI and HLR have to happen. But in terms of the world getting turned thoroughly upside-down, RSI is enough. And with RSI, HLR is bound to happen eventually.
And I'm not even totally sure HLR is required for some absolute disaster scenarios. Do the CEOs and presidents and prime ministers of the world ever really need to operate outside of the internet? Probably superhuman versions of them could shape the world without limit using just email and Zoom and GitHub and Slack and the like. (Imagine an AI megacorp that steadily eats the rest of the economy. Not possible now but maybe that's part of the trajectory we're on. See https://agifriday.substack.com/p/jobstealing )
On the other hand, I think there are additional assumptions to add to your list. Maybe the first is whether generative AI is about to hit a wall and peter out at a sub-human level. There are already senses in which AI is recursively self-improving, with LLMs writing more and more code. That could accelerate and we could still hit a wall. If the current paradigm isn't going to cut it, then no amount of speedup matters.
At this exact moment it doesn't *feel* like we're hitting a wall, but I've been going back and forth on this since 2022, when I first woke up to the possibility that AGI could be years rather than decades away.
Yeah, that’s a good point that the first assumption is arguably two: (1) that true autonomy / self-recursion is achievable; and (2) that when achieved, the self-recursion loop results in a perpetual speedup, or at least a speedup sustained long enough to be material.
Also, that’s an interesting point about not needing to interact with the physical world—thanks for sharing. I guess the way I was thinking about it is that if AI doesn’t start interacting with the world on a large scale, then the economy is not likely to transform as dramatically in the short term, for example, through the large-scale displacement or shifting of jobs.