Discussion about this post

Daniel Reeves:

Totally agreed on these -- recursive self-improvement (RSI) and human-level robotics (HLR) -- being two of the biggest unknowns about the trajectory we're on. In terms of the risk of all humans dying in our lifetimes, it seems like both RSI and HLR have to happen. But in terms of the world getting turned thoroughly upside-down, RSI is enough. And with RSI, HLR is bound to happen eventually.

And I'm not even totally sure HLR is required for some absolute disaster scenarios. Do the CEOs and presidents and prime ministers of the world ever really need to operate outside of the internet? Probably superhuman versions of them could shape the world without limit using just email and Zoom and GitHub and Slack and the like. (Imagine an AI megacorp that steadily eats the rest of the economy. Not possible now but maybe that's part of the trajectory we're on. See https://agifriday.substack.com/p/jobstealing )

On the other hand, I think there are additional assumptions to add to your list. Maybe the first is whether generative AI is about to hit a wall and peter out at a sub-human level. There are already senses in which AI is recursively self-improving, with LLMs writing more and more code. That could accelerate and we could still hit a wall. If the current paradigm isn't going to cut it, then no amount of speedup matters.

At this exact moment it doesn't *feel* like we're hitting a wall, but I've been going back and forth on this since 2022, when I first woke up to the possibility that AGI could be years rather than decades away.
