The first question is: timelines till what? In reality, what we care about is timelines till drastic acceleration of AI research.
I like "Takeoff speeds presentation at Anthropic" by Tom Davidson (Sep 2023 presentation, Jun 2024 article): www.lesswrong.com/posts/Nsmabb9fhpLuLdtLE/takeoff-speeds-presentation-at-anthropic
In particular, he defined a "narrow version of AGI" there:
>"AGI" (= AI that could fully automate AI R&D)
I usually think about "non-saturating recursive self-improvement", and, in principle, people might discover a specialized scheme enabling it any day now. And I certainly think that while full-blown "narrow AGI" is not necessary to start non-saturating recursive self-improvement, it is almost certainly sufficient.
Although Tom Davidson modestly formulates it as follows (easier to digest and easier to accept):
>the pace of software progress might increase dramatically (e.g. by a factor of ten)
And then any custom-made method of "non-saturating recursive self-improvement" will almost certainly produce versions of "automated AI researcher" rather soon.
So, one way or another, we are really trying to estimate time till we can fully automate AI R&D.
***
The second question is: when? Of course, everyone is correctly pointing out that there is a good deal of uncertainty.
And then the forecasts of people and orgs might be affected by what they find convenient. For example, timelines which are too short might invalidate a lot of strategies people find promising, call for various inconvenient actions, or create an impression of "missed deadlines".
Timelines which are too long imply larger uncertainties associated with the already ongoing evolution and transformation of the world.
In any case, one needs to maintain a distribution over expected timelines and sufficient flexibility in one's strategies, making sure one can adapt to timelines which are shorter or longer than one's expectations.
***
OpenAI has recently declared that they organize around the following timelines:
September 2026: Automated AI research intern
March 2028: Automated AI research
So their expectation for "narrow AGI" is March 2028 (they emphasize that they are uncertain, but one has to make plans, to organize corporate strategy and corporate R&D, and they say they currently organize around this forecast).
Many people think it is too aggressive (e.g. Zvi does not expect OpenAI to meet these timelines), but some people actually think this is too conservative, and that the transition will start earlier.
***
AI 2027 exploration has April 2027 as the modal value for "narrow AGI" in this sense.
People objected a lot that this is too short (often confusing the expected value with the mode of the distribution; I am talking about modes here, and for right-skewed distributions like these the mode is earlier than the expected value).
But I objected that this is too conservative.
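The mode-versus-mean distinction above can be made concrete with a lognormal, a standard choice for right-skewed timeline distributions. This is only an illustrative sketch; the parameters below are arbitrary assumptions of mine, not anyone's actual forecast:

```python
import math

# Illustrative right-skewed timeline distribution:
# "years until narrow AGI" ~ LogNormal(mu, sigma).
# mu and sigma are made-up example values (median at 3 years).
mu, sigma = math.log(3.0), 0.6

mode = math.exp(mu - sigma**2)        # peak of the density
median = math.exp(mu)
mean = math.exp(mu + sigma**2 / 2)    # expected value

# For any lognormal, mode < median < mean, so arguing about
# the modal year is not the same as arguing about the expected year.
print(f"mode={mode:.2f}y  median={median:.2f}y  mean={mean:.2f}y")
```

So an April 2027 mode is entirely compatible with a noticeably later expected value, which is what the objectors were often implicitly talking about.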
***
Anthropic hinted at end of 2026 to early 2027 seven months ago:
jack-clark.net/2025/04/07/import-ai-407-deepmind-sees-agi-by-2030-mousegpt-and-bytedances-inference-cluster/
>
>DeepMind’s key assumptions:
>
> ...
>
>Could happen by 2030: Very powerful systems could arrive by the end of the decade (by comparison, Anthropic thinks things could happen by end of 2026 early 2027).
***
My own timelines are early 2026: mishka-discord.dreamwidth.org/4806.html
I asked ChatGPT (extended thinking):
>Hi, if I try to characterize 6 qualitative transitions of the state of AI as 1) AlexNet, 2) GPT-3 (in-context learning, some program synthesis), 3) GPT-4 (understanding), 4) o1-preview (reasoning), 5) o3-GPT-5 (mature reasoning), 6) GPT-5-Codex (start of competent agents), what would be the likely next stage, stage 7?
The system predicted that the next stage would be the Trustworthy autonomy stage.
In my previous post I predicted the following aggressive timelines for stages 6-8: mishka-discord.dreamwidth.org/4806.html
>GPT-5-Codex (September 2025) has finally led to actually reasonably competent coding agents despite some defects still being present. Again, the lead of OpenAI does exist somewhat, but it is rather fragile. This revolution is also likely to unfold for several months with several important events, probably centered at Oct-Nov border and concluding in Dec 2025.
>
>This sequence, 7.5 years - 3 years - 18 months - about 9 months - about 4-5 months (tentatively), is pointing to intelligence explosion/AI takeover happening soon, unless we see a break in this tendency of intervals between revolutions being halved each time. If this trend continues, the sequence converges at some point in March 2026, and, in any case, humans can't handle ultra-frequent releases and daily revolutions, only the self-improving AI ecosystem is capable of handling something like that.
>
>Anyway, we can try to guess which one is next, what will constitute the revolution number 7. This is an invitation to myself and to the readers to think about this question.
>
>The revolution number 7 is supposed to be centered around some time in January 2026 if the trend of shrinking intervals holds (I am adding 2 months and a bit to the Oct-Nov border).
>
>Then the revolution number 8 is supposed to be centered around some time in February 2026.
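The interval-halving arithmetic in the quoted post can be checked directly. All dates and intervals below are the quoted post's own assumptions (revolution 6 centered at the Oct/Nov 2025 border, each interval half the previous one), not established facts:

```python
from datetime import date, timedelta

# Approximate intervals between "revolutions" 1-6, in months
# (7.5y, 3y, 18mo, ~9mo, ~4-5mo), per the quoted post:
intervals = [90, 36, 18, 9, 4.5]

# If each subsequent interval halves, the intervals remaining after
# revolution 6 form a geometric series: 2.25 + 1.125 + ... = 4.5 months.
remaining_months = (intervals[-1] / 2) / (1 - 0.5)

center_6 = date(2025, 11, 1)      # Oct/Nov 2025 border (assumption)
month = timedelta(days=30.44)     # mean month length

rev_7 = center_6 + 2.25 * month                    # early-to-mid Jan 2026
rev_8 = rev_7 + 1.125 * month                      # mid-Feb 2026
convergence = center_6 + remaining_months * month  # mid-Mar 2026

print(rev_7, rev_8, convergence)
```

This reproduces the post's estimates: revolution 7 around January 2026, revolution 8 around February 2026, and the series converging in March 2026.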
So, I asked ChatGPT (extended thinking) a follow-up question: chatgpt.com/share/69004e4a-e54c-8010-9b1f-86e4859dcb0a
>Thanks a lot! What do you think will be Stage 8 following the "Trustworthy autonomy" Stage 7?
The system's answer was:
>if Stage 7 is “trustworthy autonomy” (agents you can leave running in prod), Stage 8 is the jump from reliable operators → self-extending institutions.
So, Stage 7 is the "Trustworthy autonomy" stage, and Stage 8 is "Self-Extending Governed Collectives".
This is, indeed, very close to what one needs for a transition, and even hints at what might be the shape of that transition.