Published on September 10, 2025 3:40 PM GMT
Inspired by this thread, where there was a whole lot of discussion around what the term AGI actually means. I'm starting to wonder whether the term is by now used far too loosely, with people not distinguishing similar outcomes precisely enough now that we've made real progress in AI.
Now, @Thane Ruthenis at least admitted that his talk of AGI was trying to address the question of whether we can actually get AI that fully automates the economy/AI research soon with the level of resources we have, and he was claiming that LLMs might simply fail to be that impactful at current resource levels without algorithmic improvements (which is very plausible).
But once we have that question in hand, I don't see a reason to keep using the AGI/ASI terminology, and I deny the assumption that behavioral definitions of AGI/ASI are akin to denying the distinction between the Taylor-polynomial approximation of a function and the function itself (at least under an infinite-compute assumption like this).
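To make the analogy concrete (this is just the standard Taylor-series fact, added here for illustration rather than taken from the original thread): for a function $f$ analytic around a point $a$, the degree-$n$ Taylor polynomial converges to $f$ as the degree grows,

$$
T_n f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k, \qquad \lim_{n \to \infty} T_n f(x) = f(x)
$$

for $x$ inside the radius of convergence. In the unbounded limit the "approximation" just is the function, which is the parallel I have in mind: under an infinite-compute assumption, a behaviorally defined system and the "real thing" coincide, so the objection to behavioral definitions doesn't go through in that regime.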
I think talking and reasoning about approximations is fine, and the question of what AIs and humans can do in practice given resource limits is an excellent one to study, but I currently see no reason why we need the words AGI/ASI once we have the actual question in hand.
And I'm currently confused about why people care so much about the distinction between AGIs/ASIs and non-AGIs/ASIs in 2025.
