From Benedict Evans:

There’s an old joke that ‘AI’ is whatever doesn’t work yet, because once it works, people say ‘that’s not AI - it’s just software’.

Indeed, ‘AGI’ itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

This fundamental uncertainty, even at the level of what we’re talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission, then you know what to expect, and you know what to do. But this isn’t fission, or a bioweapon, or a meteorite. This is software that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don’t know.

By default, though, this will follow all the other waves of AI and become ‘just’ more software and more automation. Automation has always produced frictional pain, going back to the Luddites, and the UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer myself.