Vertigo
Orienting in an accelerating world
I awoke, a few days ago, to a vertiginous feeling, one Land might call acceleration. First, the Future of Life Institute (FLI) released an open letter calling on AI labs to pause the training of AI systems more powerful than OpenAI’s recently announced GPT-4. Then Eliezer Yudkowsky rejoined with an article in Time, arguing that the letter doesn’t go far enough: we need to shut AI development down entirely. Within a day, AI existential risk was catapulted into mainstream discourse.
The White House press briefing, for its part, seemed intent on reenacting a scene from Don’t Look Up. It’s quite something.
Of course, as another journalist helpfully points out at the end of the clip, literally everyone on Earth dying is not exactly a serious topic. Never mind that, in a recent survey of researchers at two major machine learning conferences, the median respondent assigned a 10% probability to ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’.
Geoffrey Hinton, sometimes called one of the ‘godfathers of AI’, also gave a fascinating interview.
If his responses don’t leave you feeling concerned, I don’t know what will. I am reminded of the spirit of Clarke’s first law:
When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Yoshua Bengio, another of the ‘godfathers of AI’, signed the FLI open letter. Let’s not speak of the third.1
Having spent much of a day processing all of this, I awoke the next to Eliezer Yudkowsky appearing on Lex Fridman’s podcast and was overcome by the feeling that events were transpiring faster than I could hope to process. It passed but will, I suspect, come again. Should I try to hold on to the narrative until it is torn from my grasp? I’ve failed to write contemporaneously on the events of the last few months; hopefully I find time to circle back amidst this acceleration. Perhaps, indeed, as AI existential risk goes mainstream, I’ll be one of the few who feel like they have any idea of what’s going on.
I’m afraid, sometimes, that the game will be all but played out by the time I have the chance to contribute myself. I made my plans with this possibility in mind, and do not think I planned wrongly. And yet—that feeling of acceleration still steals my breath away.
We live in interesting times. Perhaps the most interesting time. Fasten your seatbelt and enjoy the ride. Let’s hope the safety engineers are doing their due diligence. Prod them, if not.
Yann LeCun has, unfortunately, been consistently and disingenuously hostile towards safety concerns.