Discussion about this post

Leo Abstract:

This is an excellent survey of the subject; thank you for writing this. It is good to see that someone (@deepfates, in one of your links) has made the point that persuasion does not stop at human levels of intelligence. A concern I have not seen addressed is that a decisive strategic advantage already exists in this space, held by non-agent 'intelligences'. Influence is largely about attention, and while our human perspective assumes that information reaches us in ways that would be legible to our hunter-gatherer ancestors, nothing of the sort is happening anymore.

(An aside [and let's not quibble about evo psych if this doesn't land for you]: the next suggested YouTube or TikTok video feels to the human brain like an obvious analog of the ancient practice of telling stories around a fire. One story ends, someone else in the tribe begins another. Everyone knows the stories, and they reflect shared group values and lived histories. Or again, a tweet hits the human brain the way the utterance of some nearby human would: this person is important to my group and is speaking to me.)

Instead, layers of obscure machinery determine what we see -- perhaps not everything every time, but for the great mass of the tech-connected species these layers determine enough of it, enough of the time. What's more, not only can we not decide to turn these systems off (they're too profitable, and decision-makers can always rationalize keeping them on), but there aren't even mechanisms in place that would allow us to do so if we did decide (a corporation does not contain anyone whose job it is to put the brakes on things that are hugely profitable). Legislation might, but legislation becomes possible only after a problem is severe enough to warrant attention, and this problem actively changes how much attention it gets, incommensurate with its severity.

Take as an example the "giving up" you describe from Elon Musk. He has spoken to decision-makers in the past and found them unsympathetic. He also has 80 million followers on Twitter. Like all humans, he appears to respond to incentives: when he makes provocative culture-war statements he gets attention; when he makes statements about AI, he doesn't. This could be due to hidden Twitter machinery, or because only ironic joke versions of AI risk are accessible to the popular consciousness (due to earlier social programming?), or any one of a thousand other reasons. It doesn't matter exactly why, and this is only one symptom of a greater problem.

Imagine some hypothetical post-near-human-extinction, post-Butlerian-Jihad historian gaining access to records from this era. He might say, "In retrospect it seems obvious that machine control of humanity was functionally total more than two decades before the Crisis, yet it remained invisible even during it."
