Hi—welcome to my blog! You may call me Rina, if you wish. As this is my first post, I should tell you my plans for the blog. Principally, I’m here to practice writing skills that have atrophied since they were last used in high school English. Apparently—and much to my chagrin—being able to write well is a useful skill.1 I’d also like to read more useful and interesting things, browse Reddit a little less, and maybe even learn something from what I read. Fortunately, I can do all of that by writing about the things I read—how convenient!
In this blog, I intend to chronicle my journey into the ideas of the rationalists, but I also expect to write about many other things that catch my fancy. The former won’t make much sense if you don’t know of the rationalists, so this post will give you a brief background, an overview of their ideas, and discussion of their influence. I’ll also discuss some adjacent movements, such as effective altruism and neoreaction, offer some warnings, and perhaps poke a bit of fun. Lastly, I’ll offer a speculative map of my journey into the ideas of the rationalists. If this post manages to interest you—feel free to use it too!
But first, I’ll comment on the very teleological ‘Gyroscopic Musings’. This blog exists in part as a safety mechanism: I’ll come across some extreme ideas in my journey—I already have, as you’ll see—and I hope writing out my thoughts here will serve to stabilise and moderate any pesky ideas I happen to stumble across before I go and put them in my head.
An overview of the rationalists
For perhaps obvious reasons, I’ve spent a lot more time on the internet over the past year, and much of this has been on Twitter. Through two important figures in quantum information, Michael Nielsen and Scott Aaronson, I learned of Scott Alexander’s blog Slate Star Codex—just as it was taken down—and was reacquainted with the rationalists through this new home. Before the rationalist diaspora, they were found at the recently revived LessWrong, created by Eliezer Yudkowsky to branch off from Robin Hanson’s blog Overcoming Bias. But—who exactly are these rationalists, and why do I say reacquainted?
Roko’s basilisk
Infohazard warning: discussion of Roko’s basilisk. Obviously.
The latter question is easier to answer, so I’ll quickly deal with it by mentioning the notorious story of Roko’s basilisk. Roko’s idea was that a future superintelligent artificial intelligence (AI) may, after its emergence, torture those who presently imagine its future existence. This policy would acausally blackmail such people into working to bring the AI into existence. It’s called a basilisk because imagining it is the only way it can harm you—it only has an incentive to torture those who imagine it.
This basilisk is easily dismissed as it has no incentive to follow through on the threat once it exists. Nevertheless, discussion was banned, ironically sparking media coverage, as this category of ideas may pose an information hazard—a risk associated with the dissemination of true information that may cause harm. There are apparently more compelling versions of the basilisk. I suggest you do not try to imagine them.2 It’s unlikely that you’ll end up being tortured, but at least one person had ‘terrible nightmares’.
All this aside, Roko’s basilisk was given far more attention than it probably deserved. Indeed, everyone’s favourite couple, Grimes and Elon, met through the idea—make of that what you will! I recall stumbling across the basilisk on the internet a few years after it was conceived. I imagine that this was many people’s first and only brush with the rationalist community; this certainly was true for me until recently.
The rationalists
It’s a little difficult to clearly define the rationalist community—it’s the subject of ongoing Twitter spats. But it’s important to note that I won’t use ‘rationalist’ to mean ‘one who subscribes to philosophical rationalism’—to the contrary, the rationalists are empiricists—as this mistake muddies the aforementioned discourse. Instead, it will be used to mean ‘one who aspires to rationality’, or to refer to the rationalist community that formed around Eliezer Yudkowsky’s Sequences. These draw on Kahneman’s Thinking, Fast and Slow and Jaynes’s Probability Theory, and Eliezer defines two forms of rationality therein:
Epistemic rationality: systematically improving the accuracy of your beliefs.
Instrumental rationality: systematically achieving your values.
Rationality is presented as a toolset that enables one to obtain a more accurate picture of the world—to become less wrong3—and navigate it more effectively. In essence, it’s a framework for self-improvement, so it’s probably about as useful as any other, although different people will prefer different frameworks. Incidentally, Eliezer also wrote an extraordinarily successful Harry Potter fanfiction that attempts to convey the experience of rationality—I found it a fun read! While the utility of rationalist self-improvement remains a divisive topic in the community, I can supply a previous description of typical rationalist positions:
While of course getting rationalists to reach consensus is something like herding cats, typical rationalist philosophical positions include reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism. Rationalists across all three groups tend to have high opinions of the Sequences and Slate Star Codex and cite both in arguments; rationalist discourse norms were shaped by How To Actually Change Your Mind and 37 Ways Words Can Be Wrong, among others.
The future sounds like it could be pretty cool—or not
Rationalists have characteristic and somewhat peculiar norms surrounding discourse. These norms expect people to argue in good faith—to honestly and openly describe, justify, and defend their beliefs. And there are a few issues in which rationalist discourse is characteristically interested. These overlap heavily with the type of issues researched by the University of Oxford’s Future of Humanity Institute, headed by Nick Bostrom. You may have heard of his simulation argument, sometimes called the simulation hypothesis, which posits that one of the following is likely true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.
This sort of metaphysical speculation certainly is fun, but it’s probably not too useful to most people.
A far more important issue the rationalists are interested in is existential risk—a risk threatening the extinction of humanity or the permanent curtailment of our future development—stemming from, for example, AI, nanotechnology or biotechnology. Eliezer founded and presently works at the Machine Intelligence Research Institute (MIRI) on the difficult problem of AI alignment—ensuring that AI has values aligned with human values. This work seeks to mitigate the existential risk posed by the creation of an unaligned superintelligent AI. Being indifferent to our existence, such an agent may destroy us as we destroy anthills.
Somewhat related is the Fermi paradox—the absence of evidence for alien intelligence, despite the size of the universe. A possible resolution proposed by Robin Hanson is the existence of a Great Filter—at least one highly improbable step on the path from dead matter to intelligent life visible at a stellar or galactic scale. Such a step, if it exists, may be behind us—abiogenesis—or ahead of us in the form of an extreme existential risk. But not all existential risks resolve the paradox: an unaligned superintelligent AI would, with reasonable likelihood, collect resources at a stellar scale. Hanson recently suggested—though he wasn’t the first—a simple resolution to the paradox: if alien civilisations colonise outwards at nearly light speed, we would likely not see evidence of them, conditioned on our existence in uncolonised space. We would also expect to have emerged early in the life of the universe in such a scenario, as indeed we have.
Transhumanist ideas are also of particular interest, including cryonics—speculatively freezing and preserving a human corpse or brain in the hope of future resurrection—as well as life extension and mind uploading. There are probably quite a few people who use the prospect of the glorious transhumanist future—where humanity has beaten back death and spans the stars—as a source of motivation.4 But transhumanism is not just the purview of rich white men—medical transitioning is presently a mainstream discussion, and transhumanists are proponents of body modification in general.
Accurate beliefs about the world can be useful—at least for investing
The rationalist interest in AI alignment bleeds into their perspective on epistemic rationality. Bayesian epistemology is in some sense an unattainable ideal: it can only be implemented perfectly given unbounded computation, via Solomonoff induction. Predictive processing theory seems to make a reasonable case that the brain is fundamentally Bayesian. This reassuringly suggests that the pursuit of epistemic rationality should not involve a complete overhaul of our reasoning. Rather, it should involve explicitly understanding how we reason about the world, including our cognitive biases, and deliberately practising Bayesian epistemology. The end goal is to hone one’s intuitions, improving but not supplanting the way we usually reason about the world. Of course, explicit reasoning will always remain useful in some contexts.
At this point, I should offer a little overview of Bayesian epistemology. Bayesians interpret probabilities as describing a state of knowledge of reality, in contrast to frequentists, who interpret probabilities as describing the frequencies of events. Degrees of belief are quantified by probabilities, which are updated using Bayes’s theorem when encountering new evidence. Hence, unlike frequentists, Bayesians can assign a probability to a hypothesis being true.5 Grant Sanderson has created many excellent videos on mathematics, including explanations of Bayes’s theorem and Bayesian belief updating.6
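To make that a little more concrete, here is a minimal sketch of a single Bayesian update, with numbers made up purely for illustration:

```python
# A single Bayesian update with illustrative, made-up numbers:
# hypothesis H = 'this coin is biased towards heads (70% heads)',
# alternative = 'this coin is fair', evidence E = 'it came up heads'.

prior_biased = 0.1        # P(H): initial degree of belief
prior_fair = 0.9          # P(not H)

p_heads_if_biased = 0.7   # P(E | H)
p_heads_if_fair = 0.5     # P(E | not H)

# Total probability of seeing heads under either hypothesis.
p_heads = prior_biased * p_heads_if_biased + prior_fair * p_heads_if_fair

# Bayes's theorem: P(H | E) = P(E | H) * P(H) / P(E).
posterior_biased = prior_biased * p_heads_if_biased / p_heads

print(f"P(biased | heads) = {posterior_biased:.3f}")  # ~0.135, up from 0.100
```

The posterior nudges upwards because heads is more likely under the biased hypothesis; each further observation would update the belief again in exactly the same way.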
Rationalists enjoy assigning probabilities to concrete predictions and will sometimes bet on these predictions to put skin in the game. They use sites like PredictionBook or Metaculus to test their calibration—whether events occur at rates commensurate with the probabilities they assign to their occurrence—in the vein of Tetlock and Gardner’s Superforecasting. Gwern is an example of someone who is particularly well-calibrated, and Zvi is also regarded as such. Making accurate predictions about the world at large is a clear demonstration of epistemic rationality. But it is perhaps more important to have accurate beliefs about your own personal life, and this probably requires quite a bit more introspection.
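To give a rough sense of what testing your calibration involves, here is a toy sketch with hypothetical predictions (this is not how PredictionBook or Metaculus actually score things, just the basic idea):

```python
# A rough calibration check: group predictions by stated probability and
# compare each group's stated probability to the fraction that came true.
# The predictions below are entirely hypothetical.
from collections import defaultdict

predictions = [                      # (stated probability, did it happen?)
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True),
    (0.2, False), (0.2, False), (0.2, True),
]

buckets = defaultdict(list)
for p, happened in predictions:
    buckets[p].append(happened)

for p in sorted(buckets):
    outcomes = buckets[p]
    rate = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: {rate:.0%} came true ({len(outcomes)} predictions)")
```

A well-calibrated forecaster’s 90% predictions should come true roughly 90% of the time, their 20% predictions roughly 20% of the time, and so on, once there are enough predictions in each bucket.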
Procrastination probably doesn’t help you systematically achieve your values
I haven’t yet read the Sequences, but they’ve apparently done an excellent job in illuminating the waters of epistemic rationality. This is not quite so for instrumental rationality, as Eliezer admits in the preface, and the early community was particularly focused on things like cognitive biases. A few organisations have focused more on the application of rationality in the world, which necessarily involves developing techniques of instrumental rationality. Some examples are the Centre for Applied Rationality (CFAR), which runs workshops and has an accompanying handbook, and Spencer Greenberg’s Clearer Thinking.
In particular, akrasia—acting against one’s better judgement—is seen as rate-limiting in a chemical kinetics sense.7 One example of akrasia is procrastination, which is commonly seen as a problem of emotional regulation. Combating akrasia seems to come down to understanding your feelings and working out what you actually care about. This may involve introspective techniques like meditation and Gendlin’s Focusing. I should note that epistemic rationality is the foundation for the pursuit of instrumental rationality. Be wary, therefore, of those who suggest that you should sacrifice epistemic rationality in the pursuit of instrumental rationality, for you will lose the ability to accurately assess the outcome.
These are real people with real influence
My previous descriptions of the rationalists might give the impression that the community is primarily online, but there’s actually a social scene in the San Francisco Bay Area, and these people frequently hang out in person—when there’s no pandemic to contend with, at least. You probably shouldn’t move there, though.8
As the location of the social scene might lead you to expect, the rationalists are fairly well connected to Silicon Valley and its startup culture and venture capital. Quite a few people there read blogs like Slate Star Codex, although I must qualify this by pointing out that the rationalists certainly aren’t the Silicon Valley zeitgeist.9 The connection is best exemplified by Paul Graham, who is very closely connected to the rationalists—if not one himself. He has written numerous essays and founded the startup incubator Y Combinator, which helped launch Airbnb, Stripe, DoorDash, Dropbox, Twitch, Reddit, and, of course, Substack. It also runs the social news site Hacker News.
Interestingly, there also seems to be some sort of story about Eliezer’s involvement in the founding of DeepMind, which created AlphaGo, and OpenAI of GPT-3 fame. However, the only story I can find is that Eliezer introduced Peter Thiel to Demis Hassabis and Shane Legg, who helped found DeepMind with Thiel’s backing. Thiel is a somewhat sinister anti-democratic billionaire who founded Palantir. Yes, this is a Tolkien reference. Yes, they want to spy on you. Seemingly in response to DeepMind, Elon Musk and Sam Altman founded OpenAI. Sam Altman is the current CEO of OpenAI, former president of Y Combinator, and a reader of Scott’s blog, and thus the circle is complete.
Both DeepMind and OpenAI seem to conduct research with an eye to the existential risk posed by AI, and this influence seems to stem from Eliezer. Again, none of this is to say that the rationalists control Silicon Valley or Big Tech, or anything close to that. Rather, the rationalists are a small and obscure group whose ideas have a hugely outsized influence on the world. They might even have some good ideas!
The post-rationalists
I should offer a brief, more up-to-date rationalist taxonomy than Scott Alexander’s seven-year-old map. In doing so, I’ll touch on the Twitter discourse I mentioned earlier. You may see the critical rationalists lurking around on Twitter, David Deutsch among them. He’s another important figure in quantum information—this is starting to feel strange. However, they are unrelated to the rationalists I’ve been discussing, being interested in Karl Popper and falsification rather than Bayes’s theorem.
The post-rationalists, such as eigenrobot, QC, meditationstuff, visa, and Venkatesh Rao, who founded the blog Ribbonfarm, seem to exist as a direct reaction to the rationalists—I’ll mainly refer here to the Twitter community centred on eigenrobot. Generally, the post-rationalists are more interested in feelings than they are in explicit reasoning. Some of them claim that rationality is about ignoring your feelings and relying instead on explicit reasoning. In response, I can quote the introduction to the Sequences:
Real-world rationality isn’t about ignoring your emotions and intuitions. For a human, rationality often means becoming more self-aware about your feelings, so you can factor them into your decisions.
Indeed, the post-rationalists are a lot more ‘rationalist’ than they are ‘post’, leading to regular spats about the taxonomy. Surprisingly, this is something even people like Paul Graham seem to care about. The most distinctive thing that I can point to separating some post-rationalists from rationalists is an interest in magic and the occult, as typified by Liminal Warmth. She recently bought some people’s souls for $10 each, which was quite amusing, though the sellers were exceedingly irrational. Also, many post-rationalists are interested in the meta-rationalist ideas expressed in David Chapman’s Meaningness and In the Cells of the Eggplant.
It’s important to distinguish the rationality Chapman rejects from the rationality Eliezer advances, but while the post-rationalists often fail to do this, it’s perhaps not the mistake you might expect it to be. As Jacob Falkovich—who blogs at Putanumonit—suggests, post-rationality seems to exist primarily as a reaction against the rationalist social scene, which is apparently full of people who don’t know how to have fun at parties. That is, the rationalist community somehow manages to attract people who are anxious, socially awkward nerds—entirely unlike me, of course—and these people are out of touch with their emotions and haven’t yet fixed that. In some sense, then, the post-rationalists are closer to Eliezer’s vision of rationality than many rationalists. To be frank, I think the essence of the taxonomy is captured by the idea that the post-rationalists are just rationalists with more social skills and a greater focus on instrumental, rather than epistemic, rationality.
Rationalist-adjacent movements
Effective altruism
Now that I’ve provided a brief overview of the rationalists, I should discuss the two major rationalist-adjacent movements: effective altruism and neoreaction. Some consider polyamory to be rationalist-adjacent, but while many rationalists are polyamorous—reflecting their predilection for re-evaluating how things are done—I highly doubt the converse is true, so I won’t discuss polyamory here. Onto effective altruism, then, which has been defined as follows:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.
The effective altruism community formed around Giving What We Can and 80,000 Hours, founded respectively by Toby Ord, a philosopher at the Future of Humanity Institute, and William MacAskill. While Peter Singer had been pottering around in this area long before these organisations existed, his ideas only really gained traction with their emergence, and the rationalists were some of the first to hop on. The movement mainly prioritises alleviating global poverty, reducing animal suffering, and mitigating existential risks. Unsurprisingly, those who are rationalist-adjacent are often most concerned with existential risk, particularly AI alignment. Ord himself recently published a book on the topic. All in all, this is a spectacular success on the part of Eliezer, who began the rationalist community, convinced them that AI alignment is an important issue, and then convinced them to donate to MIRI, which he founded, to fund his work on the issue.
More broadly, the effective altruism movement has had some missteps, such as an exaggerated early focus on earning to give—pursuing a financially rewarding career to donate a large portion of your income to charity—which has been quite harmful to public perception. And a major problem with both rationality and effective altruism is their tendency to attract anxious and scrupulous people who are then provided with compelling targets for their anxiousness and scrupulosity. Is AI going to destroy the world? Am I a good person? I can’t answer these questions—but most people probably shouldn’t worry about them too much. On the whole, though, effective altruism is probably a good thing: it aims to make the world a better place. Neoreaction might also want that, but most people will probably find it just a little more questionable.
Neoreaction
Neoreaction was founded by Mencius Moldbug, a frequent commenter on Overcoming Bias who blogged at Unqualified Reservations and now writes under his real name Curtis Yarvin. It is an anti-democratic and reactionary political philosophy that was developed further by Nick Land, also responsible for accelerationism. But I won’t delve into that here. More recently, neoreaction has gone somewhat mainstream, influencing people like Trump strategist Steve Bannon and our favourite sinister billionaire Peter Thiel. Thiel backed Yarvin’s startup Urbit, which aims to reimagine the internet as a decentralised, peer-to-peer network where individuals control their own data—perhaps he’s hedging against Palantir. Anyway, Yarvin just adores monarchy, and I hope you can see how Urbit reflects his anti-democratic, reactionary spirit. If not—well, I’m afraid I’m going to take you on a little detour before I describe some of Yarvin’s thoughts.
On Twitter, you’ll find social ties between rationalists and neoreactionaries, with a prime example being the Roko of basilisk fame. About 5% of the survey-taking Slate Star Codex readership identify politically as neoreactionary, or more precisely ‘Neoreactionary, for example Singapore: prosperity, technology, and stability more important than democratic process’. I’m not sure what to make of this. I haven’t told you what Yarvin thinks, but I suspect he’s looking for something a little more extreme than Singapore! There’s clearly some overlap, but I think it’s largely because rationalist discourse norms make the rationalists more willing than other communities to tolerate the neoreactionaries. Indeed, neoreaction seems to be a somewhat common failure mode of rationality. I don’t want to become a fascist, or worse, Roko.10 As I alluded to at the start, one reason I’m writing this blog is to avoid falling down this rabbit hole—or, at least, if I wanted to fall down the rabbit hole, I’d have to drag you along with me.
Meet the racist far-right centrists
Among and around the neoreactionaries, you’ll find many objectionable people11 who believe in ‘human biodiversity'—scientific racism by another name—and simultaneously claim to be centrist. I’ll take a brief interlude to touch on this curious idea. Consider the idea of the accidental centrist—whose carefully and individually considered opinions conspire to place them, on average, in the centre—which is beloved by Paul Graham. These people also refer to themselves as chad, enlightened, or alt-centrists. I see no reason why the aggregate of your views should happen to place you at the centre of the Overton window—the current range of politically acceptable discourse—but I suppose they just like the optics.
Put more plainly: while the opinions of the intentional centrist—who places themselves proudly at the centre of the Overton window—are dominated by the window, so too are the opinions of the accidental centrist. Well, supposing they’re actually a centrist. Of course, none of this explains why the ‘human biodiversity’ people are wrong! And seeking to prove them wrong is dangerous precisely because the evidence might just be on their side. Worse, this makes it clear that my reasoning will be highly motivated—but I’m afraid I’ll leave you hanging for the moment with a warning to be wary of the label ‘centrist’ in these parts.
Although Eliezer is actively hostile towards neoreaction, Scott Alexander is a more complex case and a subject of recent controversy. I should first note that he wrote a detailed anti-neoreactionary FAQ. But anyway, now’s the time for me to elaborate on why Scott Alexander took down his blog Slate Star Codex, a hub for the rationalists. Basically, the New York Times was planning to write an article on the blog and the rationalists that would reveal Scott’s last name—his pseudonym is made up of his first and middle names. Scott wanted to remain pseudonymous, in large part because he’s a psychiatrist—they’re taught not to disclose personal information to their patients—so he deleted the blog. He then decided to entirely refactor his life so as to be compatible with the sort of publicity brought by such an article. This involved quitting his job. Recently, he relaunched his blog on Substack as Astral Codex Ten and was met shortly after by the long-awaited New York Times article. Many people have had many different thoughts on this article, and Yarvin, of course, has had thoughts too.
I should have some of my own, but I’m afraid I’m going to cheat you here. The article was not particularly well-written—those on the side of the Times didn’t really try to argue on that front—and its descriptions were generally inaccurate and often misleading. But most of what the Times has to say doesn’t interest me. One thing, though, does stick out:
In one post, he aligned himself with Charles Murray, who proposed a link between race and I.Q. in “The Bell Curve.” In another, he pointed out that Mr. Murray believes Black people “are genetically less intelligent than white people.”
Perhaps unsurprisingly, this leads us straight back into the choppy waters of ‘human biodiversity’. Before I more precisely define what I’ll refute, though, I want to point out that the mendacity of this quote is needless. Scott aligns himself with Charles Murray in support of a basic income guarantee, but he usually seems not unsympathetic to ‘human biodiversity’. Indeed, a non-negligible portion of the Slate Star Codex community favours the idea, and this is reflected in the comments. While you can only deconvert people by first tolerating them, you must beware lest those you seek to deconvert become a sizeable part of your community. I certainly doubt Steve Sailer is changing his mind—he coined ‘human biodiversity’ and seems to be a frequent commenter on the blog. And there’s the matter of some leaked emails—I expect this to stay up, so please forgive the odious source—which clearly indicate that Scott is not entirely opposed to the idea. So I think it’s reasonable to be a little wary.
The main hypothesis of ‘human biodiversity’ is simple: there’s a genetic component to group-level differences in IQ test performance. Almost invariably, race is the grouping in which these people are interested. And you could question the meaningfulness of IQ, biases in IQ testing, and what’s meant by race—but you can head this hypothesis off even if you accept its wording at face value! I agree with Scott that it’s very difficult to prove the null hypothesis that there’s absolutely no genetic contribution. However, I claim that you seem to be able to satisfactorily explain race differences in IQ test performance without needing to invoke genetic differences. That’s good enough for me to dismiss the matter and move on. Sure, this isn’t very Bayesian, I haven’t actually scoured the literature, and my reasoning is highly motivated. But I must ask:
Why choose to spend your time trying to prove that there’s a genetic component to group-level differences in IQ test performance?
Why choose to spend your time trying to come up with some scientific justification for racism?
Let me now justify my claim. Pottering around in these waters—I don’t remember where—I found a Slate Star Codex post that signal-boosts an Objectivist’s absolutely fascinating refutation of ‘human biodiversity’. It also links to a more concise refutation by Ron Unz. Unfortunately, it seems Unz has recently gone a little crazy. Still, I feel reasonably confident in claiming there’s no leftist bias in sight, given these ideas are coming from an Objectivist and The American Conservative. The essence of Unz’s refutation is that after correcting for the Flynn effect—the steady rise in raw IQ test performance over time, which stays hidden because scores are repeatedly renormalised to a mean of 100 and a standard deviation of 15—we can compare the IQ of European immigrants to America to the IQ of Europeans who did not emigrate. We find mean IQ differences on the order of a standard deviation—just as large as you might find between races nowadays—and so we are done.
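To give a sense of scale, here is a back-of-the-envelope sketch. It uses the commonly cited figure of roughly three IQ points per decade for the Flynn effect, which I am treating as an illustrative assumption rather than a precise constant:

```python
# Back-of-the-envelope: how big is the Flynn effect over a couple of
# generations? Assumes ~3 IQ points per decade, an illustrative figure.
points_per_decade = 3
decades = 5              # roughly fifty years, about two generations
sd = 15                  # IQ is normalised to a standard deviation of 15

drift = points_per_decade * decades
print(f"Drift over {decades} decades: {drift} points = "
      f"{drift / sd:.1f} standard deviations")
```

That is about a standard deviation of change within a single population, with no meaningful genetic change, which is the same order of magnitude as the group differences being argued over.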
Scott recently reviewed Marxist Freddie deBoer’s The Cult of Smart, which touches on these issues. I myself agree with Freddie. Individual-level IQ test performance differences seem to be in part genetic, but there’s no particular reason to suspect cleaving along somewhat ill-defined racial lines will leave you with groups that have different distributions of these genes. Regardless, neither intelligence nor IQ test performance is a measure of human value. The view that intelligence is a measure of human value seems pretty deeply baked in—intelligence is probably the feature that most clearly distinguishes us from other animals—but it’s not particularly healthy. In the review, Scott is perplexed by Freddie’s lack of interest in genetic contributions to racial differences in IQ test performance. If Scott didn’t care so much about the accuracy of his answer to this question, he’d perhaps be happy to accept that environmental effects seem to be a sufficient explanation of what we see. It’s a little concerning that he would choose to be preoccupied with this matter, but it’s really not that big of a deal at the end of the day—at least, in comparison to what you might find poking around neoreaction.
A neoreactionary criticism of government
Now I’ll finally give you an outline of Yarvin’s thoughts and neoreaction. When he’s not busy snickering at Scott Aaronson about the debate over the term ‘quantum supremacy’ or attempting to convert Aaronson to the dark side, he likes to think about power. And he likes to think about the government—of the United States, of course; does anywhere else even matter? Well, so the story goes, not even this government—the government—matters. It was intended to be a combination of democracy and monarchy: an elected monarchy, where the ruler—the President—is elected by the people to rule for four years. People still care about Presidential elections, but what can a modern President do to control the machinations of a growing bureaucracy? It’s not as if it’s the bureaucrats’ jobs that are under threat. Members of Congress are elected too, but no one really seems to care—they tend to stick around even when everyone hates them. They have the nominal ability to wield power, yet they never manage to do anything interesting. No, these people are just replaceable cogs in their party’s machine.
It is the government bureaucracy that holds power—an oligarchy—and diffuses it because, well, who wants to be responsible when they can instead blame a bureaucratic process? And the bureaucracy leaks power by exporting its thinking to the Cathedral—journalism and academia—which decides generally accepted truth and is so named because it comprises many institutions. Yarvin has a funny habit of mentioning only two—Harvard and the New York Times—but he does sometimes mention Yale, at least when it’s teaching him. The problem with the Cathedral is that it’s policed only by itself. As long as the marketplace of ideas is free, this is no problem. But the Cathedral wields unaccountable power—how could anyone but the government be held responsible when the government bureaucracy implements some bad policy? Certainly not whoever came up with it. Unaccountable power corrupts, the Cathedral’s truth diverges from reality, the system begins to break down, and reality eventually comes knocking.
I have presented what I think are some of the strongest parts of Yarvin’s criticisms of modern government. You might be tempted to retort that politics is ruled by money, not ideas—yet people spend more on almonds than on politics. Is this not strange? And it’s not as if money wants a dysfunctional system—it is fiat currency, after all! But in pruning Yarvin down to this, I did have to remove copious verbosity and references to the Nazis—it’s not like he’s said the Nazis were anything but evil; he just asked why they’re ‘considered to be so much worse than other comparably murderous groups’. Things are not quite so bad as you might expect, though—he can’t manage to be homophobic:
Let’s take homophobia, for example, because this is one area on which (despite my breeder tendencies) I am fully in agreement with the most advanced progressive thinking.
Curtis really tries to hide it, but he does care deep down! Progressive thinking truly has seeped into the water—it’s hard for neoreactionaries to worry me all that much when I see this.
And neoreactionary criticisms are useful—they do offer a very different perspective on things, after all. Some left-market anarchists who lurk around post-rationalist Twitter seem to keep up with Yarvin’s work. Not that they agree, of course, but perhaps there’s value to be had, even if it’s thoroughly mixed with evil—though I myself am not quite so sure of my ability to separate the evil from the insightful. However, I would love to better understand anarchist positions and arguments as a counterpoint to neoreaction. Their vision sounds pretty appealing—though to be fair, everyone is looking to realise the glorious transhumanist future in these rationalist-adjacent parts, including the neoreactionaries. This flavour of anarchism does sound complex, but the anarchists deserve at least as much attention as the neoreactionaries—and they’re not overtly racist and sexist, which is a nice bonus!
Patchwork: a utopian vision fit for the future
I asserted earlier that Yarvin is anti-democratic. This is not because he identifies modern government as an oligarchical system that supplanted an older democratic system. It is because he wants to replace the current system. He dreams a monarchist dream of a functional government that takes care of its citizens and does not leak power. Perhaps now is the time to reconsider our affection for the state, lest we too become fascists?
Yarvin has a better form of government in mind, and he calls it Patchwork: a system of thousands of independent mini-countries, each governed by a joint-stock company, the shareholders of which select a CEO. His ideal monarchs are tech CEOs—the ‘neo’ in neoreaction is no accident, nor is its rationalist-adjacency. He just wants to put Steve Jobs in charge of things, and is that so bad? The company profits by making its land as valuable as possible. That is, it profits by making people want to live within its borders—by functioning well and treating its residents well. This simple incentive system should generally ensure that the government behaves itself. In general, these governments should neither make their residents slaves nor impede their freedom of movement, though these actions would be entirely within their power. Neither should they restrict the freedom of speech of their residents, for their opinions are wholly irrelevant. Bring on the glorious transhumanist future!
To be frank, this sounds like a perfectly reasonable system of government. But here’s a little idea I had: maybe we could try calling the residents citizens; maybe we could try having the residents and shareholders be the same people; maybe we could even try giving each shareholder a single vote which they can use to select a CEO. Hey, doesn’t that sound—12
This is not quite fair. Yarvin doesn’t want the residents to be shareholders—he wants rational, disinterested voters motivated only by profit. He probably also wants the influence of their votes to be based on their shareholdings, though he doesn’t seem to specify this. But is this not just a weird form of democracy?
There’s one last thing to clean up, and that’s the state monopoly on the legitimate use of violence. What ingenious solution has our software engineer friend Yarvin come up with here? His solution is a cryptographic chain of command, where cryptographic systems provide authorisation that flows down from the shareholders to the CEO and then to security forces, enabling the use of computerised weapons or robot armies. This is truly the dream of an absolute nerd—and it sounds like something a rationalist came up with. But the proliferation of capable 3D printers would be problematic, as they could produce weapons that don’t require such authorisation.
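Since I can’t resist, here is a toy sketch of what such a chain of authorisation might look like. The parties are hypothetical and the design is my own illustration rather than anything Yarvin actually specifies: each link signs the next key in the chain, and the weapon accepts an order only if every link verifies back to a root key it trusts.

```python
# Toy 'cryptographic chain of command': shareholders delegate to the CEO,
# the CEO delegates to a security key, and the weapon checks the whole
# chain before obeying. Illustrative only; the parties are hypothetical.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(public_key):
    """Serialise a public key to raw bytes so it can be signed."""
    return public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

shareholders = Ed25519PrivateKey.generate()   # root of authority
ceo = Ed25519PrivateKey.generate()
security = Ed25519PrivateKey.generate()

# Each delegation is a signature over the next public key in the chain.
delegate_ceo = shareholders.sign(raw(ceo.public_key()))
delegate_security = ceo.sign(raw(security.public_key()))

order = b"stand down"
order_sig = security.sign(order)

# The weapon trusts only the shareholders' public key; verify() raises
# InvalidSignature if any link in the chain fails to check out.
root = shareholders.public_key()
root.verify(delegate_ceo, raw(ceo.public_key()))
ceo.public_key().verify(delegate_security, raw(security.public_key()))
security.public_key().verify(order_sig, order)
print("order accepted")
```

A real design would also need key rotation, revocation, and tamper-resistant hardware, but the basic idea is just a chain of signatures.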
I said Patchwork doesn’t sound so bad, but maybe I’m just a fascist at heart. You see, I have this running joke—I think it’s funny, at least; I’m not so sure about everyone else—about becoming a global dictator. Sorry, becoming a solar sultan. Ruling only the Earth is such lowly ambition. In this scenario, authority is vested in me by some aligned superintelligent AI and my will is enforced by robot armies and—
I suppose this does seem quite fascistic. And a bit like Patchwork. Maybe I could still repent—perhaps a meditation on the virtues of good old-fashioned democracy would suffice.
Reading Yarvin probably won’t turn you into a mass shooter
I’ve led you on a brief journey through neoreaction and effective altruism. But while I meant to give you an introduction to the rationalists, I’ve spent many more words on these adjacent movements. This is largely because I don’t intend to continue to engage much with their ideas. Yet this section also serves as an inoculation against the failure mode that is neoreaction, whose presence is tolerated by rationalist discourse norms. Fortunately, it is a sheep in wolf’s clothing. Yarvin’s writing is fun—don’t you feel smart when you read the word Rubicon? But when you ignore the blatant evil as I’ve done here—no, Curtis, we shouldn’t seal up poor people in virtual reality pods, though that might be better than how Australia treats asylum seekers—you’re left with ideas that really aren’t as dangerous as he’d like you to think. For the anxious and scrupulous committed progressive, rationality and effective altruism pose far more direct dangers.
And yet—Nick Land’s personal and very fascistic motto is ‘Coldness be my God’. His god is Gnon—nature or nature’s god, reversed. It is a cold god indeed, and not one to be worshipped.13 Unfortunately, some mass shooters seem to have drawn inspiration from Land’s accelerationist ideas. But Yarvin would repudiate such actions as helpful to the current regime. Instead, he encourages detachment—and I do think that some people would be better off for participating less in politics.
A plan for rationalist self-improvement
So how do I plan to journey into the ideas of the rationalists—to pursue rationalist self-improvement? I’ll start at the beginning by reading Eliezer’s Sequences—these focus mainly on epistemic rationality. If tempted, I could also read some relevant books. Daniel Kahneman’s Thinking, Fast and Slow hasn’t aged well—he’s overly confident about priming research that was hit hard by the replication crisis—but if you read it less credulously and skip the priming material, there’s probably still value here. Philip Tetlock and Dan Gardner’s Superforecasting is particularly relevant to making accurate predictions about the world. Edwin Jaynes’s14 Probability Theory is a textbook on Bayesian methods in statistics, and it might be interesting for a more serious approach to things.
Rationality is sometimes called systematised winning, and yet the rationalists are an obscure group few have heard of—how could that possibly be winning? In fact, rationalists interested in AI alignment seem to be doing fairly well.15 They’ve influenced people like Peter Thiel and Dominic Cummings, and they’ve influenced the founding of organisations like DeepMind and OpenAI. And rationalist blogs like Putanumonit and Slate Star Codex also helped the UK government realise that they were not reacting appropriately to COVID-19.
This should give you some indication that they’re not just full of bluster: their ideas might be valuable, and their virtues might be virtuous. And they’re not just trying to try—Eliezer, at least, is taking responsibility and trying to win, and I think that’s a useful attitude. At the very least, Scott seems to be learning lots of things, which is more than can be said for many. If the rationalists are doing well, perhaps their ideas are worth investigating—and as Michael Nielsen suggests, science certainly seems to need the help.
More interesting and less well developed is the step after epistemic rationality: instrumental rationality. The CFAR handbook offers a guide to practising some techniques of instrumental rationality. Introspective techniques are a particular focus, with meditation a classic example. I enjoy the more intellectual vibe of Sam Harris’s Waking Up—though you might want to avoid his Twitter—but I’m sure other apps in this space are fine for casually investigating meditation. By contrast, meditationstuff’s protocol or more extreme forms of meditation are a far more serious affair, requiring an investment of many thousands of hours and posing a much greater danger. Some other techniques include Eugene Gendlin’s Focusing and the rationalist concept of luminosity. David Chapman has also written Vividness on Vajrayana Buddhism, whose insights inform Meaningness and In the Cells of the Eggplant. Perhaps one could even dabble in the dark arts if one felt frisky—there’s some really spooky stuff out there that leverages Löb’s theorem.16
I hope the name ‘dark arts’ makes it clear that these techniques can be dangerous. After all, they involve people intentionally modifying how their own minds work, without supervision or personalised guidance. I think an amusing name for this is brain self-surgery, one form of which is meditation, especially in large amounts. Serious meditation of this form may be useful, but it can also be dangerous, and this danger should be taken seriously. In fact, you should generally be careful when performing brain self-surgery—I hope the name makes this painfully obvious!
Free will
Infohazard warning: discussion of free will. When taken seriously, this topic can be distressing for some.
Self-improvement can quite easily become pathological, pursued in a futile effort to fix the feeling that you’re not enough as you are. Of course, it’s an entirely incorrect approach to the problem. This doesn’t mean self-improvement is inherently flawed—you can desire to become better without viewing your current self as wrong—but it can be difficult to adopt a more healthy mindset.
A careful consideration of free will has been useful in my pursuit of such a mindset. Some try to define free will as the ability to have done otherwise, to select between possible futures. This makes absolutely no sense. Your feeling that you could have done otherwise does not constitute evidence that you actually could have done otherwise. Indeed, you can never obtain such evidence because only one future can come to pass. Let me quickly note that some research has suggested that people will behave more poorly if you prime them to not believe in free will. But other research has suggested the opposite, and this is priming research regardless, so you needn’t worry: we don’t need some noble lie to preserve the notion of free will. I’ll now summarise a realistic view of free will—at least, from my physicalist perspective—which may have been influenced by Hofstadter’s excellent Gödel, Escher, Bach.
You and your brain are composed of particles whose behaviours are governed by physical laws, and so your future states follow from your past states as determined by these laws. Neither the fact that you’re a chaotic system nor the possibility of quantum effects is an obstacle to this.17 You may have heard Landauer’s adage that ‘information is physical’. This statement also applies to you. That is, you are the physical system that implements the computation determining its own future states. More abstractly, you are also that computational process itself—and this is what flavours your conscious experience. You are free in the sense that you determine your own future states, but you do this by using physical laws to perform a computation that tells you what you want your actions to be. Hence your freedom stems precisely from a deterministic but quantum mechanical physics. Someone else certainly could, in principle, use the fact that this computation is lawful to replicate it and thereby determine your future states.18 But you are that computational process, so this simply amounts to simulating you.
It was inevitable that your current self would come to be, for the freedom you have arises from a deterministic but quantum mechanical physics. Our lives have led us inexorably to this moment—to me writing this sentence and to you reading it. How could there be any wrongness here? How could there be any insufficiency here? You are inevitable as you currently are—though not Thanos, I’m afraid—and that could never be wrong or insufficient. So pursue self-improvement freely—pursue it of your own free will and without blame for yourself! For it may be useful to be better than you are now.
Don’t blindly listen to advice
I’ll give you a quick note on the tricky thing that is advice on the internet. It’s tricky because, well, who’s the audience? Regardless of the matter at hand, some will need to do something more, and others will need to do that thing less. Hence any advice telling people to go either way will lead some astray. This is worsened by the tendency people have to sort themselves into communities whose beliefs align with their own. The advice these communities generate is often targeted at the world at large, which makes it likely to push the people in the community—the ones who are most likely to actually hear the advice—in the wrong direction. Consider, therefore, both the advice you hear and its inverse before you act on it.
Where to from here?
We’re finally at the end—I hope you now have a decent impression of the rationalists, effective altruists, and neoreactionaries. Perhaps you’ve even learned something about information hazards, the Fermi paradox, IQ, free will, or advice on the internet—an eclectic set of topics, for sure. I’ve also outlined a speculative map of my journey into the rationalists’ ideas, and this journey will begin with a series of posts summarising the key ideas of Eliezer’s Sequences. You can expect one post on each of the six books, but it might be a while before that happens—I have a few things I’d like to write about first. See you then! And if you want to be notified when that happens—
If you’re going into an occupation where you write professionally, such as academia, you really should watch the video. Actually, you should watch it regardless—it really is outstanding.
I couldn’t tell you if I’m exaggerating the dangers here because I’m not stupid enough to think about this in any sort of detail. Regardless, if you haven’t already heard about the rationalists or Roko’s basilisk, I doubt you’ve internalised the ideas you’d need to do yourself significant harm, which is why I didn’t supply a warning.
This is, in fact, the origin of the name of the forum LessWrong. It’s not intended to imply that others are more wrong, or anything in that vein.
In Ziz’s terminology, these people are called liches, with the glorious transhumanist future as their phylactery. I would advise you not to listen to her, but this metaphor does feel like it captures something important.
p-values really are quite strange. I speculate that the frequentist perspective exacerbated the replication crisis.
As these videos and their treatment of the matter might suggest, Grant is a rationalist, or, at least, rationalist-adjacent. Another fairly famous content creator who falls into the same category is Wait But Why’s Tim Urban.
Chemical reactions usually occur in a series of interactions, each between two species—intuitively, two-body collisions are much more likely than three-body collisions. Often, one step is significantly slower than the rest, and so it becomes the main factor determining the rate of the reaction. However, this doesn’t mean that the overall rate is necessarily the rate of this process or that other steps are irrelevant. For many, procrastination is rate-limiting in the process of systematically achieving their values. Everything else still matters, but the most important thing is to procrastinate less.
Ziz has made some pretty extreme accusations against people in the rationalist social scene. But there’s an online warning against falling into her orbit, and it seems more likely that she’s unreasonable than that the entire rationalist social scene is unreasonable. Nevertheless, quite a few people seem to have been convinced to move to the Bay Area specifically to join the social scene. I think this is a bad idea.
For much of the blog’s history, Scott lived in Michigan. It would be quite strange for someone to suggest that a psychiatry resident in Michigan could meaningfully define the zeitgeist of Silicon Valley.
Roko is replying to Default Friend, who is rationalist Twitter-adjacent. She is excellent for more measured right-leaning takes and writes at Default Wisdom. She deactivates her Twitter account fairly regularly, so the link might not always work.
It appears that Zero HP Lovecraft may have been a developer for Candy Crush. These sorts of people are really traditional, you see.
If you were to read only one Slate Star Codex post, Meditations on Moloch would be an excellent choice.
This is, in fact, the Jaynes responsible for the Jaynes-Cummings model, another albeit more tenuous connection to quantum information.
Actually, if you check the Twitter thread, you’ll find Michael Nielsen and Scott Aaronson on the list of people he thinks you should read. In fact, the list consists of six rationalists and one critical rationalist, David Deutsch. It’s frankly shocking that three of the seven are important figures in quantum information—it’s not something I can explain. But maybe it’s just the field to be in!
I really don’t know how risky this sort of thing can be. I suspect it can really do some damage if you’re not careful, although I also think you’d need to put in a lot of effort to experience any effect at all. Nevertheless, I’m not going to link to anything about this here—look into it yourself if you’re desperate.
Unless your reaction to Bell’s theorem is ‘non-locality it is’. I suppose I can’t stop you.
The no-cloning theorem does pose a problem if quantum effects in the brain are important, but we have very good reasons to think that this is not the case.