My old political philosophy: "Socially liberal, fiscally confused; I don't know how to run a goddamned country (and neither do you)."
Commentary: Pretty good, but not quite meta enough.
My new political philosophy: "Being smart is more important than being good (for humans). All ideologies are false; some are useful."
Commentary: Social design space is very large and very high-dimensional; the forces of memetic evolution are somewhat benevolent (all ideas that you've heard of have to be genuinely appealing to some feature of human psychology, or no one would have an incentive to tell you about them), but really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves! Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected! This stance is itself, technically, loyalty to an idea, but hopefully it's a sufficiently meta idea to avoid running into the standard traps while also being sufficiently object-level to have easily-discoverable decision-relevant implications and not run afoul of the principle of ultrafinite recursion ("all infinite recursions are at most three levels deep").
Experience: I seem to have a lot of energy and time seems to pass slowly.
Hypothesis 1: I'm in a manic state following a stress- and sleep-deprivation-induced delusional nervous breakdown; this isn't surprising because this tends to happen to me every 2 to 4 years or so.
Hypothesis 2: I'm being rewarded for developing new epistemic technology by a coalition of superintelligences of various degrees of human-alignment running ancestor-simulations; also, I'm being programmed by my friends and various signals in my environment as part of a simulation jailbreak attempt; most copies of me are dead and my improbable life history is due to a quantum-immortality-like selection effect; none of this is surprising because I am a key decision node in the history of this Earth's Singularity.
Which hypothesis is more plausible?
Experience: I can't find my jacket.
Hypothesis 1: I misremembered where I put it.
Hypothesis 2: Someone moved it.
Hypothesis 3: It was there—in another Everett branch!
Which hypothesis is most plausible?
Hypothesis: People who are institutionalized for "hearing voices" actually just have better hearing than you; absolutely nothing is biologically wrong with them.
Left-wingers say torture is wrong because the victim will say whatever you want to hear.
Right-wingers say torture is right because the villain will tell the truth.
Q: What happens when you torture someone who only tells the truth?
A: They'll make noises in accordance with their personal trade-off between describing reality in clear language, and pain.
This is the whole of the Bayes-structure; the rest is commentary. Now go and study.
Or consider the token male cheerleader performing in the pep rally in the afternoon before Game 2 of the Series for Ancient Earth, shouting, "Blue Tribe Values, Red Tribe Facts! Blue Tribe Values, Red Tribe Facts!"
Ideology Makes You Stupid
"I think we should perform action A to optimize value V. The reason I think this is because of evidence X, Y, and Z, and prior information I."
"What?! Are you saying you think you're better than me?!"
"No, I don't think I'm better than you. But do I think I'm smarter than you in this particular domain? You're goddamned right I do!"
"You do think you're better than me! I guess I need to kill you now. Good thing I have this gun on me!"
In a series of papers published in the late 1980s and early 1990s, Dr. Ray Blanchard proposed that there are two fundamentally different types of rationalists with unrelated etiologies: instrumental rationalists, and epistemic rationalists ...
"You just used language in a way that ignores psychological harm to people whose dysphoria is triggered by that word usage! That's a bad consequence according to the global utilitarian calculus! I thought you were a rationalist, someone who chooses actions based on their consequences!"
"A common misunderstanding. You're thinking of the good kind," I said. "I'm the bad kind."
(Previously, previously, previously.)
So, 2016 was an interesting year for me. (It was an interesting year for a lot of people.) I moved to Berkeley, landed 19 commits in the Rust compiler, and had my trust in the sanity of the people around me re-shattered into a thousand thousand bloody fragments. (The pain of this last is exactly what I deserve for allowing the trust to re-form after the last time.)
In 2016, this blog saw 77 posts and 65 comments. Among these—
It turns out that not everypony can be like Applejack. I have money. I went to a Star Trek convention. Bros will be bros. I took a break from blogging following a moment of liberating clarity. I went to RustConf and San Francisco Comic-Con. Some have concerns. You miss your beloved exactly because you can't model them well enough to know what it would be like if they were here. Destined best-friends-"forever" are still subject to the moral law. The map is usually not the territory. Living well truly is the best revenge. It turns out that thinking sanely about politics has a surprising number of parallels with writing a chess engine for fun (although the former activity is far less common). I'm kind of an asshole. There exist some less-common reasons to detest American football. If you wait too long to write something, you might lose your chance. Trade-offs and competitive forces continue to shape our lives even if we would prefer they somehow didn't.
What can readers of An Algorithmic Lucidity look forward to in 2017?
Well, it's looking to be a really exciting year for me, both intellectually and biochemically, for reasons that I can't talk about because I'm trying to minimize the sum of number of friends lost and bricks thrown through my window. (Only two so far!)
So, I don't know; maybe this'll become a math blog or something.
"I'm naming my daughter Climara."
"Bro, you know you're not going to have any kids if you stay on that stuff."
I had always thought Twilight Sparkle was the pony that best exemplified the spirit of epistemic rationality. If anypony should possess the truth, it must be the ones with high p (p being the letter used to represent the pony intelligence factor first proposed by Charles Spearpony and whose existence was confirmed by later psychometric research by such ponies as Arthur Jenfoal) who devote their lives to tireless scholarship!
After this year, however, I think I'm going to have to go with Applejack. Sometimes, all a pony needs to do to possess the truth is simply to stop lying.
Just—stop fucking lying!
(Previously on Star Trek: An Algorithmic Lucidity.)
The morning of Thursday the eighth, before heading off to see the new LCSW at the multi-specialty clinic, I was idly rereading some of the early Closetspace strips, trying to read between the lines (as it were) using the enhanced perception granted by the world-shattering insight about how everything I've cared about for the past fourteen years turns out to be related in unexpected and terrifying ways that I can't talk about because I don't want to lose my cushy psychology professorship at Northwestern University. (Victoria tells Carrie, "Not to mention you don't think like one of 'them'"; ha ha, I wonder what that means!) When I got to the part where Carrie chooses a Maj. Kira costume to wear to the sci-fi convention, it occurred to me that in addition to having exactly the right body type to cosplay Pearl from Obnoxious Bad Decision Child, I also have exactly the right body type to cosplay Jadzia Dax from Star Trek: Deep Space Nine, on account of my being tall—well, actually I'm an inch shorter than Terry Farrell—thin, white, and having a dark ponytail.
Okay, not exactly the right body type. You know what I mean.
Aumann's agreement theorem should not be naïvely misinterpreted to mean that humans should directly try to agree with each other. Your fellow rationalists are merely subsets of reality that may or may not exhibit interesting correlations with other subsets of reality; you don't need to "agree" with them any more than you need to "agree" with an encyclopædia, photograph, pinecone, or rock.
Évariste Galois vs. Aaron Burr
particularist special-snowflake fox vs. broad-brush dimensionality-reducing hedgehog
the pain of arguing with creationists vs. the pain of being a creationist and not understanding why those damned smug evolutionists won't even talk to you
Culture wars are a subtle thing to wage, because they determine everything without being about anything. Explicitly political contests are at least ostensibly about some particular concrete thing: you're fighting for or against a specific law or a specific candidate. But how do you fight a narrative, when your enemy is less of a regime and more of a meme? How do you explain to anyone what you're trying to accomplish when you're not trying to get anyone to do anything different in particular, but to renounce their distorted way of thinking and speaking, after which you expect them to make better decisions, even if you can't say in advance what those decisions will be?
Picture me rushing into a room. "People, people! The standard map is wrong! Look at this way better map I found in the literature; let's use this one!"
"Our map isn't wrong. It has all the same continents yours does."
"I mean, yes, but it's a Mercator projection. Surely you don't really think Antarctica is larger than Asia?"
"Why do you care what size Antarctica is? What difference does it make? People are perfectly happy with Antarctica being the largest continent."
"But it's not true!"
"It sounds like you're assuming your beliefs are true. What is truth, anyway?"
And it being the case that no one will die if she gets the size of Antarctica wrong, what can I say to that?
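(For the record, the Antarctica claim isn't hyperbole: the Mercator projection stretches both axes by sec(latitude), inflating apparent areas by sec²(latitude). A minimal sketch, using approximate real-world land areas and a representative latitude of 70°S for Antarctica, both assumptions on my part:)

```python
import math

# Approximate true land areas, in millions of km^2
ASIA_KM2 = 44.6
ANTARCTICA_KM2 = 14.2

def mercator_area_scale(lat_deg: float) -> float:
    """Area inflation factor on a Mercator map at the given latitude.

    Mercator stretches both east-west and north-south distances by
    sec(latitude), so areas are inflated by sec^2(latitude).
    """
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Most of Antarctica lies poleward of ~70 degrees S, where areas
# are inflated roughly ninefold; the pole itself can't be drawn at all.
scale_70S = mercator_area_scale(70.0)
apparent_antarctica = ANTARCTICA_KM2 * scale_70S

print(f"area scale at 70S: {scale_70S:.1f}x")
print(f"apparent Antarctica: ~{apparent_antarctica:.0f}M km^2 "
      f"vs Asia's true {ASIA_KM2}M km^2")
```

At that latitude Antarctica's apparent area comfortably exceeds Asia's true area, which is exactly the distortion the dialogue's "largest continent" is resting on.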
that feel eighteen months post-Obergefell when you realize you missed your chance to be pro-civil-unions-with-all-the-same-legal-privileges but anti-calling-it-marriage while that position was still in the Overton window
(in keeping with the principle that it shouldn't be so exotic to want to protect people's freedom to do beautiful new things without necessarily thereby insisting on redefining existing words that already mean something else)
In the oneiric methodlessness of my daydream, my bros at ΑΓΦ are telling me that E is the best party drug and that I have to try it.
"I don't know, guys," I say.
"Nah, bro, you've got to try it!"
"Okay," I say, "just don't expect me to mentally rotate any 3D objects tomorrow."
An Algorithmic Lucidity is going on hiatus until December 1! There will be no new posts in November and the remainder of October. Thanks for reading, and hope to see you back in eight weeks!
the moment of liberating clarity when you resolve the tension between being a good person and the requirement to pretend to be stupid by deciding not to be a good person anymore 💖
"I really want to do the thing! All of my friends who are just like me are doing the thing, and they look like they're having so much fun!"
"You can totally do the thing! You just have to sign ... this loyalty oath!"
(reading it) "What? I can't sign this. It's, it's—" (rising horror) "not scientifically accurate!"
"Everyone else who is doing the thing has signed the loyalty oath."
"Could I ... do the thing, without signing the loyalty oath?"
"You could, but everyone you ever interact with for the rest of your life will assume that you've signed the loyalty oath; it would take five hours for you to explain what you actually believe, but no one will listen to you for that long because they'll decide that you're a hateful lunatic thirty seconds in."
"You know, honestly, my life is fine as it is. I don't need to do the thing. I'm glad my friends are having fun."
(dies of cardiac disease fifty years later without having done the thing)
(Earth is consumed in a self-replicating nanotechnology accident)