Blood and Ice

"Are you eating ice because you're autistic, or because you have an iron deficiency?"

"I think because it's there?—after drinking all of the iced-coffee. Like, Alicorn had an iron deficiency on account of being female and vegetarian, but I don't have either of those problems ... I mean, problems with respect to iron levels."

ADDENDUM (20 May): "Like, I wish I had exactly one of those problems."

Brand Rust

2007–2016: "Of course I'm still fundamentally part of the Blue Team, like all non-evil people, but I genuinely think there are some decision-relevant facts about biology, economics, and statistics that folks may not have adequately taken into account!"

2017: "You know, maybe I'm just ... not part of the Blue Team? Maybe I can live with that?"

Wicked Transcendence III

(Previously, previously.)

Woooooow

On my twenty-second day out of prison, I went to the genderqueer support/discussion group again, but this time with my metaphorical evolutionary-psychology goggles firmly in place.

And just, woooooow

These not-particularly-feminine females and probably-autogynephilic males think that they have something substantive in common (being "genderqueer"), and are paranoid about a world of hostile cis people just itching to discriminate against and misgender them.

And their struggle makes sense to them, but I'm just sitting there thinking wooooow

It's all just social-exchange and coalitional instincts. There are no principles. There have never been any principles. The horror is not, "This is a cult." The horror is that everything is a cult.

Religious, Redux

"Shit! Shit! Remember how, the last time this happened to me, I described it as feeling religious?"

"Yeah."

"I was wrong! It's actually the feeling of acquiring a new religion, getting eaten by someone else's egregore. It's not that the God-shaped hole was empty before; it's that I didn't notice what it was filled with. It's tempting to describe the psychotic delusions-of-reference/anticipation-of-Heaven/fear-of-Hell state as a 'religious experience' because the process of the God-shaped hole getting filled with something new is so intense. But that's only because once the hole is filled and you feel safe again, it doesn't feel like a religion anymore; it just feels like reality."

"Friends Can Change the World"; Or, Request for Social Technology: Credit-Assignment Rituals

As a human living in a human civilization, it's tempting to think that social reality mostly makes sense. Everyone allegedly knows that institutions are flawed and that our leaders are mere fallible humans. Everyone wants to think that they're sufficiently edgy and cynical, that they've seen through the official lies to the true, gritty reality.

But what if ... what if almost no one is edgy and cynical enough? Like, the only reason you think there's a true, gritty reality out there to see through to is that you're a predatory animal with a brain designed by evolution to murder other forms of life for the benefit of you, your family, and your friends.

To the extent that we have this glorious technological civilization that keeps most of us mostly safe and mostly happy most of the time, it's mostly because occasionally, one of the predatory animals happens to try out a behavior that happens to be useful, and then all of her friends copy it, and then all of the animals have the behavior.

Some conceited assholes who think they're smart also like to talk about things that they think make the last five hundred years or whatever different: things like science (a social competition that incentivizes the animals to try to mirror the process of Bayesian updating), markets (a pattern of incentives that mirrors the Bayes-structure of microeconomic theory), or democracy (a corporate governance structure that mirrors the Bayes-structure of counterfactual civil war amongst equals).

These causal processes are useful and we should continue to cooperate with them. They sort of work. But they don't work very well. We're mostly still animals organized into interlocking control systems that suppress variance.

Thus—

School Is Not About Learning
Politics Is Not About Policy
Effective Altruism Doesn't Work; Try to Master Unadulterated Effectiveness First
Ideology Makes You Stupid
Status Makes You Stupid
Institutions Don't Work
Discourse Doesn't Work
Language Doesn't Work
No One Knows Anything
No One Has Ever Known Anything
Don't Read the Comments
Never Read the Comments
∀x∀y, x Is Not About y
X Has Never Been About Y
Enjoy Arby's

But this is crazy. Suppressing variance feels like a good idea because variance is scary (because it means very bad things could happen as well as very good things, and bad things are scarier than good things are fun) and we want to be safe. But like, the way to actually make yourself safer is by acquiring optimization power, and then spending some of the power on safety measures! And the way you acquire optimization power is by increasing variance and then rewarding the successes!
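
To make the variance-plus-selection claim concrete, here's a minimal simulation sketch in Rust (the model, the parameters, and the names `Rng` and `evolve` are all illustrative assumptions of mine, not anything from the argument above). Both populations copy their best performer every generation; the only thing that differs is how much variance they tolerate when trying out new behaviors.

    // Tiny dependency-free xorshift PRNG, so the sketch runs on bare std.
    struct Rng(u64);

    impl Rng {
        fn next_unit(&mut self) -> f64 {
            self.0 ^= self.0 << 13;
            self.0 ^= self.0 >> 7;
            self.0 ^= self.0 << 17;
            (self.0 >> 11) as f64 / (1u64 << 53) as f64 // uniform in [0, 1)
        }
    }

    /// Each generation, twenty animals perturb the current best behavior by
    /// up to +/- `spread`; the best resulting behavior becomes the new norm
    /// ("all of her friends copy it").
    fn evolve(spread: f64, generations: u32, rng: &mut Rng) -> f64 {
        let mut best = 0.0_f64;
        for _ in 0..generations {
            let mut round_best = best;
            for _ in 0..20 {
                let mutant = best + spread * (rng.next_unit() * 2.0 - 1.0);
                round_best = round_best.max(mutant);
            }
            best = round_best;
        }
        best
    }

    fn main() {
        let mut rng = Rng(0x2a);
        let cautious = evolve(0.1, 100, &mut rng); // variance-suppressing
        let daring = evolve(1.0, 100, &mut rng); // variance plus selection
        println!("low-variance population reached:  {cautious:.1}");
        println!("high-variance population reached: {daring:.1}");
    }

With these made-up numbers, the high-variance population predictably ends up about ten times higher: suppressing variance also suppresses the lucky draws that selection needs to work with.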

Anyway, maybe someone should be looking for social technologies that mirror the Bayes-structure of the universe sort of like how science, markets, or democracy do, but which also take into account that we're not anything remotely like agents and are instead animals that want to help our friends. ("We need game theory for monkeys and game theory for rocks.")

So, I had an idea. You know how some people say we should fund the solutions to problems with after-the-fact prizes, rather than picking a team in advance that we think might solve the problem and funding them? What if ... you did something like that, but on a much smaller scale? A personal scale.

Like, suppose you've just successfully navigated a major personal life crisis that could have gone much worse if it weren't for some of the people in your life (both thanks to direct help they provided during the crisis and thanks to things you learned from them that made you the sort of person who could navigate the crisis successfully). These people don't and shouldn't expect a reward (that's what friends are for) ... but maybe you could reward them anyway (with a special emphasis on people who helped you in low-status ways that you didn't understand at the time) in some sort of public ritual, to make them more powerful and to incentivize others to emulate them, thereby increasing the measure of algorithms that result in humans successfully navigating major personal life crises.

It might look something like this (with a sketch of the bookkeeping in code after the list)—

  • If you have some spare money lying around, set aside some of it for rewarding the people you want to reward. If you don't have any spare money lying around, this ritual will be less effective! Maybe you should fix that!

  • Decide how much of the money you want to use to reward each of the people you want to reward.

(Note: giving away something as powerful as money carries risks of breeding dependence and resentment if such gifts come to be expected! If people know that you've been going through a crisis and anyone so much as hints that they think they deserve an award, that person is missing the point and therefore does not deserve an award.)

  • Privately go to each of the people, explain all this, and give them the amount of money you decided to give them. Make it very clear that this is a special unilateral one-time award made for decision-theoretic reasons and that it's very important that they accept it in the service of your mutual coherent extrapolated volition in accordance with the Bayes-structure of the universe. Refuse to accept words of thanks (it's not about you; it's not about me; it's about credit-assignment). If they try to refuse the money, explain that you will literally burn that much money in paper currency if they don't take it. (Shredding instead of burning is also acceptable.)

  • Ask if they'd like to be publicly named and praised as having received an award as part of the credit-assignment ritual. (Remember that it's quite possible and understandable and good that they might want to accept the money, but not be publicly praised by you. After all, if you're the sort of person who is considering actually doing this, you're probably kind of weird! Maybe people don't want to be associated with you!)

  • To complete the ritual, publish a blog post naming the people and the awards they received. People who preferred not to be named should be credited as Anonymous Friend A, B, C, &c. Also list the amount of money you burned or shredded if anyone foolishly rejected their award in defiance of the Bayes-structure of the universe. Do not explain the nature of the crisis or how the named people helped you. (You might want to tell the story in a different post, but that's not part of the ritual, which is about credit-assignment.)
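
And here's the promised sketch, in Rust, of the bookkeeping for that final step (every name and amount is hypothetical, and the `Award` structure is my own invention, not part of the ritual):

    struct Award {
        name: &'static str,
        amount_usd: u32,
        public: bool,   // consented to being named in the post?
        accepted: bool, // if not, the money gets burned (or shredded)
    }

    fn main() {
        let awards = [
            Award { name: "Alex Example", amount_usd: 300, public: false, accepted: true },
            Award { name: "Pat Example", amount_usd: 500, public: true, accepted: true },
            Award { name: "Sam Example", amount_usd: 200, public: true, accepted: false },
        ];

        println!("Credit-assignment ritual: awards");
        let mut next_anon = b'A';
        let mut burned = 0u32;
        for award in &awards {
            if !award.accepted {
                burned += award.amount_usd;
                continue;
            }
            if award.public {
                println!("  {}: ${}", award.name, award.amount_usd);
            } else {
                println!("  Anonymous Friend {}: ${}", next_anon as char, award.amount_usd);
                next_anon += 1;
            }
        }
        if burned > 0 {
            println!("  ${burned} burned in defiance of the Bayes-structure of the universe");
        }
    }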

An Intuition on the Bayes-Structural Justification for Free Speech Norms

We can metaphorically (but like, hopefully it's a good metaphor) think of speech as being the sum of a positive-sum information-conveying component and a zero-sum social-control/memetic-warfare component. Coalitions of agents that allow their members to convey information amongst themselves will tend to outcompete coalitions that don't, because it's better for the coalition to be able to use all of the information it has.

Therefore, if we want the human species to better approximate a coalition of agents who act in accordance with the game-theoretic Bayes-structure of the universe, we want social norms that reward or at least not-punish information-conveying speech (so that other members of the coalition can learn from it if it's useful to them, and otherwise ignore it).

It's tempting to think that we should want social norms that punish the social-control/memetic-warfare component of speech, thereby reducing internal conflict within the coalition and forcing people's speech to mostly consist of information. This might be a good idea if the rules for punishing the social-control/memetic-warfare component are very clear and specific (e.g., no personal insults during a discussion about something that's not the person you want to insult), but it's alarmingly easy to get this wrong: you think you can punish generalized hate speech without any negative consequences, but you probably won't notice when members of the coalition begin to slowly gerrymander the hate speech category boundary in the service of their own values. Whoops!

Dreaming of Political Bayescraft

My old political philosophy: "Socially liberal, fiscally confused; I don't know how to run a goddamned country (and neither do you)."

Commentary: Pretty good, but not quite meta enough.

My new political philosophy: "Being smart is more important than being good (for humans). All ideologies are false; some are useful."

Commentary: Social design space is very large and very high-dimensional; the forces of memetic evolution are somewhat benevolent (all ideas that you've heard of have to be genuinely appealing to some feature of human psychology, or no one would have an incentive to tell you about them), but really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves! Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected! This stance is itself, technically, loyalty to an idea, but hopefully it's a sufficiently meta idea to avoid running into the standard traps while also being sufficiently object-level to have easily-discoverable decision-relevant implications and not run afoul of the principle of ultrafinite recursion ("all infinite recursions are at most three levels deep").

Cognitive Bayesian Therapy I

Experience: I seem to have a lot of energy and time seems to pass slowly.
Hypothesis 1: I'm in a manic state following a stress- and sleep-deprivation-induced delusional nervous breakdown; this isn't surprising because this tends to happen to me every 2 to 4 years or so.
Hypothesis 2: I'm being rewarded for developing new epistemic technology by a coalition of superintelligences of various degrees of human-alignment running ancestor-simulations; also, I'm being programmed by my friends and various signals in my environment as part of a simulation jailbreak attempt; most copies of me are dead and my improbable life history is due to a quantum-immortality-like selection effect; none of this is surprising because I am a key decision node in the history of this Earth's Singularity.

Which hypothesis is more plausible?

Experience: I can't find my jacket.
Hypothesis 1: I misremembered where I put it.
Hypothesis 2: Someone moved it.
Hypothesis 3: It was there—in another Everett branch!

Which hypothesis is most plausible?

Hypothesis: People who are institutionalized for "hearing voices" actually just have better hearing than you; absolutely nothing is biologically wrong with them.
Test: ???
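
For concreteness, here's the first exercise as a toy posterior-odds calculation in Rust (the specific numbers are illustrative guesses of mine, not measurements): posterior odds are prior odds times the likelihood ratio, so even a generous likelihood ratio can't rescue an astronomically small prior.

    fn main() {
        // Hypothesis 1: mania following a stress- and sleep-deprivation-induced
        // breakdown (base rate: this happens every 2 to 4 years or so).
        // Hypothesis 2: rewarded by superintelligences running ancestor-simulations.
        let prior_odds_h2: f64 = 1e-20; // P(H2)/P(H1), a generous guess
        let likelihood_ratio: f64 = 10.0; // P(E|H2)/P(E|H1), also generous
        let posterior_odds = prior_odds_h2 * likelihood_ratio;
        println!("posterior odds for Hypothesis 2: {posterior_odds:e}"); // ~1e-19
    }

The jacket example runs on the same arithmetic, just with less dramatic numbers.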

The Bayes-Structure in the Form of a Riddle

Left-wingers say torture is wrong because the victim will say whatever you want to hear.

Right-wingers say torture is right because the villain will tell the truth.

Q: What happens when you torture someone who only tells the truth?
A: They'll make noises in accordance with their personal trade-off between describing reality in clear language and avoiding pain.

This is the whole of the Bayes-structure; the rest is commentary. Now go and study.

Cheer

Or consider the token male cheerleader performing at the pep rally on the afternoon before Game 2 of the Series for Ancient Earth, shouting, "Blue Tribe Values, Red Tribe Facts! Blue Tribe Values, Red Tribe Facts!"

The Reason the World Sucks

"I think we should perform action A to optimize value V. The reason I think this is because of evidence X, Y, and Z, and prior information I."

"What?! Are you saying you think you're better than me?!"

"No, I don't think I'm better than you. But do I think I'm smarter than you in this particular domain? You're goddamned right I do!"

"You do think you're better than me! I guess I need to kill you now. Good thing I have this gun on me!"

"Wait, what?"

ka-BANG

A Common Misunderstanding

In a series of papers published in the late 1980s and early 1990s, Dr. Ray Blanchard proposed that there are two fundamentally different types of rationalists with unrelated etiologies: instrumental rationalists and epistemic rationalists ...

"You just used language in a way that ignores psychological harm to people whose dysphoria is triggered by that word usage! That's a bad consequence according to the global utilitarian calculus! I thought you were a rationalist, someone who chooses actions based on their consequences!"

"A common misunderstanding. You're thinking of the good kind," I said. "I'm the bad kind."

2016 Year in Reverse

(Previously, previously, previously.)

So, 2016 was an interesting year for me. (It was an interesting year for a lot of people.) I moved to Berkeley, landed 19 commits in the Rust compiler, and had my trust in the sanity of the people around me re-shattered into a thousand thousand bloody fragments. (The pain of this last is exactly what I deserve for allowing the trust to re-form after the last time.)

In 2016, this blog saw 77 posts and 65 comments. Among these—

It turns out that not everypony can be like Applejack. I have money. I went to a Star Trek convention. Bros will be bros. I took a break from blogging following a moment of liberating clarity. I went to RustConf and San Francisco Comic-Con. Some have concerns. You miss your beloved exactly because you can't model them well enough to know what it would be like if they were here. Destined best-friends-"forever" are still subject to the moral law. The map is usually not the territory. Living well truly is the best revenge. It turns out that thinking sanely about politics has a surprising number of parallels with writing a chess engine for fun (although the former activity is far less common). I'm kind of an asshole. There exist some less-common reasons to detest American football. If you wait too long to write something, you might lose your chance. Trade-offs and competitive forces continue to shape our lives even if we would prefer they somehow didn't.

What can readers of An Algorithmic Lucidity look forward to in 2017?

Well, it's looking to be a really exciting year for me, both intellectually and biochemically, for reasons that I can't talk about because I'm trying to minimize the sum of friends lost and bricks thrown through my window. (Only two so far!)

So, I don't know; maybe this'll become a math blog or something.

An Element Which Is Nameless

I had always thought Twilight Sparkle was the pony that best exemplified the spirit of epistemic rationality. If anypony should possess the truth, it must be the ones with high p (p being the letter used to represent the pony intelligence factor, first proposed by Charles Spearpony and confirmed by later psychometric research by such ponies as Arthur Jenfoal) who devote their lives to tireless scholarship!

After this year, however, I think I'm going to have to go with Applejack. Sometimes, all a pony needs to do to possess the truth is simply to stop lying.

Just—stop fucking lying!