Brand Rust

2007–2016: "Of course I'm still fundamentally part of the Blue Team, like all non-evil people, but I genuinely think there are some decision-relevant facts about biology, economics, and statistics that folks may not have adequately taken into account!"

2017: "You know, maybe I'm just ... not part of the Blue Team? Maybe I can live with that?"

Wicked Transcendence III

(Previously, previously.)

Woooooow

On my twenty-second day out of prison, I went to the genderqueer support/discussion group again, but this time with my metaphorical evolutionary-psychology goggles firmly in place.

And just, woooooow

These not-particularly-feminine females and probably-autogynephilic males think that they have something substantive in common (being "genderqueer"), and are paranoid about the world of hostile cis people just itching to discriminate against and misgender them

And their struggle makes sense to them, but I'm just sitting there thinking wooooow

It's all just social-exchange and coalitional instincts. There are no principles. There have never been any principles. The horror is not, "This is a cult." The horror is that everything is a cult.

Religious, Redux

"Shit! Shit! Remember how, the last time this happened to me, I described it as feeling religious?"

"Yeah."

"I was wrong! It's actually the feeling of acquiring a new religion, getting eaten by someone else's egregore. It's not that the God-shaped hole was empty before; it's that I didn't notice what it was filled with. It's tempting to describe the psychotic delusions-of-reference/anticipation-of-Heaven/fear-of-Hell state as a 'religious experience' because the process of the God-shaped hole getting filled with something new is so intense. But that's only because once the hole is filled and you feel safe again, it doesn't feel like a religion anymore; it just feels like reality."

"Friends Can Change the World"; Or, Request for Social Technology: Credit-Assignment Rituals

As a human living in a human civilization, it's tempting to think that social reality mostly makes sense. Everyone allegedly knows that institutions are flawed and that our leaders are merely flawed humans. Everyone wants to think that they're sufficiently edgy and cynical, that they've seen through the official lies to the true, gritty reality.

But what if ... what if almost no one is edgy and cynical enough? Like, the only reason you think there's a true, gritty reality out there that you can see through to is that you're a predatory animal with a brain designed by evolution to murder other forms of life for the benefit of you, your family, and your friends.

To the extent that we have this glorious technological civilization that keeps most of us mostly safe and mostly happy most of the time, it's mostly because occasionally, one of the predatory animals happens to try out a behavior that happens to be useful, and then all of her friends copy it, and then all of the animals have the behavior.

Some conceited assholes who think they're smart also like to talk about things that they think make the last five hundred years or whatever different: things like science (a social competition that incentivizes the animals to try to mirror the process of Bayesian updating), markets (a pattern of incentives that mirrors the Bayes-structure of microeconomic theory), or democracy (a corporate governance structure that mirrors the Bayes-structure of counterfactual civil war amongst equals).

These causal processes are useful and we should continue to cooperate with them. They sort of work. But they don't work very well. We're mostly still animals organized into interlocking control systems that suppress variance.

Thus—

School Is Not About Learning
Politics Is Not About Policy
Effective Altruism Doesn't Work; Try to Master Unadulterated Effectiveness First
Ideology Makes You Stupid
Status Makes You Stupid
Institutions Don't Work
Discourse Doesn't Work
Language Doesn't Work
No One Knows Anything
No One Has Ever Known Anything
Don't Read the Comments
Never Read the Comments
∀xy, x Is Not About y
X Has Never Been About Y
Enjoy Arby's

But this is crazy. Suppressing variance feels like a good idea because variance is scary (because it means very bad things could happen as well as very good things, and bad things are scarier than good things are fun) and we want to be safe. But like, the way to actually make yourself safer is by acquiring optimization power, and then spending some of the power on safety measures! And the way you acquire optimization power is by increasing variance and then rewarding the successes!
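
(A toy sketch of that last claim, in Python; everything here, from the payoff numbers to the mutation scales, is invented by me for illustration. Two populations of strategies start out identical and both copy their most successful member each generation, i.e., both "reward the successes"; the only difference is how much variance their mutations introduce.)

    import random

    def simulate(mutation_scale, generations=200, pop_size=50, seed=0):
        rng = random.Random(seed)
        population = [0.0] * pop_size  # each entry is one strategy's payoff
        for _ in range(generations):
            # every animal tries a random variation on its current behavior ...
            trials = [s + rng.gauss(0, mutation_scale) for s in population]
            # ... and then everyone copies whichever trial worked out best
            # (the "rewarding the successes" step)
            population = [max(trials)] * pop_size
        return population[0]

    print("suppressed variance:", round(simulate(mutation_scale=0.1), 1))
    print("increased variance: ", round(simulate(mutation_scale=1.0), 1))

(With these made-up numbers, the high-variance population ends up about ten times further ahead; the gains come from the successes you keep, not from the bad draws you never risked.)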

Anyway, maybe someone should be looking for social technologies that mirror the Bayes-structure of the universe sort of like how science, markets, or democracy do, but which also take into account that we're not anything remotely like agents and are instead animals that want to help our friends. ("We need game theory for monkeys and game theory for rocks.")

So, I had an idea. You know how some people say we should fund the solutions to problems with after-the-fact prizes, rather than picking a team in advance that we think might solve the problem and funding them? What if ... you did something like that, but on a much smaller scale? A personal scale.

Like, suppose you've just successfully navigated a major personal life crisis that could have gone much worse if it weren't for some of the people in your life (both thanks to direct help they provided during the crisis, and thanks to things you learned from them that made you the sort of person who could navigate the crisis successfully). These people don't and shouldn't expect a reward (that's what friends are for) ... but maybe you could reward them anyway (with a special emphasis on people who helped you in low-status ways that you didn't understand at the time) in some sort of public ritual, to make them more powerful and incentivize others to emulate them, thereby increasing the measure of algorithms that result in humans successfully navigating major personal life crises.

It might look something like this (there's a toy bookkeeping sketch in Python after the list)—

  • If you have some spare money lying around, set aside some of it for rewarding the people you want to reward. If you don't have any spare money lying around, this ritual will be less effective! Maybe you should fix that!

  • Decide how much of the money you want to use to reward each of the people you want to reward.

(Note: giving away something as powerful as money carries risks of breeding dependence and resentment if such gifts come to be expected! If people know that you've been going through a crisis and anyone so much as hints that they think they deserve an award, that person is missing the point and therefore does not deserve an award.)

  • Privately go to each of the people, explain all this, and give them the amount of money you decided to give them. Make it very clear that this is a special unilateral one-time award made for decision-theoretic reasons and that it's very important that they accept it in the service of your mutual coherent extrapolated volition in accordance with the Bayes-structure of the universe. Refuse to accept words of thanks (it's not about you; it's not about me; it's about credit-assignment). If they try to refuse the money, explain that you will literally burn that much money in paper currency if they don't take it. (Shredding instead of burning is also acceptable.)

  • Ask if they'd like to be publicly named and praised as having received an award as part of the credit-assignment ritual. (Remember that it's quite possible and understandable and good that they might want to accept the money, but not be publicly praised by you. After all, if you're the sort of person who is considering actually doing this, you're probably kind of weird! Maybe people don't want to be associated with you!)

  • To complete the ritual, publish a blog post naming the people and the awards they received. People who preferred not to be named should be credited as Anonymous Friend A, B, C, &c. Also list the amount of money you burned or shredded if anyone foolishly rejected their award in defiance of the Bayes-structure of the universe. Do not explain the nature of the crisis or how the named people helped you. (You might want to tell the story in a different post, but that's not part of the ritual, which is about credit-assignment.)
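
(For the terminally literal-minded, a toy Python sketch of the bookkeeping for that last step. The Award record, the example names, and the amounts are all hypothetical placeholders of mine, not part of the ritual; the point is just the shape of the final post: named awardees, lettered anonymous awardees, and a burn total, with no mention of the crisis itself.)

    from dataclasses import dataclass
    from string import ascii_uppercase

    @dataclass
    class Award:
        recipient: str         # private name; only published if `public` is True
        amount: float          # in your favorite paper currency
        public: bool           # consented to being named and praised?
        refused: bool = False  # foolishly refused, so the money gets burned/shredded

    def credit_assignment_post(awards):
        lines = ["Credit-assignment ritual: awards"]
        letters = iter(ascii_uppercase)
        burned = 0.0
        for award in awards:
            if award.refused:
                burned += award.amount
                continue
            name = award.recipient if award.public else f"Anonymous Friend {next(letters)}"
            lines.append(f"  {name}: ${award.amount:.2f}")
        if burned:
            lines.append(f"  Burned or shredded (refused awards): ${burned:.2f}")
        return "\n".join(lines)

    print(credit_assignment_post([
        Award("Alice Example", 500.00, public=True),
        Award("Bob Example", 300.00, public=False),
        Award("Carol Example", 200.00, public=True, refused=True),
    ]))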

An Intuition on the Bayes-Structural Justification for Free Speech Norms

We can metaphorically (but like, hopefully it's a good metaphor) think of speech as being the sum of a positive-sum information-conveying component and a zero-sum social-control/memetic-warfare component. Coalitions of agents that allow their members to convey information amongst themselves will tend to outcompete coalitions that don't, because it's better for the coalition to be able to use all of the information it has.

Therefore, if we want the human species to better approximate a coalition of agents who act in accordance with the game-theoretic Bayes-structure of the universe, we want social norms that reward or at least not-punish information-conveying speech (so that other members of the coalition can learn from it if it's useful to them, and otherwise ignore it).
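
(One toy way to write that intuition down; the notation here is entirely mine, not anything standard. Let V_j(u) be the value of an utterance u to coalition member j, split into the two components:)

    % Toy decomposition (my notation): the value of utterance u to member j
    V_j(u) = I_j(u) + W_j(u), \qquad
    \sum_j I_j(u) \ge 0 \quad \text{(information is weakly positive-sum: listeners can ignore it)}, \qquad
    \sum_j W_j(u) = 0 \quad \text{(social control just moves status around)}.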

It's tempting to think that we should want social norms that punish the social-control/memetic-warfare component of speech, thereby reducing internal conflict within the coalition and forcing people's speech to mostly consist of information. This might be a good idea if the rules for punishing the social-control/memetic-warfare component are very clear and specific (e.g., no personal insults during a discussion about something that's not the person you want to insult), but it's alarmingly easy to get this wrong: you think you can punish generalized hate speech without any negative consequences, but you probably won't notice when members of the coalition begin to slowly gerrymander the hate speech category boundary in the service of their own values. Whoops!

Dreaming of Political Bayescraft

My old political philosophy: "Socially liberal, fiscally confused; I don't know how to run a goddamned country (and neither do you)."

Commentary: Pretty good, but not quite meta enough.

My new political philosophy: "Being smart is more important than being good (for humans). All ideologies are false; some are useful."

Commentary: Social design space is very large and very high-dimensional; the forces of memetic evolution are somewhat benevolent (all ideas that you've heard of have to be genuinely appealing to some feature of human psychology, or no one would have an incentive to tell you about them), but really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves! Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected! This stance is itself, technically, loyalty to an idea, but hopefully it's a sufficiently meta idea to avoid running into the standard traps while also being sufficiently object-level to have easily-discoverable decision-relevant implications and not run afoul of the principle of ultrafinite recursion ("all infinite recursions are at most three levels deep").