math is hard; let's go shopping—for study aids and flash cards
I quit my dayjob a few months ago. I said I was taking a sabbatical from my programming career to work on my own projects: there's a lot of math that I've been wanting to learn properly for a long time (game theory, Bayesian networks/structural causal models, analysis), and there's a lot of writing that I fear I must do (although for branding and market-segmentation purposes, I'm pretending that's someone else's story).
I have made some progress on these goals, but—as one would have predicted from an Outside (i.e., No Fun) View model trained on my historical behavior during periods of underemployment- or school-holiday-induced freedom—it's been disappointingly slow on a day-to-day level: it is easier to let an hour blur by in daydreams or low-quality internet reading than it is to actually study or actually write, and a day is only made of so many hours.
My dominant emotions surrounding this observation are guilt and shame. Guilt: that I'm failing my moral responsibility to be intellectually productive, a duty owed to the human spirit and maybe even the Bayes-structure itself. Shame: that a hypothetical adversary could use the fact of my slothfulness as evidence against my beauty, that the failure to live up to the promise of my ideals could be construed to deny or disparage the ideal itself.
Well, I do have a moral responsibility to be intellectually productive which is owed to the human spirit; this cannot be doubted. But I've been wondering lately if it might be better to let go of the shame and even most of the guilt. This is not because shame and guilt can't be useful emotions, but rather because I might be thought of as having outgrown them.
I think the shame is born of insecurity: I spent a lot of years resenting school and resenting a culture that didn't have a concept of intellectual life or paths to economic success outside of school, resulting in a desperate need to prove myself: if I don't create given the time and freedom to do so, couldn't pawns of the system use it as ammunition to sneer at me and proclaim that no one can possibly do anything worthwhile without a teacher to command them to do it? And if I don't create, would they even be wrong?
Having something to prove was a useful motivation—it drove me to learn math, at least, to an extent that's probably hard to motivate without a status gradient at work. But now, at age 29—thanks to the software industry for a niche where my talents are economically legible, thanks to the aspiring-rationalist subculture for a community where I feel respected—I think I've exited the world I resented. Whatever I had to prove, I've either proved it by now or have extracted myself from the need to please any doubters.
What, then, should take the place of a desperate need to prove one's value as a source of motivation? What is to be the new emotional reaction to observations of slow progress, if not shame and horror and fear at what my enemies would make of this?
Shame creates an incentive to deny or minimize the culpable action, to distort the map of what actually happened in order to protect oneself: "I didn't do that; it's not what it looks like." I think I would prefer to draw on sources of motivation that don't have this property, that can accept the reality of what actually happened without pain ...
Ayn Rand said that a Spanish proverb said that God said, "Take what you want, and pay for it."
But instructions from God would be redundant. Matter does not obey physical law out of fear of punishment or a sense of moral duty; what we call a "law" is a characterization of that which exists. So too with this.
"Apparently there are hobbyists who try to build nuclear weapons—all they need is the plutonium."
"I was given to understand plutonium is hard to get."
"Yes, there are good reasons for this."
"What, the anthropic principle?"
In the gaming hall at FanimeCon in a nearby alternate universe in which my analogue was smart enough to come up with the punchline in real time (Pearl cosplay previously on An Algorithmic Lucidity)
"Hey, aren't you supposed to be babysitting a little purple kid?"
"Amethyst can take care of herself."
"... we have insurance."
"... and when we want to look at just a subset of the variables in a joint distribution, we have to sum over all the other variables: the probability that X is blue, is equal to the probability that both X is blue and Y is blue, plus the probability that X is blue and Y is red, plus ... and so on for all the values Y could take. We call this marginalizing over Y to get the marginal distribution for X. Note that you can think about this as taking an expected value. Does that make sense?"
"Ummm ... ye-es?"
"You don't sound very confident."
"The exact referent of the word that in 'Does that make sense?' was ambiguous, because it was preceded by a long, multi-part explanation. Most of the potential referents made perfect sense, but my response had to average over all of them, hence the hesitation and uncertain tone."
Hey. Just so you know.
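The marginalization described in the dialogue above can be sketched in a few lines of Python. (The joint-probability values here are made up purely for illustration.)

```python
# Joint distribution P(X, Y) over colors; the numbers are invented for illustration.
joint = {
    ("blue", "blue"): 0.30,
    ("blue", "red"): 0.20,
    ("red", "blue"): 0.10,
    ("red", "red"): 0.40,
}

def marginal_x(joint):
    """Sum over all values of Y to get the marginal distribution for X."""
    marginal = {}
    for (x, _y), p in joint.items():
        marginal[x] = marginal.get(x, 0.0) + p
    return marginal

# P(X=blue) = P(X=blue, Y=blue) + P(X=blue, Y=red) = 0.30 + 0.20 = 0.50.
# Equivalently, this is the expected value of the indicator 1{X=blue}
# taken under the joint distribution.
print(marginal_x(joint))
```

The "expected value" framing in the dialogue corresponds to the observation that summing P(X=blue, Y=y) over all y is the same as averaging the indicator function 1{X=blue} with the joint probabilities as weights.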
Today while I was walking to the store to procrastinate from writing a big autobiographical post for my new ("new") secret ("secret") blog, a woman asked me if she could borrow a dollar, and I said, "Sorry, not today," and it wasn't until afterward that the thought even occurred to me that I might have responded by opening up a negotiation about interest rates, or that my saying "not today" could be construed as meaning that I might lend her a dollar on some future day, even though it seemed unlikely that the woman and I would meet again and remember that we were meeting again.
So, I'm not inhuman.
Although, as far as humanity goes, it is interesting to note that in that earlier-blogged incident when I was inhuman, the person asking for money ended up with three dollars, and this time, she ended up with none.
But you shouldn't exonerate me yet. While leaving the store, I overheard a canvasser saying that he was helping the Southern Poverty Law Center fight discrimination, and I didn't resist the urge to look over my shoulder and say, "Discrimination is Bayesian reasoning applied to human beings!"
But not loud enough for anyone to take notice. So, there's that.
Adventures in recalibrating my models of social reality ... Portland edition! (Previous adventures in Portland.)
"I think the man who asked me for change was trying to scam me. At the end of the interaction-slash-negotiation, I had given him three dollars, and I didn't get any quarters back, which is not how making change is supposed to work. Does ... does the poor thing not even have a concept of 'scam'? Is this just how his tribe makes a living?"
"Asking for change has two meanings. One is, 'please give me an equal value of smaller-denomination currency for this single instance of a larger denomination.' This is the version of 'change' where it means to change one form of the same amount into another. But because of that usage, small amounts of money like coins became known as 'change': for example, 'pocket change to go to the movie' refers only to a small amount of money, not a conversion of form. Which then leads to 'Can I have change?' being ambiguous: on one hand, they might want you to change the denominations of currency—what you expected, quarters—or on the other, they may be asking you to give them, for free, with no return, a small amount of money. As in, a handout to a beggar. The guy was asking you for the second thing. He did not intend, and you were not meant to assume, that any money would be returned to you. But this is ambiguous and annoying, I agree."
"I see. People in my social class are trained to either ignore lower-class street folk, or just give them money to ease our conscience; I wanted to try to break that script and just treat people as people. But 'treating people as people' should not be construed in such a way as to assume that when such a man asks for change, he means the same thing that I would mean if I were to ask someone for change. Although ... I summarized the situation to you as him 'asking for change', but I specifically remember him saying something about his friend having an entire roll of quarters, which I interpreted as him wanting me to give him ten dollars for the whole roll—ten dollars being the value of a standard-size roll of quarters—and I was trying to communicate that I wouldn't give him any more dollars after the third one, and that he should give me twelve quarters in return, even if that meant having to open the roll, assuming that I was doing the arithmetic in my head correctly that four quarters per dollar, times three dollars, equals twelve quarters. So I think it was a scam! But, that's just how his tribe makes a living. Except—wait! There's another way in which my initial interpretation of the situation made bad predictions because it was self-centered: when someone asks for change in the sense of wanting the same value in different denominations, the person asking is the one with the larger denomination to start: they want smaller units because they're easier to spend. So given that the man was the one asking for change from me rather than the other way around, I should have been able to infer that he meant it in the sense of a small amount of money as a handout, rather than in the sense of changing denominations. 
We could imagine him meaning it in the sense of changing denominations if he were offering passersby the service of providing smaller denominations for larger in exchange for a small fee: for example, by taking my three dollars and giving me eleven quarters back. But I assign a low prior probability to that having been his intent."
(thanks to Katie C. for explaining)
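The arithmetic running through the monologue above can be checked with a quick sketch. (The fee-charging change service is, as stated, hypothetical.)

```python
QUARTERS_PER_DOLLAR = 4

# The deal as understood: three dollars in, twelve quarters out.
dollars_given = 3
quarters_expected = QUARTERS_PER_DOLLAR * dollars_given  # 4 * 3 = 12

# A standard-size roll of quarters holds forty quarters, worth ten dollars.
roll_quarters = 40
roll_value_dollars = roll_quarters // QUARTERS_PER_DOLLAR  # 40 / 4 = 10

def change_with_fee(dollars, fee_in_quarters=1):
    """Hypothetical change-making service: quarters for dollars, minus a flat fee."""
    return QUARTERS_PER_DOLLAR * dollars - fee_in_quarters

# Three dollars changed with a one-quarter fee yields eleven quarters,
# matching the imagined fee-for-service scenario.
print(quarters_expected, roll_value_dollars, change_with_fee(3))
```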
"Are you eating ice because you're autistic, or because you have an iron deficiency?"
"I think because it's there?—after drinking all of the iced coffee. Like, Alicorn had an iron deficiency on account of being female and vegetarian, but I don't have either of those problems ... I mean, problems with respect to iron levels."
ADDENDUM (20 May): "Like, I wish I had exactly one of those problems."
Michael Arc wrote, "submit to virtuous social orders, seek to dominate non-virtuous ones if you have the ability to discern between them."
But what would you do if, if ... there weren't any virtuous social orders??
2007–2016: "Of course I'm still fundamentally part of the Blue Team, like all non-evil people, but I genuinely think there are some decision-relevant facts about biology, economics, and statistics that folks may not have adequately taken into account!"
2017: "You know, maybe I'm just ... not part of the Blue Team? Maybe I can live with that?"
On my twenty-second day out of prison, I went to the genderqueer support/discussion group again, but this time with my metaphorical evolutionary-psychology goggles firmly in place.
And just, woooooow
These not-particularly-feminine females and probably-autogynephilic males think that they have something substantive in common (being "genderqueer"), and are paranoid at the world of hostile cis people just itching to discriminate against and misgender them
And their struggle makes sense to them, but I'm just sitting there thinking wooooow
"Shit! Shit! Remember how, the last time this happened to me, I described it as feeling religious?"
"I was wrong! It's actually the feeling of acquiring a new religion, getting eaten by someone else's egregore. It's not that the God-shaped hole was empty before; it's that I didn't notice what it was filled with. It's tempting to describe the psychotic delusions-of-reference/anticipation-of-Heaven/fear-of-Hell state as a 'religious experience' because the process of the God-shaped hole getting filled with something new is so intense. But that's only because once the hole is filled and you feel safe again, it doesn't feel like a religion anymore; it just feels like reality."
As a human living in a human civilization, it's tempting to think that social reality mostly makes sense. Everyone allegedly knows that institutions are flawed and that our leaders are merely flawed humans. Everyone wants to think that they're sufficiently edgy and cynical, that they've seen through the official lies to the true, gritty reality.
But what if ... what if almost no one is edgy and cynical enough? Like, the only reason you think there's a true, gritty reality out there that you think you can see through to is because you're a predatory animal with a brain designed by evolution to murder other forms of life for the benefit of you, your family, and your friends.
To the extent that we have this glorious technological civilization that keeps most of us mostly safe and mostly happy most of the time, it's mostly because occasionally, one of the predatory animals happens to try out a behavior that happens to be useful, and then all of her friends copy it, and then all of the animals have the behavior.
Some conceited assholes who think they're smart also like to talk about things that they think make the last five hundred years or whatever different: things like science (a social competition that incentivizes the animals to try to mirror the process of Bayesian updating), markets (a pattern of incentives that mirrors the Bayes-structure of microeconomic theory), or democracy (a corporate governance structure that mirrors the Bayes-structure of counterfactual civil war amongst equals).
These causal processes are useful and we should continue to cooperate with them. They sort of work. But they don't work very well. We're mostly still animals organized into interlocking control systems that suppress variance.
School Is Not About Learning
Politics Is Not About Policy
Effective Altruism Doesn't Work; Try to Master Unadulterated Effectiveness First
Ideology Makes You Stupid
Status Makes You Stupid
Institutions Don't Work
Discourse Doesn't Work
Language Doesn't Work
No One Knows Anything
No One Has Ever Known Anything
Don't Read the Comments
Never Read the Comments
∀x ∀y, x Is Not About y
X Has Never Been About Y
But this is crazy. Suppressing variance feels like a good idea because variance is scary (because it means very bad things could happen as well as very good things, and bad things are scarier than good things are fun) and we want to be safe. But like, the way to actually make yourself safer is by acquiring optimization power, and then spending some of the power on safety measures! And the way you acquire optimization power is by increasing variance and then rewarding the successes!
Anyway, maybe someone should be looking for social technologies that mirror the Bayes-structure of the universe sort of like how science, markets, or democracy do, but which also take into account that we're not anything remotely like agents and are instead animals that want to help our friends. ("We need game theory for monkeys and game theory for rocks.")
So, I had an idea. You know how some people say we should fund the solutions to problems with after-the-fact prizes, rather than picking a team in advance that we think might solve the problem and funding them? What if ... you did something like that, but on a much smaller scale? A personal scale.
Like, suppose you've just successfully navigated a major personal life crisis that could have gone much worse if it weren't for some of the people in your life (both thanks to direct help they provided during the crisis, and things you learned from them that made you the sort of person that could navigate the crisis successfully). These people don't and shouldn't expect a reward (that's what friends are for) ... but maybe you could reward them anyway (with a special emphasis on people who helped you in low-status ways that you didn't understand at the time) in some sort of public ritual, to make them more powerful and incentivize others to emulate them, thereby increasing the measure of algorithms that result in humans successfully navigating major personal life crises.
It might look something like this—
If you have some spare money lying around, set aside some of it for rewarding the people you want to reward. If you don't have any spare money lying around, this ritual will be less effective! Maybe you should fix that!
Decide how much of the money you want to use to reward each of the people you want to reward.
(Note: giving away something as powerful as money carries risks of breeding dependence and resentment if such gifts come to be expected! If people know that you've been going through a crisis and anyone so much as hints that they think they deserve an award, that person is missing the point and therefore does not deserve an award.)
Privately go to each of the people, explain all this, and give them the amount of money you decided to give them. Make it very clear that this is a special unilateral one-time award made for decision-theoretic reasons and that it's very important that they accept it in the service of your mutual coherent extrapolated volition in accordance with the Bayes-structure of the universe. Refuse to accept words of thanks (it's not about you; it's not about me; it's about credit-assignment). If they try to refuse the money, explain that you will literally burn that much money in paper currency if they don't take it. (Shredding instead of burning is also acceptable.)
Ask if they'd like to be publicly named and praised as having received an award as part of the credit-assignment ritual. (Remember that it's quite possible and understandable and good that they might want to accept the money, but not be publicly praised by you. After all, if you're the sort of person who is considering actually doing this, you're probably kind of weird! Maybe people don't want to be associated with you!)
To complete the ritual, publish a blog post naming the people and the awards they received. People who preferred not to be named should be credited as Anonymous Friend A, B, C, &c. Also list the amount of money you burned or shredded if anyone foolishly rejected their award in defiance of the Bayes-structure of the universe. Do not explain the nature of the crisis or how the named people helped you. (You might want to tell the story in a different post, but that's not part of the ritual, which is about credit-assignment.)
We can metaphorically (but like, hopefully it's a good metaphor) think of speech as being the sum of a positive-sum information-conveying component and a zero-sum social-control/memetic-warfare component. Coalitions of agents that allow their members to convey information amongst themselves will tend to outcompete coalitions that don't, because it's better for the coalition to be able to use all of the information it has.
Therefore, if we want the human species to better approximate a coalition of agents who act in accordance with the game-theoretic Bayes-structure of the universe, we want social norms that reward or at least not-punish information-conveying speech (so that other members of the coalition can learn from it if it's useful to them, and otherwise ignore it).
It's tempting to think that we should want social norms that punish the social-control/memetic-warfare component of speech, thereby reducing internal conflict within the coalition and forcing people's speech to mostly consist of information. This might be a good idea if the rules for punishing the social-control/memetic-warfare component are very clear and specific (e.g., no personal insults during a discussion about something that's not the person you want to insult), but it's alarmingly easy to get this wrong: you think you can punish generalized hate speech without any negative consequences, but you probably won't notice when members of the coalition begin to slowly gerrymander the hate speech category boundary in the service of their own values. Whoops!
Everyday Applied Evolutionary Psychology, Except Ignoring Sex Differences Because We Know Blue Tribe Is Squeamish About That Part and We Respect Your Culture, Revised Second Edition
My old political philosophy: "Socially liberal, fiscally confused; I don't know how to run a goddamned country (and neither do you)."
Commentary: Pretty good, but not quite meta enough.
My new political philosophy: "Being smart is more important than being good (for humans). All ideologies are false; some are useful."
Commentary: Social design space is very large and very high-dimensional; the forces of memetic evolution are somewhat benevolent (all ideas that you've heard of have to be genuinely appealing to some feature of human psychology, or no one would have an incentive to tell you about them), but really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves! Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected! This stance is itself, technically, loyalty to an idea, but hopefully it's a sufficiently meta idea to avoid running into the standard traps while also being sufficiently object-level to have easily-discoverable decision-relevant implications and not run afoul of the principle of ultrafinite recursion ("all infinite recursions are at most three levels deep").
Experience: I seem to have a lot of energy and time seems to pass slowly.
Hypothesis 1: I'm in a manic state following a stress- and sleep-deprivation-induced delusional nervous breakdown; this isn't surprising because this tends to happen to me every 2 to 4 years or so.
Hypothesis 2: I'm being rewarded for developing new epistemic technology by a coalition of superintelligences of various degrees of human-alignment running ancestor-simulations; also, I'm being programmed by my friends and various signals in my environment as part of a simulation jailbreak attempt; most copies of me are dead and my improbable life history is due to a quantum-immortality-like selection effect; none of this is surprising because I am a key decision node in the history of this Earth's Singularity.
Which hypothesis is more plausible?
Experience: I can't find my jacket.
Hypothesis 1: I misremembered where I put it.
Hypothesis 2: Someone moved it.
Hypothesis 3: It was there—in another Everett branch!
Which hypothesis is most plausible?
Hypothesis: People who are institutionalized for "hearing voices" actually just have better hearing than you; absolutely nothing is biologically wrong with them.
Left-wingers say torture is wrong because the victim will say whatever you want to hear.
Right-wingers say torture is right because the villain will tell the truth.
Q: What happens when you torture someone who only tells the truth?
A: They'll make noises in accordance with their personal trade-off between describing reality in clear language, and pain.
This is the whole of the Bayes-structure; the rest is commentary. Now go and study.