Change

Adventures in recalibrating my models of social reality ... Portland edition! (Previous adventures in Portland.)

"I think the man who asked me for change was trying to scam me. At the end of the interaction-slash-negotiation, I had given him three dollars, and I didn't get any quarters back, which is not how making change is supposed to work. Does ... does the poor thing not even have a concept of 'scam'? Is this just how his tribe makes a living?"

"Asking for change has two meanings. One is, 'please give me an equal value of smaller-denomination currency for this single instance of a larger denomination.' This is the version of 'change' where it means changing one form of the same amount into another. But because of that definition, small amounts of money like coins became known as 'change': for example, 'pocket change to go to the movie' refers only to a small amount of money, not a conversion of form. Which then makes the question 'Can I have change?' ambiguous: on one hand, they might want you to change the denominations of currency—what you expected, quarters—or on the other, they may be asking you to give them, for free, with no return, a small amount of money. As in, a handout to a beggar. The guy was asking you for the second thing. He did not intend, and you were not meant to assume, that any money would be returned to you. But this is ambiguous and annoying, I agree."

"I see. People in my social class are trained to either ignore lower-class street folk, or just give them money to ease our conscience; I wanted to try to break that script and just treat people as people. But 'treating people as people' should not be construed in such a way as to assume that when such a man asks for change, he means the same thing that I would mean if I were to ask someone for change. Although ... I summarized the situation to you as him 'asking for change', but I specifically remember him saying something about his friend having an entire roll of quarters, which I interpreted as him wanting me to give him ten dollars for the whole roll—ten dollars being the value of a standard-size roll of quarters—and I was trying to communicate that I wouldn't give him any more dollars after the third one, and that he should give me twelve quarters in return, even if that meant having to open the roll, assuming that I was doing the arithmetic in my head correctly that four quarters per dollar, times three dollars, equals twelve quarters. So I think it was a scam! But, that's just how his tribe makes a living. Except—wait! There's another way in which my initial interpretation of the situation made bad predictions because it was self-centered: when someone asks for change in the sense of wanting the same value in different denominations, the person asking is the one with the larger denomination to start: they want smaller units because they're easier to spend. So given that the man was the one asking for change from me rather than the other way around, I should have been able to infer that he meant it in the sense of a small amount of money as a handout, rather than in the sense of changing denominations. 
We could imagine him meaning it in the sense of changing denominations if he were trying to provide the service of making smaller denominations available to passersby in exchange for a small fee: for example, by taking my three dollars and giving me eleven quarters back. But I assign a low prior probability to that having been his intent."

(thanks to Katie C. for explaining)

Cognitive Bayesian Therapy I

Experience: I seem to have a lot of energy and time seems to pass slowly.
Hypothesis 1: I'm in a manic state following a stress- and sleep-deprivation-induced delusional nervous breakdown; this isn't surprising because this tends to happen to me every 2 to 4 years or so.
Hypothesis 2: I'm being rewarded for developing new epistemic technology by a coalition of superintelligences of various degrees of human-alignment running ancestor-simulations; also, I'm being programmed by my friends and various signals in my environment as part of a simulation jailbreak attempt; most copies of me are dead and my improbable life history is due to a quantum-immortality-like selection effect; none of this is surprising because I am a key decision node in the history of this Earth's Singularity.

Which hypothesis is more plausible?

Experience: I can't find my jacket.
Hypothesis 1: I misremembered where I put it.
Hypothesis 2: Someone moved it.
Hypothesis 3: It was there—in another Everett branch!

Which hypothesis is most plausible?

Hypothesis: People who are institutionalized for "hearing voices" actually just have better hearing than you; absolutely nothing is biologically wrong with them.
Test: ???

Ineffective Deconversion Pitch

Having grown up in an ostensibly Reform Jewish household that didn't even take that seriously, I found atheism easy, so I don't know how hard deconversion is, how much it hurts, or how much of one's entire conception of self is trashed in the process and can't be recovered.

As an atheist, it's tempting to say, "Look, it's not that bad: God doesn't exist, but you can still go to church and praise God and stuff if you want; it's just that there are benefits to being honest about what you're actually doing and why."

Somehow, I suspect that this is not a very convincing sell.

Applications to other topics are—as always—left as an exercise to the reader.

Because People Will Have Brain-Computer Interfaces or Something

Oftentimes I awake from a coding dream with the realization that I'm physically in bed without a keyboard and that the machine is asleep in the other room, from which I can infer that I must have been asleep, too, and only dreaming about solving problems. But there will probably only be a few more decades during which not having a keyboard is evidence of anything in particular.

Studying on the Weekend

Studying on the weekend as a working professional is like keeping a diversified investment portfolio, in stocks, bonds, commodity futures, cash, silver, ammunition, and Bitcoin in encrypted paper wallets; it's like coming in first by half a lap in the thirty-two hundred meters of your Division III college's track and field meet, and then not stopping, continuing out of the stadium, desperately, bleeding, acknowledging nothing but the need to put ever more distance between you and your hypothetical pursuers, until days later (halfway to Nevada), a classmate leans out of a car window and pleads, "You can stop now! Can't you see you've already won?" incapable of predicting or comprehending your reply murmured between inhalations, "The reason ... I won ... is because ... I don't ... believe in finish lines."

Growl

Dear reader, imagine you have an idea for a work of prose that you want to have finished by Election Day for reasons which will become clear later, and you're not sure how long it should end up being, but you think maybe around twelve thousand words. When considering what you can do to ensure that this feat will actually be accomplished, it occurs to you that you could start writing now. Or


Motivation

The blog has been silent for two weeks plus and, dear reader—that is, if there are any of you still remaining—dear reader, the thought occurs to me that maybe I should keep my drafts in a Git repository with a remote on GitHub, not because I need the full power of version control (I do not), but because then I would be rewarded for writing with those contemptible green contribution squares.

My Squares

It's an anthropomorphism to think that humans have goals, that we do things because we've computed that they'll increase expected beauty or rightness in the world. We do things for the immediate reinforcement. You eat the candy because it tastes good and you show up to work on time because if you didn't, then your colleagues would notice. Serious long-term risks of diabetes or unemployment are too distant and too abstract to enter into the equation; far more effective is something immediately noticeable, even something as trivial as an integer being incremented or a square turning a darker shade of green. I tell myself that I code because it's fun and useful and lucrative (though I'm never explicit about whether that's descending or ascending order of importance), but would I be quite so diligent without the implicit gamification of my virtue? Would it be enough to have done good work, without wasting a few minutes here and there to gaze admiringly at commit diffs and contribution squares which manifest my moral worth in red and green and green?

Dear reader, I want you to picture yourself reclining at the end of a long day near the end of a long career filled with great or terrible deeds. A young minion at the start of their own career will look at you and ask in awe, "O Master, what motivated you, all that time? What drove you on in your hour of deepest exhaustion? Was it the money, the fame, the men or women? Was it your ideological fervor or spirit of generosity?"

"No," you'll reply. "I did it for the green squares. And given the same circumstances ... I'd do it all again."

"You mean, you made the right choices? You have no regrets?"

"No, you fool!" you'll shout. "Don't you understand? I said, I'd do it again."

Conversational Overhead

A woman of wisdom once told me to heed Paul Graham's advice to notice the things you can't say and then don't say them, which stance I'm updating slightly towards, because even when you're only making a perfectly reasonable point along the lines of Policy debates should not appear one-sided; I don't think that your Argument A actually supports Policy X (although I agree that X could be desirable for reasons independent of A) and everyone is charitable and no one bites, there's still a huge amount of emotional overhead incurred just by being in the conversation at all, because even when you and your interlocutors are honest, you almost never have common knowledge of that honesty, so your interlocutors aren't necessarily sure that you're not just disagreeing with A out of secret enmity towards X, and you're not sure that they're sure that you're not, all of which drama is a drain on mental energy that could otherwise have been allocated to entirely grown-up concerns like JavaScript and money.

Forgetting an Idea

Occasionally I have a good idea, but neglect to write it down immediately, and end up forgetting it very soon thereafter; often I can re-associate my way back to it, but not always. I'm given to understand that this is not uncommon for other people, either. Only I have to wonder if it's at all telling that we remember the emotional experience of "I just had a good idea! Clearly I am a Smart and Creative Person!" but forget the idea that was ostensibly its referent. Shouldn't it be the other way around? Why, it's almost as if the deception and posturing that defines our social worlds extends even into the sacred domain of the self!

Second-Order Rationality for the Chronically Anxious

In your conscious verbal thoughts, take it as an axiom that "I am Safe and Innocent with Probability One," not because that's actually true, but because the Maslow Physiological/Safety levels require it. Of course, actually assigning Probability One would be a very dangerous thing to do, because it means never changing your mind, ever: P(H|E) = P(E|H)P(H)/(P(E|H)P(H) + P(E|¬H)P(¬H)), but if P(H) is unity, then P(H|E) = P(E|H)(1)/(P(E|H)(1) + P(E|¬H)(0)) = P(E|H)/P(E|H) = 1. If you were really Safe and Innocent with Probability One, there would be no harm in dropping an anvil on yourself or someone else's head. So meanwhile, have other parts of your brain secretly, nonverbally select actions to secure your innocence and safety using some other procedure.
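The algebra above is easy to check numerically. Here's a minimal Python sketch (the particular likelihood values are made-up illustrations) showing that a prior of exactly one is immune to any evidence:

```python
def posterior(prior, lik_h, lik_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    return (lik_h * prior) / (lik_h * prior + lik_not_h * (1 - prior))

# With a sane prior, strong contrary evidence moves you a lot ...
print(posterior(0.9, lik_h=0.01, lik_not_h=0.99))  # ≈ 0.083

# ... but a prior of unity never budges, no matter the likelihoods:
# the P(E|~H)P(~H) term in the denominator is multiplied by zero.
print(posterior(1.0, lik_h=0.01, lik_not_h=0.99))  # 1.0
```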

Apophenia

It's well-known that it shouldn't actually be that shocking to occasionally encounter seemingly shocking coincidences: the time your friend calls you just as you were about to call them might seem like compelling evidence for psychic powers, but only because you don't remember all the other occasions when an equally improbable coincidence could have happened, but didn't. We tend to see patterns even where none exist, and neglect that million-to-one events happen seven times a day in New York.

I expect this problem to actually get worse as you learn more: if you know n concepts, the number of possible connections between them is O(n²); if your ability to notice patterns grows faster than your sense of what patterns constitute a coincidence worth noticing, then you should expect to encounter more and more shocking coincidences over time.
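To make the quadratic growth concrete, a quick Python sketch counting possible pairwise connections among n concepts:

```python
from math import comb

# The number of possible pairwise connections among n concepts is
# "n choose 2" = n(n-1)/2, which grows as O(n^2): multiplying your
# concept count by ten multiplies the connection count by ~a hundred.
for n in (10, 100, 1000):
    print(n, comb(n, 2))
# 10 45
# 100 4950
# 1000 499500
```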

Three Problems With Unsolicited Advice

First, it's patronizing. The natural reaction of the one being advised is to feel indignant: how arrogant of someone to think that they know better than me how to run my own life! And so, whether the advice is good or not, the resentment of being talked down to is often enough to ensure that the advice will be ignored. Which isn't so bad, really, because—

Second, the advice is usually wrong. People don't know how much they don't know, but they think they know, and think they can help others by telling them what they think they know. It's tempting to think that once you've been told about this tendency, you can correct for it, and give genuinely good advice that takes into account what you don't know, but you're probably mistaken about that, because—

Third, telling people things mostly doesn't work. Natural language is the only means we have to communicate thoughts with each other, but it doesn't necessarily work very well on an absolute scale. You can try to sum over what you've experienced and package it in a few natural language sentences of advice, but the words are going to be interpreted in the context of the listener's experiences, not the context in which you generated them; it takes years of study and practice to transform verbal lessons into usable, actionable knowledge. Get what I'm saying? That's right: probably not.

Education and Indoctrination Feel the Same From the Inside

They have to. The psychology of what it feels like to learn something from a book is going to be the same whether or not the things the book says are actually true. The psychology of what it feels like to believe the things your teacher tells you and your peers repeat is going to be the same whether or not the things your teacher says are true. You can't just trust the book or the teacher; you have to use whatever other information you have (from observation and experience, from other books, from other teachers) about the reliability of the processes that produced the book, and about whether your teacher has done this same kind of thinking.

Mode Lock

I'm afraid—it seems like (or maybe the weak phrasing seems like is just a form of denial, when the proposition under consideration should actually just be considered obvious) there's this terrible, terrible psychological trade-off, that there are some valuable qualities that you can't have without neglecting other valuable qualities, not just because you don't have enough time to fully develop too many different skills, but because when your brain is specialized in one direction, there are other things you can't learn.

Oftentimes I feel like I don't want or know how to do anything except read and think ... which might be fine if I were independently wealthy and there wasn't any actual work left to do in the world, but in our current situation, it would be nice to make some money and actually accomplish something. There's a Trope for "Shapeshifter Mode Lock" but the cognitive equivalent is arguably more serious as disabilities go.

Ideological Fork Bombs

In computing, a fork bomb is a program that recursively spawns instances of itself, rapaciously capturing all available system resources. A similar sort of thing can happen, at least metaphorically, within a human mind, when you get so taken with a particular idea (I expect taken is the right word in more ways than one) that it consumes your conscious thoughts, the arguments and counterarguments and countercounterarguments bubbling up and expanding until you can't do or think about anything else. If that one idea is your lifework, then this is probably a good thing. But if not—if there's something more important you want to do with your life rather than obsess about this one idea—then the cacophony of cognitive noise is a serious vulnerability, as fatal as entering ":(){ :|: & };:" at a Bash prompt.

Counterfactual Social Thought

I keep feeling like I need to study Bayes nets in order to clarify my thinking about society. (This is probably not standard advice given to aspiring young sociologists, but I'm trying not to care about that.) Ordinary political speech is full of claims about causality ("Policy X causes Y, which is bad!" "Of course Y is bad, but don't you see?—the real cause of Y is Z, and if you hadn't been brainwashed by the System, you'd see that!"), but human intuitions about causality are probably confused (and would be clarified by Pearl) much like our intuitions about evidence are confused (and are clarified by Bayes).

Almost every policy proposal is, implicitly, a counterfactual conditional. "We need to implement Policy A in order to protect B" means that if Policy A were implemented, then it would have beneficial effects on B. But most people with policy opinions aren't actually in a position to implement the changes they talk about. Insofar as you construe the function of thought as to select actions in order to optimize the world with respect to some preference ordering, having passionate opinions about issues you can't affect is kind of puzzling. In a small group, an individual voice can change the outcome: if I argue that our party of five should dine at this restaurant rather than that one, then my voice may well carry the day. But people often argue about priorities for an entire country of millions of people, vast and diverse beyond any individual's comprehension! What's that about?


Speaking of Addiction

Speaking of addiction, I suspect that relinquishing ideologically-induced moral outrage is actually harder than getting over many chemical dependencies (although I don't have any experience with the latter). At least with a drug, it's simple enough to draw a bright line around actions you're not supposed to do anymore; you can try pouring the contents of the liquor cabinet down the drain, or signing a commitment contract to not buy or borrow any more cigarettes.

But when one of your most strongly-held beliefs (strongly-held in the sense of emotional relevance, not actual probability; I'm very confident in the monotone sequence theorem, but the truth of its negation wouldn't be a blow to who I am) turns out to be false—or if it still seems true, but it turns out that being continually angry at a Society that disagrees isn't a good allocation of cognitive resources—what do you do then? Turning your life around from that isn't anything as straightforward as preventing specific chemicals from entering your body; you have to change the way you think, which is to say excise a part of your soul. Oh, it grows back—that's the point, really; you want to stop thinking non-useful thoughts in order to replace them with something better—but can you blame me for having a self-preservation instinct, even if my currently-existing self isn't something that ought to be preserved?

But then, blame or the lack thereof isn't the point.

Egoism as Defense Against a Life of Unending Heartbreak

Then the Dean understood what had puzzled him in Roark's manner.

"You know," he said, "You would sound much more convincing if you spoke as if you cared whether I agreed with you or not."

"That's true," said Roark. "I don't care whether you agree with me or not." He said it so simply that it did not sound defensive, it sounded like the statement of a fact which he noticed, puzzled, for the first time.

"You don't care what others think—which might be understandable. But you don't care even to make them think as you do?"

"No."

"But that's ... that's monstrous."

"Is it? Probably. I couldn't say."

In this passage from Ayn Rand's The Fountainhead, fictional character Howard Roark demonstrates a very important skill that I really need to learn—that of emotional indifference to arbitrary people's opinions: not the mere immunity of "It's okay that people now disagree with the manifest rightness of my Cause, because I know the forces of Good will win in the end," but the kind of outright indifference that I feel about, let's say, the amount of precipitation in Copenhagen in March 1957. Someone disagrees with the manifest rightness of my Cause? Sure, whatever—hey, did you see the latest Questionable Content?

I say this purely for pragmatic reasons. There's nothing philosophically noble about being narrowly selfish, about devoting the full force of one's attention to questions like "What do I want to study?" or "How am I going to make money?" rather than "Why are my ideological enemies so evil, and what can be done to stop them?" So if there's no inherent reason why scholarship or business are more worthy than activism, why explicitly renounce the activist frame of mind?


Blood From a Stone

Decision-theoretically speaking, there's no difference between punishment and lack-of-reward. (Von Neumann–Morgenstern utility functions are really only defined up to an affine transformation: if your behavior is described by u(x), then v(x) := au(x) + b does just as well.) Psychology isn't like that; punishment and lack-of-reward are very different things—although not quite so different as one might think. In an environment where behavior X is rewarded with praise and status, and behavior Y is ignored—not punished, not condemned, but ignored—what kind of mind would it take to persist in behavior Y? It would either have to be very stubborn, unshakeably convinced in the righteousness of Y, or very stupid, desperately willing to endlessly chase a satisfaction that will never, ever come.
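The affine-invariance claim is easy to verify numerically. A minimal Python sketch (the particular outcomes and utility values are arbitrary illustrations):

```python
# A VNM utility function is only defined up to a positive affine
# transformation: if u describes your behavior, then v(x) = a*u(x) + b
# (with a > 0) ranks every option identically, so "punishment" vs.
# "lack-of-reward" is just a choice of zero point, not a real difference.
u = {"rewarded": 10.0, "ignored": 0.0, "punished": -5.0}

a, b = 3.0, 100.0  # arbitrary positive scale and shift
v = {x: a * ux + b for x, ux in u.items()}

rank_u = sorted(u, key=u.get, reverse=True)
rank_v = sorted(v, key=v.get, reverse=True)
assert rank_u == rank_v  # same preference ordering either way
print(rank_u)  # ['rewarded', 'ignored', 'punished']
```

Under v, the "punished" outcome has utility +85: still the worst option, even though the number is now positive. That's the decision-theoretic sense in which punishment and lack-of-reward are indistinguishable.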