Speaking of addiction, I suspect that relinquishing ideologically-induced moral outrage is actually harder than getting over many chemical dependencies (although I don't have any experience with the latter). At least with a drug, it's simple enough to draw a bright line around actions you're not supposed to do anymore; you can try pouring the contents of the liquor cabinet down the drain, or signing a commitment contract to not buy or borrow any more cigarettes.
But when one of your most strongly-held beliefs (strongly-held in the sense of emotional relevance, not actual probability; I'm very confident in the monotone sequence theorem, but the truth of its negation wouldn't be a blow to who I am) turns out to be false—or if it still seems true, but it turns out that being continually angry at a Society that disagrees isn't a good allocation of cognitive resources—what do you do then? Turning your life around from that isn't anything as straightforward as preventing specific chemicals from entering your body; you have to change the way you think, which is to say excise a part of your soul. Oh, it grows back—that's the point, really; you want to stop thinking non-useful thoughts in order to replace them with something better—but can you blame me for having a self-preservation instinct, even if my currently-existing self isn't something that ought to be preserved?
But then, blame or the lack thereof isn't the point.
Then the Dean understood what had puzzled him in Roark's manner.
"You know," he said, "You would sound much more convincing if you spoke as if you cared whether I agreed with you or not."
"That's true," said Roark. "I don't care whether you agree with me or not." He said it so simply that it did not sound defensive, it sounded like the statement of a fact which he noticed, puzzled, for the first time.
"You don't care what others think—which might be understandable. But you don't care even to make them think as you do?"
"But that's ... that's monstrous."
"Is it? Probably. I couldn't say."
In this passage from Ayn Rand's The Fountainhead, fictional character Howard Roark demonstrates a very important skill that I really need to learn—that of emotional indifference to arbitrary people's opinions: not the mere immunity of "It's okay that people now disagree with the manifest rightness of my Cause, because I know the forces of Good will win in the end," but the kind of outright indifference that I feel about, let's say, the amount of precipitation in Copenhagen in March 1957. Someone disagrees with the manifest rightness of my Cause? Sure, whatever—hey, did you see the latest Questionable Content?
I say this purely for pragmatic reasons. There's nothing philosophically noble about being narrowly selfish, about devoting the full force of one's attention to questions like "What do I want to study?" or "How am I going to make money?" rather than "Why are my ideological enemies so evil, and what can be done to stop them?" So if there's no inherent reason why scholarship or business are more worthy than activism, why explicitly renounce the activist frame of mind?
Decision-theoretically speaking, there's no difference between punishment and lack-of-reward. (Von Neumann–Morgenstern utility functions are really only defined up to a positive affine transformation: if your behavior is described by u(x), then v(x) := au(x) + b with a > 0 does just as well.) Psychology isn't like that; punishment and lack-of-reward are very different things—although not quite so different as one might think. In an environment where behavior X is rewarded with praise and status, and behavior Y is ignored—not punished, not condemned, but ignored—what kind of mind would it take to persist in behavior Y? It would either have to be very stubborn, unshakeably convinced of the righteousness of Y, or very stupid, desperately willing to endlessly chase a satisfaction that will never, ever come.
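The affine-invariance point can be checked mechanically. A minimal sketch (my own illustration, not from the post; the lotteries and utility functions are made up): expected-utility rankings don't change when you rescale and shift the utility function, so long as the scale factor is positive.

```python
# Two toy lotteries: lists of (probability, outcome) pairs.
lotteries = {
    "certain_100": [(1.0, 100)],
    "coin_flip":   [(0.5, 0), (0.5, 250)],
}

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

u = lambda x: x ** 0.5      # some utility function
v = lambda x: 3 * u(x) + 7  # positive affine transform: a=3 > 0, b=7

# Rank the lotteries under each utility function.
rank_u = sorted(lotteries, key=lambda name: expected_utility(lotteries[name], u))
rank_v = sorted(lotteries, key=lambda name: expected_utility(lotteries[name], v))

assert rank_u == rank_v  # same preference order either way
```

The numbers attached to the options differ, but every choice the two agents make is identical—which is the sense in which the scale and the zero point carry no decision-theoretic information.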
Friend of the blog Alicorn tweets:
Why is the word "dreams" used to describe both pseudorandom nocturnal hallucinations and also heartfelt aspirations for real life?
A cynic might reply: because both the nocturnal hallucinations and the heartfelt aspirations are, for the most part, composed of lies. How many people, what proportion of the time, will actually lift a finger (or open a book, or make a telephone call) to work toward achieving what they believe to be their heartfelt aspirations?
Supposedly the method of pomodoros is a great technology for overcoming procrastination: you work in twenty- or twenty-five-minute timed blocks, each of which is atomic, indivisible: you have to work through the block, and if you let yourself wander away to something else, then it doesn't count. Katja Grace explains why this is a good idea:
While working, there are various moments when it would be easier to stop than to continue, particularly if you mostly feel the costs and benefits available in the next second or so, and if you assume that you could start again shortly [...] Counting short blocks of continuous time working pretty much solves this problem for me. [...] [A]t any given moment there might be a tiny short term benefit to stopping for a second, but there is a huge cost to it. In my case this seems to remove stopping as an option, in the same way that a hundred dollar price on a menu item removes it as an option without apparent expense of willpower.
It seems as if my outlook on life varies drastically with mood. In the moments when I feel brave and ambitious, I rarely seem to remember that it won't last: that in a week or a day, the moment will be gone and I'll feel weak and scared again—and of course it goes conversely, too.
We don't have the technology or the wisdom to redesign our own emotions. If the moments of weakness-and-fear aren't going away, and if neither mood is exactly a belief that could be destroyed by the truth, then it seems like it would at least be useful to remember this variation, if for no other reason than to avoid wasting cognition devising plans and expectations that aren't sufficiently robust to ordinary emotional variation.
It feels immoral to even think of using techniques to motivate oneself; one should instead just use one's free will to choose the correct action. How utterly degrading it would be, how insulting to the very notion of human dignity, to stoop to the level of contemplating one's own psychology using mere cause-and-effect reasoning, as if one were some sort of animal, or a machine!
But this moralizing is itself immoral, because it doesn't work. If I'm not smart enough to do the right thing for the right reasons, then I might at least aspire to do the right thing for the wrong reasons for the right reasons.
Let me know if someone's actually done this.
Experiment: Use undergraduate students as test subjects. Give each subject a shuffled deck of playing cards and ask them to sort it by suit and rank as quickly as possible. Time how long each subject takes to complete the task.
Prediction: A minority of computer science students will markedly outperform everyone else.
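The insight the prediction presumably rests on can be sketched in code (my own illustration; the post doesn't say why the CS students would win): a deck sorts in two linear passes—bucket by rank, then stably by suit—rather than by the repeated scan-and-insert most people do by hand.

```python
import random

SUITS = ["clubs", "diamonds", "hearts", "spades"]
RANKS = list(range(1, 14))  # ace .. king

def radix_sort_deck(deck):
    """LSD radix sort on (suit, rank) cards: two bucketing passes."""
    # Pass 1: bucket by rank, the less significant key.
    by_rank = [[] for _ in RANKS]
    for suit, rank in deck:
        by_rank[rank - 1].append((suit, rank))
    # Pass 2: bucket by suit; iterating rank buckets in order keeps
    # the pass stable, so ranks stay sorted within each suit.
    by_suit = [[] for _ in SUITS]
    for bucket in by_rank:
        for suit, rank in bucket:
            by_suit[SUITS.index(suit)].append((suit, rank))
    return [card for bucket in by_suit for card in bucket]

sorted_deck = [(s, r) for s in SUITS for r in RANKS]
shuffled = sorted_deck[:]
random.shuffle(shuffled)
assert radix_sort_deck(shuffled) == sorted_deck
```

Fifty-two cards in two passes of fifty-two touches each—the student who notices this spends her time dealing into piles instead of hunting for insertion points.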
Dear reader, I have this ... friend, who has this problem, and I wanted to ask—
What do you mean, Who is he? You wouldn't know ... her, and—
You must realize that I'm already aware that it's a standard trope for someone to say "I Have This Friend" when they're really talking about themselves, and given that I know it's already a standard trope, I would never be so obvious as to actually do it! Therefore you must truthfully conclude that I really am talking about a—
Okay, that's a good point. No, I didn't consider the fact that that reasoning can't possibly be sound because if it were, then people would use it as an excuse to falsely claim that they were speaking about a friend rather than themselves, thereby contradicting the assumption that the reasoning is—
Well, we could try to calculate the probability that I really am talking about a friend conditional on your epistemic state and taking into account the game-theoretic considerations just mentioned, but that could take all night, so will you just listen to my transparent lies for fuck's sake?