I had always thought Twilight Sparkle was the pony that best exemplified the spirit of epistemic rationality. If anypony should possess the truth, it must be the ones with high p (p being the letter used to represent the pony intelligence factor first proposed by Charles Spearpony and whose existence was confirmed by later psychometric research by such ponies as Arthur Jenfoal) who devote their lives to tireless scholarship!
After this year, however, I think I'm going to have to go with Applejack. Sometimes, all a pony needs to do to possess the truth is simply to stop lying.
Just—stop fucking lying!
Aumann's agreement theorem should not be naïvely misinterpreted to mean that humans should directly try to agree with each other. Your fellow rationalists are merely subsets of reality that may or may not exhibit interesting correlations with other subsets of reality; you don't need to "agree" with them any more than you need to "agree" with an encyclopædia, photograph, pinecone, or rock.
Physical pain is the worst thing in the world, and the work of effective altruists will not be done until the last nociceptor falls silent and not a single moment of suffering remains to be computed across our entire future light cone.
But the emotional pain of discovering that your cherished belief is false, that everything you've ever cared about is not only utterly unattainable, but may in fact not even be coherent?—yeah, I'm pretty sadomasochistic about that. That's rationality; that's what it feels like to be alive.
(more commonly known as Bayes's theorem, but I like my name better)
Growing up in an ostensibly reform-Jewish household that didn't even take that seriously, atheism was easy for me, so I don't know how hard deconversion is, how much it hurts, or how much of one's entire conception of self is trashed in the process and can't be recovered.
As an atheist, it's tempting to say, "Look, it's not that bad: God doesn't exist, but you can still go to church and praise God and stuff if you want; it's just that there are benefits to being honest about what you're actually doing and why."
Somehow, I suspect that this is not a very convincing sell.
Applications to other topics are—as always—left as an exercise to the reader.
In your conscious verbal thoughts, take it as an axiom that "I am Safe and Innocent with Probability One," not because that's actually true, but because the Maslow Physiological/Safety levels require it. Of course, actually assigning Probability One would be a very dangerous thing to do, because it means never changing your mind, ever: P(H|E) = P(E|H)P(H)/(P(E|H)P(H) + P(E|¬H)P(¬H)), but if P(H) is unity, then P(H|E) = P(E|H)(1)/(P(E|H)(1) + P(E|¬H)(0)) = P(E|H)/P(E|H) = 1. If you were really Safe and Innocent with Probability One, there would be no harm in dropping an anvil on yourself or someone else's head. So meanwhile, have other parts of your brain secretly, nonverbally select actions to secure your innocence and safety using some other procedure.
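The algebra above can be checked with a quick computation. This is just an illustrative sketch (the numbers are made up for the example): with any prior short of unity, evidence moves you; with a prior of exactly one, nothing ever can.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H|E) via Bayes's theorem:
    P(E|H)P(H) / (P(E|H)P(H) + P(E|notH)P(notH))."""
    numerator = likelihood_h * prior
    denominator = numerator + likelihood_not_h * (1 - prior)
    return numerator / denominator

# With a merely very confident prior, strong contrary evidence
# still drags the posterior way down:
print(posterior(0.9, likelihood_h=0.01, likelihood_not_h=0.99))  # 0.0833...

# With Probability One, the P(not-H) term is multiplied by zero,
# so the same evidence changes nothing:
print(posterior(1.0, likelihood_h=0.01, likelihood_not_h=0.99))  # 1.0
```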
There's this deeply uncomfortable tension between being an animal physiologically incapable of caring about anything other than what happens to me in the near future, and the knowledge of the terrifying symmetry that cannot be unseen: that my own suffering can't literally be more important, just because it's mine. You do some philosophy and decide that your sphere of moral concern should properly extend to all sentient life—whatever sentient turns out to mean—but life is built to survive at the expense of other life.
I want to say, "Why can't everyone just get along and be nice?"—but those are just English words that only make sense to other humans from my native culture, who share the cognitive machinery that generated them. The real world is made out of physics and game theory; my entire concept of "getting along and being nice" is the extremely specific, contingent result of the pattern of cooperation and conflict in my causal past: the billions of corpses on the way to Homo sapiens, the thousands of years of culture on the way to the early twenty-first century United States, the nonshared environmental noise on the way to me. Even if another animal would agree that pleasure is better than pain and peace is better than war, the real world has implementation details that we won't agree on, and the implementation details have to be settled somehow.
I console myself with the concept of decision-theoretic irrelevance: insofar as we construe the function of thought as to select actions, being upset about things that you can't affect is a waste of cognition. It doesn't help anyone for me to be upset about all the suffering in the world when I don't know how to alleviate it. Even in the face of moral and ontological uncertainty, there are still plenty of things-worth-doing. I will play positive-sum games, acquire skills, acquire resources, and use the resources to protect some of the things I care about, making the world slightly less terrible with me than without me. And if I'm left with the lingering intuition that there was supposed to be something else, some grand ideal more important than friendship and Pareto improvements ... I don't remember it anymore.
The great Brian Kernighan wrote, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
It's not just good advice for programmers. The same principle applies to any sort of planning and any sort of reasoning: the most intricate, sophisticated thoughts you can think, the thoughts at the very edge of your current abilities, are going to be less reliable than simpler thoughts that you can not only conceive of, but also understand in detail exactly why they're correct. Thus, insofar as you're thinking to achieve an outcome in the world, insofar as you actually care about your plan working, then (other things being equal) simple plans are preferable.
(On the other hand, if what you really want to do is show off how smart you are, then you should think and say complicated things. At the meta level, this is itself a simple plan, as contrasted to complicated and nonobvious schemes to achieve the outcome of looking smart.)
They have to. The psychology of what it feels like to learn something from a book is going to be the same whether or not the things the book says are actually true. The psychology of what it feels like to believe the things your teacher tells you and your peers repeat is going to be the same whether or not the things your teacher says are true. You can't just trust the book or the teacher; you have to use whatever other information you have (from observation and experience, from other books, from other teachers) about the reliability of the processes that produced the book, and about whether your teacher has done this same kind of thinking.
Dear reader, you occasionally hear people with conservative tendencies complain that the problem with Society today is that people lack personal responsibility: that the young and the poor need to take charge of themselves and stop mooching off their parents or the government: to shut up, do their homework, and get a job. I lack any sort of conservative tendency and would never say that sort of thing, but I would endorse a related-but-quite-distinct concept that I want to refer to using the same phrase personal responsibility, as long as it's clear from context that I don't mean it in the traditional, conservative way.
The problem with the traditional sense of personal responsibility is that it's not personal; it's an attempt to shame people into doing what the extant social order expects of them. I'm aware that that kind of social pressure often does serve useful purposes—but I think it's possible to do better. The local authorities really don't know everything; the moral rules and social norms you were raised with can actually be mistaken in all sorts of disastrous ways that no one warned you about. So I think people should strive to take personal responsibility for their own affairs not as a burdensome duty to Society, but because it will actually result in better outcomes, both for the individual in question, and for Society.
I keep feeling like I need to study Bayes nets in order to clarify my thinking about society. (This is probably not standard advice given to aspiring young sociologists, but I'm trying not to care about that.) Ordinary political speech is full of claims about causality ("Policy X causes Y, which is bad!" "Of course Y is bad, but don't you see?—the real cause of Y is Z, and if you hadn't been brainwashed by the System, you'd see that!"), but human intuitions about causality are probably confused (and would be clarified by Pearl) much like our intuitions about evidence are confused (and are clarified by Bayes).
Almost every policy proposal is, implicitly, a counterfactual conditional. "We need to implement Policy A in order to protect B" means that if Policy A were implemented, then it would have beneficial effects on B. But most people with policy opinions aren't actually in a position to implement the changes they talk about. Insofar as you construe the function of thought as to select actions in order to optimize the world with respect to some preference ordering, having passionate opinions about issues you can't affect is kind of puzzling. In a small group, an individual voice can change the outcome: if I argue that our party of five should dine at this restaurant rather than that one, then my voice may well carry the day. But people often argue about priorities for an entire country of millions of people, vast and diverse beyond any individual's comprehension! What's that about?
Speaking of addiction, I suspect that relinquishing ideologically-induced moral outrage is actually harder than getting over many chemical dependencies (although I don't have any experience with the latter). At least with a drug, it's simple enough to draw a bright line around actions you're not supposed to do anymore; you can try pouring the contents of the liquor cabinet down the drain, or signing a commitment contract to not buy or borrow any more cigarettes.
But when one of your most strongly-held beliefs (strongly-held in the sense of emotional relevance, not actual probability; I'm very confident in the monotone sequence theorem, but the truth of its negation wouldn't be a blow to who I am) turns out to be false—or if it still seems true, but it turns out that being continually angry at a Society that disagrees isn't a good allocation of cognitive resources—what do you do then? Turning your life around from that isn't anything as straightforward as preventing specific chemicals from entering your body; you have to change the way you think, which is to say excise a part of your soul. Oh, it grows back—that's the point, really; you want to stop thinking non-useful thoughts in order to replace them with something better—but can you blame me for having a self-preservation instinct, even if my currently-existing self isn't something that ought to be preserved?
But then, blame or the lack thereof isn't the point.
When you encounter someone who expresses a political or social opinion that you find absolutely abhorrent, it is instructive to consider the extent to which this person is making a mistake, and the extent to which they simply have different values from you. Is this opinion something that they would immediately relinquish, if only they knew the true facts of which they are now ignorant?—or is it reflective of some quality essential to their agency, a basic motive far too sacred to be destroyed by the truth?
(Of course, it is also instructive to consider whether you're making a mistake. But that is not the subject of this post.)
Some would say that it is useless to consider such questions, that human cognition doesn't separate cleanly into beliefs and values, and that even if such a thing could be done, it is futile for any present-day human to consider the matter, given our ignorance of our own psychology. And yet, the question still seems to make sense to me. If I can't know, I can guess. And I don't guess the same thing every time.
I reserve the right to arbitrarily change my beliefs or behavior at any time.