Disagreement Comes From the Dark World

In “Truth or Dare”, Duncan Sabien articulates a phenomenon in which expectations of good or bad behavior can become self-fulfilling: people who expect to be exploited and feel the need to put up defenses both elicit and get sorted into a Dark World where exploitation is likely and defenses are necessary, whereas people who expect beneficence tend to attract beneficence in turn.

Among many other examples, Sabien highlights the phenomenon of gift economies: a high-trust culture in which everyone is eager to help each other out whenever they can is a nicer place to live than a low-trust culture in which every transaction must be carefully tracked for fear of enabling free-riders.

I’m skeptical of the extent to which differences between high- and low-trust cultures can be explained by self-fulfilling prophecies as opposed to pre-existing differences in trustworthiness, but I do grant that self-fulfilling expectations can sometimes play a role: if I insist on always being paid back immediately and in full, it makes sense that that would impede the development of gift-economy culture among my immediate contacts. So far, the theory articulated in the essay seems broadly plausible.

Later, however, the post takes an unexpected turn:

Treating all of the essay thus far as prerequisite and context:

This is why you should not trust Zack Davis, when he tries to tell you what constitutes good conduct and productive discourse. Zack Davis does not understand how high-trust, high-cooperation dynamics work. He has never seen them. They are utterly outside of his experience and beyond his comprehension. What he knows how to do is keep his footing in a world of liars and thieves and pickpockets, and he does this with genuinely admirable skill and inexhaustible tenacity.

But (as far as I can tell, from many interactions across years) Zack Davis does not understand how advocating for and deploying those survival tactics (which are 100% appropriate for use in an adversarial memetic environment) utterly destroys the possibility of building something Better. Even if he wanted to hit the “cooperate” button—

(In contrast to his usual stance, which from my perspective is something like “look, if we all hit ‘defect’ together, in full foreknowledge, then we don’t have to extend trust in any direction and there’s no possibility of any unpleasant surprises and you can all stop grumping at me for repeatedly ‘defecting’ because we’ll all be cooperating on the meta level, it’s not like I didn’t warn you which button I was planning on pressing, I am in fact very consistent and conscientious.”)

—I don’t think he knows where it is, or how to press it.

(Here I’m talking about the literal actual Zack Davis, but I’m also using him as a stand-in for all the dark world denizens whose well-meaning advice fails to take into account the possibility of light.)

As a reader of the essay, I reply: wait, who? Am I supposed to know who this Davis person is? A Ctrl-F search confirms that they weren’t mentioned earlier in the piece; there’s no reason for me to have any context for whatever this section is about.

As Zack Davis, however, I have a more specific reply, which is: yeah, I don’t think that button does what you think it does. Let me explain.


In figuring out what would constitute good conduct and productive discourse, it’s important to appreciate how bizarre the human practice of “discourse” looks in light of Aumann’s dangerous idea.

There’s only one reality. If I’m a Bayesian reasoner honestly reporting my beliefs about some question, and you’re also a Bayesian reasoner honestly reporting your beliefs about the same question, we should converge on the same answer, not because we’re cooperating with each other, but because it is the answer. When I update my beliefs based on your report on your beliefs, it’s strictly because I expect your report to be evidentially entangled with the answer. Maybe that’s a kind of “trust”, but if so, it’s in the same sense in which I “trust” that an increase in atmospheric pressure will exert force on the exposed basin of a classical barometer and push more mercury up the reading tube. It’s not personal and it’s not reciprocal: the barometer and I aren’t doing each other any favors. What would that even mean?

In contrast, my friends and I in a gift economy are doing each other favors. That kind of setting featuring agents with a mixture of shared and conflicting interests is the context in which the concepts of “cooperation” and “defection” and reciprocal “trust” (in the sense of people trusting each other, rather than a Bayesian robot trusting a barometer) make sense. If everyone pitches in with chores when they can, we all get the benefits of the chores being done—that’s cooperation. If you never wash the dishes, you’re getting the benefits of a clean kitchen without paying the costs—that’s defection. If I retaliate by refusing to wash any dishes myself, then we both suffer a dirty kitchen, but at least I’m not being exploited—that’s mutual defection. If we institute a chore wheel with an auditing regime, that reëstablishes cooperation, but we’re paying higher transaction costs for our lack of trust. And so on: Sabien’s essay does a good job of explaining how there can be more than one possible equilibrium in this kind of system, some of which are much more pleasant than others.
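(To make the payoff structure concrete, here’s the standard prisoner’s-dilemma-style payoff matrix for the dishwashing example. The numbers are made up for illustration; nothing in the argument depends on them.)

```python
# Illustrative payoff matrix for the dishwashing example (numbers are made up).
# Each entry is (my payoff, your payoff).
payoffs = {
    ("wash", "wash"): (3, 3),    # mutual cooperation: clean kitchen, shared effort
    ("wash", "shirk"): (0, 5),   # I get exploited: you free-ride on my work
    ("shirk", "wash"): (5, 0),   # I free-ride on yours
    ("shirk", "shirk"): (1, 1),  # mutual defection: we both suffer a dirty kitchen
}

# Whatever you do, I score higher by shirking (5 > 3 and 1 > 0), and likewise
# for you, even though (wash, wash) beats (shirk, shirk) for both of us. That
# partial conflict of interest is what "cooperation" and "trust" are about.
for (mine, yours), (my_payoff, your_payoff) in payoffs.items():
    print(f"I {mine}, you {yours}: me {my_payoff}, you {your_payoff}")
```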

If you’ve seen high-trust gift-economy-like cultures working well and low-trust backstabby cultures working poorly, it might be tempting to generalize from the domains of interpersonal or economic relationships, to rational (or even “rationalist”) discourse. If trust and cooperation are essential for living and working together, shouldn’t the same lessons apply straightforwardly to finding out what’s true together?

Actually, no. The issue is that the payoff matrices are different.

Life and work involve a mixture of shared and conflicting interests. The existence of some conflicting interests is an essential part of what it means for you and me to be two different agents rather than interchangeable parts of the same hivemind: we should hope to do well together, but when push comes to shove, I care more about me doing well than you doing well. The art of cooperation is about maintaining the conditions such that push does not in fact come to shove.

But correct epistemology does not involve conflicting interests. There’s only one reality. Bayesian reasoners cannot agree to disagree. Accordingly, when humans successfully approach the Bayesian ideal, it doesn’t particularly feel like cooperating with your beloved friends, who see you with all your blemishes and imperfections but would never let a mere disagreement interfere with loving you. It usually feels like just perceiving things—resolving disagreements so quickly that you don’t even notice them as disagreements.

Suppose you and I have just arrived at a bus stop. The bus arrives every half-hour. I don’t know when the last bus was, so I don’t know when the next bus will be: I assign a uniform probability distribution over the next thirty minutes. You recently looked at the transit authority’s published schedule, which says the bus will come in six minutes: most of your probability-mass is concentrated tightly around six minutes from now.

We might not consciously notice this as a “disagreement”, but it is: you and I have different beliefs about when the next bus will arrive; our probability distributions aren’t the same. It’s also very ephemeral: when I ask, “When do you think the bus will come?” and you say, “six minutes; I just checked the schedule”, I immediately replace my belief with yours, because I think the published schedule is probably right and there’s no particular reason for you to lie about what it says.

Alternatively, suppose that we both checked different versions of the schedule, which disagree: the schedule I looked at said the next bus is in twenty minutes, not six. When we discover the discrepancy, we infer that one of the schedules must have been outdated, and both adopt a distribution with most of the probability-mass in separate clumps around six and twenty minutes from now. Our initial beliefs can’t both have been right—but there’s no reason for me to weight my prior belief more heavily just because it was mine.
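(Here’s a toy numerical version of these two scenarios, at minute-level granularity. The particular bell-curve shape around each scheduled time and the 50/50 weighting of the two conflicting schedules are simplifying assumptions of mine, not anything the argument depends on.)

```python
from math import exp

minutes = range(31)  # the next thirty minutes

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

# My prior: no information, so a uniform distribution over the next half hour.
mine = normalize([1.0] * 31)

# Your belief after reading a schedule: mass clumped tightly around six minutes.
schedule_six = normalize([exp(-0.5 * (m - 6) ** 2) for m in minutes])

# Scenario 1: I have no independent evidence about the schedule, so on hearing
# your report I simply adopt your distribution wholesale.
after_talking = schedule_six
print(max(range(31), key=lambda m: after_talking[m]))  # 6

# Scenario 2: we read different schedules (six vs. twenty minutes); one of them
# must be outdated, neither of us knows which, so we both adopt the mixture.
schedule_twenty = normalize([exp(-0.5 * (m - 20) ** 2) for m in minutes])
shared = [0.5 * a + 0.5 * b for a, b in zip(schedule_six, schedule_twenty)]

peaks = sorted(range(31), key=lambda m: shared[m], reverse=True)[:2]
print(sorted(peaks))  # [6, 20]: probability-mass in separate clumps around both
```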

At worst, approximating ideal belief exchange feels like working on math. Suppose you and I are studying the theory of functions of a complex variable. We’re trying to prove or disprove the proposition that if an entire function satisfies f(x + 1) = f(x) for real x, then f(z + 1) = f(z) for all complex z. I suspect the proposition is false and set about trying to construct a counterexample; you suspect the proposition is true and set about trying to write a proof by contradiction. Our different approaches do seem to imply different probabilistic beliefs about the proposition, but I can’t be confident in my strategy just because it’s mine, and we expect the disagreement to be transient: as soon as I find my counterexample or you find your reductio, we should be able to share our work and converge.
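(For the curious, here’s one way that particular convergence could end; this is a sketch of mine leaning on the standard identity theorem for entire functions, not part of the thought experiment itself.)

```latex
g(z) \;:=\; f(z+1) - f(z)
\quad \text{is entire, and } g(x) = 0 \text{ for all } x \in \mathbb{R}.
```

The real line has accumulation points, so the identity theorem forces g ≡ 0; that is, f(z + 1) = f(z) for every complex z, and the proposition is true. That’s a direct argument rather than my counterexample or your reductio, but the point stands: once either of us writes it down and shares it, there’s nothing left to disagree about.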


Most real-world disagreements of interest don’t look like the bus arrival or math problem examples—the difference is qualitative, not a matter of trying to prove quantitatively harder theorems. Real-world disagreements tend to persist; they’re predictable—in flagrant contradiction of the way the beliefs of ideal Bayesian reasoners would follow a random walk. From this we can infer that typical human disagreements aren’t “honest”, in the sense that at least one of the participants is behaving as if they have some other goal than getting to the truth.
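(To see why a Bayesian’s beliefs follow a random walk, consider conservation of expected evidence: the expected value of your next belief is your current belief, so you can’t predict in advance which direction you’ll update. Here’s a toy simulation of that fact; the 75%-heads hypothesis is an arbitrary stand-in for whatever is being disagreed about.)

```python
import random

def posterior_biased(prior, flip):
    """P(coin is 75%-heads rather than fair), updated on one observed flip."""
    likelihood_biased = 0.75 if flip == "H" else 0.25
    likelihood_fair = 0.5
    joint_biased = prior * likelihood_biased
    return joint_biased / (joint_biased + (1 - prior) * likelihood_fair)

random.seed(0)
prior = 0.5
trials = 100_000
total_posterior = 0.0
for _ in range(trials):
    # Draw a world from the prior, then a flip from that world.
    coin_is_biased = random.random() < prior
    p_heads = 0.75 if coin_is_biased else 0.5
    flip = "H" if random.random() < p_heads else "T"
    total_posterior += posterior_biased(prior, flip)

# The average posterior equals the prior (up to sampling noise): before seeing
# the flip, I can't predict which way I'll move, only that I'll move.
print(total_posterior / trials)  # ≈ 0.5
```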

Importantly, this characterization of dishonesty is using a functionalist criterion: when I say that people are behaving as if they have some other goal than getting to the truth, that need not imply that anyone is consciously lying; “mere” bias is sufficient to carry the argument.

Dishonest disagreements end up looking like conflicts because they are disguised conflicts. The parties to a dishonest disagreement are competing to get their preferred belief accepted, where beliefs are being preferred for some reason other than their accuracy: for example, because acceptance of the belief would imply actions that would benefit the belief-holder. If it were true that my company is the best, it would follow logically that customers should buy my products and investors should fund me. And yet a discussion with me about whether or not my company is the best probably doesn’t feel like a discussion about bus arrival times or the theory of functions of a complex variable. You probably expect me to behave as if I thought my belief is better “because it’s mine”, to treat attacks on the belief as if they were attacks on my person: a conflict rather than a disagreement.

“My company is the best” is a particularly stark example of a typically dishonest belief, but the pattern is very general: when people are attached to their beliefs for whatever reason—which is true for most of the beliefs that people spend time disagreeing about, as contrasted to math and bus-schedule disagreements that resolve quickly—neither party is being rational (which doesn’t mean neither party is right on the object level). Attempts to improve the situation should take into account that the typical case is not that of truthseekers who can do better at their shared goal if they learn to trust each other, but rather of people who don’t trust each other because each correctly perceives that the other is not truthseeking.

Again, “not truthseeking” here is meant in a functionalist sense. It doesn’t matter if both parties subjectively think of themselves as honest. The “distrust” that prevents Aumann-agreement-like convergence is about how agents respond to evidence, not about subjective feelings. It applies as much to a mislabeled barometer as it does to a human with a functionally-dishonest belief. If I don’t think the barometer readings correspond to the true atmospheric pressure, I might still update on evidence from the barometer in some way if I have a guess about how its labels correspond to reality, but I’m still going to disagree with its reading according to the false labels.
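(A trivial sketch of what “updating on a distrusted instrument” looks like; the five-unit offset is a made-up example of a guess about how the labels correspond to reality.)

```python
def my_pressure_estimate(displayed_hpa, suspected_offset_hpa=5.0):
    """I believe this barometer reads five hectopascals too high, so I can
    still extract information from it while disagreeing with its face value."""
    return displayed_hpa - suspected_offset_hpa

print(my_pressure_estimate(1018.0))  # the dial "says" 1018.0; I believe 1013.0
```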


There are techniques for resolving economic or interpersonal conflicts that involve both parties adopting a more cooperative approach, each being more willing to do what the other party wants (while the other reciprocates by doing more of what the first one wants). Someone who had experience resolving interpersonal conflicts using techniques to improve cooperation might be tempted to apply the same toolkit to resolving dishonest disagreements.

It might very well work for resolving the disagreement. It probably doesn’t work for resolving the disagreement correctly, because cooperation is about finding a compromise amongst agents with partially conflicting interests, and in a dishonest disagreement in which both parties have non-epistemic goals, trying to do more of what the other party functionally “wants” amounts to catering to their bias, not systematically getting closer to the truth.

Cooperative approaches are particularly dangerous insofar as they seem likely to produce a convincing but false illusion of rationality, despite the participants’ best subjective conscious intentions. It’s common for discussions to involve more than one point of disagreement. An apparently productive discussion might end with me saying, “Okay, I see you have a point about X, but I was still right about Y.”

This is a success if the reason I’m saying that is downstream of you in fact having a point about X and me in fact having been right about Y. But another state of affairs that would result in me saying that sentence is that we were functionally playing a social game in which I implicitly agreed to concede on X (which you visibly care about) in exchange for you ceding ground on Y (which I visibly care about).

Let’s sketch out a toy model to make this more concrete. “Truth or Dare” uses color perception as an illustration of confirmation bias: if you’ve been primed to make the color yellow salient, it’s easy to perceive an image as being yellower than it is.

Suppose Jade and Ruby consciously identify as truthseekers, but really, Jade is biased to perceive non-green things as green 20% of the time, and Ruby is biased to perceive non-red things as red 20% of the time. In our functionalist sense, we can model Jade as “wanting” to misrepresent the world as being greener than it is, and Ruby as “wanting” to misrepresent the world as being redder than it is.

Confronted with a sequence of gray objects, Jade and Ruby get into a heated argument: Jade thinks 20% of the objects are green and 0% are red, whereas Ruby thinks they’re 0% green and 20% red.

As tensions flare, someone who didn’t understand the deep disanalogy between human relations and epistemology might propose that Jade and Ruby should strive to be more “cooperative”, establish higher “trust”.

What does that mean? Honestly, I’m not entirely sure, but I worry that if someone takes high-trust gift-economy-like cultures as their inspiration and model for how to approach intellectual disputes, they’ll end up giving bad advice in practice.

Cooperative human relationships result in everyone getting more of what they want. If Jade wants to believe that the world is greener than it is and Ruby wants to believe that the world is redder than it is, then naïve attempts at “cooperation” might involve Jade making an effort to see things Ruby’s way at Ruby’s behest, and vice versa. But Ruby is only going to insist that Jade make an effort to see it her way when Jade says an item isn’t red. (That’s what Ruby cares about.) Jade is only going to insist that Ruby make an effort to see it her way when Ruby says an item isn’t green. (That’s what Jade cares about.)

If the two (perversely) succeed at seeing things the other’s way, they would end up converging on believing that the sequence of objects is 20% green and 20% red (rather than the 0% green and 0% red that it actually is). They’d be happier, but they would also be wrong. In order for the pair to get the correct answer, then, without loss of generality, when Ruby says an object is red, Jade needs to stand her ground: “No, it’s not red; no, I don’t trust you and won’t see things your way; let’s break out the Pantone swatches.” But that doesn’t seem very “cooperative” or “trusting”.
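(Here’s the toy model as a simulation, so you can watch the mutual-sycophancy numbers come out of it; the sample size and random seed are arbitrary.)

```python
import random

random.seed(0)
N = 10_000
true_colors = ["gray"] * N  # ground truth: 0% green, 0% red

# Jade misperceives 20% of (non-green) objects as green; Ruby misperceives 20%
# of (non-red) objects as red. Here every object is actually gray.
jade_reports = ["green" if random.random() < 0.2 else c for c in true_colors]
ruby_reports = ["red" if random.random() < 0.2 else c for c in true_colors]

# Naive "cooperation": each defers whenever the other insists an object is her
# salient color, so every biased perception survives into the shared belief.
jointly_green = sum(r == "green" for r in jade_reports) / N
jointly_red = sum(r == "red" for r in ruby_reports) / N

print(f"jointly believed green: {jointly_green:.0%}")  # ≈ 20% (truth: 0%)
print(f"jointly believed red:   {jointly_red:.0%}")    # ≈ 20% (truth: 0%)
```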


At this point, a proponent of the high-trust, high-cooperation dynamics that Sabien champions is likely to object that the absurd “20% green, 20% red” mutual-sycophancy outcome in this toy model is clearly not what they meant. (As Sabien takes pains to clarify in “Basics of Rationalist Discourse”, “If two people disagree, it’s tempting for them to attempt to converge with each other, but in fact the right move is for both of them to try to see more of what’s true.”)

Obviously, the mutual-sycophancy outcome is not what proponents of trust and cooperation consciously intend. The problem is that mutual sycophancy seems to be the natural outcome of treating interpersonal conflicts as analogous to epistemic disagreements and trying to resolve them both using cooperative practices, when in fact the decision-theoretic structures of those situations are very different. The text of “Truth or Dare” seems to treat the analogy as a strong one; it wouldn’t make sense to spend so many thousands of words discussing gift economies and the eponymous party game and then draw a conclusion about “what constitutes good conduct and productive discourse”, if gift economies and the party game weren’t relevant to what constitutes productive discourse.

“Truth or Dare” seems to suggest that it’s possible to escape the Dark World by excluding the bad guys. “[F]rom the perspective of someone with light world privilege, […] it did not occur to me that you might be hanging around someone with ill intent at all,” Sabien imagines a denizen of the light world saying. “Can you, um. Leave? Send them away? Not be spending time in the vicinity of known or suspected malefactors?”

If we’re talking about holding my associates to a standard of ideal truthseeking (as contrasted to a lower standard of “not using this truth-or-dare game to blackmail me”), then, no, I think I’m stuck spending time in the vicinity of people who are known or suspected to be biased. I can try to mitigate the problem by choosing less biased friends, but when we do disagree, I have no choice but to approach that using the same rules of reasoning that I would use with a possibly-mislabeled barometer, which do not have a particularly cooperative character. Telling us that the right move is for both of us to try to see more of what’s true is tautologically correct but non-actionable; I don’t know how to do that except by my usual methodology, which Sabien has criticized as characteristic of living in a dark world.

That is to say: I do not understand how high-trust, high-cooperation dynamics work. I’ve never seen them. They are utterly outside my experience and beyond my comprehension. What I do know is how to keep my footing in a world of people with different goals from me, which I try to do with what skill and tenacity I can manage.

And if someone should say that I should not be trusted when I try to explain what constitutes good conduct and productive discourse … well, I agree!

I don’t want people to trust me, because I think trust would result in us getting the wrong answer.

I want people to read the words I write, think it through for themselves, and let me know in the comments if I got something wrong.

An Intuition on the Bayes-Structural Justification for Free Speech Norms

We can metaphorically (but like, hopefully it's a good metaphor) think of speech as being the sum of a positive-sum information-conveying component and a zero-sum social-control/memetic-warfare component. Coalitions of agents that allow their members to convey information amongst themselves will tend to outcompete coalitions that don't, because it's better for the coalition to be able to use all of the information it has.
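(A toy illustration of the information-conveying benefit, under made-up assumptions: ten coalition members each get an independent noisy estimate of some quantity the coalition cares about, and pooling the estimates beats any single member's guess.)

```python
import random
import statistics

random.seed(0)
true_value = 10.0
members = 10
trials = 10_000

solo_errors, pooled_errors = [], []
for _ in range(trials):
    # Each member observes the true value plus independent Gaussian noise.
    signals = [random.gauss(true_value, 3.0) for _ in range(members)]
    solo_errors.append(abs(signals[0] - true_value))                   # no sharing
    pooled_errors.append(abs(statistics.fmean(signals) - true_value))  # sharing

print("mean error without sharing:", round(statistics.fmean(solo_errors), 2))
print("mean error with sharing:   ", round(statistics.fmean(pooled_errors), 2))
# Pooling shrinks the error by roughly a factor of sqrt(10).
```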

Therefore, if we want the human species to better approximate a coalition of agents who act in accordance with the game-theoretic Bayes-structure of the universe, we want social norms that reward or at least not-punish information-conveying speech (so that other members of the coalition can learn from it if it's useful to them, and otherwise ignore it).

It's tempting to think that we should want social norms that punish the social-control/memetic-warfare component of speech, thereby reducing internal conflict within the coalition and forcing people's speech to mostly consist of information. This might be a good idea if the rules for punishing the social-control/memetic-warfare component are very clear and specific (e.g., no personal insults during a discussion about something that's not the person you want to insult), but it's alarmingly easy to get this wrong: you think you can punish generalized hate speech without any negative consequences, but you probably won't notice when members of the coalition begin to slowly gerrymander the hate speech category boundary in the service of their own values. Whoops!

Your Periodic Reminder I

Aumann's agreement theorem should not be naïvely misinterpreted to mean that humans should directly try to agree with each other. Your fellow rationalists are merely subsets of reality that may or may not exhibit interesting correlations with other subsets of reality; you don't need to "agree" with them any more than you need to "agree" with an encyclopædia, photograph, pinecone, or rock.

Measure

In a sufficiently large universe, everything that can happen happens somewhere, but it's clearly not an even distribution. Flip a quantum coin a hundred times, and there have to be some versions of you who see a hundred heads, but they're so vastly outnumbered by versions of you who see a properly random-looking sequence of heads and tails that it's not worth thinking about: it mostly doesn't happen.
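(A quick calculation, in case the lopsidedness isn't visceral.)

```python
from math import comb

total_sequences = 2 ** 100

# The branch where you see a hundred heads in a row:
print(1 / total_sequences)  # ≈ 7.9e-31

# The branches where you see something "properly random-looking", say between
# forty and sixty heads:
near_half = sum(comb(100, k) for k in range(40, 61)) / total_sequences
print(near_half)  # ≈ 0.96
```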

When you write a computer program, or build a bridge, or just think something, we might prefer to take the viewpoint that you're not creating anything so much as you are instantiating that pattern locally, thereby increasing its measure in the multiverse: there might be other ways for that program, that bridge, that thought to come about somewhere, but it's getting some of its support from you.

Decisionmaking is about exerting some control over the distribution of measure over patterns in the multiverse: agent-like patterns select actions so as to allocate measure to their preferred patterns. Maybe there are some versions of me with an ice-cream cone that form by chance, or are deliberately created by alien civilizations investigating how humans respond to ice-cream, but if I get an ice-cream cone, it's mostly because humans evolved and then developed cultures which domesticated dairy cows and cultivated sugar and so on and so forth until eventually I was born and grew up and put effort into acquiring ice-cream. When you decide, you help determine the distribution of what happens to the sections of the multiverse that depend on copies of your decision—choose carefully.

The Demandingness Objection

"Well, I'm not giving up dairy, but I can probably give up meat, and milk is at the very bottom of Brian's table of suffering per kilogram demanded, so I'd be contributing to much less evil than I was before. That's good, right?

"For all the unimaginably terrible things our species do to each other and to other creatures, we're not—we're probably not any worse than the rest of nature. Gazelles suffer terribly as lions eat them alive, but we can't intervene because then the lions would starve, and the gazelles would have a population explosion and starve, too. We have this glorious idea that people need to consent before sex, but male ducks just rape the females, and there's no one to stop it—nothing else besides humans around capable of formulating the proposition, as a proposition, that the torment and horror in the world is wrong and should stop. Animals have been eating each other for hundreds of millions of years; we may be murderous, predatory apes, but we're murderous, predatory apes with Reason—well, sort of—and a care/harm moral foundation that lets some of us, with proper training, to at least wish to be something better.

"I don't actually know much history or biology, but I know enough to want it to not be real, to not have happened that way. But it couldn't have been otherwise. In the absence of an ontologically fundamental creator God, Darwinian evolution is the only way to get purpose from nowhere, design without a designer. My wish for things to have been otherwise ... probably isn't even coherent; any wish for the nature of reality to have been different, can only be made from within reality.


Relativity

"Empathy hurts.

"I'm grateful for being fantastically, unimaginably rich by world-historical standards—and I'm terrified of it being taken away. I feel bad for all the creatures in the past—and future?—who are stuck in a miserable Malthusian equilibrium.

"I simultaneously want to extend my circle of concern out to all sentient life, while personally feeling fear and revulsion towards anything slightly different from what I'm used to.

"Anna keeps telling me I have a skewed perspective on what constitutes a life worth living. I'm inclined to think that animals and poor people have a wretched not-worth-living existence, but perhaps they don't feel so sorry for themselves?—for the same reason that hypothetical transhumans might think my life has been wretched and not worth living, even while I think it's been pretty good on balance.

"But I'm haunted. After my recent ordeal in the psych ward, the part of me that talks described it as 'hellish.' But I was physically safe the entire time. If something so gentle as losing one night of sleep and being taken away from my usual environment was enough to get me to use the h-word, then what about all the actual suffering in the world? What hope is there for transhumanism, if the slightest perturbation sends us spiraling off into madness?

"The other week I was reading Julian Simon's book on overcoming depression; he wrote that depression arises from negative self-comparisons: comparing your current state to some hypothetical more positive state. But personal identity can't actually exist; time can't actually exist the way we think it does. If pain and suffering are bad when they're implemented in my skull, then they have to be bad when implemented elsewhere.

"Anna said that evolutionarily speaking, bad experiences are more intense than good ones because you can lose all your fitness in a short time period. But if 'the brain can't multiply' is a bias—if two bad things are twice as bad as one, no matter where they are in space and time, even if no one is capable of thinking that way—then so is 'the brain can't integrate': long periods of feeling pretty okay count for something, too.

"I'm not a negative utilitarian; I'm a preference utilitarian. I'm not a preference utilitarian; I'm a talking monkey with delusions of grandeur."

I Don't Understand Time

Our subjective experience would have it that time "moves forward": the past is no longer, and the future is indeterminate and "hasn't happened yet." But it can't actually work that way: special relativity tells us that there's no absolute standard of simultaneity; given two spacelike-separated events, whether one happened "before" or "after" the other depends on where you are and how fast you're going. This leads us to a "block universe" view: our 3+1 dimensional universe, past, present, and future, simply exists, and the subjective arrow of time somehow arises from our perspective embedded within it.
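(For concreteness, the standard calculation behind that claim: if one observer measures time and space separations Δt and Δx between the two events, an observer moving at velocity v measures)

```latex
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

When the separation is spacelike, that is, |Δx| > c|Δt|, there are physically allowed choices of v for which Δt′ has the opposite sign from Δt, so observers in different states of motion genuinely disagree about which event came first.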

Without knowing much in the way of physics or cognitive science myself, I can only wonder if there aren't still more confusions to be dissolved, intuitions to be unlearned in the service of a more accurate understanding. We know things about the past from our memories and by observing documents; we might then say that memories and documents are forms of probabilistic evidence about another point in spacetime. But predictions about the future are also a form of probabilistic evidence about another point in spacetime. There's a sort of symmetry there, isn't there? Could we perhaps imagine that minds constructed differently from our own wouldn't perceive the same kind of arrow of time that we do?

The Horror of Naturalism

There's this deeply uncomfortable tension between being an animal physiologically incapable of caring about anything other than what happens to me in the near future, and the knowledge of the terrifying symmetry that cannot be unseen: that my own suffering can't literally be more important, just because it's mine. You do some philosophy and decide that your sphere of moral concern should properly extend to all sentient life—whatever sentient turns out to mean—but life is built to survive at the expense of other life.

I want to say, "Why can't everyone just get along and be nice?"—but those are just English words that only make sense to other humans from my native culture, who share the cognitive machinery that generated them. The real world is made out of physics and game theory; my entire concept of "getting along and being nice" is the extremely specific, contingent result of the pattern of cooperation and conflict in my causal past: the billions of corpses on the way to Homo sapiens, the thousands of years of culture on the way to the early twenty-first century United States, the nonshared environmental noise on the way to me. Even if another animal would agree that pleasure is better than pain and peace is better than war, the real world has implementation details that we won't agree on, and the implementation details have to be settled somehow.

I console myself with the concept of decision-theoretic irrelevance: insofar as we construe the function of thought as to select actions, being upset about things that you can't affect is a waste of cognition. It doesn't help anyone for me to be upset about all the suffering in the world when I don't know how to alleviate it. Even in the face of moral and ontological uncertainty, there are still plenty of things-worth-doing. I will play positive-sum games, acquire skills, acquire resources, and use the resources to protect some of the things I care about, making the world slightly less terrible with me than without me. And if I'm left with the lingering intuition that there was supposed to be something else, some grand ideal more important than friendship and Pareto improvements ... I don't remember it anymore.

Continuum Utilitarianism

You hear people talk about positive (maximize pleasure) versus negative (minimize pain) utilitarianism, or average versus total utilitarianism, none of which seem very satisfactory. For example, average utilitarianism taken literally would suggest killing everyone but the happiest person, and total utilitarianism implies what Derek Parfit called the repugnant conclusion: that for any possible world with lots of happy people, the total utilitarian must prefer another possible world with many more people whose lives are just barely worth living.
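(The arithmetic behind both complaints, with made-up numbers.)

```python
# Average utilitarianism: removing everyone below the maximum raises the average.
welfares = [90, 80, 70, 99]
print(sum(welfares) / len(welfares))  # 84.75 with everyone alive
print(max(welfares))                  # 99 with only the happiest person left

# Total utilitarianism and the repugnant conclusion: enough barely-worth-living
# lives outweigh a smaller number of excellent ones.
world_a = 1_000_000 * 100.0          # a million people at welfare 100
world_b = 200_000_000_000 * 0.001    # two hundred billion people at welfare 0.001
print(world_a, world_b, world_b > world_a)  # 100000000.0 200000000.0 True
```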

But really, it shouldn't be that surprising that there's no simple, intuitively satisfying population ethics, because any actual preference ordering over possible worlds is going to have to make tradeoffs: how much pleasure and how much pain distributed across how many people's lives in what manner, what counts as a "person," &c.

Actually Personal Responsibility

Dear reader, you occasionally hear people with conservative tendencies complain that the problem with Society today is that people lack personal responsibility: that the young and the poor need to take charge of themselves and stop mooching off their parents or the government; that they need to shut up, do their homework, and get a job. I lack any sort of conservative tendency and would never say that sort of thing, but I would endorse a related-but-quite-distinct concept that I want to refer to using the same phrase, "personal responsibility", as long as it's clear from context that I don't mean it in the traditional, conservative way.

The problem with the traditional sense of personal responsibility is that it's not personal; it's an attempt to shame people into doing what the extant social order expects of them. I'm aware that that kind of social pressure often does serve useful purposes—but I think it's possible to do better. The local authorities really don't know everything; the moral rules and social norms you were raised with can actually be mistaken in all sorts of disastrous ways that no one warned you about. So I think people should strive to take personal responsibility for their own affairs not as a burdensome duty to Society, but because it will actually result in better outcomes, both for the individual in question, and for Society.


Mathematics Is the Subfield of Philosophy That Humans Are Good At

By philosophy I understand the discipline of discovering truths about reality by means of thinking very carefully. Contrast to science, where we try to come up with theories that predict our observations. Philosophers of number have observed that the first ten trillion nontrivial zeros of the Riemann zeta function are on the critical line, but people don't speak of the Riemann hypothesis as being almost certainly true, not necessarily because they anticipate a counterexample lurking somewhere above ½ + 10²⁶i (although "large" counterexamples are not unheard-of in the philosophy of numbers), but rather because while empirical examination is certainly helpful, it's not really what we do. Mere empiricism is usually sufficient for knowing (with high probability) what is true, but as philosophers, we want to explain why, and moreover, why it could not have been otherwise.
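(If you want to poke at the empirical side yourself: a short check using the mpmath library, which computes the first few nontrivial zeros so you can watch the real parts come out to one-half. The choice of mpmath and of five zeros is incidental.)

```python
from mpmath import zetazero  # pip install mpmath

for n in range(1, 6):
    print(zetazero(n))  # each prints as (0.5 + ...j): real part one-half
```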

When we try this on topics like numbers or shapes, it works really, really well: our philosophers quickly reach ironclad consensuses about matters far removed from human intuition. When we try it on topics like justice or existence ... it doesn't work so well. I think it's sad.