I was curious to see how various prognosticators—specifically, FiveThirtyEight's and The Economist's models, and the PredictIt prediction markets—fared at predicting the state-by-state (plus the District of Columbia) results of the recent U.S. presidential election.
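One standard way to grade that kind of comparison is the Brier score: the mean squared error of each stated probability against the 0-or-1 outcome, lower being better. Here's a minimal sketch—the probabilities and outcomes below are hypothetical placeholders, not the models' actual state-level numbers.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical P(Democrat wins) for three states, and hypothetical results
# (1 = Democrat won). Real scoring would use all 50 states plus D.C.
predictions = {
    "FiveThirtyEight": [0.95, 0.69, 0.50],
    "The Economist": [0.97, 0.75, 0.55],
    "PredictIt": [0.85, 0.60, 0.52],
}
outcomes = [1, 1, 0]

for name, probs in predictions.items():
    print(name, round(brier_score(probs, outcomes), 4))
```

A forecaster who assigned probability 1 to every actual winner would score 0; a maximally uninformative forecaster saying 0.5 everywhere scores 0.25.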
2007–2016: "Of course I'm still fundamentally part of the Blue Team, like all non-evil people, but I genuinely think there are some decision-relevant facts about biology, economics, and statistics that folks may not have adequately taken into account!"
2017: "You know, maybe I'm just ... not part of the Blue Team? Maybe I can live with that?"
We can metaphorically (but like, hopefully it's a good metaphor) think of speech as being the sum of a positive-sum information-conveying component and a zero-sum social-control/memetic-warfare component. Coalitions of agents that allow their members to convey information amongst themselves will tend to outcompete coalitions that don't, because it's better for the coalition to be able to use all of the information it has.
Therefore, if we want the human species to better approximate a coalition of agents who act in accordance with the game-theoretic Bayes-structure of the universe, we want social norms that reward or at least not-punish information-conveying speech (so that other members of the coalition can learn from it if it's useful to them, and otherwise ignore it).
It's tempting to think that we should want social norms that punish the social-control/memetic-warfare component of speech, thereby reducing internal conflict within the coalition and forcing people's speech to mostly consist of information. This might be a good idea if the rules for punishing the social-control/memetic-warfare component are very clear and specific (e.g., no personal insults during a discussion about something that's not the person you want to insult), but it's alarmingly easy to get this wrong: you think you can punish generalized hate speech without any negative consequences, but you probably won't notice when members of the coalition begin to slowly gerrymander the hate speech category boundary in the service of their own values. Whoops!
My old political philosophy: "Socially liberal, fiscally confused; I don't know how to run a goddamned country (and neither do you)."
Commentary: Pretty good, but not quite meta enough.
My new political philosophy: "Being smart is more important than being good (for humans). All ideologies are false; some are useful."
Commentary: Social design space is very large and very high-dimensional; the forces of memetic evolution are somewhat benevolent (all ideas that you've heard of have to be genuinely appealing to some feature of human psychology, or no one would have an incentive to tell you about them), but really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves! Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected! This stance is itself, technically, loyalty to an idea, but hopefully it's a sufficiently meta idea to avoid running into the standard traps while also being sufficiently object-level to have easily-discoverable decision-relevant implications and not run afoul of the principle of ultrafinite recursion ("all infinite recursions are at most three levels deep").
that feel eighteen months post-Obergefell when you realize you missed your chance to be pro-civil-unions-with-all-the-same-legal-privileges but anti-calling-it-marriage while that position was still in the Overton window
(in keeping with the principle that it shouldn't be so exotic to want to protect people's freedom to do beautiful new things without necessarily thereby insisting on redefining existing words that already mean something else)
It looks like the opposing all-human team is winning the exhibition game of me and my it's-not-chess engine (as White) versus everyone in the office who (unlike me) actually knows something about chess (as Black). I mean, naïvely, my team is up a bishop right now, but our king is pretty exposed, and the principal variation that generated one of our recent moves (16. Bxb4 Bf5 17. Kd1 Qxd4+ 18. Kc1 Ng3 19. Qxc7 Nxh1) looks dreadful.
Real chess aficionados (chessters? chessies?) will laugh at me, but it actually took me a while to understand why Ng3 was in that principal variation (I might even have invoked the engine again to help). The position after Ng3 looks like
[Diagram: the board position after 18...Ng3]
and—forgive me—I didn't understand why that wasn't refuted by fxg3 or hxg3; in my novice's utter blindness, I somehow failed to see the discovered attack on the white queen, the necessity of evading which allows the black knight to capture the white rook, and preparation for which was clearly the purpose of 16...Bf5 (insofar as we—anthropomorphically?—attribute purpose to a sequence of moves discovered by a minimax search algorithm which doesn't represent concepts like discovered attack anywhere).
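The kind of search such an engine runs can be sketched in miniature as plain negamax (minimax with a sign flip) returning a principal variation. The toy game tree and move names below are invented for illustration—this is not chess, and not the office engine's actual code—but it shows the relevant point: the algorithm manipulates nothing but scores and move lists, with no representation of concepts like "discovered attack."

```python
def negamax(node, color):
    """Return (score, principal variation) from the side-to-move's perspective.

    `node` is either an int (a leaf score from White's perspective) or a
    dict mapping move names to child nodes. `color` is +1 for White, -1
    for Black.
    """
    if isinstance(node, int):
        return color * node, []
    best_score, best_pv = float("-inf"), []
    for move, child in node.items():
        score, pv = negamax(child, -color)
        score = -score  # the opponent's best outcome is our worst
        if score > best_score:
            best_score, best_pv = score, [move] + pv
    return best_score, best_pv

# A hand-built two-ply tree; leaf scores are from White's perspective.
tree = {
    "e4": {"e5": 1, "c5": 0},
    "d4": {"d5": 2, "Nf6": -1},
}
score, pv = negamax(tree, 1)
print(score, pv)  # White's best line, assuming best play by Black
```

Note that "d4" superficially leads to the biggest leaf (2), but Black gets to choose the reply, so the search correctly prefers "e4."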
Apparently a gang of extortionists calling themselves the "California state Bureau for Private Postsecondary Education" are threatening to shut down a number of organizations that provide assistance in learning to program, including App Academy, which I recently benefitted from attending. I could explain why the behavior of the BPPE is an outrage that must be opposed by anyone with a scrap of decency in their heart, but I'm too busy coding and counting my money.
I keep feeling like I need to study Bayes nets in order to clarify my thinking about society. (This is probably not standard advice given to aspiring young sociologists, but I'm trying not to care about that.) Ordinary political speech is full of claims about causality ("Policy X causes Y, which is bad!" "Of course Y is bad, but don't you see?—the real cause of Y is Z, and if you hadn't been brainwashed by the System, you'd see that!"), but human intuitions about causality are probably confused (and would be clarified by Pearl) much like our intuitions about evidence are confused (and are clarified by Bayes).
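Pearl's distinction between observing and intervening can be illustrated with a toy simulation (all variables and probabilities here are invented for the example): a confounder Z drives both X and Y, so X predicts Y in observational data even though setting X by fiat—Pearl's do-operator—does nothing to Y.

```python
import random

random.seed(0)

def observe(n=10_000):
    """Z causes both X and Y; X has no causal effect on Y."""
    data = []
    for _ in range(n):
        z = random.random() < 0.5
        x = z if random.random() < 0.9 else not z  # noisy copy of Z
        y = z if random.random() < 0.9 else not z  # another noisy copy of Z
        data.append((x, y))
    return data

def intervene(n=10_000):
    """do(X := coin flip): set X by fiat, severing the Z -> X arrow."""
    data = []
    for _ in range(n):
        z = random.random() < 0.5
        x = random.random() < 0.5                  # intervention ignores Z
        y = z if random.random() < 0.9 else not z
        data.append((x, y))
    return data

def p_y_given_x(data, x_value):
    matching = [y for x, y in data if x == x_value]
    return sum(matching) / len(matching)

obs = observe()
# Observationally, P(Y | X=1) far exceeds P(Y | X=0): X "predicts" Y.
print(p_y_given_x(obs, True), p_y_given_x(obs, False))

exp = intervene()
# Under the intervention, the gap collapses: X doesn't *cause* Y.
print(p_y_given_x(exp, True), p_y_given_x(exp, False))
```

"Policy X causes Y" is a claim about the second regime, but ordinary political speech usually only has evidence from the first.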
Almost every policy proposal is, implicitly, a counterfactual conditional. "We need to implement Policy A in order to protect B" means that if Policy A were implemented, then it would have beneficial effects on B. But most people with policy opinions aren't actually in a position to implement the changes they talk about. Insofar as you construe the function of thought as to select actions in order to optimize the world with respect to some preference ordering, having passionate opinions about issues you can't affect is kind of puzzling. In a small group, an individual voice can change the outcome: if I argue that our party of five should dine at this restaurant rather than that one, then my voice may well carry the day. But people often argue about priorities for an entire country of millions of people, vast and diverse beyond any individual's comprehension! What's that about?
Then the Dean understood what had puzzled him in Roark's manner.
"You know," he said, "you would sound much more convincing if you spoke as if you cared whether I agreed with you or not."
"That's true," said Roark. "I don't care whether you agree with me or not." He said it so simply that it did not sound defensive, it sounded like the statement of a fact which he noticed, puzzled, for the first time.
"You don't care what others think—which might be understandable. But you don't care even to make them think as you do?"
"No."
"But that's ... that's monstrous."
"Is it? Probably. I couldn't say."
In this passage from Ayn Rand's The Fountainhead, fictional character Howard Roark demonstrates a very important skill that I really need to learn—that of emotional indifference to arbitrary people's opinions: not the mere immunity of "It's okay that people now disagree with the manifest rightness of my Cause, because I know the forces of Good will win in the end," but the kind of outright indifference that I feel about, let's say, the amount of precipitation in Copenhagen in March 1957. Someone disagrees with the manifest rightness of my Cause? Sure, whatever—hey, did you see the latest Questionable Content?
I say this purely for pragmatic reasons. There's nothing philosophically noble about being narrowly selfish, about devoting the full force of one's attention to questions like "What do I want to study?" or "How am I going to make money?" rather than "Why are my ideological enemies so evil, and what can be done to stop them?" So if there's no inherent reason why scholarship or business are more worthy than activism, why explicitly renounce the activist frame of mind?
I don't enjoy being provoked into despising my fellow Americans for being alien savages, when it's so much better to just ignore them for the same reason.
I don't know how to run a goddamned country,
And neither do you.