I was curious to see how various prognosticators—specifically, FiveThirtyEight and The Economist's models, and the PredictIt prediction markets—did on predicting the state-by-state (plus the District of Columbia) results of the recent U.S. presidential election.
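One simple way to score that kind of question is a Brier score per forecaster: the mean squared error between each state-level win probability and the 0/1 outcome, lower being better. Here's a minimal sketch, assuming a hypothetical state_forecasts.csv with one row per forecaster per state (plus DC); the file name, the column names, and the Brier framing are my assumptions for illustration, not necessarily what the full post does:

```python
# Hedged sketch: score each forecaster's state-level win probabilities
# against the actual outcomes with a mean Brier score (lower is better).
# The file name and column names ("forecaster", "prob_dem_win", "dem_won")
# are hypothetical.
import csv

def brier(prob_dem_win, dem_won):
    """Squared error of a forecast probability against a 0/1 outcome."""
    return (prob_dem_win - dem_won) ** 2

with open("state_forecasts.csv") as f:
    rows = list(csv.DictReader(f))

scores = {}
for row in rows:
    score = brier(float(row["prob_dem_win"]), int(row["dem_won"]))
    scores.setdefault(row["forecaster"], []).append(score)

for forecaster, state_scores in sorted(scores.items()):
    print(forecaster, round(sum(state_scores) / len(state_scores), 4))
```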
The Cynic's Null Hypothesis
Michael Arc wrote, "submit to virtuous social orders, seek to dominate non-virtuous ones if you have the ability to discern between them."
But what would you do if, if ... there weren't any virtuous social orders??
"Friends Can Change the World"; Or, Request for Social Technology: Credit-Assignment Rituals
As a human living in a human civilization, it's tempting to think that social reality mostly makes sense. Everyone allegedly knows that institutions are flawed and that our leaders are merely human. Everyone wants to think that they're sufficiently edgy and cynical, that they've seen through the official lies to the true, gritty reality.
But what if ... what if almost no one is edgy and cynical enough? Like, the only reason you think there's a true, gritty reality out there that you think you can see through to is because you're a predatory animal with a brain designed by evolution to murder other forms of life for the benefit of you, your family, and your friends.
To the extent that we have this glorious technological civilization that keeps most of us mostly safe and mostly happy most of the time, it's mostly because occasionally, one of the predatory animals happens to try out a behavior that happens to be useful, and then all of her friends copy it, and then all of the animals have the behavior.
Some conceited assholes who think they're smart also like to talk about things that they think make the last five hundred years or whatever different: things like science (a social competition that incentivizes the animals to try to mirror the process of Bayesian updating), markets (a pattern of incentives that mirrors the Bayes-structure of microeconomic theory), or democracy (a corporate governance structure that mirrors the Bayes-structure of counterfactual civil war amongst equals).
These causal processes are useful and we should continue to cooperate with them. They sort of work. But they don't work very well. We're mostly still animals organized into interlocking control systems that suppress variance.
Thus—
School Is Not About Learning
Politics Is Not About Policy
Effective Altruism Doesn't Work; Try to Master Unadulterated Effectiveness First
Ideology Makes You Stupid
Status Makes You Stupid
Institutions Don't Work
Discourse Doesn't Work
Language Doesn't Work
No One Knows Anything
No One Has Ever Known Anything
Don't Read the Comments
Never Read the Comments
∀x ∀y, x Is Not About y
X Has Never Been About Y
Enjoy Arby's
But this is crazy. Suppressing variance feels like a good idea because variance is scary (because it means very bad things could happen as well as very good things, and bad things are scarier than good things are fun) and we want to be safe. But like, the way to actually make yourself safer is by acquiring optimization power, and then spending some of the power on safety measures! And the way you acquire optimization power is by increasing variance and then rewarding the successes!
Anyway, maybe someone should be looking for social technologies that mirror the Bayes-structure of the universe sort of like how science, markets, or democracy do, but which also take into account that we're not anything remotely like agents and are instead animals that want to help our friends. ("We need game theory for monkeys and game theory for rocks.")
So, I had an idea. You know how some people say we should fund the solutions to problems with after-the-fact prizes, rather than picking a team in advance that we think might solve the problem and funding them? What if ... you did something like that, but on a much smaller scale? A personal scale.
Like, suppose you've just successfully navigated a major personal life crisis that could have gone much worse if it weren't for some of the people in your life (both thanks to direct help they provided during the crisis, and things you learned from them that made you the sort of person that could navigate the crisis successfully). These people don't and shouldn't expect a reward (that's what friends are for) ... but maybe you could reward them anyway (with a special emphasis on people who helped you in low-status ways that you didn't understand at the time) in some sort of public ritual, to make them more powerful and incentivize others to emulate them, thereby increasing the measure of algorithms that result in humans successfully navigating major personal life crises.
It might look something like this—
- If you have some spare money lying around, set aside some of it for rewarding the people you want to reward. If you don't have any spare money lying around, this ritual will be less effective! Maybe you should fix that!
- Decide how much of the money you want to use to reward each of the people you want to reward. (Note: giving away something as powerful as money carries risks of breeding dependence and resentment if such gifts come to be expected! If people know that you've been going through a crisis and anyone so much as hints that they think they deserve an award, that person is missing the point and therefore does not deserve an award.)
- Privately go to each of the people, explain all this, and give them the amount of money you decided to give them. Make it very clear that this is a special unilateral one-time award made for decision-theoretic reasons and that it's very important that they accept it in the service of your mutual coherent extrapolated volition in accordance with the Bayes-structure of the universe. Refuse to accept words of thanks (it's not about you; it's not about me; it's about credit-assignment). If they try to refuse the money, explain that you will literally burn that much money in paper currency if they don't take it. (Shredding instead of burning is also acceptable.)
- Ask if they'd like to be publicly named and praised as having received an award as part of the credit-assignment ritual. (Remember that it's quite possible and understandable and good that they might want to accept the money, but not be publicly praised by you. After all, if you're the sort of person who is considering actually doing this, you're probably kind of weird! Maybe people don't want to be associated with you!)
- To complete the ritual, publish a blog post naming the people and the awards they received. People who preferred not to be named should be credited as Anonymous Friend A, B, C, &c. Also list the amount of money you burned or shredded if anyone foolishly rejected their award in defiance of the Bayes-structure of the universe. Do not explain the nature of the crisis or how the named people helped you. (You might want to tell the story in a different post, but that's not part of the ritual, which is about credit-assignment.)
Missing Books III
Everyday Applied Evolutionary Psychology, Except Ignoring Sex Differences Because We Know Blue Tribe Is Squeamish About That Part and We Respect Your Culture, Revised Second Edition
Dreaming of Political Bayescraft
My old political philosophy: "Socially liberal, fiscally confused; I don't know how to run a goddamned country (and neither do you)."
Commentary: Pretty good, but not quite meta enough.
My new political philosophy: "Being smart is more important than being good (for humans). All ideologies are false; some are useful."
Commentary: Social design space is very large and very high-dimensional; the forces of memetic evolution are somewhat benevolent (all ideas that you've heard of have to be genuinely appealing to some feature of human psychology, or no one would have an incentive to tell you about them), but really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves! Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected! This stance is itself, technically, loyalty to an idea, but hopefully it's a sufficiently meta idea to avoid running into the standard traps while also being sufficiently object-level to have easily-discoverable decision-relevant implications and not run afoul of the principle of ultrafinite recursion ("all infinite recursions are at most three levels deep").
Type Theory
We never know what people are actually thinking; all we can do is make inferences from their behavior, including inferences about the inferences they're making.
Sometimes someone makes an expression or a comment that seems to carry an overtone of contempt; I know your type, it seems to say, and I disapprove. And there's a distinct pain in being on the receiving end of this, wanting to reply to the implication, but expecting to lack the shared context needed for the reply to begin to make sense—
"Yes, but I don't think you've adequately taken into account that I know that you know my type, that I know your type, that we can respect each other even if we are different types of creatures optimizing different things, and that I know that this is all relative to my inert, irrelevant sense of what I think you should adequately take into account, which I know that you may have no reason to care about."
Missing Refutations
It looks like the opposing all-human team is winning the exhibition game of me and my it's-not-chess engine (as White) versus everyone in the office who (unlike me) actually knows something about chess (as Black). I mean, naïvely, my team is up a bishop right now, but our king is pretty exposed, and the principal variation that generated one of our recent moves (16. Bxb4 Bf5 17. Kd1 Qxd4+ 18. Kc1 Ng3 19. Qxc7 Nxh1) looks dreadful.
Real chess aficionados (chessters? chessies?) will laugh at me, but it actually took me a while to understand why Ng3 was in that principal variation (I might even have invoked the engine again to help). The position after Ng3 looks like
[Board diagram: the position after 18... Ng3]
and—forgive me—I didn't understand why that wasn't refuted by fxg3 or hxg3; in my novice's utter blindness, I somehow failed to see the discovered attack on the white queen, the necessity of evading which allows the black knight to capture the white rook, and preparation for which was clearly the purpose of 16... Bf5 (insofar as we—anthropomorphically?—attribute purpose to a sequence of moves discovered by a minimax search algorithm which doesn't represent concepts like discovered attack anywhere).
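For concreteness, here's what the skeleton of that kind of search looks like: a minimal negamax sketch using the python-chess library for move generation (an illustrative toy of mine, not the actual it's-not-chess engine). The point is that nothing in it represents a concept like "discovered attack"; lines like 16... Bf5 followed by ...Ng3 just fall out of enumerating legal move sequences and comparing material.

```python
# Hedged sketch of a material-only negamax search; illustrative toy, not
# the engine discussed above. Requires the python-chess package.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    """Material balance from the point of view of the side to move."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    """Return (score, principal variation) for the side to move.

    Nothing here represents a concept like "discovered attack"; such
    tactics only emerge from brute enumeration of move sequences.
    """
    if board.is_checkmate():
        return -10_000, []   # the side to move has been mated
    if board.is_game_over():
        return 0, []         # stalemate or other draw
    if depth == 0:
        return material(board), []
    best_score, best_pv = -float("inf"), []
    for move in board.legal_moves:
        board.push(move)
        score, pv = negamax(board, depth - 1)
        board.pop()
        if -score > best_score:
            best_score, best_pv = -score, [move] + pv
    return best_score, best_pv

# For example: negamax(chess.Board(), 3) returns a score and a short
# principal variation from the starting position.
```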
Mirage
(just some quick notes, hopefully in the spirit of delightfully quirky symmetry-breaking)
In her little 2010 book The Mirage of a Space Between Nature and Nurture, Evelyn Fox Keller examines some of the eternal conceptual confusions surrounding the perennially popular nature/nurture question. Like, it's both, and everyone knows it's both, so why can't the discourse move on to more interesting and well-specified questions? That the oppositional form of the question isn't well-specified can be easily seen just from simple thought experiments. One such from the book: if one person has PKU, eats a high-phenylalanine diet, and has a low IQ, and another person doesn't have PKU, eats a low-phenylalanine diet, and has a normal IQ, we can't attribute the IQ difference to either diet or genetics alone; the question dissolves once you understand the causal mechanism. Keller argues that the very idea of distinguishing heredity and environment as distinct, separable, exclusive alternatives whose relative contributions can be compared is a historically recent one that we can probably blame on Francis Galton.
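A toy version of that thought experiment (the numbers and the toy function are mine, invented purely for illustration; only the interaction structure matters) makes the dissolution concrete:

```python
# Hypothetical toy model of the PKU example: IQ depends jointly on genotype
# and diet, so a difference between two people who differ in *both* can't be
# carved into a "genetic part" plus an "environmental part".

def iq(has_pku, high_phenylalanine_diet):
    # Made-up numbers: the PKU genotype only matters on a high-phenylalanine diet.
    return 70 if (has_pku and high_phenylalanine_diet) else 100

person_a = iq(has_pku=True, high_phenylalanine_diet=True)    # 70
person_b = iq(has_pku=False, high_phenylalanine_diet=False)  # 100

# Change either factor alone and the 30-point difference disappears:
assert iq(has_pku=False, high_phenylalanine_diet=True) == 100
assert iq(has_pku=True, high_phenylalanine_diet=False) == 100
```

The 30-point gap depends on both factors at once, so asking how much of it is "genes" and how much is "environment" has no answer; you just have to describe the mechanism.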
The "Bay Area" was ostensibly hosting the big game this year. They blocked off a big swath around the Embarcadero this last week to put on Super Bowl City, "a free-to-the-public fan village [...] with activities, concerts, and more." I really don't see how much sense this makes, given that the actual game was 45 miles away in Santa Clara, just as I don't think we (can I still say we if I only work in the city?) really have a football team anymore; I like to imagine someone just forgot to rename them the Santa Clara 49ers. Even you don't think Santa Clara is big enough to be a real city—and it's bigger than Green Bay—then why not San Jose, which is a lot closer? I think I would forgive it if the marketers had at least taken advantage of the golden (sic) opportunity to flaunt the single-"digit" Roman numeral L (so graceful! so succinct!), but for some dumb reason they went Arabic this year and called it Super Bowl 50. Anyway, on a whim, I toured through Super Bowl City after work on Friday. It was as boring as it was packed, and it was packed. I wasn't sure if my whimsy was worth waiting in the throng of people to get in the obvious entrance on Market Street (the metal-detection security theater really took its toll on throughput), but I happened to hear a docent shouting that there was a less-crowded entrance if you went around and took a left each on Beale and Mission, so I did that. There were attractions, I guess?—if you could call them that. There were rooms with corporate exhibits, and an enormous line to try some be-the-quarterback VR game, and loud recorded music, and a stage with live music, and an empty stage where TV broadcasts would presumably be filmed later. There was a big statue of a football made out of cut-up beer cans near one of the stands where they were selling beer for $8, which sounded really expensive to me, although admittedly I don't have much of a sense for how much beer normally costs. In summary, I didn't see the appeal of the "fan village," although I do understand what it feels like to be enthusiastic about the game itself—I really do, even if I haven't been paying much attention in recent years.
"I Have the Honor to Be Your Obedient Servant"
A friend of the blog recently told me that I'm meaner in meatspace (what some prefer to call by the bizarre misnomer "real life") than you would guess from my online persona. I'm not proud to have prompted this observation, but I didn't deny it, either. And yet—insofar as one has any reflectively-endorsed non-nice social impulses (to create incentives for good behavior, or perhaps from an ungentle although-sadistic-would-be-far-too-strong-of-a-word æsthetic that appreciates a world in which people don't always get everything they want), it does seem like the correct strategy: in meatspace, you can react to verbal and nonverbal cues in real time and try to smooth things over if you go too far, whereas in the blogosphere, it's possible to die in a harrowing thermonuclear flamewar and not even know until you check your messages the next day. We must use diplomacy where we cannot wield our weapons so precisely.
Dismal Science
There's something that feels viscerally distasteful and fundamentally morally dubious about looking for a job or a significant other. Search and comparison are for crass, commonplace, material things: we might say that this brand of soap smells nice, but is expensive, or that this car gets poor mileage, but is cheap, and while we may err in our judgment of any particular product, the general procedure must be regarded as legitimate: there's nothing problematic about going out to shop for some soap or a car and purchasing the best that happens to be available on one's budget, even if there's no sense of destiny and perfection about the match. Rather, we want to be clean, and we want to go places, and we took action to make these things come to pass.
Speculative Rules of Engagement
"Whoever displays intense negative emotion first, loses" is not in any way a law inherent to the nature of interpersonal conflict, but we can make-believe that it were profitable to believe as much. What would that look like?
Preemptive Low-Status Behavior Is Not Always a Good Idea
"... and in conclusion, please don't hit me with a mastodon bone."
"What? I wasn't planning on hitting you with a mastodon bone. Although—now that you mention it, that does sound fun!"
Engineering Selection
This whole business of being alive used to seem so much simpler and less morally ambiguous before I realized that the strong do what they can and the weak suffer what they must, that it has always been thus and could not have been otherwise. The other day I was reading Luke Muehlhauser's interview with Steve Hsu, and Hsu says:
Let me add that, in my opinion, each society has to decide for itself (e.g. through democratic process) whether it wants to legalize or forbid activities that amount to genetic engineering. Intelligent people can reasonably disagree as to whether such activity is wise.
There was once a time in my youth when I would have objected with principled transhumanist/libertarian fervor against the suggestion that the glorious potential of designer babies might be suppressed by the tyranny of the majority.
I don't have (those kinds of) principles anymore. Nor faith that freedom to enhance will inevitably turn out to be for the best. These days, my thoughts are more attuned to practical concerns. Oh, I'm sure he's just saying that because it sounds nice and deferential to contemporary political sensibilities and he doesn't want to catch any more flak than he does already. Obviously, the societies that forbid it are just going to get crushed under the boot of history.
Think about it. The arrival of Europeans in North America didn't go very well for the people who were already here—and that was just a matter of mere guns, germs, and steel (in Jared Diamond's immortal phrase). What happens to our precious concept of democratic process when someone has the option to mass-produce von Neumann-level intellects to design the next generation of superguns, ultragerms, and adamantium-unobtanium alloy?
The Future of Ideas
William Gibson famously said, "The future is already here—it's just not very evenly distributed." It's easy to imagine a science-fictional fantasy world where everything is made of diamond and plastic, and literally everyone has their own brigade of robots, spacepacks, and jetcars to do their bidding, but as Gibson points out, the real world doesn't actually work like this: there's nothing contradictory about the high technology allowing you to read this post existing in the same world where millions of others are starving, thirsty, and illiterate. The Earth is just a very big place compared to what we know how to imagine personally; the wealth and wonders that exist in some places, don't exist everywhere. As long as this is true, we should expect variance in wealth to increase, as new toys for the rich get invented faster than the basics can be provisioned for everyone; Carlos Slim can purchase extravagances that hadn't been invented in the days of Cornelius Vanderbilt, but dying of malaria is the same as it's ever been.
A similar thing could be said about knowledge and ideas. Human civilization has been rapidly accumulating knowledge, but we're not getting proportionately more capable as individuals. People typically don't have the resources or inclination to learn deeply outside of their own specialties, and many never get to master any specialty at all. There's nothing contradictory about our brightest scholars seeing more deeply into the true structure of the world beneath the world than the uninitiated would have ever conceived possible, while at the same time, the masses labor under the most primitive of superstitions. As long as this is true, we should expect variance in knowledge to increase, as the cognitive elite continues to advance the frontier of the known faster than the basics can be taught to everyone; our master biologists know more about the nature of life than their analogues in the days of Darwin and Wallace, but to the proletariat, "God did it in six days" probably still sounds like as good of an explanation as it's ever been.
Diversity Is Strength
I feel like schools, prisons, and mental hospitals are all making the same mistake: locking children, criminals, and crazy people up together just creates more childishness, criminality, and madness.
Draft of a Letter to a Former Teacher, Which I Did Not Send Because Doing So Would Be a Bad Idea
Dear [name redacted]:
So, I'm trying (mostly unsuccessfully) to stop being bitter, because I'm powerless to change anything, and so being bitter is a waste of time when I could be doing something useful instead, but I still don't understand how a good person like you can actually think our so-called educational system is actually a good idea. I can totally understand being practical and choosing to work within the system because it's all we've got; there's nothing wrong with selling out as long as you get a good price. If you think you're actually helping your students become better thinkers and writers, then that's great, and you should be praised for having more patience than me. But I don't understand how you can unambiguously say that this gargantuan soul-destroying engine of mediocrity deserves more tax money without at least displaying a little bit of uncertainty!
Goodhart's World
Someone needs to write a history of the entire world in terms of incentive systems and agents' attempts to game them. We have money to incentivize the production of useful goods and services, but we all know that there are lots of ways to make money that don't actually help anyone. Even in jobs that are actually useful, people spend a lot of their effort on trying to look like they're doing good work, rather than actually doing good work. And don't get me started about what passes for "education." (Seriously, don't.)
Much in a similar theme could be said about romance, and about economic systems in other places and times. And there's even a standpoint from which the things that we think are truly valuable for their own sake—wealth and happiness and true love, &c.—can be said to be the result of our species gaming the incentives that evolution built into us because they happened to promote inclusive genetic fitness in the ancestral environment.
The future is the same thing: superhuman artificial intelligence gaming the utility function we gave it, instead of the one we should have given it. Only there will be no one we'd recognize as a person to read or write that chapter.
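As a toy illustration of the pattern (the setup and numbers are mine, not a model of any particular institution or AI): score agents on a noisy, gameable proxy that correlates with true value, then select the highest proxy scores, and you've selected for measurement noise along with the value.

```python
# Hedged toy simulation of proxy-gaming: the top scorers on the proxy
# end up with proxy scores roughly double their true value (with equal
# signal and noise variance, E[value | proxy] is only half the proxy).
import random

random.seed(0)
options = []
for _ in range(10_000):
    true_value = random.gauss(0, 1)
    proxy = true_value + random.gauss(0, 1)  # noisy, gameable measurement
    options.append((proxy, true_value))

options.sort(reverse=True)  # "optimize" the proxy
top = options[:10]
print("mean proxy of the winners:", sum(p for p, _ in top) / len(top))
print("mean true value of the winners:", sum(v for _, v in top) / len(top))
```

The harder you select on the measurement, the bigger the gap between what you rewarded and what you wanted.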
A Political Orientation (Provisional)
Socially liberal,
Fiscally confused;
I don't know how to run a goddamned country,
And neither do you.
Trying to Buy a Lamp
Dear reader, I had wanted to tell you an anecdote about a recent incident in which I considered myself to have been outrageously mistreated, but it occurred to me that you probably would not find the story at all worthy of note. In fact, I fear you would be quite likely to think less of me for complaining in such a melodramatic fashion about something which the prevailing norms of our Society consider quite ordinary and proper. And what authority do I have to insist that it's Society that is in the wrong, and not I?
So I won't tell you. Instead, let me tell you a completely unrelated anecdote about my analogue in an alternate universe not entirely unlike our own. You see, recently, my alternate-universe analogue wanted to buy a table lamp, so he went—or let us say in a manner of speaking that I went—to a store to purchase one.
In the showroom, I found a lamp I liked, flagged down a salesman, and said to him, "I'd like to buy this lamp."
"Have you previously purchased a side table from us before?" he said.
"No," I said, somewhat puzzled by the seemingly irrelevant question.
"Well, you can't buy a lamp unless you already have a table to put it on," said the salesman in a tone of polite condescension.
"Oh, I certainly agree that it simply wouldn't do to get a lamp without having a table to put it on," I said, "but you see, I already have a table."
"So you did buy a table from us."
"No," I said.
"So you don't have a table."