"Yes, and—" Requires the Possibility of "No, Because—"

Scott Garrabrant gives a number of examples to illustrate that “Yes Requires the Possibility of No”. We can understand the principle in terms of information theory. Consider the answer to a yes-or-no question as a binary random variable. The “amount of information” associated with a random variable is quantified by the entropy, the expected value of the negative logarithm of the probability of the outcome. If we know in advance of asking that the answer to the question will always be Yes, then the entropy is −P(Yes)·log(P(Yes)) − P(No)·log(P(No)) = −1·log(1) − 0·log(0) = 0.[1] If you already knew what the answer would be, then the answer contains no information; you didn’t learn anything new by asking.
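
A quick numeric illustration in Python (the binary_entropy helper is a toy of my own for this post, using the 0·log(0) = 0 convention discussed in the footnote):

    import math

    def binary_entropy(p_yes):
        """Entropy (in bits) of a yes-or-no answer with P(Yes) = p_yes."""
        entropy = 0.0
        for p in (p_yes, 1.0 - p_yes):
            if p > 0:  # convention: 0 * log(0) = 0, so skip zero-probability outcomes
                entropy -= p * math.log2(p)
        return entropy

    print(binary_entropy(0.5))   # 1.0 bit: a maximally informative answer
    print(binary_entropy(0.99))  # ~0.08 bits: you already mostly knew
    print(binary_entropy(1.0))   # 0.0 bits: a guaranteed "Yes" tells you nothing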


In the art of improvisational theater (“improv” for short), actors perform scenes that they make up as they go along. Without a script, each actor’s choices of what to say and do amount to implied assertions about the fictional reality being portrayed, which have implications for how the other actors should behave. A choice that establishes facts or gives direction to the scene is called an offer. If an actor opens a scene by asking their partner, “Is it serious, Doc?”, that’s an offer establishing that the first actor is playing a patient awaiting diagnosis, and the second actor is playing a doctor.

A key principle of improv is often known as “Yes, and” after an exercise that involves starting replies with those words verbatim, but the principle is broader and doesn’t depend on the particular words used: actors should “accept” offers (“Yes”), and respond with their own complementary offers (“and”). The practice of “Yes, and” is important for maintaining momentum while building out the reality of the scene.

Rejecting an offer is called blocking, and is frowned upon. If one actor opens the scene with, “Surrender, Agent Stone, or I’ll shoot these hostages!”—establishing a scene in which they’re playing an armed villain being confronted by an Agent Stone—it wouldn’t do for their partner to block by replying, “That’s not my name, you don’t have a gun, and there are no hostages.” That would halt the momentum and confuse the audience. Better for the second actor to say, “Go ahead and shoot, Dr. Skull! You’ll find that my double agent on your team has stolen your bullets”—accepting the premise (“Yes”), then adding new elements to the scene (“and”, the villain’s name and the double agent).

Notice a subtlety: the Agent Stone character isn’t “Yes, and”-ing the Dr. Skull character’s demand to surrender. Rather, the second actor is “Yes, and”-ing the first actor’s worldbuilding offers (where the offer happens to involve their characters being in conflict). Novice improvisers are sometimes tempted to block to try to control the scene when they don’t like their partner’s offers, but it’s almost always a mistake. Persistently blocking your partner’s offers kills the vibe, and with it, the scene. No one wants to watch two people arguing back-and-forth about what reality is.


Proponents of collaborative truthseeking think that many discussions benefit from a more “open” or “interpretive” mode in which participants prioritize constructive contributions that build on each other’s work rather than tearing each other down.

The analogy to improv’s “Yes, and” doctrine writes itself, right down to the subtlety that collaborative truthseeking does not discourage disagreement as such—any more than the characters in an improv sketch are forbidden to be in conflict. What’s discouraged is the persistent blocking of offers, refusing to cooperate with the “scene” of discourse your partner is trying to build. Partial disagreement with polite elaboration (“I see what you’re getting at, but have you considered …”) is typically part of the offer—that we’re “playing” reasonable people having a cooperative intellectual discussion. Only wholesale negation (“That’s not a thing”) is blocking—by rejecting the offer that we’re both playing reasonable people.

Whatever you might privately think of your interlocutor’s contribution, it’s not hard to respond in a constructive manner without lying. Like a good improv actor, you can accept their contribution to the scene/discourse (“Yes”), then add your own contribution (“and”). If nothing else, you can write about how their comment reminded you of something else you’ve read, and your thoughts about that.

Reading over a discussion conducted under such norms, it’s easy to not see a problem. People are building on each other’s contributions; information is being exchanged. That’s good, right?

The problem is that while the individual comments might (or might not) make sense on their own, the harmonious social exchange of mutually building on each other’s contributions isn’t really a conversation unless the replies connect to each other in a less superficial way, one that risks blocking.

What happens when someone says something wrong or confusing or unclear? If their interlocutor prioritizes correctness and clarity, the natural behavior is to say, “No, that’s wrong, because …” or “No, I didn’t understand that”—and not only that, but to maintain that “No” until clarity is forthcoming. That’s blocking. It feels much more cooperative to let it pass in order to keep the scene going—with the result that falsehood, confusion, and unclarity accumulate as the interaction goes on.

There’s a reason improv is almost synonymous with improv comedy. Comedy thrives on absurdity: much of the thrill and joy of improv comedy is in appreciating what lengths of cleverness the actors will go to in order to maintain the energy of a scene that has long since lost any semblance of coherence or plausibility. The rules that work for improv comedy don’t even work for (non-improvised, dramatic) fiction; they certainly won’t work for philosophy.

Per Garrabrant’s principle, the only way an author could reliably expect discussion of their work to illuminate what they’re trying to communicate is if they knew they were saying something the audience already believed. If you’re thinking carefully about what the other person said, you’re often going to end up saying “No” or “I don’t understand”, not just “Yes, and”. If you’re committed to validating your interlocutor’s contribution to the scene before providing your own, you’re not really talking to each other.


  1. I’m glossing over a technical subtlety here by assuming—pretending?—that 0·log(0) = 0, when log(0) is actually undefined. But it’s the correct thing to pretend, because the linear factor p goes to zero faster than log p can go to negative infinity. Formally, by L’Hôpital’s rule: lim_{p→0⁺} p·log(p) = lim_{p→0⁺} log(p)/(1/p) = lim_{p→0⁺} (1/p)/(−1/p²) = lim_{p→0⁺} −p = 0.


The Relationship Between Social Punishment and Shared Maps

A punishment is when one agent (the punisher) imposes costs on another (the punished) in order to affect the punished’s behavior. In a Society where thieves are predictably imprisoned and lashed, people will predictably steal less than they otherwise would, for fear of being imprisoned and lashed.

Punishment is often imposed by formal institutions like police and judicial systems, but need not be. A controversial orator who finds a rock thrown through her window can be said to have been punished in the same sense: in a Society where controversial orators predictably get rocks thrown through their windows, people will predictably engage in less controversial speech, for fear of getting rocks thrown through their windows.

In the most basic forms of punishment, which we might term “physical”, the nature of the cost imposed on the punished is straightforward. No one likes being stuck in prison, or being lashed, or having a rock thrown through her window.

But subtler forms of punishment are possible. Humans are an intensely social species: we depend on friendship and trade with each other in order to survive and thrive. Withholding friendship or trade can be its own form of punishment, no less devastating than a whip or a rock. This is called “social punishment”.

Effective social punishment usually faces more complexities of implementation than physical punishment, because of the greater number of participants needed in order to have the desired deterrent effect. Throwing a rock only requires one person to have a rock; effectively depriving a punishment-target of friendship may require many potential friends to withhold their beneficence.

How is the collective effort of social punishment to be coordinated? If human Societies were hive-minds featuring an Authority that could broadcast commands to be reliably obeyed by the hive’s members, then there would be no problem. If the hive-queen wanted to socially punish Mallory, she could just issue a command, “We’re giving Mallory the silent treatment now”, and her majesty’s will would be done.

No such Authority exists. But while human Societies lack a collective will, they often have something much closer to collective beliefs: shared maps that (hopefully) reflect the territory. No one can observe enough or think quickly enough to form her own independent beliefs about everything. Most of what we think we know comes from others, who in turn learned it from others. Furthermore, one of our most decision-relevant classes of belief concerns the character and capabilities of other people with whom we might engage in friendship or trade relations.

As a consequence, social punishment is typically implemented by means of reputation: spreading beliefs about the punishment-target that merely imply that benefits should be withheld from the target, rather than by directly coordinating explicit sanctions. Social punishers don’t say, “We’re giving Mallory the silent treatment now.” (Because, who’s we?) They simply say that Mallory is stupid, dishonest, cruel, ugly, &c. These are beliefs that, if true, imply that people will do worse for themselves by helping Mallory. (If Mallory is stupid, she won’t be as capable of repaying favors. If she’s dishonest, she might lie to you. If she’s cruel … &c.) Negative-valence beliefs about Mallory double as “social punishments”, because if those beliefs appear on shared maps, the predictable consequence will be that Mallory will be deprived of friendship and trade opportunities.

We notice a critical difference between social punishments and physical punishments. Beliefs can be true or false. A rock or a jail cell is not a belief. You can’t say that the rock is false, but you can say it’s false that Mallory is stupid.

The linkage between collective beliefs and social punishment creates distortions that are important to track. People have an incentive to lie to prevent negative-valence beliefs about themselves from appearing on shared maps (even if the beliefs are true). People who have enemies whom they hate have an incentive to lie to insert negative-valence beliefs about their enemies onto the shared map (even if the beliefs are false). The stakes are high: an erroneously thrown rock only affects its target, but an erroneous map affects everyone using that map to make decisions about the world (including decisions about throwing rocks).

Intimidated by the stakes, some actors in Society who understand the similarity between social and physical punishment, but don’t understand the relationship between social punishment and shared maps, might try to take steps to limit social punishment. It would be bad, they reason, if people were trapped in a cycle of mutually retaliatory physical punishments. Nobody wins if I throw a rock through your window to retaliate for you throwing a rock through my window, &c. Better to foresee that and make sure no one throws any rocks at all, or at least not big ones. They imagine that they can apply the same reasoning to social punishments without paying any costs to the accuracy of shared maps, that we can account for social standing and status in our communication without sacrificing any truthseeking.

It’s mostly an illusion. If Alice possesses evidence that Mallory is stupid, dishonest, cruel, ugly, &c., she might want to publish that evidence in order to improve the accuracy of shared maps of Mallory’s character and capabilities. If the evidence is real and its recipients understand the filters through which it reached them, publishing the evidence is prosocial, because it helps people make higher-quality decisions regarding friendship and trade opportunities with Mallory.

But it also functions as social punishment. If Alice tries to disclaim, “Look, I’m not trying to ‘socially punish’ Mallory; I’m just providing evidence to update the part of the shared map which happens to be about Mallory’s character and capabilities”, then Bob, Carol, and Dave probably won’t find the disclaimer very convincing.

And yet—might not Alice be telling the truth? There are facts of the matter that are relevant to whether Mallory is stupid, dishonest, cruel, ugly, &c.! (Even if we’re not sure where to draw the boundary of dishonest, if Mallory said something false, and we can check that, and she knew it was false, and we can check that from her statements elsewhere, that should make people more likely to affirm the dishonest characterization.) Those words mean things! They’re not rocks—or not only rocks. Is there any way to update the shared map without the update itself being construed as “punishment”?

It’s questionable. One might imagine that by applying sufficient scrutiny to nuances of tone and word choice, Alice might succeed at “neutrally” conveying the evidence in her possession without any associated scorn or judgment.

But judgments supervene on facts and values. If lying is bad, and Mallory lied, it logically follows that Mallory did a bad thing. There’s no way to avoid that implication without denying one of the premises. Nuances of tone and wording that seem to convey an absence of judgment might only succeed at doing so by means of obfuscation: strained abuses of language whose only function is to make it less clear to the inattentive reader that the thing Mallory did was lying.

At best, Alice might hope to craft the publication of the evidence in a way that omits her own policy response. There is a real difference between merely communicating that Mallory is stupid, dishonest, cruel, ugly, &c. (with the understanding that other people will use this information to inform their policies about trade opportunities), and furthermore adding that “therefore I, Alice, am going to withhold trade opportunities from Mallory, and withhold trade opportunities from those who don’t withhold trade opportunities from her.” The additional information about Alice’s own policy response might be exposed by fiery rhetorical choices and concealed by more clinical descriptions.

Is that enough to make the clinical description not a “social punishment”? Personally, I buy it, but I don’t think Bob, Carol, or Dave do.

"Friends Can Change the World"; Or, Request for Social Technology: Credit-Assignment Rituals

As a human living in a human civilization, it's tempting to think that social reality mostly makes sense. Everyone allegedly knows that institutions are flawed and that our leaders are merely flawed humans. Everyone wants to think that they're sufficiently edgy and cynical, that they've seen through the official lies to the true, gritty reality.

But what if ... what if almost no one is edgy and cynical enough? Like, the only reason you think there's a true, gritty reality out there that you can see through to is because you're a predatory animal with a brain designed by evolution to murder other forms of life for the benefit of you, your family, and your friends.

To the extent that we have this glorious technological civilization that keeps most of us mostly safe and mostly happy most of the time, it's mostly because occasionally, one of the predatory animals happens to try out a behavior that happens to be useful, and then all of her friends copy it, and then all of the animals have the behavior.

Some conceited assholes who think they're smart also like to talk about things that they think make the last five hundred years or whatever different: things like science (a social competition that incentivizes the animals to try to mirror the process of Bayesian updating), markets (a pattern of incentives that mirrors the Bayes-structure of microeconomic theory), or democracy (a corporate governance structure that mirrors the Bayes-structure of counterfactual civil war amongst equals).

These causal processes are useful and we should continue to cooperate with them. They sort of work. But they don't work very well. We're mostly still animals organized into interlocking control systems that suppress variance.

Thus—

School Is Not About Learning
Politics Is Not About Policy
Effective Altruism Doesn't Work; Try to Master Unadulterated Effectiveness First
Ideology Makes You Stupid
Status Makes You Stupid
Institutions Don't Work
Discourse Doesn't Work
Language Doesn't Work
No One Knows Anything
No One Has Ever Known Anything
Don't Read the Comments
Never Read the Comments
∀x∀y, x Is Not About y
X Has Never Been About Y
Enjoy Arby's

But this is crazy. Suppressing variance feels like a good idea because variance is scary (because it means very bad things could happen as well as very good things, and bad things are scarier than good things are fun) and we want to be safe. But like, the way to actually make yourself safer is by acquiring optimization power, and then spending some of the power on safety measures! And the way you acquire optimization power is by increasing variance and then rewarding the successes!

Anyway, maybe someone should be looking for social technologies that mirror the Bayes-structure of the universe sort of like how science, markets, or democracy do, but which also take into account that we're not anything remotely like agents and are instead animals that want to help our friends. ("We need game theory for monkeys and game theory for rocks.")

So, I had an idea. You know how some people say we should fund the solutions to problems with after-the-fact prizes, rather than picking a team in advance that we think might solve the problem and funding them? What if ... you did something like that, but on a much smaller scale? A personal scale.

Like, suppose you've just successfully navigated a major personal life crisis that could have gone much worse if it weren't for some of the people in your life (both thanks to direct help they provided during the crisis, and things you learned from them that made you the sort of person that could navigate the crisis successfully). These people don't and shouldn't expect a reward (that's what friends are for) ... but maybe you could reward them anyway (with a special emphasis on people who helped you in low-status ways that you didn't understand at the time) in some sort of public ritual, to make them more powerful and incentivize others to emulate them, thereby increasing the measure of algorithms that result in humans successfully navigating major personal life crises.

It might look something like this—

  • If you have some spare money lying around, set aside some of it for rewarding the people you want to reward. If you don't have any spare money lying around, this ritual will be less effective! Maybe you should fix that!

  • Decide how much of the money you want to use to reward each of the people you want to reward.

(Note: giving away something as powerful as money carries risks of breeding dependence and resentment if such gifts come to be expected! If people know that you've been going through a crisis and anyone so much as hints that they think they deserve an award, that person is missing the point and therefore does not deserve an award.)

  • Privately go to each of the people, explain all this, and give them the amount of money you decided to give them. Make it very clear that this is a special unilateral one-time award made for decision-theoretic reasons and that it's very important that they accept it in the service of your mutual coherent extrapolated volition in accordance with the Bayes-structure of the universe. Refuse to accept words of thanks (it's not about you; it's not about me; it's about credit-assignment). If they try to refuse the money, explain that you will literally burn that much money in paper currency if they don't take it. (Shredding instead of burning is also acceptable.)

  • Ask if they'd like to be publicly named and praised as having received an award as part of the credit-assignment ritual. (Remember that it's quite possible and understandable and good that they might want to accept the money, but not be publicly praised by you. After all, if you're the sort of person who is considering actually doing this, you're probably kind of weird! Maybe people don't want to be associated with you!)

  • To complete the ritual, publish a blog post naming the people and the awards they received. People who preferred not to be named should be credited as Anonymous Friend A, B, C, &c. Also list the amount of money you burned or shredded if anyone foolishly rejected their award in defiance of the Bayes-structure of the universe. Do not explain the nature of the crisis or how the named people helped you. (You might want to tell the story in a different post, but that's not part of the ritual, which is about credit-assignment.)

Dreaming of Political Bayescraft

My old political philosophy: "Socially liberal, fiscally confused; I don't know how to run a goddamned country (and neither do you)."

Commentary: Pretty good, but not quite meta enough.

My new political philosophy: "Being smart is more important than being good (for humans). All ideologies are false; some are useful."

Commentary: Social design space is very large and very high-dimensional; the forces of memetic evolution are somewhat benevolent (all ideas that you've heard of have to be genuinely appealing to some feature of human psychology, or no one would have an incentive to tell you about them), but really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves! Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected! This stance is itself, technically, loyalty to an idea, but hopefully it's a sufficiently meta idea to avoid running into the standard traps while also being sufficiently object-level to have easily-discoverable decision-relevant implications and not run afoul of the principle of ultrafinite recursion ("all infinite recursions are at most three levels deep").

Type Theory

We never know what people are actually thinking; all we can do is make inferences from their behavior, including inferences about the inferences they're making.

Sometimes someone makes an expression or a comment that seems to carry an overtone of contempt; I know your type, it seems to say, and I disapprove. And there's a distinct pain in being on the receiving end of this, wanting to reply to the implication, but expecting to lack the shared context needed for the reply to begin to make sense—

"Yes, but I don't think you've adequately taken into account that I know that you know my type, that I know your type, that we can respect each other even if we are different types of creatures optimizing different things, and that I know that this is all relative to my inert, irrelevant sense of what I think you should adequately take into account, which I know that you may have no reason to care about."

Missing Refutations

It looks like the opposing all-human team is winning the exhibition game of me and my it's-not-chess engine (as White) versus everyone in the office who (unlike me) actually knows something about chess (as Black). I mean, naïvely, my team is up a bishop right now, but our king is pretty exposed, and the principal variation that generated one of our recent moves (16. Bxb4 Bf5 17. Kd1 Qxd4+ 18. Kc1 Ng3 19. Qxc7 Nxh1) looks dreadful.

Real chess aficionados (chessters? chessies?) will laugh at me, but it actually took me a while to understand why Ng3 was in that principal variation (I might even have invoked the engine again to help). The position after Ng3 looks like

    a b c d e f g h
 8 ♜       ♜   ♚   
 7 ♟ ♟ ♟     ♟ ♟ ♟ 
 6                 
 5           ♝     
 4   ♗   ♛         
 3 ♙           ♞   
 2   ♙ ♕     ♙ ♙ ♙ 
 1 ♖ ♘ ♔     ♗   ♖ 

and—forgive me—I didn't understand why that wasn't refuted by fxg3 or hxg3; in my novice's utter blindness, I somehow failed to see the discovered attack on the white queen, the necessity of evading which allows the black knight to capture the white rook, and preparation for which was clearly the purpose of 16...Bf5 (insofar as we—anthropomorphically?—attribute purpose to a sequence of moves discovered by a minimax search algorithm which doesn't represent concepts like discovered attack anywhere).
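
For the curious, the kind of "thinking" that produces a principal variation in the first place is conceptually something like this toy negamax sketch (purely illustrative; the position interface and names here are invented for exposition, not my engine's actual code):

    def negamax(position, depth):
        """Return (score, principal_variation) for a hypothetical `position`
        object exposing .legal_moves(), .make(move), and .evaluate()
        (a score from the side to move's perspective)."""
        if depth == 0 or not position.legal_moves():
            return position.evaluate(), []
        best_score, best_line = float("-inf"), []
        for move in position.legal_moves():
            score, line = negamax(position.make(move), depth - 1)
            score = -score  # a position that's good for the opponent is bad for us
            if score > best_score:
                best_score, best_line = score, [move] + line
        return best_score, best_line

Notice that nothing in the search represents "discovered attack" or any other tactical concept; lines like 18. Kc1 Ng3 just fall out of enumerating legal moves and scoring the leaves.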


Mirage

(just some quick notes, hopefully in the spirit of delightfully quirky symmetry-breaking)

In her little 2010 book The Mirage of a Space Between Nature and Nurture, Evelyn Fox Keller examines some of the eternal conceptual confusions surrounding the perennially popular nature/nurture question. Like, it's both, and everyone knows it's both, so why can't the discourse move on to more interesting and well-specified questions? That the oppositional form of the question isn't well-specified can be easily seen just from simple thought experiments. One such from the book: if one person has PKU, a high-phenylalanine diet, and a low IQ, and another person doesn't have PKU, eats a low-phenylalanine diet, and has a normal IQ, we can't attribute the IQ difference to either diet or genetics alone; the question dissolves once you understand the causal mechanism. Keller argues that the very idea of distinguishing heredity and environment as distinct, separable, exclusive alternatives whose relative contributions can be compared is a historically recent one that we can probably blame on Francis Galton.
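
To make the dissolution concrete, here's the PKU thought experiment as a toy lookup table in Python (the IQ numbers are invented for illustration, not taken from Keller's book):

    # Hypothetical outcomes for each (genotype, diet) combination:
    iq = {
        ("PKU", "high-phenylalanine"): 70,   # untreated PKU: impaired
        ("PKU", "low-phenylalanine"): 100,   # treated PKU: unimpaired
        ("no PKU", "high-phenylalanine"): 100,
        ("no PKU", "low-phenylalanine"): 100,
    }
    # The comparison in the thought experiment varies both factors at once:
    gap = iq[("no PKU", "low-phenylalanine")] - iq[("PKU", "high-phenylalanine")]
    print(gap)  # 30 points, attributable to neither genes nor diet alone:
                # changing either factor by itself erases the difference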

The "Bay Area" was ostensibly hosting the big game this year. They blocked off a big swath around the Embarcadero this last week to put on Super Bowl City, "a free-to-the-public fan village [...] with activities, concerts, and more." I really don't see how much sense this makes, given that the actual game was 45 miles away in Santa Clara, just as I don't think we (can I still say we if I only work in the city?) really have a football team anymore; I like to imagine someone just forgot to rename them the Santa Clara 49ers. Even you don't think Santa Clara is big enough to be a real city—and it's bigger than Green Bay—then why not San Jose, which is a lot closer? I think I would forgive it if the marketers had at least taken advantage of the golden (sic) opportunity to flaunt the single-"digit" Roman numeral L (so graceful! so succinct!), but for some dumb reason they went Arabic this year and called it Super Bowl 50. Anyway, on a whim, I toured through Super Bowl City after work on Friday. It was as boring as it was packed, and it was packed. I wasn't sure if my whimsy was worth waiting in the throng of people to get in the obvious entrance on Market Street (the metal-detection security theater really took its toll on throughput), but I happened to hear a docent shouting that there was a less-crowded entrance if you went around and took a left each on Beale and Mission, so I did that. There were attractions, I guess?—if you could call them that. There were rooms with corporate exhibits, and an enormous line to try some be-the-quarterback VR game, and loud recorded music, and a stage with live music, and an empty stage where TV broadcasts would presumably be filmed later. There was a big statue of a football made out of cut-up beer cans near one of the stands where they were selling beer for $8, which sounded really expensive to me, although admittedly I don't have much of a sense for how much beer normally costs. In summary, I didn't see the appeal of the "fan village," although I do understand what it feels like to be enthusiastic about the game itself—I really do, even if I haven't been paying much attention in recent years.


"I Have the Honor to Be Your Obedient Servant"

A friend of the blog recently told me that I'm meaner in meatspace (what some prefer to call by the bizarre misnomer "real life") than you would guess from my online persona. I'm not proud to have prompted this observation, but I didn't deny it, either. And yet—insofar as one has any reflectively-endorsed non-nice social impulses (to create incentives for good behavior, or perhaps from an ungentle although-sadistic-would-be-far-too-strong-of-a-word æsthetic that appreciates a world in which people don't always get everything they want), it does seem like the correct strategy: in meatspace, you can react to verbal and nonverbal cues in real time and try to smooth things over if you go too far, whereas in the blogosphere, it's possible to die in a harrowing thermonuclear flamewar and not even know until you check your messages the next day. We must use diplomacy where we cannot wield our weapons so precisely.

Dismal Science

There's something that feels viscerally distasteful and fundamentally morally dubious about looking for a job or a significant other. Search and comparison are for crass, commonplace, material things: we might say that this brand of soap smells nice, but is expensive, or that this car gets poor mileage, but is cheap, and while we may err in our judgment of any particular product, the general procedure must be regarded as legitimate: there's nothing problematic about going out to shop for some soap or a car and purchasing the best that happens to be available on one's budget, even if there's no sense of destiny and perfection about the match. Rather, we want to be clean, and we want to go places, and we took action to make these things come to pass.


Engineering Selection

This whole business of being alive used to seem so much simpler and less morally ambiguous before I realized that the strong do what they can and the weak suffer what they must, that it has always been thus and could not have been otherwise. The other day I was reading Luke Muehlhauser's interview with Steve Hsu, and Hsu says:

Let me add that, in my opinion, each society has to decide for itself (e.g. through democratic process) whether it wants to legalize or forbid activities that amount to genetic engineering. Intelligent people can reasonably disagree as to whether such activity is wise.

There was once a time in my youth when I would have objected with principled transhumanist/libertarian fervor against the suggestion that the glorious potential of designer babies might be suppressed by the tyranny of the majority.

I don't have (those kinds of) principles anymore. Nor faith that freedom to enhance will inevitably turn out to be for the best. These days, my thoughts are more attuned to practical concerns. Oh, I'm sure he's just saying that because it sounds nice and deferential to contemporary political sensibilities and he doesn't want to catch any more flak than he does already. Obviously, the societies that forbid it are just going to get crushed under the boot of history.

Think about it. The arrival of Europeans in North America didn't go very well for the people who were already here—and that was just a matter of mere guns, germs, and steel (in Jared Diamond's immortal phrase). What happens to our precious concept of democratic process when someone has the option to mass-produce von Neumann-level intellects to design the next generation of superguns, ultragerms, and adamantium-unobtanium alloy?

The Future of Ideas

William Gibson famously said, "The future is already here—it's just not very evenly distributed." It's easy to imagine a science-fictional fantasy world where everything is made of diamond and plastic, and literally everyone has their own brigade of robots, spacepacks, and jetcars to do their bidding, but as Gibson points out, the real world doesn't actually work like this: there's nothing contradictory about the high technology allowing you to read this post existing in the same world where millions of others are starving, thirsty, and illiterate. The Earth is just a very big place compared to what we know how to imagine personally; the wealth and wonders that exist in some places, don't exist everywhere. As long as this is true, we should expect variance in wealth to increase, as new toys for the rich get invented faster than the basics can be provisioned for everyone; Carlos Slim can purchase extravagances that hadn't been invented in the days of Cornelius Vanderbilt, but dying of malaria is the same as it's ever been.

A similar thing could be said about knowledge and ideas. Human civilization has been rapidly accumulating knowledge, but we're not getting proportionately more capable as individuals. People typically don't have the resources or inclination to learn deeply outside of their own specialties, and many never get to master any specialty at all. There's nothing contradictory about our brightest scholars seeing more deeply into the true structure of the world beneath the world than the uninitiated would have ever conceived possible, while at the same time, the masses labor under the most primitive of superstitions. As long as this is true, we should expect variance in knowledge to increase, as the cognitive elite continues to advance the frontier of the known faster than the basics can be taught to everyone; our master biologists know more about the nature of life than their analogues in the days of Darwin and Wallace, but to the proletariat, "God did it in six days" probably still sounds like as good of an explanation as it's ever been.

Draft of a Letter to a Former Teacher, Which I Did Not Send Because Doing So Would Be a Bad Idea

Dear [name redacted]:

So, I'm trying (mostly unsuccessfully) to stop being bitter, because I'm powerless to change anything, and so being bitter is a waste of time when I could be doing something useful instead, but I still don't understand how a good person like you can actually think our so-called educational system is actually a good idea. I can totally understand being practical and choosing to work within the system because it's all we've got; there's nothing wrong with selling out as long as you get a good price. If you think you're actually helping your students become better thinkers and writers, then that's great, and you should be praised for having more patience than me. But I don't understand how you can unambiguously say that this gargantuan soul-destroying engine of mediocrity deserves more tax money without at least displaying a little bit of uncertainty!


Goodhart's World

Someone needs to write a history of the entire world in terms of incentive systems and agents' attempts to game them. We have money to incentivize the production of useful goods and services, but we all know that there are lots of ways to make money that don't actually help anyone. Even in jobs that are actually useful, people spend a lot of their effort on trying to look like they're doing good work, rather than actually doing good work. And don't get me started about what passes for "education." (Seriously, don't.)

Much in a similar vein could be said about romance, and about economic systems in other places and times. And there's even a standpoint from which the things that we think are truly valuable for their own sake—wealth and happiness and true love, &c.—can be said to be the result of our species gaming the incentives that evolution built into us because they happened to promote inclusive genetic fitness in the ancestral environment.

The future is the same thing: superhuman artificial intelligence gaming the utility function we gave it, instead of the one we should have given it. Only there will be no one we'd recognize as a person to read or write that chapter.