Quotations I

"As far as anyone knows, there's never been an animal population that was stable in the absence of predation, famine, or disease."

"Don't get discouraged," Carla said, reaching over and putting a hand on his shoulder. "That's just the history of life for the past few eons. It's not as if it's a law of physics."

The Eternal Flame by Greg Egan

Quicksort in FIM++

Dear reader, I have got to tell you, fandom is intense. One day last October Equestria Daily (internet clearinghouse for fans of the animated series My Little Pony: Friendship Is Magic) posts a joke proposal for a programming language (FIM++) based on the show, and within the week there's a working interpreter for it. What does it mean to model a programming language after a cartoon, you ask? Well, in the show, episodes typically end with our heroine Twilight Sparkle (or after Season Two, Episode Three "Lesson Zero", one of her friends) writing a letter about what she's learned about the magic of friendship to her mentor (and God-Empress of the sun) Princess Celestia. So, then, why not have an esoteric programming language where the source code reads like a letter to Princess Celestia? Adorable, right?

So, this gift having been provided to us courtesy of Karol S. and the brony community, let's do something with it! More specifically, how about we implement quicksort?—that is a classic. What's quicksort? Well, we want to sort a list, right? So—bear with me—we define this partitioning procedure that, given indices into an array, partitions the subarray between those indices into a subsubarray of elements less-than-or-equal-to a given element dubbed the pivot, then the pivot itself, then a subsubarray of elements greater than the pivot. How do we do that? Well, let's designate the last element in our subarray as the pivot. Then we're going to scan through all the other elements, and whenever one of them is less-than-or-equal-to the pivot, we swap it into our first subsubarray and increment a variable keeping track of where the first subsubarray ends. Then, we swap the pivot into place and return its index. In Ruby—
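
—something like the following minimal sketch (my own rendering of the description above, the classic Lomuto partition scheme with the last element as pivot; the identifiers are illustrative):

```ruby
# Partition array[first..last] in place: elements less-than-or-equal-to the
# pivot (initially the last element) get swapped to the front, with `boundary`
# tracking where that first subsubarray ends.
def partition(array, first, last)
  pivot = array[last]
  boundary = first
  (first...last).each do |i|
    if array[i] <= pivot
      array[i], array[boundary] = array[boundary], array[i]
      boundary += 1
    end
  end
  # Swap the pivot into place and return its index.
  array[last], array[boundary] = array[boundary], array[last]
  boundary
end

# Quicksort itself: partition, then recurse on the subarrays on either side
# of the pivot.
def quicksort(array, first = 0, last = array.length - 1)
  if first < last
    pivot_index = partition(array, first, last)
    quicksort(array, first, pivot_index - 1)
    quicksort(array, pivot_index + 1, last)
  end
  array
end

quicksort([3, 1, 4, 1, 5, 9, 2, 6])  # => [1, 1, 2, 3, 4, 5, 6, 9]
```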

Compensation

"Maybe there should be an effort to cryopreserve specimens of endangered species. 'Hey, sorry we killed your entire species, but when we get more computing power later, we'll be sure to give you lots of happy runtime as compensation.'"

Guns

"Do you know, I've decided I like guns. Of course it would be preferable to wave a magic wand and have all sentient life live in peace and harmony in paradise forever. But if Reality puts you in a situation where you have to kill, at least we have tools to do it quickly: a well-aimed bang and there isn't a creature there to suffer for very long. That's actually a huge improvement over the state of nature, where animals kill with nothing but teeth and claws."

The Demandingness Objection

"Well, I'm not giving up dairy, but I can probably give up meat, and milk is at the very bottom of Brian's table of suffering per kilogram demanded, so I'd be contributing to much less evil than I was before. That's good, right?

"For all the unimaginably terrible things our species do to each other and to other creatures, we're not—we're probably not any worse than the rest of nature. Gazelles suffer terribly as lions eat them alive, but we can't intervene because then the lions would starve, and the gazelles would have a population explosion and starve, too. We have this glorious idea that people need to consent before sex, but male ducks just rape the females, and there's no one to stop it—nothing else besides humans around capable of formulating the proposition, as a proposition, that the torment and horror in the world is wrong and should stop. Animals have been eating each other for hundreds of millions of years; we may be murderous, predatory apes, but we're murderous, predatory apes with Reason—well, sort of—and a care/harm moral foundation that lets some of us, with proper training, to at least wish to be something better.

"I don't actually know much history or biology, but I know enough to want it to not be real, to not have happened that way. But it couldn't have been otherwise. In the absence of an ontologically fundamental creator God, Darwinian evolution is the only way to get purpose from nowhere, design without a designer. My wish for things to have been otherwise ... probably isn't even coherent; any wish for the nature of reality to have been different, can only be made from within reality.

Revisionist History I

"It is my considered opinion that Emily Dickinson was a time-traveling cryonicist."

"That is an opinion I have not previously heard advanced."

"C'mon! 'Because I could not stop for Death, / He kindly stopped for me; / The carriage held but just ourselves / And Immortality'? It's obvious!"

Huffman

Dear reader, you know what's way more fun than feeling sad about the nature of the cosmos? Data compression, that's what! Suppose you want to send a message to your friends in a nearby alternate universe, but interuniversal communication bandwidth is very expensive (different universes can't physically interact, so we and our alternate-universe analogues can only communicate by mutually inferring what the other party must be saying, which takes monstrous amounts of computing power and is not cheap), so you need to make your message as brief as possible. Note that 'brief' doesn't just have to do with how long your message is in natural language; it also has to do with how that message is represented over the transuniversal communication channel: indeed, the more efficient the encoding, the more you can afford to say on a fixed budget.

The classic ASCII encoding scheme uses seven bits to represent each character. (Seven?—you ask perplexedly; surely you mean eight? Apparently it was seven in the original version.) Can we do better? Well ... ASCII has a lot of stuff that arguably you don't need that badly. Really, upper and lower case letters? Ampersands, asterisks, backslashes? And don't get me started about those unprintable control characters! If we restrict our message to just the uncased alphabet A through Z plus space and a few punctuation marks, then we can encode our message using only a 32 (= 2^5) character set, at five bits per character.
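
For concreteness, here's one way that five-bit scheme could look (a minimal sketch; the exact choice of punctuation marks is my own assumption, since the post doesn't pin them down):

```ruby
# Twenty-six letters plus space and five punctuation marks make exactly
# 32 = 2**5 symbols, so a fixed five bits per character suffices.
ALPHABET = ("A".."Z").to_a + [" ", ".", ",", "?", "!", "'"]
CODES = ALPHABET.each_with_index.map { |char, i| [char, format("%05b", i)] }.to_h

def encode(message)
  message.upcase.chars.map { |char| CODES.fetch(char) }.join
end

encode("HI.")  # => "001110100011011" ("00111" + "01000" + "11011")
```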

Can we do better? Seemingly not—2^4 = 16 isn't a big enough character set to cover the alphabet. Unless ...
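
Unless, as the post's title gives away, we let the code length vary by character: Huffman coding assigns short codes to frequent characters and long codes to rare ones, so the average length can drop below five bits. A minimal sketch (my own, with an illustrative message, not the original post's code):

```ruby
# Huffman coding: repeatedly merge the two least-frequent subtrees, so that
# frequent characters end up near the root (short codes) and rare characters
# end up deep (long codes). A leaf is a character; an internal node is a
# two-element array [left, right].
def huffman_codes(frequencies)
  queue = frequencies.map { |char, count| [count, char] }
  until queue.length == 1
    queue.sort_by! { |count, _| count }
    count_a, tree_a = queue.shift
    count_b, tree_b = queue.shift
    queue.push([count_a + count_b, [tree_a, tree_b]])
  end
  codes = {}
  assign = lambda do |tree, prefix|
    if tree.is_a?(Array)
      assign.call(tree[0], prefix + "0")
      assign.call(tree[1], prefix + "1")
    else
      codes[tree] = prefix
    end
  end
  assign.call(queue[0][1], "")
  codes
end

message = "TELEPATHY IS EXPENSIVE"
codes = huffman_codes(message.chars.tally)
encoded = message.chars.map { |char| codes[char] }.join
# encoded.length / message.length comes out under five bits per character,
# because frequent characters like "E" get the shortest codes.
```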

Remembering

"I remember feeling like a person, and feeling like people were ontologically distinct from animals, and I don't know how it's possible to pick up the pieces after that illusion has gone.

"I remember caring about parochial political concerns. I cared about gender equality, and educational freedom. And, and, I cared very badly about being respected for being intelligent. But now that I see that the latter was just a standard male primate status-seeking drive gone haywire—or not gone haywire, but functioning normally—and that my less-obviously-selfish concerns were driven by idiosyncratic features of my own psychology that few others have any reason to care about—none of it seems as compelling anymore.

"Then what is compelling? Well, I'm terrified of the pain of being physically hurt, so if I don't know what's real and I don't know what's right, I can always fall back on 'pain and suffering are bad.'

"But there has to be more to morality than that. I complained about how people in institutional contexts optimize for not-being-personally-blamed and no one is actually trying to do anything. But of course passive helplessness is the result when you don't have any goals except not-being-hurt.

"I want to be Innocent and Safe with Probability One, but Probability One is an absurdity that can't exist. In a sufficiently large universe, random fluctuations in maximum entropy heat death form a Boltzmann brain Judeo-Christian God who will punish you for masturbating. But somehow I'm not worried about that.

"But I shouldn't be thinking about any of this. I have my own life to tend to, and it looks great; the rest of space and time will have to take care of itself. I seem to have memories of being in the save/destroy/take over the world business, but now it seems more convenient to be agnostic about whether any of that actually happened."

Lyrics to the Song About Matt Reeves

Dead kid gets a bench
Dead kid gets a memorial bench
So now we all know his name
Though we don't know who he is

Class of nineteen ninety-two
Though he died in 'ninety-one
Was he a better friend than you?
And what'd he do for fun?
What were his opinions on the issues of the day?
And what exactly took his breath away?

Now he's still and in the grave
And since the dead seem all the same
No one really cares to wonder what he was
Forgotten as we're staring at his name

Dead kid gets a bench
Dead kid gets a memorial bench
So now we all know his name
Though we don't care who he is

Dead kid gets a bench
And the inscription just screams "Rust this"
No inscription can do justice
Though we don't know who he is
And we don't care who he is

Retirement

"Rational agents should never be made worse off by more information—well, almost never. So if I can no longer contemplate the big picture without life seeming like a bad thing—the fewer needs you have, the fewer ways in which you can be hurt; if you don't exist, you can't be hurt—then maybe I could just—not contemplate it? If my will to live is something that can be destroyed by the truth, then maybe P. C. Hodgell was wrong? This needn't entail self-delusion: distraction is quite sufficient. There are plenty of things to do that won't remind me of the vastness of suffering in the multiverse.

"Daily life, exercise, practical programming skills, finding a job—pure math and compsci if I need something intellectual. But no philosophy, history, current events, futurism, social science, biology, or game theory. Not much fiction, because stories are about people's pain. I just don't want to know anymore."

Relevance

"Utilitarianism is slowly driving me mad."

"Really?"

"Okay, the part of me that talks wants to self-report that utilitarianism is slowly driving me mad, but what's actually happening is probably better described at a lower level of organization.

"I don't know how to simultaneously love life and actually believe in evolution. People mostly like being alive, and there are all sorts of wonderful things like friendship and love and pleasure and beauty—but those things only exist at the expense of enough pain and suffering and death to carve love into the genome from scratch. I don't—I'm not sure it was worth it.

"But my thoughts are better constrained by decision-theoretic relevance: since I can't make decisions about the past, asking whether it was worth it is a type error, a confusion. My life is going fine right now: I'm young and healthy and smart and rich. The local future looks great. And the deep future—doesn't need us. I am content."

Relativity

"Empathy hurts.

"I'm grateful for being fantastically, unimaginably rich by world-historical standards—and I'm terrified of it being taken away. I feel bad for all the creatures in the past—and future?—who are stuck in a miserable Malthusian equilibrium.

"I simultaneously want to extend my circle of concern out to all sentient life, while personally feeling fear and revulsion towards anything slightly different from what I'm used to.

"Anna keeps telling me I have a skewed perspective on what constitutes a life worth living. I'm inclined to think that animals and poor people have a wretched not-worth-living existence, but perhaps they don't feel so sorry for themselves?—for the same reason that hypothetical transhumans might think my life has been wretched and not worth living, even while I think it's been pretty good on balance.

"But I'm haunted. After my recent ordeal in the psych ward, the part of me that talks described it as 'hellish.' But I was physically safe the entire time. If something so gentle as losing one night of sleep and being taken away from my usual environment was enough to get me to use the h-word, then what about all the actual suffering in the world? What hope is there for transhumanism, if the slightest perturbation sends us spiraling off into madness?

"The other week I was reading Julian Simon's book on overcoming depression; he wrote that depression arises from negative self-comparisons: comparing your current state to some hypothetical more positive state. But personal identity can't actually exist; time can't actually exist the way we think it does. If pain and suffering are bad when they're implemented in my skull, then they have to be bad when implemented elsewhere.

"Anna said that evolutionarily speaking, bad experiences are more intense than good ones because you can lose all your fitness in a short time period. But if 'the brain can't multiply' is a bias—if two bad things are twice as bad as one, no matter where they are in space and time, even if no one is capable of thinking that way—then so is 'the brain can't integrate': long periods of feeling pretty okay count for something, too.

"I'm not a negative utilitarian; I'm a preference utilitarian. I'm not a preference utilitarian; I'm a talking monkey with delusions of grandeur."

Dimensionality

"So, an engineer and a mathematician are leaving a lecture. The engineer says, 'I just don't understand how you can visualize objects in seven-dimensional space.' The mathematician says, 'Oh, that's easy. You just visualize the n-dimensional case, and then set n equal to seven.'"

"I've never liked that joke. The punchline is intended to be absurd, but it's not: that's actually how you do it."

"Really?"

"Okay, fine. You visualize the three-dimensional case, and then set three equal to seven."

I Don't Understand Time

Our subjective experience would have it that time "moves forward": the past is no longer, and the future is indeterminate and "hasn't happened yet." But it can't actually work that way: special relativity tells us that there's no absolute space of simultaneity; given two spacelike separated events, whether one happened "before" or "after" the other depends on where you are and how fast you're going. This leads us to a "block universe" view: our 3+1 dimensional universe, past, present, and future, simply exists, and the subjective arrow of time somehow arises from our perspective embedded within it.

Without knowing much in the way of physics or cognitive science myself, I can only wonder if there aren't still more confusions to be dissolved, intuitions to be unlearned in the service of a more accurate understanding. We know things about the past from our memories and by observing documents; we might then say that memories and documents are forms of probabilistic evidence about another point in spacetime. But predictions about the future are also a form of probabilistic evidence about another point in spacetime. There's a sort of symmetry there, isn't there? Could we perhaps imagine that minds constructed differently from our own wouldn't perceive the same kind of arrow of time that we do?

Second-Order Rationality for the Chronically Anxious

In your conscious verbal thoughts, take it as an axiom that "I am Safe and Innocent with Probability One," not because that's actually true, but because the Maslow Physiological/Safety levels require it. Of course, actually assigning Probability One would be a very dangerous thing to do, because it means never changing your mind, ever: P(H|E) = P(E|H)P(H)/(P(E|H)P(H) + P(E|¬H)P(¬H)), but if P(H) is unity, then P(H|E) = P(E|H)(1)/(P(E|H)(1) + P(E|¬H)(0)) = P(E|H)/P(E|H) = 1. If you were really Safe and Innocent with Probability One, there would be no harm in dropping an anvil on yourself or someone else's head. So meanwhile, have other parts of your brain secretly, nonverbally select actions to secure your innocence and safety using some other procedure.
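
To see that fixed point concretely, here's a small sketch of the update rule above (the function name and the particular numbers are illustrative):

```ruby
# Bayes' theorem: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|not-H)P(not-H)).
def posterior(prior, likelihood_if_h, likelihood_if_not_h)
  joint = likelihood_if_h * prior
  joint / (joint + likelihood_if_not_h * (1 - prior))
end

posterior(0.9, 0.1, 0.5)  # => ~0.643; counterevidence moves a prior of 0.9 down
posterior(1.0, 0.1, 0.5)  # => 1.0; Probability One can never be updated, ever
```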

The Horror of Naturalism

There's this deeply uncomfortable tension between being an animal physiologically incapable of caring about anything other than what happens to me in the near future, and the knowledge of the terrifying symmetry that cannot be unseen: that my own suffering can't literally be more important, just because it's mine. You do some philosophy and decide that your sphere of moral concern should properly extend to all sentient life—whatever sentient turns out to mean—but life is built to survive at the expense of other life.

I want to say, "Why can't everyone just get along and be nice?"—but those are just English words that only make sense to other humans from my native culture, who share the cognitive machinery that generated them. The real world is made out of physics and game theory; my entire concept of "getting along and being nice" is the extremely specific, contingent result of the pattern of cooperation and conflict in my causal past: the billions of corpses on the way to Homo sapiens, the thousands of years of culture on the way to the early twenty-first century United States, the nonshared environmental noise on the way to me. Even if another animal would agree that pleasure is better than pain and peace is better than war, the real world has implementation details that we won't agree on, and the implementation details have to be settled somehow.

I console myself with the concept of decision-theoretic irrelevance: insofar as we construe the function of thought as to select actions, being upset about things that you can't affect is a waste of cognition. It doesn't help anyone for me to be upset about all the suffering in the world when I don't know how to alleviate it. Even in the face of moral and ontological uncertainty, there are still plenty of things-worth-doing. I will play positive-sum games, acquire skills, acquire resources, and use the resources to protect some of the things I care about, making the world slightly less terrible with me than without me. And if I'm left with the lingering intuition that there was supposed to be something else, some grand ideal more important than friendship and Pareto improvements ... I don't remember it anymore.

Continuum Utilitarianism

You hear people talk about positive (maximize pleasure) versus negative (minimize pain) utilitarianism, or average versus total utilitarianism, none of which seem very satisfactory. For example, average utilitarianism taken literally would suggest killing everyone but the happiest person, and total utilitarianism implies what Derek Parfit called the repugnant conclusion: that for any possible world with lots of happy people, the total utilitarian must prefer another possible world with many more people whose lives are just barely worth living.

But really, it shouldn't be that surprising that there's no simple, intuitively satisfying population ethics, because any actual preference ordering over possible worlds is going to have to make tradeoffs: how much pleasure and how much pain distributed across how many people's lives in what manner, what counts as a "person," &c.