Vast Expanses of Imperfection

Hard Truths from Soft Cats opines that

Your flaws don't make you beautiful or unique. They make you flawed.

While I agree re beauty, technically, your flaws actually do make you unique: the number of ways in which one can be flawed is vastly larger than the number of ways in which one can be perfect; the probability that someone else would turn out to be damaged in exactly the same way you are is negligible.

It's just that uniqueness is overrated.

Nothing Good in Life Scales

The other day, while I was rehearsing my arguments about how currently-existing social institutions are obviously insane, it became more salient to me that there's also no clear way to fix anything on a large scale. My perspective on How to Do Things Better is the idiosyncratic result of five years of my thinking; even if my vision is in the 99th percentile of Arbitrary People's Idiosyncratic Visions of How to Do Things Better (and everyone thinks that about herself, so don't take my word for it), it's not very transferable.

"Life Is Worth Protecting Now"

What makes a true story inspirational? I think people usually use that word to describe happy stories, stories that make us think that the world is a better place than we previously thought. But sometimes I want to use it to describe sad stories that remind us that the world is far worse than just the parts of it we're used to seeing firsthand, stories about innocent people being hurt by arbitrary causes. It's inspirational in the sense of a call to action, a reminder that there's still important work to be done in the world: I can't solve this particular problem, but there's a reference class of people containing me (reasonably intelligent, reasonably ambitious people, striving to become more effective) who can help fix a reference class of problems including this one—and that is a sacred responsibility that must not be betrayed. Or however you translate folderol like "sacred responsibility" and "must not be betrayed" into something more basic (Bayes-ic?).

Missing Words I

There are a lot of really important concepts that aren't easy to talk about, because we don't have standard words for them.

Like, there needs to be a word designating the skill or quality of possessing independent judgement—the ability to make decisions without getting distracted worrying about how to explain yourself to people who won't understand. Part of me wants to just call it sociopathy, but that's clearly not the right word.

The problem is endemic. Friend of the blog Mike Blume once lamented that we don't have a gender-neutral equivalent of gentlemanly. And we don't have an atheist equivalent of doing God's work, either.

Draft of a Letter to a Former Teacher, Which I Did Not Send Because Doing So Would Be a Bad Idea

Dear [name redacted]:

So, I'm trying (mostly unsuccessfully) to stop being bitter, because I'm powerless to change anything, and being bitter is a waste of time when I could be doing something useful instead, but I still don't understand how a good person like you can actually think our so-called educational system is a good idea. I can totally understand being practical and choosing to work within the system because it's all we've got; there's nothing wrong with selling out as long as you get a good price. If you think you're actually helping your students become better thinkers and writers, then that's great, and you should be praised for having more patience than I do. But I don't understand how you can unambiguously say that this gargantuan soul-destroying engine of mediocrity deserves more tax money without at least displaying a little bit of uncertainty!


Eigencritters

Say we have a linear transformation A and some nonzero vector v, and suppose that Av = λv for some scalar λ. This is a very special situation; we say that λ is an eigenvalue of A corresponding to the eigenvector v.

How can we find eigenvalues? Here's one criterion. If Av = λv for some unknown λ, we at least know that Av − λv equals the zero vector, which implies that the linear transformation (A − λI) maps v to zero. But v is nonzero, so (A − λI) has a nontrivial kernel, which is to say that it can't be invertible, and this happens exactly when its determinant is zero: the determinant measures how a linear transformation distorts (signed) areas (volumes, 4-hypervolumes, &c.), so a zero determinant means you've lost a dimension; the space has been smashed infinitely thin. But det(A − λI) is a polynomial in λ, and so the roots of that polynomial are exactly the eigenvalues of A.
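Here's that criterion checked numerically (a sketch assuming NumPy; the 2×2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly(A) gives the coefficients of the characteristic polynomial
# det(lambda*I - A), which has the same roots as det(A - lambda*I).
char_poly = np.poly(A)

# The roots of that polynomial should be exactly the eigenvalues
# that NumPy computes directly.
roots = np.sort(np.roots(char_poly))
direct = np.sort(np.linalg.eigvals(A))
assert np.allclose(roots, direct)  # both give eigenvalues 1 and 3
```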

Speaking of Addiction

Speaking of addiction, I suspect that relinquishing ideologically-induced moral outrage is actually harder than getting over many chemical dependencies (although I don't have any experience with the latter). At least with a drug, it's simple enough to draw a bright line around actions you're not supposed to do anymore; you can try pouring the contents of the liquor cabinet down the drain, or signing a commitment contract to not buy or borrow any more cigarettes.

But when one of your most strongly-held beliefs (strongly-held in the sense of emotional relevance, not actual probability; I'm very confident in the monotone sequence theorem, but the truth of its negation wouldn't be a blow to who I am) turns out to be false—or if it still seems true, but it turns out that being continually angry at a Society that disagrees isn't a good allocation of cognitive resources—what do you do then? Turning your life around from that isn't anything as straightforward as preventing specific chemicals from entering your body; you have to change the way you think, which is to say excise a part of your soul. Oh, it grows back—that's the point, really; you want to stop thinking non-useful thoughts in order to replace them with something better—but can you blame me for having a self-preservation instinct, even if my currently-existing self isn't something that ought to be preserved?

But then, blame or the lack thereof isn't the point.

Goodhart's World

Someone needs to write a history of the entire world in terms of incentive systems and agents' attempts to game them. We have money to incentivize the production of useful goods and services, but we all know that there are lots of ways to make money that don't actually help anyone. Even in jobs that are actually useful, people spend a lot of their effort on trying to look like they're doing good work, rather than actually doing good work. And don't get me started about what passes for "education." (Seriously, don't.)

Much the same could be said about romance, and about economic systems in other places and times. And there's even a standpoint from which the things that we think are truly valuable for their own sake—wealth and happiness and true love, &c.—can be said to be the result of our species gaming the incentives that evolution built into us because they happened to promote inclusive genetic fitness in the ancestral environment.

The future is the same thing: superhuman artificial intelligence gaming the utility function we gave it, instead of the one we should have given it. Only there will be no one we'd recognize as a person to read or write that chapter.

Egoism as Defense Against a Life of Unending Heartbreak

Then the Dean understood what had puzzled him in Roark's manner.

"You know," he said, "You would sound much more convincing if you spoke as if you cared whether I agreed with you or not."

"That's true," said Roark. "I don't care whether you agree with me or not." He said it so simply that it did not sound defensive, it sounded like the statement of a fact which he noticed, puzzled, for the first time.

"You don't care what others think—which might be understandable. But you don't care even to make them think as you do?"

"No."

"But that's ... that's monstrous."

"Is it? Probably. I couldn't say."

In this passage from Ayn Rand's The Fountainhead, fictional character Howard Roark demonstrates a very important skill that I really need to learn—that of emotional indifference to arbitrary people's opinions: not the mere immunity of "It's okay that people now disagree with the manifest rightness of my Cause, because I know the forces of Good will win in the end," but the kind of outright indifference that I feel about, let's say, the amount of precipitation in Copenhagen in March 1957. Someone disagrees with the manifest rightness of my Cause? Sure, whatever—hey, did you see the latest Questionable Content?

I say this purely for pragmatic reasons. There's nothing philosophically noble about being narrowly selfish, about devoting the full force of one's attention to questions like "What do I want to study?" or "How am I going to make money?" rather than "Why are my ideological enemies so evil, and what can be done to stop them?" So if there's no inherent reason why scholarship or business are more worthy than activism, why explicitly renounce the activist frame of mind?


Ambition

I will not be rich; I will not be famous; who can say but that in a year's time, my desiccated corpse will be found abandoned in the most desolate of wastelands?—but! By the stars which may one day yet still be ours, I will understand!

Two Views of the Monotone Sequence Theorem

If a sequence of real numbers (an) is bounded and monotone (and I'm actually going to say nondecreasing, without loss of generality), then it converges. I'm going to tell you why and I'm going to tell you twice.

If our sequence is bounded, the completeness of the reals ensures that it has a least upper bound, which we'll call, I don't know, B. But there have to be sequence elements arbitrarily close to (but not greater than) B, because if there weren't, then B couldn't be a least upper bound. So for whatever arbitrarily small ε, there's an N such that aN > B − ε, which implies that |aN − B| < ε. But if the sequence is nondecreasing (and no term exceeds B), we also have |an − B| < ε for all n ≥ N, which is what I've been trying to tell you—
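The first argument can be illustrated numerically (a toy sketch; the sequence an = 1 − 1/n, its least upper bound B = 1, and the ε here are arbitrary choices):

```python
# The nondecreasing, bounded sequence a_n = 1 - 1/n has least upper bound B = 1.
B = 1.0
eps = 1e-3

def a(n):
    return 1.0 - 1.0 / n

# Search for an N with a_N > B - eps; one must exist,
# since B is a *least* upper bound.
N = 1
while a(N) <= B - eps:
    N += 1

# Monotonicity (plus no term exceeding B) pins every later term within eps of B.
assert all(abs(a(n) - B) < eps for n in range(N, N + 10_000))
```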

twice; suppose by way of contraposition that our sequence is not convergent. Then there exists an ε such that for all N, there exist m and n greater than or equal to N such that |am − an| ≥ ε. Suppose the sequence is monotone, without loss of generality nondecreasing; then for all N, we can find n > m ≥ N such that an − am ≥ ε. Now suppose our sequence is bounded above by some bound B. We can actually describe an algorithm to find sequence points greater than B, thus showing that this alleged bound is really not a bound at all. Start at a1. We can find points later in the sequence that are separated from each other by at least ε, and if we do this ⌈(B − a1)/ε⌉ times, then we'll have found a sequence point greater than the alleged bound.
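That counting argument can be run as an actual program (a toy sketch; the non-convergent sequence an = 0.6n, the ε, and the alleged bound here are arbitrary choices):

```python
import math

eps = 0.5
B = 10.0                  # an alleged upper bound, to be refuted

def a(n):
    return 0.6 * n        # nondecreasing, not convergent: consecutive gaps of 0.6 >= eps

# Start at a_1; repeatedly find a later term at least eps above the current one.
current_n, current = 1, a(1)
for _ in range(math.ceil((B - a(1)) / eps)):
    m = current_n + 1
    while a(m) - current < eps:
        m += 1
    current_n, current = m, a(m)

# After ceil((B - a_1)/eps) jumps of size >= eps, we've climbed past the "bound".
assert current > B
```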