The idea of *total boundedness* in a metric space (for every ε, you can cover the set with a finite number of ε-balls; discussed previously on *An Algorithmic Lucidity*) is distinct from (and in fact, stronger than) the idea of mere boundedness (there's an upper bound for the distance between any two points in the set), but to an uneducated mind, it's not immediately clear *why*. What would be an example of a set that's bounded but not totally bounded? *Wikipedia* claims that the unit ball in an infinite-dimensional Banach space will do. Eric Hayashi made this more explicit for me: consider sequence space under the ℓ^{∞} norm, and the "standard basis" set (1, 0, 0, ...), (0, 1, 0, 0, ...), (0, 0, 1, 0, 0, ...). The distance between any two points in this set is one, so it's bounded, but an open ball of radius ½ contains at most one of these points (if two of them were within ½ of a common center, the triangle inequality would put them at distance less than one from each other), so no finite number of ½-balls can cover the infinitely many basis elements, so it's not totally bounded, which is what I've been trying to tell you this entire time.
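A quick numeric check of Hayashi's example (the representation and function names here are my own): under the sup norm, distinct standard-basis sequences all sit at distance exactly one from each other.

```python
def linf_distance(x, y):
    # sup-norm distance between two finitely-supported sequences,
    # each represented as a dict mapping index -> value
    support = set(x) | set(y)
    return max(abs(x.get(i, 0) - y.get(i, 0)) for i in support)

def e(n):
    # the n-th "standard basis" sequence: 1 in position n, 0 elsewhere
    return {n: 1}

# every pair of distinct basis elements is at distance exactly 1, so an
# open ball of radius 1/2 contains at most one of them, and no finite
# collection of such balls can cover the infinite set
distances = {linf_distance(e(i), e(j)) for i in range(10) for j in range(10) if i != j}
print(distances)  # {1}
```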

# Dreams

Friend of the blog Alicorn tweets:

> Why is the word "dreams" used to describe both pseudorandom nocturnal hallucinations and also heartfelt aspirations for real life?

A cynic might reply: because both the nocturnal hallucinations and the heartfelt aspirations are, for the most part, *composed of lies*. How many people, what proportion of the time, will actually lift a finger (or open a book, or make a telephone call) to work towards *actually achieving* what they believe to be heartfelt aspirations?

# My Favorite Mnemonic

(From Leonard Gillman and Robert H. McDowell's calculus text.)

The number *e* to twelve decimal places is 2.718281828459; it's easy to remember because four plus five equals nine, and 1828 is the year after Beethoven died.

# Don't Resent That No One Cares

It's tempting to be resentful that other people don't value your time the way you do. You complain at every opportunity: "Why, why, *why* do I get socially rewarded for working on this-and-such random chore that doesn't even *help* anyone, when *obviously* my great masterpiece (in progress, *in potentia*, coming soon) on such-and-this is so much more valuable?!"

But I think it's better not to be resentful and not to complain, mostly because it doesn't work. Other people *don't* care about your great masterpiece on such-and-this. They really don't. Maybe someone, somewhere will care after it's done, but it's not reasonable to expect anyone's support *in advance*—or, alternatively and isomorphically, it *is* reasonable, but given that there's nothing you can do to force people to be reasonable, *reasonableness* is not the correct criterion to be paying attention to.

# The Word *Melancholy*

It sounds like a kind of dog bred to harvest *Cucurbitaceae*-family fruits.

# The Sliding False Dichotomy of Idealism and Cynicism

The *Television Tropes & Idioms* wiki has a page on the Sliding Scale of Idealism Versus Cynicism. Of course I understand why such a page exists, but part of me can't help but protest that it's not really a sliding scale. One of the most charming things about my native subculture is that we have heaps of *both*: cynicism in the style of "Humans are selfish, weak-willed hypocrites; the reasons people say they do things aren't always or even usually the real reasons, and even introspection itself is untrustworthy," and idealism in the style of "But knowing what we do now, we shall use the power of Reason to remake the world in accordance with our Values!"

# Colon-Equals

Sometimes I think it's sad that the most popular programming languages use "=" for assignment rather than ":=" (like Pascal). Equality is a symmetrical relationship: "*a* equals *b*" means that *a* and *b* are the same thing or have the same value, and this is clearly the same as saying that "*b* equals *a*". Assignment isn't like that: putting the value *b* in a box named *a* isn't the same as putting the value *a* in a box named *b*!—surely an asymmetrical operation deserves an asymmetrical notation? Okay, so it is an extra character, but any decent editor can be configured to save you the keystroke.
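A tiny illustration of the asymmetry (in Python, which is also guilty of using bare "=" for assignment): starting from the same state, "*a* = *b*" and "*b* = *a*" are different operations.

```python
a, b = 1, 2
a = b          # puts the value of b in the box named a
print(a, b)    # 2 2

a, b = 1, 2
b = a          # puts the value of a in the box named b
print(a, b)    # 1 1
```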

I'd like to see the colon-equals assignment symbol more often in math, too. For example, shouldn't we be writing lower indices of summation like this?—

∑_{*j* := 0}^{*n*} *f*(*j*)

—the rationale being that the text under the sigma *isn't* asserting that *j* *equals* zero, but rather that *j* is *assigned* zero as the initial index value of what is, in fact, a for loop:

int sum = 0; for (int j = 0; j <= n; j++) { sum += f(j); } return sum;

# **[RETRACTED]** Introducing the Fractional Arithmetic Derivative

[**NOTICE**: *The conclusion of this post is hereby retracted* because it turns out that the proposed definition of a "fractional arithmetic derivative" doesn't actually make sense. It fails to meet the basic desideratum of corresponding with an iterated arithmetic derivative.

*E.g.*, consider that 225″ = (225′)′ = ((3^{2}·5^{2})′)′ = (2·3·5^{2} + 3^{2}·2·5)′ = (150 + 90)′ = 240′ = (2^{4}·3·5)′ = 4·2^{3}·3·5 + 2^{4}·5 + 2^{4}·3 = 480 + 80 + 48 = 608. Whereas, under the proposed definition we would *allegedly* equivalently have 225^{(2)} = (2!·3^{0}·5^{2} + 3^{2}·2!·5^{0}) = 50 + 18 = 68. I apologize to anyone who read the original post (??) and was thereby misled. The original post follows (with the erroneous section struck through).]
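The iterated derivative in the example can be checked mechanically. A minimal sketch (the function name is mine), using the standard formula *n*′ = *n*·∑ 1/*p*, summing over the prime factors of *n* with multiplicity (which is equivalent to the Leibniz rule together with *p*′ = 1 for primes):

```python
from fractions import Fraction

def arithmetic_derivative(n):
    # n' = n * sum(1/p for each prime factor p, counted with multiplicity)
    if n < 2:
        return 0  # convention: 0' = 1' = 0
    s, m, p = Fraction(0), n, 2
    while p * p <= m:
        while m % p == 0:
            s += Fraction(1, p)
            m //= p
        p += 1
    if m > 1:
        s += Fraction(1, m)  # leftover factor is prime
    return int(n * s)

print(arithmetic_derivative(225))                         # 240
print(arithmetic_derivative(arithmetic_derivative(225)))  # 608
```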

# Book Notes I

Did you know that putting adorable foxes on the cover of your book will make it sell more copies??

Speaking of books with animals on the cover, is it wrong to mentally associate specific programming languages with specific colors based on the O'Reilly books?

Toni Morrison has a book titled *What Moves at the Margin*, and based on the title I keep hoping that it's a treatise on microeconomic theory, but it probably isn't.

# Idiot or Alien? Incompetence or Evil?

When you encounter someone who expresses a political or social opinion that you find absolutely abhorrent, it is instructive to consider the extent to which this person is making a *mistake*, and the extent to which they simply have different values from you. Is this opinion something that they would immediately relinquish, if only they knew the true facts of which they are now ignorant?—or is it reflective of some quality essential to their agency, a basic motive far too sacred to be destroyed by the truth?

(Of course, it is also instructive to consider whether *you're* making a mistake. But that is not the subject of this post.)

Some would say that it is useless to consider such questions, that human cognition doesn't separate cleanly into beliefs and values, and that even if such a thing could be done, it is futile for any present-day human to consider the matter, given our ignorance of our own psychology. And yet, the question still *seems* to make sense to me. If I can't know, I can guess. And I don't guess the same thing every time.

# Facial Hair Is Gross

I often go a couple days without bothering to shave, but never much longer, because the stubble quickly becomes intolerable: I end up compulsively touching my face out of what I want to describe as a mildly horrified perverse fascination, perhaps of the same kind that would motivate picking at a scab, or poking a tumor.

# The Parity Decomposition Trick

Earlier this year, Robert Hasner showed me something that I assume everyone else ("everyone else") already knows, but which *I* didn't know: every function on ℝ can be decomposed into the sum of an even function and an odd function—
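The decomposition itself is the standard identity *f*_{even}(*x*) = (*f*(*x*) + *f*(–*x*))/2 and *f*_{odd}(*x*) = (*f*(*x*) – *f*(–*x*))/2. A minimal sketch (the names are mine):

```python
import math

def parity_decompose(f):
    # any f: R -> R splits as f = f_even + f_odd, where
    f_even = lambda x: (f(x) + f(-x)) / 2  # f_even(-x) == f_even(x)
    f_odd = lambda x: (f(x) - f(-x)) / 2   # f_odd(-x) == -f_odd(x)
    return f_even, f_odd

# exp decomposes into its even part (cosh) plus its odd part (sinh)
ev, od = parity_decompose(math.exp)
print(ev(1.0) - math.cosh(1.0))  # ~0
print(od(1.0) - math.sinh(1.0))  # ~0
```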

# Blades

What is a *vector* in Euclidean space? Some might say it's an entity characterized by possessing a *magnitude* and a *direction*. But scholars of the geometric algebra (such as Eric Chisolm and Dorst *et al.*) tell us that it's better to decompose the idea of *direction* into the two ideas of subspace *attitude* (our vector's quality of living in a particular line) and *orientation* (its quality of pointing in a particular direction in that line, and not the other). On this view, a vector is an *attitudinal oriented length element*. But having done this, it becomes inevitable that we should want to talk about attitudinal oriented *area* (volume, 4-hypervolume, *&c.*) elements. To this end we introduce the *outer* or wedge product ∧ on vectors. It is *bilinear*, it is *anticommutative* (swapping the order of arguments swaps the sign, so **a**∧**b** = –**b**∧**a**), and that's all you need to know.

Suppose we have two vectors **a** and **b** in Euclidean space and also a basis for the subspace that the vectors live in, **e**_{1} and **e**_{2}, so that we can write **a** := a_{1}**e**_{1} + a_{2}**e**_{2} and **b** := b_{1}**e**_{1} + b_{2}**e**_{2}. Then the claim is that the outer product **a**∧**b** can be said to represent a generalized vector (call it a *2-blade*—and in general, when we wedge *k* vectors together, it's a *k*-blade) with a subspace attitude of the plane that our vectors live in and a magnitude equal to the area of the parallelogram spanned by them. Following Dorst *et al*., let's see what happens when we expand **a**∧**b** in terms of our basis—
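For readers following along at home, here's where that expansion lands, as a sketch (the function name is mine): bilinearity expands **a**∧**b** into four terms, and anticommutativity kills the **e**_{1}∧**e**_{1} and **e**_{2}∧**e**_{2} terms and flips **e**_{2}∧**e**_{1}, leaving a single coefficient on **e**_{1}∧**e**_{2}.

```python
def wedge_2d(a, b):
    # coefficient of e1∧e2 in a∧b: bilinearity gives
    #   a∧b = a1*b1 (e1∧e1) + a1*b2 (e1∧e2) + a2*b1 (e2∧e1) + a2*b2 (e2∧e2)
    # and anticommutativity makes e1∧e1 = e2∧e2 = 0 and e2∧e1 = -e1∧e2, so:
    (a1, a2), (b1, b2) = a, b
    return a1 * b2 - a2 * b1  # the signed area of the parallelogram

print(wedge_2d((1, 0), (0, 1)))  # 1
print(wedge_2d((0, 1), (1, 0)))  # -1 (anticommutative)
print(wedge_2d((3, 1), (6, 2)))  # 0  (parallel vectors span no area)
```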

# Character Entity Reference

In addition to *&mdash;* and *&ndash;*, there should also be an *&rdash;* HTML character entity reference which specifies the Unicode high voltage sign U+26A1 (⚡).

# The Threshold

Supposedly the method of pomodoros is a great technology for overcoming procrastination: you work in twenty- or twenty-five-minute timed blocks, each of which is *atomic*, indivisible: you have to work through the block, and if you let yourself wander away to something else, then it doesn't count. Katja Grace explains why this is a good idea:

> While working, there are various moments when it would be easier to stop than to continue, particularly if you mostly feel the costs and benefits available in the next second or so, and if you assume that you could start again shortly [...] Counting short blocks of continuous time working pretty much solves this problem for me. [...] [A]t any given moment there might be a tiny short term benefit to stopping for a second, but there is a huge cost to it. In my case this seems to remove stopping as an option, in the same way that a hundred dollar price on a menu item removes it as an option without apparent expense of willpower.

# Supermarket Notes II

I bought cookie dough, on the thought that maybe I should bake cookies and offer them to people at the University; if they were to ask what the occasion was, I could say, "It seemed like a whimsical thing to do, and I'm a whimsical person." But I'm not sure I'll actually do it.

I used to work for a different store in this chain, the one on Ygnacio Valley. The stores are numbered (internally; the numbers aren't secret, but it's the sort of thing you don't notice unless you work for the company), and the store on Ygnacio Valley is number 1701, which I remember thinking was a very significant number, but I don't remember anyone else agreeing with me, probably because no one I told happened to be a *Star Trek* fan.

# It's Not Whether You Win or Lose

It's how close you come to doing the Right Thing at each and every one of the uncounted millions of decision points that make up *your life*, with how you play in any particular game only constituting a tiny fraction of these, and it being not at all clear that choosing to play a game just then is closer to the Right Thing than any number of non-game-playing actions you might have chosen instead, but didn't.

# Forgetting to Take an Average

It seems as if my outlook on life varies drastically with mood. In the moments when I feel brave and ambitious, I rarely seem to remember that *it won't last*: that in a week or a day, the moment will be gone and I'll feel weak and scared again—and of course it goes conversely, too.

We don't have the technology or the wisdom to redesign our own emotions. If the moments of weakness-and-fear aren't going away, and if neither mood is exactly a *belief* that could be destroyed by the truth, then it seems like it would at least be useful to *remember*, if for no other reason than to avoid wasting cognition devising plans and expectations that aren't sufficiently robust to ordinary emotional variation.

# The True Secret About Conjugate Roots and Field Automorphisms

In the study of the elementary algebra, one occasionally hears of the conjugate roots theorem, which says that if *z*_{0} is a root of a polynomial with real coefficients, then its complex conjugate is also a root. Or if you prefer, nonreal roots come in conjugate pairs. It also works in the other direction: if nonreal roots of a polynomial come in conjugate pairs, then the polynomial has real coefficients, because the purely imaginary parts cancel when you do the algebra: (*x* – (*a* + *bi*))(*x* – (*a* – *bi*)) = *x*^{2} – *x*(*a* + *bi*) – *x*(*a* – *bi*) + (*a*^{2} – (*bi*)^{2}) = *x*^{2} – 2*ax* + *a*^{2} + *b*^{2}.
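As a quick numerical sanity check of that algebra (the variable names here are mine): build the quadratic from a conjugate pair and confirm that both members of the pair are roots.

```python
z0 = complex(2, 3)
a, b = z0.real, z0.imag
# (x - z0)(x - conj(z0)) = x^2 - 2ax + (a^2 + b^2): real coefficients
c1, c0 = -2 * a, a**2 + b**2

def p(x):
    return x * x + c1 * x + c0

print(p(z0))              # 0j
print(p(z0.conjugate()))  # 0j
```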

There's also this idea that conjugation is the unique nontrivial "well-behaved" *automorphism* on ℂ, a map from ℂ to itself that respects addition and multiplication: the sum (respectively product) of the conjugates is the conjugate of the sum (respectively product). The complex numbers are *symmetrical* around the real axis in a way that they're not around the imaginary axis: while *i* and –*i* are different from *each other*, you can't "tell which is which" because they *behave* the same way. Contrast with 1 and –1, which *do* behave differently: if someone put either 1 or –1 in a box and wouldn't tell you which, but *were* willing to tell you that "The number in the box squares to itself," then you could figure out that the number in the box was 1, because –1 doesn't do that.

The existence of these two ideas (the conjugate roots theorem and conjugation-as-automorphism) can't possibly be a coincidence; there must be some sense in which nonreal roots of real-coefficient polynomials come in conjugate pairs *because* the polynomial "can't tell" "which is which". But it would be unsatisfying to just say this much and nothing more ("*Theorem*: That can't possibly be a coincidence. *Proof* ...??"); we want to say something much more general and precise. And in fact, we can—

# Recursion Is Boring

What the utter novice finds brilliant and fascinating, the slightly-more-experienced novice finds obvious and boring.

When you're trying to think of cool things to do with a system, one of the obvious things to try is to abuse self-reference for all the world as if you were Douglas fucking Hofstadter—but it's not cool, precisely because it is so obvious, and you're not Douglas Hofstadter.

Once I made a Git repository and a Mercurial repository living in the same directory, tracking each other endlessly, one going out of date the moment you committed to the other ...

But that's not interesting.

You can run Emacs inside a terminal, and you can run a terminal inside Emacs—in fact, you can run two (*M-x term*, *M-x ansi-term*). Therefore you can run two instances of Emacs within Emacs. Each of those Emacsen could run some natural number of other nested Emacsen, and therefore (to a certain perverse sort of mind) could be said to *represent* that natural number, which I presume could be determined programmatically (via recursion). Two-counter machines are Turing-complete. So, in principle, if you didn't run out of memory, you could build a computer out of instances of Emacs running on your computer ...

But that's boring.