Discontinuous Linear Functions?!

We know what linear functions are. A function f is linear iff it satisfies additivity f(x + y) = f(x) + f(y) and homogeneity f(ax) = af(x).

We know what continuity is. A function f is continuous at a point x0 iff for all ε > 0 there exists a δ > 0 such that if |x − x0| < δ, then |f(x) − f(x0)| < ε.

An equivalent way to think about continuity is the sequence criterion: f is continuous at x iff every sequence (xk) converging to x implies that (f(xk)) converges to f(x). That is to say, if for all ε there exists an N such that if k ≥ N, then |xk − x| < ε, then for all ε, there also exists an M such that if k ≥ M, then |f(xk) − f(x)| < ε.

Sometimes people talk about discontinuous linear functions. You might think: that’s crazy. I’ve seen many linear functions in my time, and they were definitely all continuous. f(x): ℝ → ℝ := ax is continuous for any a ∈ ℝ. T(x⃗): ℝ² → ℝ² := \begin{pmatrix} a & b \\ c & d \end{pmatrix} \vec{x} is continuous no matter what the entries in the matrix are. Stop being crazy!!

Actually, it’s not crazy. It’s just that all the discontinuous linear functions live in infinite-dimensional spaces.

Take, say, the space C1(ℝ) of continuously differentiable functions ℝ → ℝ with the uniform norm. (The uniform norm means that the “size” of a function for the purposes of continuity is the least upper bound of its absolute value.) If you think of a vector in the n-dimensional ℝn as a function from {1, ..., n} to ℝ, then you can see why a space of functions on a continuum (an uncountable domain, not even a countable one) would be infinite-dimensional.

Consider the sequence of functions (fk) = (\frac{\sin kx}{k})_{k=1}^{\infty} in C1(ℝ). The sequence converges uniformly to the zero function: for any ε, we can take N := \lceil \frac{1}{\varepsilon} \rceil, and then for all k \ge N and all x, \left|\frac{\sin kx}{k}\right| \le \frac{1}{k} \le \frac{1}{\lceil \frac{1}{\varepsilon} \rceil} \le \frac{1}{\frac{1}{\varepsilon}} = \varepsilon.

Now consider that the sequence of derivatives is (\frac{k \cos kx}{k})_{k=1}^{\infty} = (\cos kx)_{k=1}^{\infty}, which doesn’t converge. But the function D: C1(ℝ) → C0(ℝ) that maps a function to its derivative is linear. (We have additivity because the derivative of a sum is the sum of the derivatives, and we have homogeneity because you can “pull out” a constant factor from the derivative.)

By exhibiting a function D and a sequence (fk) for which (fk) converges but (D(fk)) doesn’t, we have shown that the derivative mapping D is a discontinuous linear function, because the sequence criterion for continuity is not satisfied. If you know the definitions and can work with the definitions, it’s not crazy to believe in such a thing!
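The failure of continuity can be poked at numerically. This is only a sketch, not a proof (we sample finitely many x-values on a grid), but it shows the uniform norm of fk = sin(kx)/k shrinking like 1/k while the uniform norm of the derivative cos(kx) stays pinned at 1:

```python
import math

# Numerical poke (a sketch, not a proof): approximate uniform norms by
# sampling x on a finite grid over [0, 2*pi].

def sup_norm(g, k, samples=10000):
    """Approximate sup over x of |g(k, x)| by sampling."""
    return max(abs(g(k, 2 * math.pi * i / samples)) for i in range(samples))

def f(k, x):
    return math.sin(k * x) / k   # the sequence (f_k)

def Df(k, x):
    return math.cos(k * x)       # the sequence of derivatives (D f_k)

for k in (1, 10, 100):
    print(k, sup_norm(f, k), sup_norm(Df, k))
# sup|f_k| shrinks like 1/k, while sup|D f_k| stays at 1.
```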

The infinite-dimensionality is key to grasping the ultimate sanity of what would initially have appeared crazy. For linear functions, continuity is equivalent to boundedness: a continuous linear function can’t have an “infinitely steep” slope; a small change in the input can’t correspond to an arbitrarily large change in the output.

Consider a linear transformation T on a finite-dimensional vector space; for simplicity of illustration, suppose it’s diagonalizable with eigenbasis {u⃗j} and eigenvalues {λj}. Then for input x⃗ = Σj cju⃗j, we have T(x⃗) = Σj cjλju⃗j: the eigencoördinates of the input get multiplied by the eigenvalues, so the amount that the transformation “stretches” the input is bounded by maxj |λj|. The linearity buys us the “no arbitrarily large change in the output” property, which is continuity.
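Here’s a minimal pure-Python sketch of that bound, using a hypothetical diagonal (hence trivially diagonalizable) transformation with made-up eigenvalues:

```python
import random

# A diagonal transformation: the eigenbasis is the standard basis, so
# applying T just multiplies each coordinate by its eigenvalue.
eigenvalues = [3.0, -2.0, 0.5]   # the lambda_j (illustrative choices)

def T(x):
    """Multiply each eigencoordinate by its eigenvalue."""
    return [lam * xj for lam, xj in zip(eigenvalues, x)]

def sup_norm(v):
    return max(abs(c) for c in v)

# The stretch factor is bounded by max_j |lambda_j|.
bound = max(abs(lam) for lam in eigenvalues)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in eigenvalues]
    assert sup_norm(T(x)) <= bound * sup_norm(x)
```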

In infinite dimensions, linearity doesn’t buy that. Consider the function T(x1, x2, x3, …) = (x1, 2x2, 3x3, ...) on sequences with finitely many nonzero elements, under the uniform norm. The effect of the transformation on any given dimension is linear and bounded, but there’s always another dimension that’s getting stretched more. A small change in the input can result in an arbitrarily large change in the output, by making the change sufficiently far in the sequence (where the input is getting stretched more and more).
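A sketch of this operator on finitely-supported sequences, represented here as Python lists (a convenient stand-in, since everything past the end of a list is zero):

```python
# The stretching operator T(x_1, x_2, x_3, ...) = (x_1, 2 x_2, 3 x_3, ...)
# on finitely-supported sequences under the sup norm.

def T(x):
    """Multiply the k-th coordinate (1-indexed) by k."""
    return [k * xk for k, xk in enumerate(x, start=1)]

def sup_norm(v):
    return max(abs(c) for c in v) if v else 0.0

# A perturbation of fixed sup-norm delta, placed in coordinate n,
# gets stretched by a factor of n -- as large as we like.
delta = 0.01
for n in (10, 1000, 100000):
    bump = [0.0] * (n - 1) + [delta]   # differs from zero by delta, in coordinate n
    print(n, sup_norm(T(bump)))        # grows without bound
```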

April Is Separability Month

It is now April! Did you know that April is one of the months in which every compact metric space is separable?

Proof. Let it be April, and let M be a compact metric space. Because M is compact, it is totally bounded, so for all n ∈ ℕ, we can cover M with finitely many open balls of radius 1/n. The centers of all such balls (taken over all n) form a countable set, which we can call C. But C is dense, because an arbitrary point p ∈ M is a limit point of C: given ε, choose n such that 1/n < ε; then p lies in one of the balls in our covering of M with 1/n-balls, so the ε-neighborhood of p contains that ball's center. Thus M contains a countable dense subset.
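For a concrete (if humble) instance of the construction, take M = [0, 1]: the proof's 1/n-ball covers can be realized with grid points as centers (an illustrative choice, not the only one):

```python
from fractions import Fraction

# For M = [0, 1], the points 0, 1/n, 2/n, ..., 1 are the centers of a
# finite cover by open balls of radius 1/n; the union over all n is a
# countable dense subset C.

def net_centers(n):
    """Centers of a finite cover of [0, 1] by open 1/n-balls."""
    return [Fraction(i, n) for i in range(n + 1)]

def distance_to_net(p, n):
    """Distance from p to the nearest center in the radius-1/n cover."""
    return min(abs(p - c) for c in net_centers(n))

# Any point p is within 1/n of some center -- so C is dense.
p = Fraction(1, 3)
for n in (2, 10, 100):
    print(n, distance_to_net(p, n))
```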

Lipschitz

—and the moment or more than a moment when the dam breaks, when the damned break and the void inside their skulls is filled (the atmosphere rushing in quickly, but not so quickly that one couldn't sense its motion) with the terror that is knowledge of the specter of continuity: that there have never been, and can never be, any miracles.

For to be saved is only to be some distance in the initial conditions from being damned, some lesser distance from being half-damned ... some δ-distance from being ε-damned. And the complement of the shadow we cast on the before-time contains its limits.

Missing Books I

Someone should write a combined novel/textbook about a mathematician-princess's quest to understand the true nature of continuity and change. When her father dies, she'll have the opportunity to be Queen regnant, but she'll quickly marry some guy instead so she can be a Queen consort and continue her research without being distracted with boring politics.

Two Views of the Monotone Sequence Theorem

If a sequence of real numbers (an) is bounded and monotone (and I'm actually going to say nondecreasing, without loss of generality), then it converges. I'm going to tell you why and I'm going to tell you twice.

If our sequence is bounded, the completeness of the reals ensures that it has a least upper bound, which we'll call, I don't know, B. But there have to be sequence elements arbitrarily close to (but not greater than) B, because if there weren't, then B couldn't be a least upper bound. So for whatever arbitrarily small ε, there's an N such that aN > B − ε, which implies that |aN − B| < ε, but since the sequence is nondecreasing, we also have |an − B| < ε for n ≥ N, which is what I've been trying to tell you—

twice; suppose by way of contraposition that our sequence is not convergent. Then (by the completeness of the reals) it's not even Cauchy: there exists an ε such that for all N, there exist m and n greater than or equal to N such that |am − an| ≥ ε. Suppose it's monotone, without loss of generality nondecreasing; that implies that for all N, we can find n > m ≥ N such that an − am ≥ ε. Now suppose our sequence is bounded above by some bound B. However, we can actually describe an algorithm to find sequence points greater than B, thus showing that this alleged bound is really not a bound at all. Start at a1. We can find points later in the sequence that exceed our current point by at least ε, but if we do this ⌈(B − a1)/ε⌉ + 1 times, then we'll have found a sequence point of size at least a1 + (⌈(B − a1)/ε⌉ + 1)ε > B, greater than the alleged bound.
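The hopping algorithm of the second proof can be sketched as follows, run on a hypothetical nondecreasing, non-convergent sequence (an = 0.6n is an illustrative choice, with consecutive terms separated by at least ε = 0.5):

```python
import math

# Sketch of the proof's algorithm on the non-Cauchy nondecreasing
# sequence a_n = 0.6 * n, whose consecutive terms differ by at least 0.5.

def a(n):
    return 0.6 * n

def find_point_above(B, epsilon=0.5):
    """Hop along the sequence, gaining at least epsilon per hop, until past the alleged bound B."""
    hops = math.ceil((B - a(1)) / epsilon) + 1
    index = 1
    for _ in range(hops):
        # find a later index whose value exceeds the current one by at least epsilon
        j = index + 1
        while a(j) - a(index) < epsilon:
            j += 1
        index = j
    return a(index)

print(find_point_above(100.0))   # a sequence point strictly greater than 100
```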

Subscripting as Function Composition

Dear reader, don't laugh: I had thought I already understood subsequences, but then it turned out that I was mistaken. I should have noticed the vague, unverbalized discomfort I felt about the subscripted-subscript notation, (a_{n_k}). But really it shouldn't be confusing at all: as Bernd S. W. Schröder points out in his Mathematical Analysis: A Concise Introduction, it's just a function composition. If it helps (it helped me), say that (an) is mere syntactic sugar for a(n): ℕ → ℝ, a function from the naturals to the reals. And (a_{n_k}) is just the composition a(n(k)), with n(k): ℕ → ℕ being a strictly increasing function from the naturals to the naturals.
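As a sketch (the particular sequence and index function here are illustrative choices of mine, not anything from Schröder):

```python
# A sequence is a function a: N -> R; a subsequence is the composition
# a(n(k)) for a strictly increasing index function n: N -> N.

def a(n):
    """The sequence a_n = (-1)^n / n, viewed as a function from naturals to reals."""
    return (-1) ** n / n

def n(k):
    """A strictly increasing index function: pick out the even indices."""
    return 2 * k

def a_sub(k):
    """The subsequence a_{n_k} is just the composition a(n(k))."""
    return a(n(k))

print([a_sub(k) for k in range(1, 5)])   # the positive terms 1/2, 1/4, 1/6, 1/8
```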

Bounded but Not Totally Bounded, Redux

Theorem. A nonempty open set in real sequence space under the ℓ∞ norm is not totally bounded.

Proof. Consider an open set U containing a point p. Suppose by way of contradiction that U is totally bounded. Then for every ε > 0, there exists a finite ε-net for U. Fix ε, and let m be the number of points in our ε-net, which net we'll denote {Si}i∈{1, ..., m}. We're going to construct a very special point y, which does not live in U. For all i ∈ {1, ..., m}, we can choose the ith component yi such that the absolute value of its difference from the ith component of the ith point in the net is at least ε (that is, |yi − Si,i| ≥ ε, enough to escape an open ε-ball), but also so that the absolute value of its difference from the ith component of p is less than or equal to ε (that is, |yi − pi| ≤ ε): one of yi := pi + ε or yi := pi − ε will do, since they can't both be within ε of Si,i. Then for j > m, set yj := pj. Since ‖y − Si‖ ≥ |yi − Si,i| ≥ ε for every i, the point y escapes every open ball of the net, and hence is not in U. But ‖y − p‖ ≤ ε, which means there are points arbitrarily close to p which are not in U, which is an absurd thing to happen to a point in an open set! But that's what I've been trying to tell you this entire time.
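The construction of the very special point y can be sketched concretely (with sequences truncated to finite lists, and a made-up three-point "net"):

```python
# Sketch of the construction in sup-norm sequence space: given a point p
# and an alleged finite eps-net, build y within eps of p whose i-th
# coordinate is at distance at least eps from the i-th coordinate of the
# i-th net point.

def build_y(p, net, eps):
    y = list(p)   # beyond the net's coordinates, y agrees with p
    for i, s in enumerate(net):
        # p_i + eps and p_i - eps can't both be near s_i: pick the endpoint
        # farther from s[i]; its distance is |p_i - s_i| + eps >= eps,
        # enough to escape the open eps-ball around the i-th net point.
        plus, minus = p[i] + eps, p[i] - eps
        y[i] = plus if abs(plus - s[i]) > abs(minus - s[i]) else minus
    return y

def sup_dist(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

p = [0.0] * 5
net = [[1.0] * 5, [0.5] * 5, [-0.3] * 5]   # a hypothetical 3-point "net"
y = build_y(p, net, 0.1)
print(y)
```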

Bounded but Not Totally Bounded

The idea of total boundedness in metric space (for every ε, you can cover the set with a finite number of ε-balls; discussed previously on An Algorithmic Lucidity) is distinct from (and in fact, stronger than) the idea of mere boundedness (there's an upper bound for the distance between any two points in the set), but to an uneducated mind, it's not immediately clear why. What would be an example of a set that's bounded but not totally bounded? Wikipedia claims that the unit ball in an infinite-dimensional Banach space will do. Eric Hayashi made this more explicit for me: consider sequence space under the ℓ∞ norm, and the "standard basis" set (1, 0, 0, ...), (0, 1, 0, 0, ...), (0, 0, 1, 0, 0, ...). The distance between any two points in this set is one, so it's bounded, but an open ½-ball can contain at most one of these points, so no finite number of open ½-balls can cover the set, so it's not totally bounded, which is what I've been trying to tell you this entire time.
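A sketch of the picture (truncating the basis sequences to a hypothetical finite length N, which is enough to see the pairwise distances):

```python
# The "standard basis" set in sup-norm sequence space, truncated to
# length N: pairwise distances are all exactly 1, so the set is bounded,
# but a 1/2-ball can hold at most one of the points.

N = 6

def e(i):
    """The i-th standard basis sequence (1-indexed), as a length-N list."""
    return [1.0 if j == i - 1 else 0.0 for j in range(N)]

def sup_dist(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

pairwise = {sup_dist(e(i), e(j))
            for i in range(1, N + 1) for j in range(1, N + 1) if i != j}
print(pairwise)   # every distinct pair is at distance exactly 1
```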

Straight Talk About Precompactness

So we have this metric space, which is this set of points along with a way of defining "distances" between them that behaves in a basically noncrazy way (points that are zero distance away from "each other" are really just the same point, the distance from one to the other is the same as the distance from the other to the one, and something about triangles).

Let's say (please, if you don't mind) that a sequence of points (xn) in our space is fundamental (or maybe Cauchy) iff (sic) for all positive ε, there's a point far enough along in the sequence so that beyond that point, any two points of the sequence are within ε of each other. Let's also agree (if that's okay with you) to say that our metric space is sequentially precompact iff every sequence has a fundamental subsequence. If, furthermore, the precompact space is complete (all fundamental sequences actually converge to a point in the space, rather than leading up to an ætherial gap or missing edge), then we say it's compact. It turns out that compactness is an important property to pay attention to because it implies lots of cool stuff: like, compactness is preserved by homeomorphisms (continuously invertible continuous maps), and continuous functions with compact domains are bounded, and probably all sorts of other things that I don't know (yet). I'm saying sequentially precompact because I'm given to understand that while the convergent subsequences criterion for compactness is equivalent to this other definition (viz., "every open cover has a finite subcover") for metric spaces, the two ideas aren't the same for more general topological spaces. Just don't ask me what in the world we're going to do with a nonmetrizable space, 'cause I don't know (yet).

But anyway, as long as we're naming ideas, why not say that our metric space is totally bounded iff for every ε, there exists a finite number of open (that is, not including the boundary) balls that cover the whole space? We can call the centers of such a group of balls an ε-net. Our friend Shilov quotes his friend Liusternik as saying, "Suppose a lamp illuminating a ball of radius ε is placed at every point of a set B which is an ε-net for a set M. Then the whole set M will be illuminated." At the risk of having names for things that possibly don't actually deserve names, I'm going to call each point in an ε-net a lamp. Actually Shilov, and thus likely Liusternik, is talking about closed balls of light around the lamps, not the open ones that I'm talking about. In a lot of circumstances, this could probably make all the difference in the world, but for the duration of this post, I don't think you should worry about it.

But this fear of having too many names for things is really a very serious one, because it turns out that sequential precompactness and total boundedness are the same thing: not only can you not have one without the other, but you can't even have the other without the one! Seriously, like, who even does that?!
