Straight Talk About Precompactness

So we have this metric space, which is this set of points along with a way of defining "distances" between them that behaves in a basically noncrazy way (points that are zero distance away from "each other" are really just the same point, the distance from one to the other is the same as the distance from the other to the one, and something about triangles).
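For the record, the noncrazy behavior amounts to three axioms, the last being the something about triangles (the triangle inequality):

d(x,y)=0\iff x=y,\qquad d(x,y)=d(y,x),\qquad d(x,z)\le d(x,y)+d(y,z)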

Let's say (please, if you don't mind) that a sequence of points (xn) in our space is fundamental (or maybe Cauchy) iff (sic) for all positive ε, there's a point far enough along in the sequence so that beyond that point, the distance between any two points of the sequence is less than ε. Let's also agree (if that's okay with you) to say that our metric space is sequentially precompact iff every sequence has a fundamental subsequence. If, furthermore, the precompact space is complete (all fundamental sequences actually converge to a point in the space, rather than leading up to an ætherial gap or missing edge), then we say it's compact. It turns out that compactness is an important property to pay attention to because it implies lots of cool stuff: like, compactness is preserved by homeomorphisms (continuously invertible continuous maps), and continuous functions with compact domains are bounded, and probably all sorts of other things that I don't know (yet). I'm saying sequentially precompact because I'm given to understand that while the convergent-subsequences criterion for compactness is equivalent to this other definition (viz., "every open cover has a finite subcover") for metric spaces, the two ideas aren't the same for more general topological spaces. Just don't ask me what in the world we're going to do with a nonmetrizable space, 'cause I don't know (yet).
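In symbols, writing d for our distance function:

(x_n)\text{ is fundamental}\iff\forall\varepsilon>0\ \ \exists N\ \ \forall m,n\ge N:\ d(x_m,x_n)<\varepsilon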

But anyway, as long as we're naming ideas, why not say that our metric space is totally bounded iff for every positive ε, there exists a finite number of open (that is, not including the boundary) balls of radius ε that cover the whole space? We can call the centers of such a group of balls an ε-net. Our friend Shilov quotes his friend Liusternik as saying, "Suppose a lamp illuminating a ball of radius ε is placed at every point of a set B which is an ε-net for a set M. Then the whole set M will be illuminated." At the risk of having names for things that possibly don't actually deserve names, I'm going to call each point in an ε-net a lamp. Actually Shilov, and thus likely Liusternik, is talking about closed balls of light around the lamps, not the open ones that I'm talking about. In a lot of circumstances, this could probably make all the difference in the world, but for the duration of this post, I don't think you should worry about it.
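In symbols, with B(y, ε) for the open ball of radius ε around the lamp y, total boundedness of our space M says:

\forall\varepsilon>0\ \ \exists y_1,\ldots,y_n:\ \ M\subseteq\bigcup_{i=1}^{n}B(y_i,\varepsilon)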

But this fear of having too many names for things is really a very serious one, because it turns out that sequential precompactness and total boundedness are the same thing: not only can you not have one without the other, but you can't even have the other without the one! Seriously, like, who even does that?!
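Here's a taste of the easier direction, sketched from the definitions above: if the space isn't totally bounded, then there's some ε for which no finite family of ε-balls covers it, so we can inductively pick each new point at distance at least ε from all of its predecessors, giving a sequence with

d(x_m,x_n)\ge\varepsilon\quad\text{whenever }m\ne n

and no subsequence of such a sequence can possibly be fundamental. (The converse direction, from total boundedness to fundamental subsequences, is the one with the cleverer argument.)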


Interpolating Between Vectorized Green's Theorems

Green's theorem says that (subject to some very reasonable conditions that we need not concern ourselves with here) the counterclockwise line integral of the vector field F = [P Q] around the boundary of a region is equal to the double integral of \frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y} over the region itself. It's natural to think of it as the special case of Stokes's theorem for a plane region. We can also think of the line integral as the integral (with respect to arc length) of the inner product of the vector field with the unit tangent, leading us to write Green's theorem like this:

\oint_{\partial D}\vec{\mathbf{F}}\cdot\vec{\mathbf{T}}\, ds=\iint_{D}(\mathrm{curl\,}\vec{\mathbf{F}})\cdot\vec{\mathbf{k}}\, dA
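Here k is the unit vector normal to the plane; if we regard F = [P Q] as the three-dimensional field [P Q 0], then the integrand on the right is exactly the scalar from before:

(\mathrm{curl\,}\vec{\mathbf{F}})\cdot\vec{\mathbf{k}}=\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}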

But some texts (I have Marsden and Tromba's Vector Calculus and Stewart's Calculus: Early Transcendentals in my possession; undoubtedly there are others) point out that we can also think of Green's theorem as a special case of the divergence theorem! Suppose we take the integral of the inner product of the vector field with the outward-facing unit normal (instead of the unit tangent)—it turns out that

\oint_{\partial D}\vec{\mathbf{F}}\cdot\vec{\mathbf{n}}\, ds=\iint_{D}\mathrm{div\,}\vec{\mathbf{F}}\, dA

—which suggests that there's some deep fundamental sense in which Stokes's theorem and the divergence theorem are really just surface manifestations of one and the same underlying idea! (I'm told that it's called the generalized Stokes's theorem, but regrettably I don't know the details yet.)
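Actually, there's a cheap way to interpolate between the two forms, which I can't resist sketching (it's a standard computation, nothing original here): rotate the field a quarter turn. If F = [P Q] and we set G = [−Q P], then along the boundary G · T = F · n (the outward normal is just the unit tangent rotated a quarter turn clockwise), while (curl G) · k = \frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y} = div F. So applying the tangential form of Green's theorem to G hands us the normal form for F:

\oint_{\partial D}\vec{\mathbf{F}}\cdot\vec{\mathbf{n}}\, ds=\oint_{\partial D}\vec{\mathbf{G}}\cdot\vec{\mathbf{T}}\, ds=\iint_{D}(\mathrm{curl\,}\vec{\mathbf{G}})\cdot\vec{\mathbf{k}}\, dA=\iint_{D}\mathrm{div\,}\vec{\mathbf{F}}\, dA

(And the generalized version, for whenever I get around to learning it, is said to read \int_{\Omega}d\omega=\int_{\partial\Omega}\omega.)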


The Derivative of the Natural Logarithm

Most people learn during their study of the differential and integral calculus that the derivative of the natural logarithm ln x is the reciprocal function 1/x. Indeed, sometimes the natural logarithm is defined as  \int_1^x \frac{1}{t}\,dt. However, on observing the graphs of ln x and 1/x, the inquisitive seeker of knowledge can hardly fail to notice a disturbing anomaly:

[Figure: the graphs of y = ln(x) and y = 1/x]

The natural logarithm is only defined for positive numbers; no part of its graph lies in quadrants II or III. But the reciprocal function is defined for all nonzero numbers. So (one cannot help oneself but wonder) how could the latter be the derivative of the former? If the graph of the natural logarithm isn't there to be differentiated in the left half of the plane, how could its derivative be defined in that region?
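(A hint, for the impatient: on the negative half of the line, the chain rule gives

\frac{d}{dx}\ln(-x)=\frac{1}{-x}\cdot(-1)=\frac{1}{x}\qquad(x<0)

so the function whose derivative is 1/x on all of the punctured line is ln |x|, not ln x; the derivative of ln x itself is only defined where ln x is.)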