Epiphenomenal Coordinates

In the study of elementary linear algebra, unwary novices are often inclined to think of a vector as an ordered list of real numbers; to them, linear algebra is then conceived of as the study of multiplying matrices with column vectors. But this is a horribly impoverished perspective; we can do so much better for ourselves with a bit of abstraction and generality.

You can think of arrows or lists of numbers if you want or if you must, but the true, ultimate meaning of a vector space is ... well, anything that satisfies the vector space axioms. If you have things that you can "add" (meaning that we have an associative, commutative binary operation with an identity element and inverses), and you can "multiply" these things by scalars that come from a field (the "vectors" in the space and the "scalars" from the field playing nicely together in a way that is distributive &c.), then these things that you have form a vector space over that field, and any of the theorems that we prove about vector spaces in general apply in full force to the things you have, which don't have to be lists of real numbers; they could be matrices or polynomials or functions or whatever.
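
To make this concrete, here's a minimal sketch in Python (the Poly class and its method names are mine, purely for illustration) of one such non-list example: polynomials with real coefficients, which form a vector space under termwise addition and scalar multiplication.

    class Poly:
        """A polynomial represented by its coefficient list, lowest degree first."""
        def __init__(self, coeffs):
            self.coeffs = list(coeffs)

        def __add__(self, other):
            # "addition": add coefficients termwise, padding the shorter list with zeros
            n = max(len(self.coeffs), len(other.coeffs))
            pad = lambda c: c + [0.0] * (n - len(c))
            return Poly([a + b for a, b in zip(pad(self.coeffs), pad(other.coeffs))])

        def scale(self, scalar):
            # "scalar multiplication" by an element of the field (here, the reals)
            return Poly([scalar * a for a in self.coeffs])

    zero = Poly([])                    # the identity element for addition

    p = Poly([1.0, 2.0])               # 1 + 2x
    q = Poly([0.0, -2.0, 3.0])         # -2x + 3x^2
    print((p + q).coeffs)              # [1.0, 0.0, 3.0], i.e. 1 + 3x^2
    print((p + zero).coeffs)           # [1.0, 2.0]: adding the identity changes nothing
    print((p + p.scale(-1.0)).coeffs)  # [0.0, 0.0]: every element has an additive inverse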

Okay, so it turns out that every n-dimensional vector space over a field is isomorphic to the space of lists of n elements of that field (choose a basis, and read off each vector's coordinates with respect to it), but that's not part of our fundamental notion of vectorness; it's something we can prove.
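
For instance (a hedged sketch, assuming numpy; the particular basis below is arbitrary, which is rather the point), fixing a basis of the plane turns each vector into a unique list of two coordinates, and a different basis yields a different list for the very same vector:

    import numpy as np

    # Two bases for the plane; the columns of each matrix are the basis vectors.
    standard = np.array([[1.0, 0.0],
                         [0.0, 1.0]])
    slanted = np.array([[1.0, 1.0],
                        [0.0, 1.0]])   # basis vectors (1, 0) and (1, 1)

    v = np.array([3.0, 2.0])           # one and the same vector

    # Coordinates of v relative to each basis: solve (basis @ coords == v).
    print(np.linalg.solve(standard, v))  # [3. 2.]
    print(np.linalg.solve(slanted, v))   # [1. 2.] -- a different list, same vector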

Eigencritters

Say we have a linear transformation A and some nonzero vector v, and suppose that Av = λv for some scalar λ. This is a very special situation; we say that λ is an eigenvalue of A corresponding to the eigenvector v.
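
A quick numerical illustration (the matrix here is just an example I cooked up, nothing canonical about it):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    v = np.array([1.0, 1.0])

    print(A @ v)                      # [3. 3.], i.e. 3 * v
    print(np.allclose(A @ v, 3 * v))  # True: v is an eigenvector of A with eigenvalue 3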

How can we find eigenvalues? Here's one criterion. If Av = λv for some unknown λ, we at least know that Av – λv equals the zero vector, which is to say that the linear transformation (A – λI) maps v to zero. But v is nonzero, so (A – λI) has a nontrivial kernel, which means it can't be invertible, and that happens exactly when its determinant is zero: the determinant measures how a linear transformation scales (signed) areas (volumes, 4-hypervolumes, &c.), so a zero determinant means a dimension has been lost; the space has been smashed infinitely thin. But det(A – λI) is a polynomial in λ (the characteristic polynomial of A), and the roots of that polynomial are exactly the eigenvalues of A.
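
Here's the criterion in action on the same example matrix (again a sketch, assuming numpy): det(A – λI) = (2 – λ)^2 – 1 = λ^2 – 4λ + 3, whose roots 1 and 3 match the eigenvalues that numpy computes directly.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # np.poly gives the coefficients of det(λI - A) (same roots as det(A - λI))
    coeffs = np.poly(A)          # [ 1. -4.  3.], i.e. λ^2 - 4λ + 3
    print(np.roots(coeffs))      # [3. 1.]: the roots of the characteristic polynomial
    print(np.linalg.eigvals(A))  # [3. 1.] (up to ordering): the eigenvalues agree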