The main focus of the next several chapters is the mathematical framework that underlies the linear systems arising in physics, engineering, and applied mathematics. Roughly speaking, we generalize from the theory of linear transformations on finite dimensional vector spaces to the theory of linear operators on infinite dimensional vector spaces, as they occur in the context of homogeneous and inhomogeneous boundary value and initial value problems.
The key idea is linearity. Its geometrical imagery in terms of vector spaces, linear transformations, and so on, is a key ingredient for an efficient comprehension and appreciation of the ideas of linear analysis to be developed. Thus it is very profitable to ask repeatedly: What does this correspond to in the case of a finite dimensional vector space?
Here are some examples of what we shall generalize to the infinite dimensional case:
I. Solve each of the following linear algebra problems:
1. ``Homogeneous problem'': $A\vec{x} = \vec{0}$;
2. ``Inhomogeneous problem'': $A\vec{x} = \vec{b}$;
3. ``Inverse of $A$'': $AA^{-1} = I$.
These are archetypical problems of linear algebra. (If 1. has a non-trivial solution, then 2. has infinitely many solutions or none at all, depending on $\vec{b}$, and 3. has no solution.)
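The relation among these three problems can be made concrete numerically. The following is a minimal sketch using NumPy, with hypothetical $2\times 2$ matrices chosen purely for illustration:

```python
import numpy as np

# Hypothetical 2x2 matrix and right-hand side, for illustration only.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 0.0])

# 2. Inhomogeneous problem A x = b: unique solution since A is invertible.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)

# 3. Inverse of A: A A^{-1} = I exists for the same reason.
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))

# 1. Homogeneous problem A x = 0: for this invertible A only the trivial
#    solution x = 0 exists.  A singular matrix, by contrast, e.g.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
#    has a non-trivial null space, here spanned by (2, -1):
assert np.allclose(S @ np.array([2.0, -1.0]), 0.0)
```

Note how the three problems hinge on the same question: whether the homogeneous problem admits a non-trivial solution.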
More generally we ask: For what values of $\lambda$ do the following have a solution (and for what values do they not):
1. $(A - \lambda I)\vec{x} = \vec{0}$;
2. $(A - \lambda I)\vec{x} = \vec{b}$;
3. $(A - \lambda I)^{-1}$?
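In the finite dimensional case the answer is classical: the homogeneous problem has a non-trivial solution exactly at the eigenvalues of $A$, and away from them the resolvent exists. A brief numerical illustration, with a hypothetical symmetric matrix:

```python
import numpy as np

# Hypothetical symmetric 2x2 matrix, used only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
I2 = np.eye(2)

# (A - lam I) x = 0 has a non-trivial solution exactly when
# det(A - lam I) = 0, i.e. when lam is an eigenvalue of A.
eigvals = np.linalg.eigvalsh(A)          # eigenvalues 1 and 3
for lam in eigvals:
    assert abs(np.linalg.det(A - lam * I2)) < 1e-9

# For any other lam, A - lam I is invertible: the inhomogeneous
# problem has exactly one solution and the inverse (resolvent) exists.
lam = 0.5
x = np.linalg.solve(A - lam * I2, np.array([1.0, 1.0]))
assert np.allclose((A - lam * I2) @ x, [1.0, 1.0])
```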
Of greatest interest to us is the generalization in which $A$ is (part of) a differential operator with, in general, non-constant coefficients.
As we know from linear algebra, these three types of problems are closely related, and consequently this must also be the case for our generalization to linear differential equations, ordinary as well as partial. In fact, these three types are called
1. Homogeneous boundary or initial value problems;
2. Inhomogeneous problems;
3. Green's function problems.
II. There is another idea which we shall extend from the finite to the infinite dimensional case. Consider the eigenvalue equation $A\vec{x} = \lambda\vec{x}$.
Let us suppose that there are enough eigenvectors to span the whole vector space, but that at least one eigenvalue is degenerate, i.e., it has more than one linearly independent eigenvector. In that case the vector space has an eigenbasis, but it is not unique. Eigenvectors, including those used for a basis, derive their physical and geometrical significance from their eigenvalues. Indeed, eigenvalues serve as labels for eigenvectors. Consequently, the lack of enough eigenvalues to distinguish between different eigenvectors in a particular eigensubspace introduces an intolerable ambiguity into our physical and geometrical picture of the inhabitants of this subspace.
In order to remedy this deficiency one introduces another matrix, say $B$, whose eigenvectors are also eigenvectors of $A$, but whose eigenvalues are nondegenerate. The virtue of this introduction is that the matrix $B$ recognizes explicitly and highlights, by means of its eigenvalues, a fundamental physical and geometrical property of the linear system characterized by the matrix $A$.
This explicit recognition is stated mathematically by the fact that $B$ commutes with $A$: $AB = BA$.
In general, the matrix $B$ is not unique. Suppose there are two of them, say $B$, which highlights one property, and $C$, which highlights a different property of the system. Thus $AB = BA$ and $AC = CA$, although in general $BC \neq CB$.
Consequently, for hermitian matrices, the matrix $A$ is characterized by two alternative orthonormal eigenbases, one due to $B$, the other due to $C$, and there is a unitary transformation which relates the two bases.
The matrix $A$ does not determine the choice of eigenbasis. Instead, this choice is determined by which of the two physical properties we are told to (or choose to) examine: that of $B$ or that of $C$.
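The mechanism by which a commuting matrix resolves a degeneracy can be sketched numerically. The matrices below are hypothetical illustrations, not taken from the text: $A$ has a degenerate eigenvalue, while $B$ commutes with $A$ and has nondegenerate eigenvalues.

```python
import numpy as np

# A has the degenerate eigenvalue 1: its 2-d eigenspace leaves the
# choice of eigenbasis ambiguous.  (Hypothetical example matrices.)
A = np.diag([1.0, 1.0, 2.0])

# B commutes with A and has nondegenerate eigenvalues -1, 1, 5, so
# its eigenvectors single out a preferred basis in that eigenspace.
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])

assert np.allclose(A @ B, B @ A)          # A and B commute

# Each eigenvector of B is automatically an eigenvector of A:
vals, vecs = np.linalg.eigh(B)
for v in vecs.T:
    Av = A @ v
    lam = v @ Av                          # Rayleigh quotient of A
    assert np.allclose(Av, lam * v)       # v is an eigenvector of A
```

The eigenvalues of $B$ now serve as the extra labels that distinguish the previously indistinguishable eigenvectors of $A$.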
In the extension of these ideas to differential equations, we shall find that the same circumstance arises: a degeneracy in the spectrum of one operator is resolved by a second, commuting operator.
III. A further idea which these notes extend to infinite dimensions is that of an inhomogeneous four-dimensional system, $M\vec{x} = \vec{b}$, which is overdetermined: the matrix $M$ is $4 \times 4$, but singular with a one-dimensional null space.
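For such a singular system the inhomogeneous problem is solvable only when the right-hand side is compatible with the null space (the Fredholm alternative). A minimal numerical sketch, with a hypothetical singular $4 \times 4$ matrix standing in for the operator-valued matrix discussed here:

```python
import numpy as np

# Hypothetical singular 4x4 matrix with a one-dimensional null space,
# an illustrative stand-in for the operator-valued matrix in the text.
M = np.diag([0.0, 1.0, 2.0, 3.0])

# The null space of M (and of M^T, since M is symmetric) is spanned by n:
n = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(M @ n, 0.0)

# Fredholm alternative: M x = b is solvable iff b is orthogonal to the
# null space of M^T.
b_good = np.array([0.0, 1.0, 1.0, 1.0])   # n . b = 0  -> solvable
x = np.linalg.lstsq(M, b_good, rcond=None)[0]
assert np.allclose(M @ x, b_good)

b_bad = np.array([1.0, 1.0, 1.0, 1.0])    # n . b != 0 -> no solution
x_bad = np.linalg.lstsq(M, b_bad, rcond=None)[0]
assert not np.allclose(M @ x_bad, b_bad)  # only a least-squares fit
```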
The extension consists of the statement that (a) this equation is a vectorial wave equation which is equivalent to Maxwell's field equations, (b) the four-dimensional vectors $\vec{x}$ and $\vec{b}$ are 4-d vector fields, and (c) the matrix $M$ has entries which are second order partial derivatives.
One solves this system using the method of eigenvectors and eigenvalues. The eigenvectors have entries which are first order derivatives. The nonzero eigenvalues are scalar (d'Alembertian) wave operators acting on scalar wave functions. For Maxwell's equations there are exactly three of them, and they are the scalars from which one obtains the three respective types of Maxwell fields,