Theorem 5.2 Let S = {u1, u2, ..., un} be an ordered basis for a finite dimensional vector space V with an inner product. Let cij = (ui, uj) and C = [cij]. Then
(a) C is a symmetric matrix.
(b) C determines (v, w) for every v and w in V.
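A quick NumPy sketch of part (b), assuming the standard dot product on R2 and an illustrative basis: C is the Gram matrix of S, and (v, w) equals aᵀCb, where a and b are the coordinate vectors of v and w with respect to S.

    import numpy as np

    # Ordered basis S = {u1, u2} for R^2 (columns of U); not orthonormal.
    U = np.array([[1.0, 1.0],
                  [0.0, 2.0]])          # u1 = (1, 0), u2 = (1, 2)
    C = U.T @ U                         # c_ij = (u_i, u_j); note C is symmetric
    v, w = np.array([3.0, -1.0]), np.array([2.0, 5.0])
    a = np.linalg.solve(U, v)           # coordinate vector [v]_S
    b = np.linalg.solve(U, w)           # coordinate vector [w]_S
    print(a @ C @ b, v @ w)             # both are 1.0: C determines (v, w)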
Def 5.2 A vector space with an inner product is called an inner product space. If the space is finite dimensional, it is called a Euclidean space.
Theorem 5.3 Cauchy-Schwarz Inequality If u, v are vectors in an inner product space V, then |(u, v)| ≤ ||u|| ||v||.
Corollary 5.1 Triangle Inequality If u, v are vectors in an inner product space V, then ||u + v|| ≤ ||u|| + ||v||.
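Both inequalities are easy to check numerically; a minimal NumPy sketch with the standard inner product on R4 (the random vectors are just for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(4), rng.standard_normal(4)
    # Cauchy-Schwarz: |(u, v)| <= ||u|| ||v||
    print(abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v))              # True
    # Triangle inequality: ||u + v|| <= ||u|| + ||v||
    print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))   # True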
Def 5.3 If V is an inner product space, we define the distance between two vectors u and v in V as d(u, v) = ||u - v||.
Def 5.4 Let V be an inner product space. Two vectors u and v in V are orthogonal if (u, v) = 0.
Def 5.5 Let V be an inner product space. A set S of vectors is called orthogonal if any two distinct vectors in S are orthogonal. If, in addition, each vector in S is of unit length, then S is called orthonormal.
Theorem 5.4 Let S = {u1, u2,
..., un} be a finite, orthogonal set of nonzero
vectors
in an inner product space V. Then S is linearly
independent.
Theorem 5.5 Let S = {u1, u2, ..., un} be an orthonormal basis for a Euclidean space V and let v be any vector in V. Then
v = c1u1 + c2u2 + ... + cnun,
where ci = (v, ui), i = 1, 2, ..., n.
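In other words, coordinates with respect to an orthonormal basis are just inner products; a small NumPy illustration (the orthonormal basis of R2 is chosen for the example):

    import numpy as np

    u1 = np.array([1.0, 1.0]) / np.sqrt(2)     # orthonormal basis for R^2
    u2 = np.array([-1.0, 1.0]) / np.sqrt(2)
    v = np.array([3.0, 4.0])
    c1, c2 = v @ u1, v @ u2                    # c_i = (v, u_i); no system to solve
    print(np.allclose(c1 * u1 + c2 * u2, v))   # True: v = c1 u1 + c2 u2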
Theorem 5.6 Gram-Schmidt Process Let V be an inner product space and W ≠ {0} an m-dimensional subspace of V. Then there exists an orthonormal basis T = {w1, w2, ..., wm} for W.
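A minimal sketch of the classical Gram-Schmidt process in NumPy (assuming the standard dot product; the two spanning vectors are illustrative):

    import numpy as np

    def gram_schmidt(vectors):
        # Orthogonalize each vector against the basis built so far, then normalize.
        basis = []
        for v in vectors:
            for w in basis:
                v = v - (v @ w) * w            # remove the component along w
            norm = np.linalg.norm(v)
            if norm > 1e-12:                   # drop dependent vectors
                basis.append(v / norm)
        return np.array(basis)

    T = gram_schmidt(np.array([[1.0, 1.0, 0.0],
                               [1.0, 0.0, 1.0]]))
    print(np.allclose(T @ T.T, np.eye(2)))     # True: rows of T are orthonormal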
Theorem 5.7 Let V be an n-dimensional Euclidean space, and let S = {u1, u2, ..., un} be an orthonormal basis for V. If v = a1u1 + a2u2 + ... + anun and w = b1u1 + b2u2 + ... + bnun, then
(v, w) = a1b1 + a2b2 + ... + anbn.
Theorem 5.8 QR Factorization If A is an m x n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m x n matrix whose columns form an orthonormal basis for the column space of A and R is an n x n nonsingular upper triangular matrix.
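NumPy computes this factorization directly; a small sketch with an illustrative 3 x 2 matrix:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])                # m = 3, n = 2, independent columns
    Q, R = np.linalg.qr(A)                    # Q is 3 x 2, R is 2 x 2
    print(np.allclose(Q @ R, A))              # True: A = QR
    print(np.allclose(Q.T @ Q, np.eye(2)))    # True: columns of Q are orthonormal
    print(np.allclose(R, np.triu(R)))         # True: R is upper triangular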
Def 5.6 Let W be a subspace of an inner product space V. A vector u in V is orthogonal to W if it is orthogonal to every vector in W. The set of all vectors in V that are orthogonal to all vectors in W is called the orthogonal complement of W in V, denoted W⊥.
Theorem 5.9 Let W be a subspace of an inner product space V. Then:
(a) W⊥ is a subspace of V.
(b) W ∩ W⊥ = {0}.
Theorem 5.10 Let W be a finite dimensional subspace of an inner product space V. Then
V = W ⊕ W⊥.
Theorem 5.11 If W is a finite dimensional
subspace of an inner product space V, then (W⊥)⊥
= W.
Theorem 5.12 If A is a given m x n matrix, then
(a) The null space of A is the orthogonal complement of the row space of A.
(b) The null space of Aᵀ is the orthogonal complement of the column space of A.
Note: Let {w1, w2, ..., wn} be an orthogonal basis for a subspace W of an inner product space V. Then the orthogonal projection onto W of a vector v in V is:
projW v = [(v, w1)/(w1, w1)] w1 + [(v, w2)/(w2, w2)] w2 + ... + [(v, wn)/(wn, wn)] wn.
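A small NumPy illustration of the formula (the orthogonal basis {w1, w2} for a plane W in R3 is chosen for the example):

    import numpy as np

    w1 = np.array([1.0, 0.0, 1.0])             # orthogonal basis for a plane W
    w2 = np.array([0.0, 1.0, 0.0])
    v = np.array([2.0, 3.0, 4.0])
    p = (v @ w1) / (w1 @ w1) * w1 + (v @ w2) / (w2 @ w2) * w2
    print(p)                                   # [3. 3. 3.] = proj_W v
    print((v - p) @ w1, (v - p) @ w2)          # 0.0 0.0: residual is orthogonal to W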
Theorem 5.13 Let W be a finite dimensional subspace of the inner product space V. Then, for a vector v belonging to V, the vector in W closest to v is projW v. That is, ||v - w||, for w belonging to W, is minimized by w = projW v.
Theorem 5.14 If A is an m x n matrix with rank n, then AᵀA is nonsingular and the linear system Ax = b has a unique least squares solution given by
x = (AᵀA)⁻¹Aᵀb.
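A short NumPy sketch of the formula, fitting a line b0 + b1·t to three illustrative data points (0, 6), (1, 0), (2, 0):

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])                   # rank 2, so A^T A is nonsingular
    b = np.array([6.0, 0.0, 0.0])
    x = np.linalg.solve(A.T @ A, A.T @ b)        # x = (A^T A)^{-1} A^T b
    print(x)                                     # [ 5. -3.]: the line 5 - 3t
    print(np.linalg.lstsq(A, b, rcond=None)[0])  # same answer, numerically safer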
Def 6.1 Let V, W be vector spaces. A function L: V → W is a linear transformation of V into W if for every u, v in V and real number c:
(a) L(u + v) = L(u) + L(v),
(b) L(cu) = cL(u).
If V = W then L is also called a linear operator.
Examples: reflection, projection, dilation, contraction, rotation.
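For instance, rotation of R2 satisfies both properties; a quick NumPy check (the angle is arbitrary):

    import numpy as np

    def L(x, theta=np.pi / 3):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s],
                         [s,  c]]) @ x           # rotate x by theta

    u, v, k = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), 4.0
    print(np.allclose(L(u + v), L(u) + L(v)))    # True: property (a)
    print(np.allclose(L(k * u), k * L(u)))       # True: property (b)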
Theorem 6.1 Let L: V → W be a linear transformation. Then
(a) L(0V) = 0W.
(b) L(u - v) = L(u) - L(v), for u, v in V.
Theorem 6.2 Let L: V → W be a linear transformation of an n-dimensional vector space V into a vector space W. Let S = {v1, v2, ..., vn} be a basis for V. If v is any vector in V, then L(v) is completely determined by {L(v1), L(v2), ..., L(vn)}.
Theorem 6.3 Let L: Rn → Rm be a linear transformation and consider the natural basis {e1, e2, ..., en} for Rn. Let A be the m x n matrix whose j-th column is L(ej). The matrix A has the following property: if x = [x1 x2 ... xn]ᵀ is any vector in Rn, then
L(x) = Ax.     (1)
Moreover, A is the only matrix satisfying equation (1). It is called the standard matrix representing L.
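A minimal sketch of building the standard matrix column by column (using reflection across the x-axis, one of the examples above):

    import numpy as np

    def L(x):
        return np.array([x[0], -x[1]])               # reflection across the x-axis

    A = np.column_stack([L(e) for e in np.eye(2)])   # column j is L(e_j)
    x = np.array([3.0, 5.0])
    print(np.allclose(A @ x, L(x)))                  # True: L(x) = Ax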
Definition 6.2 A linear transformation is called one-to-one if L(u) = L(v) implies u = v.
Definition 6.3 Let L: V → W be a linear transformation of a vector space V into a vector space W. The kernel of L, ker L, is the subset of V consisting of all v in V such that L(v) = 0.
Theorem 6.4 Let L: V → W be a linear transformation of a vector space V into a vector space W. Then
(a) ker L is a subspace of V.
(b) L is one-to-one if and only if ker L = {0V}.
Corollary 6.1 If L(x) = b and L(y) =
b, then x - y belongs to ker L, i.e. any two
solutions
to L(x) = b differ by an element of the kernel of L.
Def 6.4 Let L: V → W be a linear transformation of a vector space V into a vector space W. Then the range or image of V under L, denoted by range L, consists of those vectors in W that are images under L of some vector in V; that is, w is in range L if and only if there exists a vector v ∈ V such that L(v) = w. L is called onto if range L = W.
Theorem 6.5 Let L: V → W be a linear transformation of a vector space V into a vector space W. Then range L is a subspace of W.
Theorem 6.6 Let L: V → W be a linear transformation of an n-dimensional vector space V into a vector space W. Then
dim ker L + dim range L = dim V.
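For a matrix transformation L(x) = Ax this can be checked with ranks; a sketch with an illustrative 3 x 4 matrix (so V = R4):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [1.0, 2.0, 1.0, 2.0]])         # third row = first + second
    rank = np.linalg.matrix_rank(A)              # dim range L
    nullity = A.shape[1] - rank                  # dim ker L
    print(nullity, rank, nullity + rank)         # 2 2 4 = dim V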
Corollary 6.2 Let L: V → W be a linear transformation of a vector space V into a vector space W with dim V = dim W. Then
(a) If L is one-to-one, then L is onto.
(b) If L is onto, then L is one-to-one.
Def 6.5 A linear transformation L: V → W of a vector space V to a vector space W is invertible if it is an invertible function, i.e. if there exists a unique function L⁻¹: W → V such that L ∘ L⁻¹ = IW and L⁻¹ ∘ L = IV, where IV is the identity on V and IW is the identity on W.
Theorem 6.7 A linear transformation L: V → W is invertible if and only if L is one-to-one and onto. Moreover, L⁻¹ is a linear transformation and (L⁻¹)⁻¹ = L.
Theorem 6.8 A linear transformation L: V → W is one-to-one if and only if the image of every linearly independent set of vectors is a linearly independent set of vectors.
Theorem 6.9 Let L: V → W be a linear transformation of an n-dimensional vector space V into an m-dimensional vector space W (n ≠ 0, m ≠ 0) and let S = {v1, v2, ..., vn} and T = {w1, w2, ..., wm} be ordered bases for V and W, respectively. Then the m x n matrix A whose j-th column is the coordinate vector [L(vj)]T of L(vj) with respect to T has the following property: [L(x)]T = A[x]S for every x in V.
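A sketch of computing such a representation in NumPy, for an illustrative map L(x, y) = (x + y, x - y) and illustrative ordered bases S and T for R2; each column [L(vj)]T is found by solving a linear system:

    import numpy as np

    def L(x):
        return np.array([x[0] + x[1], x[0] - x[1]])

    S = np.column_stack([[1.0, 0.0], [1.0, 1.0]])    # basis S for V (columns)
    T = np.column_stack([[1.0, 1.0], [0.0, 2.0]])    # basis T for W (columns)
    A = np.column_stack([np.linalg.solve(T, L(v)) for v in S.T])
    x = np.array([3.0, -2.0])
    lhs = np.linalg.solve(T, L(x))                   # [L(x)]_T
    rhs = A @ np.linalg.solve(S, x)                  # A [x]_S
    print(np.allclose(lhs, rhs))                     # True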
Theorem 6.10 Let U be the vector space of all linear transformations of an n-dimensional vector space V into an m-dimensional vector space W, n ≠ 0 and m ≠ 0, under the operations of addition and scalar multiplication of linear transformations. Then U is isomorphic to the vector space Mmn of all m x n matrices.
Theorem 6.11 Let V1 be an n-dimensional vector space, V2 an m-dimensional vector space, and V3 a p-dimensional vector space, with linear transformations L1: V1 → V2 and L2: V2 → V3. If ordered bases P, S, and T are chosen for V1, V2, and V3, respectively, then M(L2 ∘ L1) = M(L2) M(L1).
Definition 6.6 If A and B are n x n matrices, then B is similar to A if there is a nonsingular matrix P such that B = P⁻¹AP.
Theorem 6.14 Let V be any n-dimensional vector space and let A and B be any n x n matrices. Then A and B are similar if and only if A and B represent the same linear transformation L: V → V with respect to two ordered bases for V.
Theorem 6.15 If A and B are similar n x n matrices, then rank A = rank B.
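A quick NumPy check with an illustrative pair of similar matrices:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    P = np.array([[1.0, 1.0],
                  [1.0, 2.0]])                                  # nonsingular
    B = np.linalg.inv(P) @ A @ P                                # B = P^{-1} A P
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # 2 2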
Definition 7.1 Let L: V → V be a linear transformation of an n-dimensional vector space into itself. The number λ is called an eigenvalue of L if there exists a nonzero vector x in V such that L(x) = λx. Every nonzero vector x satisfying this equation is then called an eigenvector of L associated with the eigenvalue λ.
Definition 7.2 Let A = [aij] be an n x n matrix. Then the determinant of the matrix

λIn - A =
[ λ - a11    -a12     ...    -a1n   ]
[  -a21     λ - a22   ...    -a2n   ]
[   ...       ...     ...     ...   ]
[  -an1      -an2     ...   λ - ann ]

is called the characteristic polynomial of A. The equation
p(λ) = det(λIn - A) = 0
is called the characteristic equation of A.
Theorem 7.1 Let A be an n x n matrix. The eigenvalues of A are the roots of the characteristic polynomial of A.
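NumPy can form the characteristic polynomial and find its roots; a sketch with an illustrative matrix whose characteristic polynomial is λ² - 5λ + 6 = (λ - 2)(λ - 3):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [-2.0, 4.0]])
    p = np.poly(A)                    # coefficients of det(lambda I - A): [ 1. -5.  6.]
    print(np.roots(p))                # [3. 2.]: roots of the characteristic polynomial
    print(np.linalg.eigvals(A))       # the same values, the eigenvalues of A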
Definition 7.3 Let L: V → V be a linear transformation of an n-dimensional vector space into itself. We say that L is diagonalizable, or can be diagonalized, if there exists a basis S for V such that L is represented with respect to S by a diagonal matrix D.
Theorem 7.2 Similar matrices have the
same eigenvalues.
Theorem 7.3 Let L: V → V be a linear transformation of an n-dimensional vector space into itself. Then L is diagonalizable if and only if V has a basis S of eigenvectors of L. Moreover, if D is a diagonal matrix representing L with respect to S, then the entries on the main diagonal of D are the eigenvalues of L.
Theorem 7.4 An n x n matrix A is similar to a diagonal matrix D if and only if A has n linearly independent eigenvectors. Moreover, the elements on the main diagonal of D are the eigenvalues of A.
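A NumPy sketch of Theorem 7.4, reusing the illustrative matrix above (eigenvalues 2 and 3, with two independent eigenvectors):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [-2.0, 4.0]])
    evals, P = np.linalg.eig(A)                # columns of P are eigenvectors of A
    D = np.linalg.inv(P) @ A @ P               # P is nonsingular here, so this is defined
    print(np.allclose(D, np.diag(evals)))      # True: D is diagonal with the eigenvalues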
Theorem 7.5 If the roots of the characteristic polynomial of an n x n matrix A are all different from each other (i.e., distinct), then A is diagonalizable.
Theorem 7.6 All roots of the
characteristic
polynomial of a symmetric matrix are real numbers.
Theorem 7.7 If A is a
symmetric
matrix, then the eigenvectors that belong to distinct eigenvalues
of
A are orthogonal.
Definition 7.4 A real square matrix A is called orthogonal if A⁻¹ = Aᵀ, i.e. if AᵀA = In.
Theorem 7.8 The n x n matrix A is orthogonal if and only if the columns (rows) of A form an orthonormal set.
Theorem 7.9 If A is a symmetric n x n matrix, then there exists an orthogonal matrix P such that P⁻¹AP = PᵀAP = D. The eigenvalues of A lie on the main diagonal of D.
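A closing NumPy sketch of Theorem 7.9 with an illustrative symmetric matrix (np.linalg.eigh is the symmetric eigensolver and returns orthonormal eigenvectors):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])                       # symmetric, eigenvalues 1 and 3
    evals, P = np.linalg.eigh(A)
    print(np.allclose(P.T @ P, np.eye(2)))           # True: P is orthogonal
    print(np.allclose(P.T @ A @ P, np.diag(evals)))  # True: P^T A P = D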