Linear Algebra, Notes 2
Def 3.3 Let A = [aij] be an nxn matrix. Let Mij be the (n-1)x(n-1) submatrix of A obtained by deleting the ith row and the jth column of A. The determinant det(Mij) is called the minor of aij.
Def 3.4 Let A = [aij] be an nxn matrix. The cofactor Aij of aij is defined as Aij = (-1)^(i+j) det(Mij).
Theorem 3.10 Let A = [aij] be an nxn matrix. Then
- det(A) = ai1Ai1 + ai2Ai2 + ... + ainAin (expansion along the ith row)
- det(A) = a1jA1j + a2jA2j + ... + anjAnj (expansion along the jth column)
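As an illustrative sketch (not part of the notes), the row expansion of Theorem 3.10 with i = 1 gives a recursive determinant; the names `minor` and `det` are chosen here for illustration:

```python
def minor(A, i, j):
    # submatrix of A with row i and column j deleted (0-based indices here)
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    # cofactor expansion along the first row (Theorem 3.10 with i = 1)
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -> -3
```

This recursion costs O(n!) operations, so it is only practical for small matrices; row reduction is the usual computational route.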
Theorem 3.11 Let A = [aij] be an nxn matrix. Then
- ai1Ak1 + ai2Ak2 + ... + ainAkn = 0 for i ≠ k
- a1jA1k + a2jA2k + ... + anjAnk = 0 for j ≠ k
Def 3.5 Let A = [aij] be an nxn matrix. The nxn matrix adj A, called the adjoint of A, is the matrix whose (i,j)th entry is the cofactor Aji of aji.
Theorem 3.12 Let A = [aij] be an nxn matrix. Then A(adj A) = (adj A)A = det(A)In.
Corollary 3.4 Let A be an nxn matrix with det(A) ≠ 0. Then A^(-1) = (1/det(A)) (adj A).
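A minimal sketch of Corollary 3.4 over the rationals, assuming integer entries and a nonzero determinant; `adj` and `inverse` are illustrative names, and `det`/`minor` repeat the cofactor expansion above:

```python
from fractions import Fraction

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adj(A):
    # (i,j) entry of adj A is the cofactor A_ji of a_ji (note the transpose)
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

def inverse(A):
    d = Fraction(det(A))  # must be nonzero (Corollary 3.4)
    return [[Fraction(x) / d for x in row] for row in adj(A)]

print(inverse([[2, 1], [5, 3]]))  # -> [[3, -1], [-5, 2]] (det A = 1)
```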
Theorem 3.13 (Cramer's Rule) Let Ax = b be a linear system, where A is an nxn matrix with det(A) ≠ 0. Then the system has the unique solution
x1 = det(A1)/det(A), x2 = det(A2)/det(A), ..., xn = det(An)/det(A),
where Ai is the matrix obtained from A by replacing the ith column of A by b.
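Cramer's rule translates directly into code; a hedged sketch with exact rational arithmetic (names `cramer`, `det`, `minor` are illustrative):

```python
from fractions import Fraction

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def cramer(A, b):
    # x_i = det(A_i)/det(A), where A_i has column i replaced by b
    n = len(A)
    d = det(A)
    xs = []
    for i in range(n):
        Ai = [row[:] for row in A]
        for r in range(n):
            Ai[r][i] = b[r]
        xs.append(Fraction(det(Ai), d))
    return xs

# 2x + y = 5, x + 3y = 5
print(cramer([[2, 1], [1, 3]], [5, 5]))  # -> [2, 1]
```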
Def 4.4 A real vector space is a set V of elements on which two operations + and * are defined with these properties:
(a) If u, v are elements in V, then u+v is in V (V is closed under +).
(i) u+v = v+u for all u, v in V.
(ii) u+(v+w) = (u+v)+w for all u, v, w in V.
(iii) There exists an element 0 in V such that u+0 = 0+u = u for all u in V.
(iv) For each u in V there exists an element -u in V such that u+(-u) = (-u)+u = 0.
(b) If u is any element in V and c is a real number, then c*u (or cu) is in V (V is closed under scalar multiplication).
(i) c*(u+v) = c*u + c*v for any u, v in V and any real number c.
(ii) (c+d)*u = c*u + d*u for any u in V and real numbers c, d.
(iii) c*(d*u) = (cd)*u for any u in V and real numbers c, d.
(iv) 1*u = u for any u in V.
Theorem 4.2 If V is a vector space, then
(a) 0*u = 0 for any u in V.
(b) c*0 = 0 for any scalar c.
(c) If c*u = 0, then either c = 0 or u = 0.
(d) (-1)*u = -u for any vector u in V.
Def 4.5 Let V be a vector space and W a nonempty subset of V. If W is a vector space with respect to the operations in V, then W is called a subspace of V.
Theorem 4.3 Let V be a vector space and let W be a nonempty subset of V. Then W is a subspace of V if and only if the following conditions hold:
(a) If u, v are in W, then u+v is in W.
(b) If c is a real number and u is any vector in W, then c*u is in W.
Definition 4.6 Let v1, v2, ..., vn be vectors in a vector space V. A vector v in V is a linear combination of v1, v2, ..., vn if
v = a1v1 + a2v2 + ... + anvn
for some real numbers a1, a2, ..., an.
Definition 4.7 If S = {v1, v2, ..., vn} is a set of vectors in a vector space V, then the set of all vectors in V that are linear combinations of the vectors in S is denoted by span S or span {v1, v2, ..., vn}.
Theorem 4.4 Let S = {v1, v2, ..., vk} be a set of vectors in a vector space V. Then span S is a subspace of V.
Definition 4.8 Let S be a set of vectors in a vector space V. If every vector in V is a linear combination of the vectors in S, then S is said to span V, or V is spanned by the set S; that is, span S = V.
Definition 4.9 The vectors v1, v2, ..., vn in a vector space V are said to be linearly dependent if there exist constants a1, a2, ..., an, not all zero, such that
a1v1 + a2v2 + ... + anvn = 0.
Otherwise, v1, v2, ..., vn are linearly independent; that is, whenever a1v1 + a2v2 + ... + anvn = 0, then a1 = a2 = ... = an = 0.
Theorem 4.5 Let S = {v1, v2, ..., vn} be a set of n vectors in Rn. Let A be the matrix whose columns (rows) are the elements of S. Then S is linearly independent if and only if det(A) ≠ 0.
Theorem 4.6 Let S1 and S2 be finite subsets of a vector space and let S1 be a subset of S2. Then the following are true:
(a) If S1 is linearly dependent, so is S2.
(b) If S2 is linearly independent, so is S1.
Theorem 4.7 The nonzero vectors v1, v2, ..., vn in a vector space V are linearly dependent if and only if one of the vectors vj (j ≥ 2) is a linear combination of the preceding vectors v1, v2, ..., vj-1.
Definition 4.10 The vectors v1, v2, ..., vk in a vector space V are said to form a basis for V if
(a) v1, v2, ..., vk span V and
(b) v1, v2, ..., vk are linearly independent.
Natural (standard) basis in Rn: {(1 0 ... 0)T, (0 1 0 ... 0)T, ..., (0 ... 0 1)T}.
Theorem 4.8 If S = {v1, v2, ..., vn} is a basis for the vector space V, then every vector in V can be written in one and only one way as a linear combination of the vectors in S.
Theorem 4.9 Let S = {v1, v2, ..., vn} be a set of nonzero vectors in a vector space V and let W = span S. Then some subset of S is a basis for W.
Theorem 4.10 If S = {v1, v2, ..., vn} is a basis for a vector space V and T = {w1, w2, ..., wr} is a linearly independent set of vectors in V, then r ≤ n.
Corollary 4.1 If S = {v1, v2, ..., vn} and T = {w1, w2, ..., wm} are bases for a vector space V, then n = m.
Definition 4.11 The dimension of a nonzero vector space V (dim V) is the number of vectors in a basis for V. The dimension of the trivial vector space {0} is 0.
Definition 4.12 Let S be a set of vectors in a vector space V. A subset T of S is called a maximal independent subset of S if T is a linearly independent set of vectors that is not properly contained in any other linearly independent subset of S.
Corollary 4.2 If the vector space V has dimension n, then a maximal independent subset of vectors in V contains n vectors.
Corollary 4.3 If a vector space V has dimension n, then a minimal spanning set for V (one that does not properly contain any other set spanning V) contains n vectors.
Corollary 4.4 If a vector space V has dimension n, then any subset of m > n vectors must be linearly dependent.
Corollary 4.5 If a vector space V has dimension n, then any subset of m < n vectors cannot span V.
Theorem 4.11 If S is a linearly independent set of vectors in a finite dimensional vector space V, then there is a basis for V that contains S.
Theorem 4.12 Let V be an n-dimensional vector space.
(a) If S = {v1, v2, ..., vn} is a linearly independent set of vectors in V, then S is a basis for V.
(b) If S = {v1, v2, ..., vn} spans V, then S is a basis for V.
Theorem 4.13 Let S be a finite subset of the vector space V that spans V. A maximal independent subset T of S is a basis for V.
Definition 4.13 Let (V, +, *) and (W, (+), (*)) be real vector spaces. A one-to-one function L mapping V onto W is called an isomorphism of V onto W if
(a) L(u + v) = L(u) (+) L(v) for u, v in V;
(b) L(c * u) = c (*) L(u) for u in V, c a real number.
Theorem 4.14 If V is an n-dimensional real vector space, then V is isomorphic to Rn.
Theorem 4.15
(a) Every vector space V is isomorphic to itself.
(b) If V is isomorphic to W, then W is isomorphic to V.
(c) If U is isomorphic to V and V is isomorphic to W, then U is isomorphic to W.
Theorem 4.16 Two finite dimensional vector spaces are isomorphic if and only if their dimensions are equal.
Corollary 4.6 If V is a finite dimensional vector space that is isomorphic to Rn, then dim V = n.
Definition 4.14 Let A be an mxn matrix. The rows of A, considered as vectors in Rn, span a subspace of Rn called the row space of A. Similarly, the columns of A, considered as vectors in Rm, span a subspace of Rm called the column space of A.
Theorem 4.17 If A and B are two mxn row (column) equivalent matrices, then the row (column) spaces of A and B are equal.
Def 4.15 The dimension of the row (column) space of A is called the row (column) rank of A.
Theorem 4.18 The row and column rank of the m x n matrix A are equal.
Theorem 4.19 If A is an m x n matrix, then rank A + nullity A = n.
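Rank can be computed by Gaussian elimination over the rationals (a standard approach, sketched here with illustrative names, not the notes' own procedure); nullity then follows from Theorem 4.19 as n - rank A:

```python
from fractions import Fraction

def rank(M):
    # row reduce an exact rational copy of M and count the nonzero rows
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
n = 3
print(rank(A), n - rank(A))  # -> 2 1 (rank 2, nullity 1)
```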
Theorem 4.20 If A is an n x n matrix, then rank A = n if and only if A is row equivalent to In.
Corollary 4.7 An n x n matrix A is nonsingular if and only if rank A = n.
Corollary 4.8 If A is an n x n matrix, then rank A = n if and only if det(A) ≠ 0.
Corollary 4.9 The homogeneous system Ax = 0, where A is n x n, has a nontrivial solution if and only if rank A < n.
Corollary 4.10 Let A be an n x n matrix. The linear system Ax = b has a unique solution for every n x 1 matrix b if and only if rank A = n.
Theorem 4.21 The linear system Ax = b has a solution if and only if rank A = rank [A|b], that is, if and only if the ranks of the coefficient and augmented matrices are equal.
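The rank test of Theorem 4.21 is easy to automate; a self-contained sketch (the `rank` helper is the same illustrative elimination over the rationals as above):

```python
from fractions import Fraction

def rank(M):
    A = [[Fraction(x) for x in row] for row in M]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def consistent(A, b):
    # Theorem 4.21: Ax = b is solvable iff rank A = rank [A|b]
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(aug)

A = [[1, 1], [2, 2]]
print(consistent(A, [3, 6]))  # True: the second equation is twice the first
print(consistent(A, [3, 7]))  # False: the system is inconsistent
```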
The following are equivalent for an n x n matrix A:
- A is nonsingular.
- Ax = 0 has only the trivial solution.
- A is row (column) equivalent to In.
- For every vector b in Rn, the system Ax = b has a unique solution.
- A is a product of elementary matrices.
- det A ≠ 0.
- The rank of A is n.
- The nullity of A is zero.
- The rows of A form a linearly independent set of vectors in Rn.
- The columns of A form a linearly independent set of vectors in Rn.
Def 5.1 Let V be a real vector space. An inner product on V is a function V x V → R satisfying:
(i) (u,u) ≥ 0, with (u,u) = 0 if and only if u = 0.
(ii) (u,v) = (v,u) for u, v in V.
(iii) (u+v,w) = (u,w) + (v,w) for u, v, w in V.
(iv) (cu,v) = c(u,v) for c in R and u, v in V.
Theorem 5.2 Let S = {u1, u2, ..., un} be an ordered basis for a finite dimensional vector space V with an inner product. Let cij = (ui, uj) and C = [cij]. Then
(a) C is a symmetric matrix.
(b) C determines (v,w) for every v and w in V.
Def 5.2 A vector space with an inner product is called an inner product space. If the space is finite dimensional, it is called a Euclidean space.
Theorem 5.3 (Cauchy-Schwarz Inequality) If u, v are vectors in an inner product space V, then |(u,v)| ≤ ||u|| ||v||.
Corollary 5.1 (Triangle Inequality) If u, v are vectors in an inner product space V, then ||u+v|| ≤ ||u|| + ||v||.
Def 5.3 If V is an inner product space, we define the distance between two vectors u and v in V as d(u,v) = ||u-v||.
Def 5.4 Let V be an inner product space. Two vectors u and v in V are orthogonal if (u,v) = 0.
Def 5.5 Let V be an inner product space. A set S of vectors is called orthogonal if any two distinct vectors in S are orthogonal. If, in addition, each vector in S is of unit length, then S is called orthonormal.
Theorem 5.4 Let S = {u1, u2, ..., un} be a finite, orthogonal set of nonzero vectors in an inner product space V. Then S is linearly independent.
Theorem 5.5 Let S = {u1, u2, ..., un} be an orthonormal basis for a Euclidean space V and let v be any vector in V. Then
v = c1u1 + c2u2 + ... + cnun,
where ci = (v,ui), i = 1, 2, ..., n.
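A quick numeric check of Theorem 5.5 in R^2 with the standard dot product; the basis below is an illustrative rotated standard basis, not one from the notes:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# an illustrative orthonormal basis of R^2 (a rotated standard basis)
u1 = (0.6, 0.8)
u2 = (-0.8, 0.6)
v = (2.0, 1.0)

c = [dot(v, u1), dot(v, u2)]  # c_i = (v, u_i) per Theorem 5.5
recon = [c[0] * x + c[1] * y for x, y in zip(u1, u2)]
print(c, recon)  # c is approximately [2.0, -1.0]; recon recovers v
```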
Theorem 5.6 (Gram-Schmidt Process) Let V be an inner product space and W ≠ {0} an m-dimensional subspace of V. Then there exists an orthonormal basis T = {w1, w2, ..., wm} for W.
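The Gram-Schmidt construction behind Theorem 5.6 can be sketched for R^n with the standard dot product (a hedged sketch assuming the input vectors are linearly independent; `gram_schmidt` is an illustrative name):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vs):
    # orthonormalize a linearly independent list of vectors (Theorem 5.6)
    basis = []
    for v in vs:
        w = list(v)
        for u in basis:
            p = dot(w, u)  # subtract the projection of w onto u
            w = [wi - p * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(dot(w, w))  # nonzero if the input is independent
        basis.append([wi / norm for wi in w])
    return basis

T = gram_schmidt([(3, 1), (2, 2)])
print(T)  # two orthonormal vectors spanning R^2
```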
Theorem 5.7 Let V be an n-dimensional Euclidean space, and let S = {u1, u2, ..., un} be an orthonormal basis for V. If v = a1u1 + a2u2 + ... + anun and w = c1u1 + c2u2 + ... + cnun, then (v,w) = a1c1 + a2c2 + ... + ancn.