
SCHAUM'S OUTLINE OF
THEORY AND PROBLEMS OF

LINEAR ALGEBRA

BY

SEYMOUR LIPSCHUTZ, Ph.D.
Associate Professor of Mathematics
Temple University

SCHAUM'S OUTLINE SERIES
McGRAW-HILL BOOK COMPANY
New York, St. Louis, San Francisco, Toronto, Sydney

Copyright © 1968 by McGraw-Hill, Inc. All Rights Reserved. Printed in the United States of America. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.


Preface Linear algebra has in recent years become an essential part of the mathematical background required of mathematicians, engineers, physicists and other scientists. This requirement reflects the importance and wide applications of the subject matter. This book is designed for use as a textbook for a formal course in linear algebra or as a supplement to all current standard texts. It aims to present an introduction to linear algebra which will be found helpful to all readers regardless of their fields of specialization. More material has been included than can be covered in most first courses. This has been done to make the book more flexible, to provide a useful book of reference, and to stimulate further interest in the subject.

Each chapter begins with clear statements of pertinent definitions, principles and theorems together with illustrative and other descriptive material. This is followed by graded sets of solved and supplementary problems. The solved problems serve to illustrate and amplify the theory, bring into sharp focus those fine points without which the student continually feels himself on unsafe ground, and provide the repetition of basic principles so vital to effective learning. Numerous proofs of theorems are included among the solved problems. The supplementary problems serve as a complete review of the material of each chapter.

The first three chapters treat of vectors in Euclidean space, linear equations and matrices. These provide the motivation and basic computational tools for the abstract treatment of vector spaces and linear mappings which follow. A chapter on eigenvalues and eigenvectors, preceded by determinants, gives conditions for representing a linear operator by a diagonal matrix. This naturally leads to the study of various canonical forms, specifically the triangular, Jordan and rational canonical forms. In the last chapter, on inner product spaces, the spectral theorem for symmetric operators is obtained and is applied to the diagonalization of real quadratic forms. For completeness, the appendices include sections on sets and relations, algebraic structures and polynomials over a field.

I wish to thank many friends and colleagues, especially Dr. Martin Silverstein and Dr. Hwa Tsang, for invaluable suggestions and critical review of the manuscript. I also want to express my gratitude to Daniel Schaum and Nicola Monti for their very helpful cooperation.

Seymour Lipschutz
Temple University
January, 1968

CONTENTS

Chapter 1   VECTORS IN R^n AND C^n
    Introduction. Vectors in R^n. Vector addition and scalar multiplication. Dot product. Norm and distance in R^n. Complex numbers. Vectors in C^n.

Chapter 2   LINEAR EQUATIONS
    Introduction. Linear equation. System of linear equations. Solution of a system of linear equations. Solution of a homogeneous system of linear equations.

Chapter 3   MATRICES
    Introduction. Matrices. Matrix addition and scalar multiplication. Matrix multiplication. Transpose. Matrices and systems of linear equations. Echelon matrices. Row equivalence and elementary row operations. Square matrices. Algebra of square matrices. Invertible matrices. Block matrices.

Chapter 4   VECTOR SPACES AND SUBSPACES
    Introduction. Examples of vector spaces. Subspaces. Linear combinations, linear spans. Row space of a matrix. Sums and direct sums.

Chapter 5   BASIS AND DIMENSION
    Introduction. Linear dependence. Basis and dimension. Dimension and subspaces. Rank of a matrix. Applications to linear equations. Coordinates.

Chapter 6   LINEAR MAPPINGS
    Mappings. Linear mappings. Kernel and image of a linear mapping. Singular and nonsingular mappings. Linear mappings and systems of linear equations. Operations with linear mappings. Algebra of linear operators. Invertible operators.

Chapter 7   MATRICES AND LINEAR OPERATORS
    Introduction. Matrix representation of a linear operator. Change of basis. Similarity. Matrices and linear mappings.

Chapter 8   DETERMINANTS
    Introduction. Permutations. Determinant. Properties of determinants. Minors and cofactors. Classical adjoint. Applications to linear equations. Determinant of a linear operator. Multilinearity and determinants.

Chapter 9   EIGENVALUES AND EIGENVECTORS
    Introduction. Polynomials of matrices and linear operators. Eigenvalues and eigenvectors. Diagonalization and eigenvectors. Characteristic polynomial, Cayley-Hamilton theorem. Minimum polynomial. Characteristic and minimum polynomials of linear operators.

Chapter 10  CANONICAL FORMS
    Introduction. Triangular form. Invariance. Invariant direct-sum decompositions. Primary decomposition. Nilpotent operators. Jordan canonical form. Cyclic subspaces. Rational canonical form. Quotient spaces.

Chapter 11  LINEAR FUNCTIONALS AND THE DUAL SPACE
    Introduction. Linear functionals and the dual space. Dual basis. Second dual space. Annihilators. Transpose of a linear mapping.

Chapter 12  BILINEAR, QUADRATIC AND HERMITIAN FORMS
    Bilinear forms. Bilinear forms and matrices. Alternating bilinear forms. Symmetric bilinear forms, quadratic forms. Real symmetric bilinear forms. Law of inertia. Hermitian forms.

Chapter 13  INNER PRODUCT SPACES
    Introduction. Inner product spaces. Cauchy-Schwarz inequality. Orthogonality. Orthonormal sets. Gram-Schmidt orthogonalization process. Linear functionals and adjoint operators. Analogy between A(V) and C, special operators. Orthogonal and unitary operators. Orthogonal and unitary matrices. Change of orthonormal basis. Positive operators. Diagonalization and canonical forms in Euclidean spaces. Diagonalization and canonical forms in unitary spaces. Spectral theorem.

Appendix A  SETS AND RELATIONS
    Sets, elements. Set operations. Product sets. Relations. Equivalence relations.

Appendix B  ALGEBRAIC STRUCTURES
    Introduction. Groups. Rings, integral domains and fields. Modules.

Appendix C  POLYNOMIALS OVER A FIELD
    Introduction. Ring of polynomials. Notation. Divisibility. Factorization.

INDEX

Chapter 1

Vectors in R^n and C^n

INTRODUCTION

In various physical applications there appear certain quantities, such as temperature and speed, which possess only "magnitude". These can be represented by real numbers and are called scalars. On the other hand, there are also quantities, such as force and velocity, which possess both "magnitude" and "direction". These quantities can be represented by arrows (having appropriate lengths and directions and emanating from some given reference point O) and are called vectors. In this chapter we study the properties of such vectors in some detail.

We begin by considering the following operations on vectors.

(i) Addition: The resultant u + v of two vectors u and v is obtained by the so-called parallelogram law, i.e. u + v is the diagonal of the parallelogram formed by u and v as shown on the right.

(ii) Scalar multiplication: The product ku of a real number k by a vector u is obtained by multiplying the magnitude of u by k and retaining the same direction if k ≥ 0 or the opposite direction if k < 0.
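In coordinates (anticipating the componentwise definitions used throughout this chapter), both operations act entry by entry. A minimal sketch in Python; the helper names add and scalar_mult are ours, not from the text:

```python
def add(u, v):
    """Vector addition in R^n: add corresponding components."""
    return tuple(a + b for a, b in zip(u, v))

def scalar_mult(k, u):
    """Scalar multiplication: multiply every component by k."""
    return tuple(k * a for a in u)

# The parallelogram law in coordinates: (1, 2) + (3, 1) = (4, 3)
print(add((1, 2), (3, 1)))        # (4, 3)
# A negative scalar reverses the direction: -2 * (1, 2) = (-2, -4)
print(scalar_mult(-2, (1, 2)))    # (-2, -4)
```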

1.9. Determine k so that u·v = 0 where: (i) u = (1, k, -3) and v = (2, -5, 4); (ii) u = (2, 3k, -4, 1, 5) and v = (6, -1, 3, 7, 2k).

(i) u·v = 1·2 + k·(-5) + (-3)·4 = 2 - 5k - 12 = -5k - 10. Set u·v = 0 and solve for k: -5k - 10 = 0, so k = -2.

(ii) u·v = 2·6 + 3k·(-1) + (-4)·3 + 1·7 + 5·2k = 12 - 3k - 12 + 7 + 10k = 7k + 7. Set u·v = 0 and solve for k: 7k + 7 = 0, so k = -1.

1.10. Prove Theorem 1.2: For any vectors u, v, w ∈ R^n and any scalar k ∈ R: (i) (u + v)·w = u·w + v·w; (ii) (ku)·v = k(u·v); (iii) u·v = v·u; (iv) u·u ≥ 0, and u·u = 0 iff u = 0.

Let u = (u1, u2, ..., un), v = (v1, v2, ..., vn) and w = (w1, w2, ..., wn).

(i) Since u + v = (u1 + v1, u2 + v2, ..., un + vn),

    (u + v)·w = (u1 + v1)w1 + (u2 + v2)w2 + ... + (un + vn)wn
              = u1w1 + v1w1 + u2w2 + v2w2 + ... + unwn + vnwn
              = (u1w1 + u2w2 + ... + unwn) + (v1w1 + v2w2 + ... + vnwn)
              = u·w + v·w

(ii) Since ku = (ku1, ku2, ..., kun),

    (ku)·v = ku1v1 + ku2v2 + ... + kunvn = k(u1v1 + u2v2 + ... + unvn) = k(u·v)

(iii) u·v = u1v1 + u2v2 + ... + unvn = v1u1 + v2u2 + ... + vnun = v·u

(iv) Since ui^2 is nonnegative for each i, and since the sum of nonnegative real numbers is nonnegative, u·u = u1^2 + u2^2 + ... + un^2 ≥ 0. Furthermore, u·u = 0 iff ui = 0 for each i, that is, iff u = 0.

DISTANCE AND NORM IN R^n

1.11. Find the distance d(u, v) between the vectors u and v where: (i) u = (1, 7), v = (6, -5); (ii) u = (3, -5, 4), v = (6, 2, -1); (iii) u = (5, 3, -2, -4, -1), v = (2, -1, 0, -7, 2).

In each case use the formula d(u, v) = √((u1 - v1)^2 + ... + (un - vn)^2).

(i) d(u, v) = √((1 - 6)^2 + (7 + 5)^2) = √(25 + 144) = √169 = 13

(ii) d(u, v) = √((3 - 6)^2 + (-5 - 2)^2 + (4 + 1)^2) = √(9 + 49 + 25) = √83

(iii) d(u, v) = √((5 - 2)^2 + (3 + 1)^2 + (-2 - 0)^2 + (-4 + 7)^2 + (-1 - 2)^2) = √47

1.12. Find k such that d(u, v) = 6 where u = (2, k, 1, -4) and v = (3, -1, 6, -3).

    (d(u, v))^2 = (2 - 3)^2 + (k + 1)^2 + (1 - 6)^2 + (-4 + 3)^2 = k^2 + 2k + 28

Now solve k^2 + 2k + 28 = 6^2 = 36 to obtain k = 2, -4.

1.13. Find the norm ||u|| of the vector u if (i) u = (2, -7), (ii) u = (3, -12, -4).

In each case use the formula ||u|| = √(u1^2 + u2^2 + ...).

(i) ||u|| = √(2^2 + (-7)^2) = √(4 + 49) = √53

(ii) ||u|| = √(3^2 + (-12)^2 + (-4)^2) = √(9 + 144 + 16) = √169 = 13
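The formulas used in these problems translate directly into code. A minimal sketch (the helper names dot, norm and dist are ours), checked against two of the answers worked out above:

```python
import math

def dot(u, v):
    """Dot product u.v = u1*v1 + ... + un*vn."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Norm ||u|| = sqrt(u.u)."""
    return math.sqrt(dot(u, u))

def dist(u, v):
    """Distance d(u, v) = ||u - v||."""
    return norm([a - b for a, b in zip(u, v)])

# d((1, 7), (6, -5)) = sqrt(25 + 144) = 13
print(dist((1, 7), (6, -5)))    # 13.0
# ||(3, -12, -4)|| = sqrt(9 + 144 + 16) = 13
print(norm((3, -12, -4)))       # 13.0
```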

1.14. Determine k such that ||u|| = √39 where u = (1, k, -2, 5).

    ||u||^2 = 1^2 + k^2 + (-2)^2 + 5^2 = k^2 + 30

Now solve k^2 + 30 = 39 and obtain k = 3, -3.

1.15. Show that ||u|| ≥ 0, and ||u|| = 0 iff u = 0.

By Theorem 1.2, u·u ≥ 0, and u·u = 0 iff u = 0. Since ||u|| = √(u·u), the result follows.

1.16. Prove Theorem 1.3 (Cauchy-Schwarz): For any vectors u = (u1, ..., un) and v = (v1, ..., vn) in R^n, |u·v| ≤ ||u|| ||v||.

We shall prove the following stronger statement:

    |u·v| ≤ |u1v1| + ... + |unvn| ≤ ||u|| ||v||

If u = 0 or v = 0, then the inequality reduces to 0 ≤ 0 ≤ 0 and is therefore true. Hence we need only consider the case in which u ≠ 0 and v ≠ 0, i.e. where ||u|| ≠ 0 and ||v|| ≠ 0. Furthermore,

    |u·v| = |u1v1 + ... + unvn| ≤ |u1v1| + ... + |unvn|

Thus we need only prove the second inequality.

Now for any real numbers x, y ∈ R, 0 ≤ (x - y)^2 = x^2 - 2xy + y^2 or, equivalently,

    2xy ≤ x^2 + y^2     (1)

Set x = |ui|/||u|| and y = |vi|/||v|| in (1) to obtain, for any i,

    2 |ui| |vi| / (||u|| ||v||) ≤ |ui|^2/||u||^2 + |vi|^2/||v||^2     (2)

But, by definition of the norm of a vector, ||u||^2 = Σ ui^2 = Σ |ui|^2 and ||v||^2 = Σ vi^2 = Σ |vi|^2. Thus summing (2) with respect to i and using |uivi| = |ui| |vi|, we have

    2 Σ |uivi| / (||u|| ||v||) ≤ ||u||^2/||u||^2 + ||v||^2/||v||^2 = 2

that is,

    Σ |uivi| / (||u|| ||v||) ≤ 1

Multiplying both sides by ||u|| ||v||, we obtain the required inequality.
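The stronger chain proved here, |u·v| ≤ Σ|ui vi| ≤ ||u|| ||v||, is easy to spot-check numerically. A small illustration with vectors of our own choosing:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u = (1, -2, 3, 4)
v = (2, 0, -1, 5)

lhs = abs(dot(u, v))                          # |u.v| = 19
mid = sum(abs(a * b) for a, b in zip(u, v))   # sum of |u_i v_i| = 25
rhs = norm(u) * norm(v)                       # ||u|| ||v|| = 30 (up to rounding)
print(lhs <= mid <= rhs)    # True
```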

1.17. Prove Minkowski's inequality: For any vectors u = (u1, ..., un) and v = (v1, ..., vn) in R^n, ||u + v|| ≤ ||u|| + ||v||.

If ||u + v|| = 0, the inequality clearly holds. Hence we need only consider the case ||u + v|| ≠ 0.

Now |ui + vi| ≤ |ui| + |vi| for any real numbers ui, vi ∈ R. Hence

    ||u + v||^2 = Σ (ui + vi)^2 = Σ |ui + vi|^2
                = Σ |ui + vi| |ui + vi| ≤ Σ |ui + vi| (|ui| + |vi|)
                = Σ |ui + vi| |ui| + Σ |ui + vi| |vi|

But by the Cauchy-Schwarz inequality (see preceding problem),

    Σ |ui + vi| |ui| ≤ ||u + v|| ||u||   and   Σ |ui + vi| |vi| ≤ ||u + v|| ||v||

Thus

    ||u + v||^2 ≤ ||u + v|| ||u|| + ||u + v|| ||v|| = ||u + v|| (||u|| + ||v||)

Dividing by ||u + v||, we obtain the required inequality.
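Minkowski's inequality is the triangle inequality for the norm. The proof above is algebraic; as a quick randomized sanity check (illustrative only, with vectors we generate ourselves):

```python
import math
import random

def norm(u):
    """||u|| = sqrt(u1^2 + ... + un^2)."""
    return math.sqrt(sum(a * a for a in u))

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(5)]
    v = [random.uniform(-10, 10) for _ in range(5)]
    s = [a + b for a, b in zip(u, v)]
    # small tolerance for floating-point rounding
    assert norm(s) <= norm(u) + norm(v) + 1e-9
print("triangle inequality held in 1000 random trials")
```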

Thus the original system has been reduced to the following equivalent system:

    2x + 4y - z + 2v + 2w = 1
             5z - 8v + 2w = -17
              3z + v - 5w = 1

Observe that y has also been eliminated from the second and third equations. Here the unknown z plays the role of the unknown xj2 above.

We note that the above equations, excluding the first, form a subsystem which has fewer equations and fewer unknowns than the original system (*). We also note that:

(i) if an equation 0x1 + ... + 0xn = b, b ≠ 0, occurs, then the system is inconsistent and has no solution;

(ii) if an equation 0x1 + ... + 0xn = 0 occurs, then the equation can be deleted without affecting the solution.

Continuing the above process with each new "smaller" subsystem, we obtain by induction that the system (*) is either inconsistent or is reducible to an equivalent system in the following form

    a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
            a2j2xj2 + a2,j2+1xj2+1 + ... + a2nxn = b2
            ..............................................
                    arjrxjr + ar,jr+1xjr+1 + ... + arnxn = br     (***)

where 1 < j2 < ... < jr and where the leading coefficients are not zero:

    a11 ≠ 0,   a2j2 ≠ 0,   ...,   arjr ≠ 0

(For notational convenience we use the same symbols aij, bk in the system (***) as we used in the system (*), but clearly they may denote different scalars.)

Definition: The above system (***) is said to be in echelon form; the unknowns xi which do not appear at the beginning of any equation (i ≠ 1, j2, ..., jr) are termed free variables.

The following theorem applies.

Theorem 2.2: The solution of the system (***) in echelon form is as follows. There are two cases:

(i) r = n. That is, there are as many equations as unknowns. Then the system has a unique solution.

(ii) r < n. That is, there are fewer equations than unknowns. Then we can arbitrarily assign values to the n - r free variables and obtain a solution of the system.

Example 2.5: We reduce the following system by applying the operations L2 → -L1 + L2, L3 → -2L1 + L3 and L4 → -2L1 + L4, and then the operations L3 → -L2 + L3 and L4 → -2L2 + L4:

     x + 2y - 3z = 4
     x + 3y + z  = 11
    2x + 5y - 4z = 13
    2x + 6y + 2z = 22

to

     x + 2y - 3z = 4
          y + 4z = 7
          y + 2z = 5
         2y + 8z = 14

to

     x + 2y - 3z = 4
          y + 4z = 7
            -2z  = -2
              0  = 0

(The equation 0 = 0 is deleted.) Observe first that the system is consistent since there is no equation of the form 0 = b, with b ≠ 0. Furthermore, since in echelon form there are three equations in the three unknowns, the system has a unique solution. By the third equation, z = 1. Substituting z = 1 into the second equation, we obtain y = 3. Substituting z = 1 and y = 3 into the first equation, we find x = 1. Thus x = 1, y = 3 and z = 1 or, in other words, the 3-tuple (1, 3, 1), is the unique solution of the system.
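Back-substitution in the case r = n of Theorem 2.2 is mechanical enough to sketch in code. A minimal illustration (the helper back_substitute is ours, not from the text), assuming an echelon system with nonzero leading coefficients such as x + 2y - 3z = 4, y + 4z = 7, -2z = -2:

```python
def back_substitute(A, b):
    """Solve Ax = b for an n x n upper-triangular A with nonzero diagonal,
    working from the last equation up."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# x + 2y - 3z = 4,  y + 4z = 7,  -2z = -2
A = [[1, 2, -3],
     [0, 1, 4],
     [0, 0, -2]]
print(back_substitute(A, [4, 7, -2]))   # [1.0, 3.0, 1.0]
```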

Example 2.6: We reduce the following system by applying the operations L2 → -2L1 + L2 and L3 → -5L1 + L3, and then the operation L3 → -2L2 + L3:

     x + 2y - 2z + 3w  = 2
    2x + 4y - 3z + 4w  = 5
    5x + 10y - 8z + 11w = 12

to

     x + 2y - 2z + 3w = 2
              z - 2w  = 1
             2z - 4w  = 2

to

     x + 2y - 2z + 3w = 2
              z - 2w  = 1

(The equation 0 = 0, produced by the last operation, is deleted.) The system is consistent, and since there are more unknowns than equations in echelon form, the system has an infinite number of solutions. In fact, there are two free variables, y and w, and so a particular solution can be obtained by giving y and w any values. For example, let w = 1 and y = -2. Substituting w = 1 into the second equation, we obtain z = 3. Putting w = 1, z = 3 and y = -2 into the first equation, we find x = 9. Thus x = 9, y = -2, z = 3 and w = 1 or, in other words, the 4-tuple (9, -2, 3, 1) is a particular solution of the system.

Remark: We find the general solution of the system in the above example as follows. Let the free variables be assigned arbitrary values; say, y = a and w = b. Substituting into the second equation, we obtain z = 1 + 2b. Putting y = a, z = 1 + 2b and w = b into the first equation, we find x = 4 - 2a + b. Thus the general solution of the system is

    x = 4 - 2a + b,   y = a,   z = 1 + 2b,   w = b

or, in other words, (4 - 2a + b, a, 1 + 2b, b), where a and b are arbitrary numbers. Frequently, the general solution is left in terms of the free variables y and w (instead of a and b) as follows:

    x = 4 - 2y + w,   z = 1 + 2w     or     (4 - 2y + w, y, 1 + 2w, w)

We will investigate further the representation of the general solution of a system of linear equations in a later chapter.
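A general solution can always be verified by substituting it back into every equation and checking that an identity results for all values of the parameters. A small randomized check, assuming the system x + 2y - 2z + 3w = 2, 2x + 4y - 3z + 4w = 5, 5x + 10y - 8z + 11w = 12 and the family (4 - 2a + b, a, 1 + 2b, b):

```python
import random

def check(a, b):
    """Substitute the parametrized solution into each equation."""
    x, y, z, w = 4 - 2*a + b, a, 1 + 2*b, b
    return (x + 2*y - 2*z + 3*w == 2 and
            2*x + 4*y - 3*z + 4*w == 5 and
            5*x + 10*y - 8*z + 11*w == 12)

random.seed(1)
print(all(check(random.randint(-50, 50), random.randint(-50, 50))
          for _ in range(100)))   # True
```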

Example 2.7: Consider two equations in two unknowns:

    a1x + b1y = c1
    a2x + b2y = c2

According to our theory, exactly one of the following three cases must occur:

(i) The system is inconsistent.
(ii) The system is equivalent to two equations in echelon form.
(iii) The system is equivalent to one equation in echelon form.

When linear equations in two unknowns with real coefficients are represented as lines in the plane R^2, the above cases can be interpreted geometrically as follows:

(i) The two lines are parallel.
(ii) The two lines intersect in a unique point.
(iii) The two lines are coincident.

SOLUTION OF A HOMOGENEOUS SYSTEM OF LINEAR EQUATIONS

If we begin with a homogeneous system of linear equations, then the system is clearly consistent since, for example, it has the zero solution 0 = (0, 0, ..., 0). Thus it can always be reduced to an equivalent homogeneous system in echelon form:

    a11x1 + a12x2 + a13x3 + ... + a1nxn = 0
            a2j2xj2 + a2,j2+1xj2+1 + ... + a2nxn = 0
            .............................................
                    arjrxjr + ar,jr+1xjr+1 + ... + arnxn = 0

Hence we have the two possibilities:

(i) r = n. Then the system has only the zero solution.
(ii) r < n. Then the system has a nonzero solution.

    A + B = ( A11 + B11   A12 + B12   ...   A1n + B1n )
            ( A21 + B21   A22 + B22   ...   A2n + B2n )
            ( .......................................... )
            ( Am1 + Bm1   Am2 + Bm2   ...   Amn + Bmn )

The case of matrix multiplication is less obvious but still true. That is, suppose matrices U and V are partitioned into blocks as follows

    U = ( U11  U12  ...  U1p )        V = ( V11  V12  ...  V1n )
        ( U21  U22  ...  U2p )            ( V21  V22  ...  V2n )
        ( .................. )            ( .................. )
        ( Um1  Um2  ...  Ump )            ( Vp1  Vp2  ...  Vpn )

such that the number of columns of each block Uik is equal to the number of rows of each block Vkj. Then

    UV = ( W11  W12  ...  W1n )
         ( W21  W22  ...  W2n )
         ( .................. )
         ( Wm1  Wm2  ...  Wmn )

where

    Wij = Ui1 V1j + Ui2 V2j + ... + Uip Vpj

The proof of the above formula for UV is straightforward, but detailed and lengthy. It is left as a supplementary problem (Problem 3.68).
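The block formula can be checked on a concrete case. A pure-Python sketch (helpers matmul, matadd and block are ours): a 4 x 4 product computed directly is compared with the same product assembled block by block from four 2 x 2 blocks.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(M, i, j):
    """2 x 2 block in block-row i, block-column j of a 4 x 4 matrix."""
    return [row[2*j:2*j+2] for row in M[2*i:2*i+2]]

U = [[1, 2, 0, 1],
     [3, 1, 2, 0],
     [0, 4, 1, 1],
     [2, 0, 3, 1]]
V = [[1, 0, 2, 1],
     [0, 1, 1, 0],
     [2, 2, 0, 1],
     [1, 0, 1, 3]]

W = matmul(U, V)
for i in range(2):
    for j in range(2):
        # W_ij = U_i1 V_1j + U_i2 V_2j, computed from the blocks alone
        Wij = matadd(matmul(block(U, i, 0), block(V, 0, j)),
                     matmul(block(U, i, 1), block(V, 1, j)))
        assert Wij == block(W, i, j)
print("block formula agrees with direct multiplication")
```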

Solved Problems

MATRIX ADDITION AND SCALAR MULTIPLICATION

3.1. Compute:

    (i)   ( 1   2  -3 )  +  ( 3  -5  4 )
          ( 0  -5   1 )     ( 2   6  5 )

    (ii)  ( 1   2  -3 )  +  ( 3  -1 )
          ( 0  -5   1 )     (-2  -3 )

    (iii) -3 ( 1   4  2 )
             (-3  -5  6 )

(i) Add corresponding entries:

    ( 1+3   2-5   -3+4 )   =   ( 4  -3  1 )
    ( 0+2  -5+6    1+5 )       ( 2   1  6 )

(ii) The sum is not defined since the matrices have different shapes.

(iii) Multiply each entry in the matrix by the scalar -3:

    -3 ( 1   4  2 )   =   ( -3  -12   -6 )
       (-3  -5  6 )       (  9   15  -18 )

3.2. Compute the product

    ( 1  6 ) ( 4   0 )
    (-3  5 ) ( 2  -1 )

Multiply the rows of the first matrix by the columns of the second:

    ( 1·4 + 6·2       1·0 + 6·(-1)    )   =   ( 16  -6 )
    ( (-3)·4 + 5·2    (-3)·0 + 5·(-1) )       ( -2  -5 )

3.11. Prove Theorem 3.2(i): (AB)C = A(BC).

Let A = (aij), B = (bjk) and C = (ckl). Furthermore, let AB = S = (sik) and BC = T = (tjl). Then

    sik = ai1 b1k + ai2 b2k + ... + aim bmk = Σj aij bjk   (sum over j = 1, ..., m)

    tjl = bj1 c1l + bj2 c2l + ... + bjn cnl = Σk bjk ckl   (sum over k = 1, ..., n)

Now multiplying S by C, i.e. (AB) by C, the element in the ith row and lth column of the matrix (AB)C is

    si1 c1l + si2 c2l + ... + sin cnl = Σk sik ckl = Σk Σj (aij bjk) ckl

On the other hand, multiplying A by T, i.e. A by BC, the element in the ith row and lth column of the matrix A(BC) is

    ai1 t1l + ai2 t2l + ... + aim tml = Σj aij tjl = Σj Σk aij (bjk ckl)

Since the above sums are equal, the theorem is proven.

3.12. Prove Theorem 3.2(ii): A(B + C) = AB + AC.

Let A = (aij), B = (bjk) and C = (cjk). Furthermore, let D = B + C = (djk), E = AB = (eik) and F = AC = (fik). Then

    djk = bjk + cjk

    eik = ai1 b1k + ai2 b2k + ... + aim bmk = Σj aij bjk

    fik = ai1 c1k + ai2 c2k + ... + aim cmk = Σj aij cjk

Hence the element in the ith row and kth column of the matrix AB + AC is

    eik + fik = Σj aij bjk + Σj aij cjk = Σj aij (bjk + cjk)

On the other hand, the element in the ith row and kth column of the matrix AD = A(B + C) is

    ai1 d1k + ai2 d2k + ... + aim dmk = Σj aij djk = Σj aij (bjk + cjk)

Since the above sums are equal, we obtain the desired result.
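Both parts of Theorem 3.2 are easy to confirm numerically on matrices of our own choosing. A pure-Python illustration (helpers matmul and matadd are ours):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, -1]]
B = [[0, 1], [2, 5]]
C = [[4, -2], [1, 1]]

# Theorem 3.2(i): associativity
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))   # True
# Theorem 3.2(ii): distributivity
print(matmul(A, matadd(B, C)) ==
      matadd(matmul(A, B), matmul(A, C)))                   # True
```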

4.30. Prove: Let A = (aij) be an echelon matrix with distinguished entries a1j1, a2j2, ..., arjr, and let B = (bij) be an echelon matrix with distinguished entries b1k1, b2k2, ..., bsks. Suppose A and B have the same row space. Then the distinguished entries of A and B are in the same position: j1 = k1, j2 = k2, ..., jr = kr, and r = s.

Clearly A = 0 if and only if B = 0, and so we need only prove the theorem when r ≥ 1 and s ≥ 1. We first show that j1 = k1. Suppose j1 < k1. Then the j1th column of B is zero. Since the first row of A is in the row space of B, we have by the preceding problem a1j1 = c1·0 + c2·0 + ... + cs·0 = 0 for scalars ci. But this contradicts the fact that the distinguished entry a1j1 ≠ 0. Hence j1 ≥ k1 and, similarly, k1 ≥ j1. Thus j1 = k1.

Now let A' be the submatrix of A obtained by deleting the first row of A, and let B' be the submatrix of B obtained by deleting the first row of B. We prove that A' and B' have the same row space. The theorem will then follow by induction since A' and B' are also echelon matrices.

Let R = (a1, a2, ..., an) be any row of A' and let R1, ..., Rm be the rows of B. Since R is in the row space of B, there exist scalars d1, ..., dm such that R = d1R1 + d2R2 + ... + dmRm. Since A is in echelon form and R is not the first row of A, the j1th entry of R is zero: ai = 0 for i = j1 = k1. Furthermore, since B is in echelon form, all the entries in the k1th column of B are 0 except the first: b1k1 ≠ 0, but b2k1 = 0, ..., bmk1 = 0. Thus

    0 = ak1 = d1 b1k1 + d2·0 + ... + dm·0 = d1 b1k1

Now b1k1 ≠ 0 and so d1 = 0. Thus R is a linear combination of R2, ..., Rm and so is in the row space of B'. Since R was any row of A', the row space of A' is contained in the row space of B'. Similarly, the row space of B' is contained in the row space of A'. Thus A' and B' have the same row space, and so the theorem is proved.

4.31. Prove Theorem 4.7: Let A = (aij) and B = (bij) be row reduced echelon matrices. Then A and B have the same row space if and only if they have the same nonzero rows.

Obviously, if A and B have the same nonzero rows then they have the same row space. We only have to prove the converse.

Suppose A and B have the same row space, and suppose R ≠ 0 is the ith row of A. Then there exist scalars c1, ..., cs such that

    R = c1R1 + c2R2 + ... + csRs     (1)

where the Ri are the nonzero rows of B. The theorem is proved if we show that R = Ri, that is, that ci = 1 but ck = 0 for k ≠ i.

Let aiji be the distinguished entry in R, i.e. the first nonzero entry of R. By (1) and Problem 4.29,

    aiji = c1 b1ji + c2 b2ji + ... + cs bsji     (2)

But by the preceding problem biji is a distinguished entry of B and, since B is row reduced, it is the only nonzero entry in the jith column of B. Thus from (2) we obtain aiji = ci biji. However, aiji = 1 and biji = 1 since A and B are row reduced; hence ci = 1.

Now suppose k ≠ i, and bkjk is the distinguished entry in Rk. By (1) and Problem 4.29,

    aijk = c1 b1jk + c2 b2jk + ... + cs bsjk     (3)

Since B is row reduced, bkjk is the only nonzero entry in the jkth column of B; hence by (3), aijk = ck bkjk. Furthermore, by the preceding problem akjk is a distinguished entry of A and, since A is row reduced, aijk = 0. Thus ck bkjk = 0 and, since bkjk = 1, ck = 0. Accordingly R = Ri, and the theorem is proved.

4.32. Determine whether the following matrices have the same column space:

    A = ( 1  3  5 )        B = ( 1   2   3 )
        ( 1  4  3 )            (-2  -3  -4 )
        ( 1  1  9 )            ( 7  12  17 )

Observe that A and B have the same column space if and only if the transposes At and Bt have the same row space. Thus reduce At and Bt to row reduced echelon form:

    At = ( 1  1  1 )   to   ( 1   1   1 )   to   ( 1  0   3 )
         ( 3  4  1 )        ( 0   1  -2 )        ( 0  1  -2 )
         ( 5  3  9 )        ( 0  -2   4 )        ( 0  0   0 )

    Bt = ( 1  -2   7 )   to   ( 1  -2   7 )   to   ( 1  0   3 )
         ( 2  -3  12 )        ( 0   1  -2 )        ( 0  1  -2 )
         ( 3  -4  17 )        ( 0   2  -4 )        ( 0  0   0 )

Since At and Bt have the same row space, A and B have the same column space.
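The row reductions above can be mechanized. A sketch of reduction to row reduced echelon form over the rationals (the helpers rref and transpose are ours), applied to the same test of equal column spaces:

```python
from fractions import Fraction

def rref(M):
    """Reduce M to row reduced echelon form using exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # normalize pivot to 1
        for i in range(rows):
            if i != r and M[i][c] != 0:             # clear the pivot column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 3, 5], [1, 4, 3], [1, 1, 9]]
B = [[1, 2, 3], [-2, -3, -4], [7, 12, 17]]
# Same column space iff the transposes have the same row reduced echelon form
print(rref(transpose(A)) == rref(transpose(B)))   # True
```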

4.33. Let R be a row vector and B a matrix for which RB is defined. Show that RB is a linear combination of the rows of B. Furthermore, if A is a matrix for which AB is defined, show that the row space of AB is contained in the row space of B.

Suppose R = (a1, a2, ..., am) and B = (bij). Let B1, ..., Bm denote the rows of B and B^1, ..., B^n its columns. Then

    RB = (R·B^1, R·B^2, ..., R·B^n)
       = (a1b11 + a2b21 + ... + ambm1, a1b12 + a2b22 + ... + ambm2, ..., a1b1n + a2b2n + ... + ambmn)
       = a1(b11, b12, ..., b1n) + a2(b21, b22, ..., b2n) + ... + am(bm1, bm2, ..., bmn)
       = a1B1 + a2B2 + ... + amBm

Thus RB is a linear combination of the rows of B, as claimed.

By Problem 3.27, the rows of AB are RiB where Ri is the ith row of A. Hence by the above result each row of AB is in the row space of B. Thus the row space of AB is contained in the row space of B.
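The identity RB = a1B1 + a2B2 + ... + amBm can be watched in action on a small example of our own (the helper row_times_matrix is ours):

```python
def row_times_matrix(R, B):
    """Row vector times matrix: the jth entry is R dotted with column j of B."""
    return [sum(R[i] * B[i][j] for i in range(len(R)))
            for j in range(len(B[0]))]

R = [2, -1, 3]
B = [[1, 0, 2],
     [4, 1, 0],
     [0, 5, 1]]

RB = row_times_matrix(R, B)
# The same vector as the combination 2*B1 - 1*B2 + 3*B3 of the rows of B
combo = [2*b1 - 1*b2 + 3*b3 for b1, b2, b3 in zip(*B)]
print(RB)      # [-2, 14, 7]
print(combo)   # [-2, 14, 7]
```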

SUMS AND DIRECT SUMS

4.34. Let U and W be subspaces of a vector space V. Show that: (i) U and W are contained in U + W; (ii) U + W is the smallest subspace of V containing U and W, that is, U + W is the linear span of U and W: U + W = L(U, W).

(i) Let u ∈ U. By hypothesis W is a subspace of V and so 0 ∈ W. Hence u = u + 0 ∈ U + W. Accordingly, U is contained in U + W. Similarly, W is contained in U + W.

(ii) Since U + W is a subspace of V (Theorem 4.8) containing both U and W, it must also contain the linear span of U and W: L(U, W) ⊂ U + W. On the other hand, if v ∈ U + W then v = u + w = 1u + 1w where u ∈ U and w ∈ W; hence v is a linear combination of elements in U ∪ W and so belongs to L(U, W). Thus U + W ⊂ L(U, W). The two inclusion relations give us the required result.

4.35. Suppose U and W are subspaces of a vector space V, and that {ui} generates U and {wj} generates W. Show that {ui, wj} generates U + W.

Let v ∈ U + W. Then v = u + w where u ∈ U and w ∈ W. Since {ui} generates U, u is a linear combination of ui's; and since {wj} generates W, w is a linear combination of wj's:

    u = a1ui1 + a2ui2 + ... + an uin,   ai ∈ K
    w = b1wj1 + b2wj2 + ... + bm wjm,   bj ∈ K

Thus

    v = u + w = a1ui1 + ... + an uin + b1wj1 + ... + bm wjm

and so {ui, wj} generates U + W.

4.36. Prove Theorem 4.9: The vector space V is the direct sum of its subspaces U and W if and only if (i) V = U + W and (ii) U ∩ W = {0}.

Suppose V = U ⊕ W. Then any v ∈ V can be uniquely written in the form v = u + w where u ∈ U and w ∈ W. Thus, in particular, V = U + W. Now suppose v ∈ U ∩ W. Then:

    (1) v = v + 0 where v ∈ U, 0 ∈ W;
    (2) v = 0 + v where 0 ∈ U, v ∈ W.

Since such a sum for v must be unique, v = 0. Accordingly, U ∩ W = {0}.

On the other hand, suppose V = U + W and U ∩ W = {0}. Let v ∈ V. Since V = U + W, there exist u ∈ U and w ∈ W such that v = u + w. We need to show that such a sum is unique. Suppose also that v = u' + w' where u' ∈ U and w' ∈ W. Then

    u + w = u' + w'   and so   u - u' = w' - w

But u - u' ∈ U and w' - w ∈ W; hence by U ∩ W = {0},

    u - u' = 0,   w' - w = 0   and so   u = u',   w = w'

Thus such a sum for v ∈ V is unique, and V = U ⊕ W.

4.37. Let U and W be the subspaces of R^3 defined by

    U = {(a, b, c): a = b = c}   and   W = {(0, b, c)}

(Note that W is the yz plane.) Show that R^3 = U ⊕ W.

Note first that U ∩ W = {0}, for v = (a, b, c) ∈ U ∩ W implies that a = b = c and a = 0, which implies a = 0, b = 0, c = 0. Thus v = 0 = (0, 0, 0).

We also claim that R^3 = U + W. For if v = (a, b, c) ∈ R^3, then v = (a, a, a) + (0, b - a, c - a) where (a, a, a) ∈ U and (0, b - a, c - a) ∈ W. Both conditions, U ∩ W = {0} and R^3 = U + W, imply R^3 = U ⊕ W.

4.38. Let V be the vector space of n-square matrices over a field R. Let U and W be the subspaces of symmetric and antisymmetric matrices, respectively. Show that V = U ⊕ W. (The matrix M is symmetric iff M = Mt, and antisymmetric iff Mt = -M.)

We first claim that V = U + W. Let A be any arbitrary n-square matrix. Note that

    A = ½(A + At) + ½(A - At)

We show that ½(A + At) ∈ U and that ½(A - At) ∈ W. For

    (½(A + At))t = ½(A + At)t = ½(At + A) = ½(A + At)

that is, ½(A + At) is symmetric. Furthermore,

    (½(A - At))t = ½(A - At)t = ½(At - A) = -½(A - At)

that is, ½(A - At) is antisymmetric.

We next show that U ∩ W = {0}. Suppose M ∈ U ∩ W. Then M = Mt and Mt = -M, which implies M = -M and so M = 0. Hence U ∩ W = {0}. Accordingly, V = U ⊕ W.
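The splitting A = ½(A + At) + ½(A - At) is concrete enough to compute. A pure-Python sketch on a 3-square matrix of our own choosing (helpers transpose and half_sum are ours):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def half_sum(A, B, sign):
    """(A + sign*B)/2, entrywise."""
    return [[(a + sign * b) / 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
S = half_sum(A, transpose(A), +1)   # symmetric part, S in U
T = half_sum(A, transpose(A), -1)   # antisymmetric part, T in W

assert S == transpose(S)                                   # St = S
assert T == [[-x for x in row] for row in transpose(T)]    # Tt = -T
assert [[s + t for s, t in zip(rs, rt)]
        for rs, rt in zip(S, T)] == A                      # A = S + T
print("A recovered as symmetric + antisymmetric")
```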

Supplementary Problems

VECTOR SPACES

4.39. Let V be the set of infinite sequences (a1, a2, ...) in a field K with addition in V and scalar multiplication on V defined by

    (a1, a2, ...) + (b1, b2, ...) = (a1 + b1, a2 + b2, ...)
    k(a1, a2, ...) = (ka1, ka2, ...)

where ai, bi, k ∈ K. Show that V is a vector space over K.

4.40. Let V be the set of ordered pairs (a, b) of real numbers with addition in V and scalar multiplication on V defined by

    (a, b) + (c, d) = (a + c, b + d)   and   k(a, b) = (ka, 0)

Show that V satisfies all of the axioms of a vector space except [M4]: 1u = u. Hence [M4] is not a consequence of the other axioms.

4.41. Let V be the set of ordered pairs (a, b) of real numbers. Show that V is not a vector space over R with addition in V and scalar multiplication on V defined by:

    (i) (a, b) + (c, d) = (a + d, b + c) and k(a, b) = (ka, kb);
    (ii) (a, b) + (c, d) = (a + c, b + d) and k(a, b) = (a, b);
    (iii) (a, b) + (c, d) = (0, 0) and k(a, b) = (ka, kb);
    (iv) (a, b) + (c, d) = (ac, bd) and k(a, b) = (ka, kb).

4.42. Let V be the set of ordered pairs (z1, z2) of complex numbers. Show that V is a vector space over the real field R with addition in V and scalar multiplication on V defined by

    (z1, z2) + (w1, w2) = (z1 + w1, z2 + w2)   and   k(z1, z2) = (kz1, kz2)

where z1, z2, w1, w2 ∈ C and k ∈ R.

4.43. Let V be a vector space over K, and let F be a subfield of K. Show that V is also a vector space over F, where vector addition with respect to F is the same as that with respect to K, and where scalar multiplication by an element k ∈ F is the same as multiplication by k as an element of K.

4.44. Show that [A4], page 63, can be derived from the other axioms of a vector space.

4.45. Let U and W be vector spaces over a field K. Let V be the set of ordered pairs (u, w) where u belongs to U and w to W: V = {(u, w): u ∈ U, w ∈ W}. Show that V is a vector space over K with addition in V and scalar multiplication on V defined by

    (u, w) + (u', w') = (u + u', w + w')   and   k(u, w) = (ku, kw)

where u, u' ∈ U, w, w' ∈ W and k ∈ K. (This space V is called the external direct sum of U and W.)

SUBSPACES

4.46. Consider the vector space V, of Problem 4.39, of infinite sequences (a1, a2, ...) in a field K. Show that W is a subspace of V if: (i) W consists of all sequences with 0 as the first component; (ii) W consists of all sequences with only a finite number of nonzero components.

4.47. Determine whether or not W is a subspace of R^3 if W consists of those vectors (a, b, c) ∈ R^3 for which: (i) a = 2b; (ii) a ≤ b; (iii) ab = 0; (iv) a = b = c; (v) a = b^2; (vi) k1·a + k2·b + k3·c = 0, where k1, k2, k3 ∈ R.

4.48. Let V be the vector space of n-square matrices over a field K. Show that W is a subspace of V if W consists of all matrices which are: (i) antisymmetric (A^t = −A), (ii) (upper) triangular, (iii) diagonal, (iv) scalar.

4.49. Let AX = B be a nonhomogeneous system of linear equations in n unknowns over a field K. Show that the solution set of the system is not a subspace of K^n.

4.50. Let V be the vector space of all functions from the real field R into R. Show that W is a subspace of V in each of the following cases:
(i) W consists of all bounded functions. (Here f : R → R is bounded if there exists M ∈ R such that |f(x)| ≤ M, for all x ∈ R.)
(ii) W consists of all even functions. (Here f : R → R is even if f(−x) = f(x), for all x ∈ R.)
(iii) W consists of all continuous functions.
(iv) W consists of all differentiable functions.
(v) W consists of all integrable functions in, say, the interval 0 ≤ x ≤ 1.
(The last three cases require some knowledge of analysis.)

4.51. Discuss whether or not R^2 is a subspace of R^3.

4.52. Prove Theorem 4.4: The intersection of any number of subspaces of a vector space V is a subspace of V.

4.53. Suppose U and W are subspaces of V for which U ∪ W is also a subspace. Show that either U ⊆ W or W ⊆ U.
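The closure tests behind Problem 4.47 can be illustrated numerically. This sketch (mine, not the text's) checks one condition that does give a subspace and exhibits a closure failure for one that does not:

```python
# Numeric closure checks in the spirit of Problem 4.47.
# W1 = {(a,b,c) : a = 2b} is closed under addition;
# W2 = {(a,b,c) : a = b**2} is not (closure under addition fails).

def in_w1(v):
    return v[0] == 2 * v[1]

def in_w2(v):
    return v[0] == v[1] ** 2

u, v = (2, 1, 7), (4, 2, -3)
s = (u[0] + v[0], u[1] + v[1], u[2] + v[2])     # (6, 3, 4)
w1_closed_here = in_w1(u) and in_w1(v) and in_w1(s)

p, q = (1, 1, 0), (4, 2, 0)
t = (p[0] + q[0], p[1] + q[1], p[2] + q[2])     # (5, 3, 0): 5 != 3**2
w2_counterexample = in_w2(p) and in_w2(q) and not in_w2(t)

print(w1_closed_here, w2_counterexample)
```

Of course a finite check only disproves the subspace property; proving it requires the algebraic argument of the problems above.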

LINEAR COMBINATIONS

4.54. Consider the vectors u = (1, −3, 2) and v = (2, −1, 1) in R^3.
(i) Write (1, 7, −4) as a linear combination of u and v.
(ii) Write (2, −5, 4) as a linear combination of u and v.
(iii) For which value of k is (1, k, 5) a linear combination of u and v?
(iv) Find a condition on a, b and c so that (a, b, c) is a linear combination of u and v.

4.55. Write u as a linear combination of the polynomials ... where: (i) u = 3t^2 + 8t − 5; (ii) u = ...

4.56. Write M as a linear combination of the matrices ... where: (i) M = ...; (ii) M = ...
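Problems like 4.54(i) reduce to a small linear solve. The following sketch (my own, using exact rational arithmetic rather than the text's hand elimination) solves the first two component equations by Cramer's rule and checks the third for consistency:

```python
from fractions import Fraction

# Problem 4.54(i): find x, y with x*u + y*v = (1, 7, -4)
# for u = (1, -3, 2) and v = (2, -1, 1).

u, v, target = (1, -3, 2), (2, -1, 1), (1, 7, -4)

# First two component equations:
#    x + 2y = 1
#   -3x - y = 7
a, b, c = Fraction(1), Fraction(2), Fraction(1)
d, e, f = Fraction(-3), Fraction(-1), Fraction(7)
det = a * e - b * d                 # = 5, nonzero, so a unique solution
x = (c * e - b * f) / det
y = (a * f - c * d) / det

# The third equation 2x + y = -4 must also hold for a valid combination.
consistent = x * u[2] + y * v[2] == target[2]
print(x, y, consistent)
```

Here the solve gives x = −3, y = 2, and the third equation is satisfied, so (1, 7, −4) = −3u + 2v.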

Lastly, we show that this is not possible. Otherwise, after n of the above steps, we obtain the generating set {w1, ..., wn}, which implies that w(n+1) is a linear combination of w1, ..., wn. This contradicts the hypothesis that {wi} is linearly independent.

5.15. Prove Theorem 5.3: Let V be a finite dimensional vector space. Then every basis of V has the same number of vectors.

Suppose {e1, e2, ..., en} is a basis of V, and suppose {f1, f2, ...} is another basis of V. Since {ei} generates V, the basis {f1, f2, ...} must contain n or less vectors, or else it is dependent by the preceding problem. On the other hand, if the basis {f1, f2, ...} contains less than n vectors, then {ei} is dependent by the preceding problem. Thus the basis {f1, f2, ...} contains exactly n vectors, and so the theorem is true.

5.16. Prove Theorem 5.5: Suppose {v1, ..., vm} is a maximal independent subset of a set S which generates a vector space V. Then {v1, ..., vm} is a basis of V.

Suppose w ∈ S. Then, since {vi} is a maximal independent subset of S, {v1, ..., vm, w} is linearly dependent. By Problem 5.10, w is a linear combination of the vi, that is, w ∈ L(vi). Hence S ⊆ L(vi). This leads to V = L(S) ⊆ L(vi) ⊆ V. Accordingly, {vi} generates V and, since it is independent, it is a basis of V.

BASIS AND DIMENSION

5.17. Suppose V is generated by a finite set S. Show that V is of finite dimension and, in particular, that a subset of S is a basis of V.

Method 1. Of all the independent subsets of S — and there is a finite number of them since S is finite — one of them is maximal. By the preceding problem this subset of S is a basis of V.

Method 2. If S is independent, it is a basis of V. If S is dependent, one of the vectors is a linear combination of the preceding vectors. We may delete this vector and still retain a generating set. We continue this process until we obtain a subset which is independent and generates V, i.e. is a basis of V.

5.18. Prove Theorem 5.6: Let V be of finite dimension n. Then:
(i) Any set of n + 1 or more vectors is linearly dependent.
(ii) Any linearly independent set is part of a basis.
(iii) A linearly independent set with n elements is a basis.

Suppose {e1, ..., en} is a basis of V.
(i) Since {e1, ..., en} generates V, any n + 1 or more vectors are dependent by the preceding problems.
(ii) Suppose {v1, ..., vr} is independent. By Lemma 5.4, V is generated by a set of the form S = {v1, ..., vr, e(i1), ..., e(i(n−r))}. By the preceding problem, a subset of S is a basis. But S contains n elements, and every basis of V contains n elements. Thus S is a basis of V and contains {v1, ..., vr} as a subset. Hence any independent set is part of a basis.
(iii) By (ii), an independent set T with n elements is part of a basis. But every basis of V contains n elements. Thus T is a basis.

5.19. Prove Theorem 5.7: Let W be a subspace of an n-dimensional vector space V. Then dim W ≤ n. In particular, if dim W = n, then W = V.

Since V is of dimension n, any n + 1 or more vectors are linearly dependent. Furthermore, since a basis of W consists of linearly independent vectors, it cannot contain more than n elements. Accordingly, dim W ≤ n.
In particular, if {w1, ..., wn} is a basis of W, then since it is an independent set with n elements it is also a basis of V. Thus W = V when dim W = n.

5.20. Prove Theorem 5.8: dim (U + W) = dim U + dim W − dim (U ∩ W).

Observe that U ∩ W is a subspace of both U and W. Suppose dim U = m, dim W = n, dim (U ∩ W) = r. Suppose {v1, ..., vr} is a basis of U ∩ W. By Theorem 5.6(ii), we can extend {vi} to a basis of U and to a basis of W; say,
{v1, ..., vr, u1, ..., u(m−r)} and {v1, ..., vr, w1, ..., w(n−r)}
are bases of U and W respectively. Let
B = {v1, ..., vr, u1, ..., u(m−r), w1, ..., w(n−r)}
Note that B has exactly m + n − r elements. Thus the theorem is proved if we can show that B is a basis of U + W. Since {vi, uj} generates U and {vi, wk} generates W, the union B generates U + W. Thus it suffices to show that B is independent.

Suppose
a1·v1 + ... + ar·vr + b1·u1 + ... + b(m−r)·u(m−r) + c1·w1 + ... + c(n−r)·w(n−r) = 0    (1)
where the ai, bj, ck are scalars. Let
v = a1·v1 + ... + ar·vr + b1·u1 + ... + b(m−r)·u(m−r)    (2)
By (1), we also have
v = −c1·w1 − ... − c(n−r)·w(n−r)    (3)
Since {vi, uj} ⊆ U, v ∈ U by (2); and since {wk} ⊆ W, v ∈ W by (3). Accordingly, v ∈ U ∩ W. Now {vi} is a basis of U ∩ W, so there exist scalars d1, ..., dr for which v = d1·v1 + ... + dr·vr. Thus by (3) we have
d1·v1 + ... + dr·vr + c1·w1 + ... + c(n−r)·w(n−r) = 0
But {vi, wk} is a basis of W and so is independent. Hence the above equation forces c1 = 0, ..., c(n−r) = 0. Substituting this into (1), we obtain
a1·v1 + ... + ar·vr + b1·u1 + ... + b(m−r)·u(m−r) = 0
But {vi, uj} is a basis of U and so is independent. Hence the above equation forces a1 = 0, ..., ar = 0, b1 = 0, ..., b(m−r) = 0.

Since the equation (1) implies that the ai, bj and ck are all 0, B = {vi, uj, wk} is independent and the theorem is proved.

5.21. Prove Theorem 5.9: The row rank and the column rank of any matrix are equal.

Let A be an arbitrary m × n matrix with entries a(i,j), and let R1, R2, ..., Rm denote its rows. Suppose the row rank is r and that the following r vectors form a basis for the row space:
S1 = (b11, b12, ..., b1n), ..., Sr = (br1, br2, ..., brn)
Then each of the row vectors is a linear combination of the Si:
R1 = k11·S1 + k12·S2 + ... + k1r·Sr
R2 = k21·S1 + k22·S2 + ... + k2r·Sr
.........................................
Rm = km1·S1 + km2·S2 + ... + kmr·Sr
where the kij are scalars. Setting the ith components of each of the above vector equations equal to each other, we obtain the following system of equations, each valid for i = 1, ..., n:
a(1,i) = k11·b(1,i) + k12·b(2,i) + ... + k1r·b(r,i)
a(2,i) = k21·b(1,i) + k22·b(2,i) + ... + k2r·b(r,i)
.........................................
a(m,i) = km1·b(1,i) + km2·b(2,i) + ... + kmr·b(r,i)
In other words, for i = 1, ..., n,
(a(1,i), ..., a(m,i)) = b(1,i)·(k11, ..., km1) + b(2,i)·(k12, ..., km2) + ... + b(r,i)·(k1r, ..., kmr)
so each of the columns of A is a linear combination of the r vectors (k11, ..., km1), ..., (k1r, ..., kmr). Thus the column space of the matrix A has dimension at most r, i.e. column rank ≤ r. Hence, column rank ≤ row rank.

Similarly (or by considering the transpose matrix A^t) we obtain row rank ≤ column rank. Thus the row rank and column rank are equal.
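Theorem 5.9 can be checked numerically on any particular matrix. This sketch (mine; the `rank` helper is an ordinary Gauss–Jordan elimination over exact fractions) compares the row rank of a matrix with that of its transpose:

```python
from fractions import Fraction

# Check of Theorem 5.9 on one matrix: row rank of A equals
# row rank of A^t (i.e. the column rank of A).

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    rk, col = 0, 0
    while rk < len(m) and col < len(m[0]):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                factor = m[i][col] / m[rk][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[rk])]
        rk, col = rk + 1, col + 1
    return rk

A = [[1, 2, 0, -1], [2, 6, -3, -3], [3, 10, -6, -5]]
At = [list(c) for c in zip(*A)]     # transpose
print(rank(A), rank(At))
```

Here the third row is a combination of the first two, so both ranks come out 2 — one instance of the theorem, not a proof of it.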

5.22. Determine whether or not the following form a basis for the vector space R^3:
(i) (1, 1, 1) and (1, −1, 5)
(ii) (1, 2, 3), (1, 0, −1), (3, −1, 0) and (2, 1, −2)
(iii) (1, 1, 1), (1, 2, 3) and (2, −1, 1)
(iv) (1, 1, 2), (1, 2, 5) and (5, 3, 4)

(i) and (ii). No; for a basis of R^3 must contain exactly 3 elements, since R^3 is of dimension 3.

(iii) The vectors form a basis if and only if they are independent. Thus form the matrix whose rows are the given vectors, and row reduce to echelon form:
( 1  1  1 )      ( 1  1  1 )      ( 1  1  1 )
( 1  2  3 )  to  ( 0  1  2 )  to  ( 0  1  2 )
( 2 −1  1 )      ( 0 −3 −1 )      ( 0  0  5 )
The echelon matrix has no zero rows; hence the three vectors are independent and so form a basis for R^3.

(iv) Form the matrix whose rows are the given vectors, and row reduce to echelon form:
( 1  1  2 )      ( 1  1  2 )      ( 1  1  2 )
( 1  2  5 )  to  ( 0  1  3 )  to  ( 0  1  3 )
( 5  3  4 )      ( 0 −2 −6 )      ( 0  0  0 )
The echelon matrix has a zero row, i.e. only two nonzero rows; hence the three vectors are dependent and so do not form a basis for R^3.
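The same independence test can be run mechanically: three vectors form a basis of R^3 exactly when the matrix they form has rank 3. A sketch (mine; `rank` is an exact Gauss–Jordan helper, not from the text):

```python
from fractions import Fraction

# Problem 5.22(iii)-(iv) by machine: rank 3 means the rows are a basis of R^3.

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    rk, col = 0, 0
    while rk < len(m) and col < len(m[0]):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk, col = rk + 1, col + 1
    return rk

basis_iii = rank([[1, 1, 1], [1, 2, 3], [2, -1, 1]]) == 3   # independent
basis_iv = rank([[1, 1, 2], [1, 2, 5], [5, 3, 4]]) == 3     # dependent
print(basis_iii, basis_iv)
```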

5.23. Let W be the subspace of R^4 generated by the vectors (1, −2, 5, −3), (2, 3, 1, −4) and (3, 8, −3, −5). (i) Find a basis and the dimension of W. (ii) Extend the basis of W to a basis of the whole space R^4.

(i) Form the matrix whose rows are the given vectors, and row reduce to echelon form:
( 1 −2  5 −3 )      ( 1 −2   5 −3 )      ( 1 −2  5 −3 )
( 2  3  1 −4 )  to  ( 0  7  −9  2 )  to  ( 0  7 −9  2 )
( 3  8 −3 −5 )      ( 0 14 −18  4 )      ( 0  0  0  0 )
The nonzero rows (1, −2, 5, −3) and (0, 7, −9, 2) of the echelon matrix form a basis of the row space, that is, of W. Thus, in particular, dim W = 2.

(ii) We seek four independent vectors which include the above two vectors. The vectors (1, −2, 5, −3), (0, 7, −9, 2), (0, 0, 1, 0) and (0, 0, 0, 1) are independent (since they form an echelon matrix), and so they form a basis of R^4 which is an extension of the basis of W.
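Both parts of Problem 5.23 are checkable by rank computations. A sketch (mine, with the same exact-elimination helper as above): the generators have rank 2, and appending two unit vectors raises the rank to 4, confirming the extension to a basis of R^4.

```python
from fractions import Fraction

# Problem 5.23 by machine: dim W from the generators, then extension to R^4.

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    rk, col = 0, 0
    while rk < len(m) and col < len(m[0]):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk, col = rk + 1, col + 1
    return rk

gens = [[1, -2, 5, -3], [2, 3, 1, -4], [3, 8, -3, -5]]
dim_w = rank(gens)                                    # expect 2

extended = [[1, -2, 5, -3], [0, 7, -9, 2], [0, 0, 1, 0], [0, 0, 0, 1]]
extends_to_basis = rank(extended) == 4
print(dim_w, extends_to_basis)
```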

5.24. Let W be the space generated by the polynomials
v1 = t^3 − 2t^2 + 4t + 1,  v2 = 2t^3 − 3t^2 + 9t − 1,  v3 = t^3 + 6t − 5,  v4 = 2t^3 − 5t^2 + 7t + 5
Find a basis and the dimension of W.

The coordinate vectors of the given polynomials relative to the basis {t^3, t^2, t, 1} are respectively
[v1] = (1, −2, 4, 1),  [v2] = (2, −3, 9, −1),  [v3] = (1, 0, 6, −5),  [v4] = (2, −5, 7, 5)
Form the matrix whose rows are the above coordinate vectors, and row reduce to echelon form:
( 1 −2 4  1 )      ( 1 −2 4  1 )
( 2 −3 9 −1 )  to  ( 0  1 1 −3 )
( 1  0 6 −5 )      ( 0  0 0  0 )
( 2 −5 7  5 )      ( 0  0 0  0 )
The nonzero rows (1, −2, 4, 1) and (0, 1, 1, −3) of the echelon matrix form a basis of the space generated by the coordinate vectors, and so the corresponding polynomials
t^3 − 2t^2 + 4t + 1 and t^2 + t − 3
form a basis of W. Thus dim W = 2.

5.25. Find the dimension and a basis of the solution space W of the system
x + 2y + 2z − s + 3t = 0
x + 2y + 3z + s + t = 0
3x + 6y + 8z + s + 5t = 0

Reduce the system to echelon form:
x + 2y + 2z − s + 3t = 0          x + 2y + 2z − s + 3t = 0
         z + 2s − 2t = 0    or             z + 2s − 2t = 0
        2z + 4s − 4t = 0
The system in echelon form has 2 (nonzero) equations in 5 unknowns; hence the dimension of the solution space W is 5 − 2 = 3. The free variables are y, s and t. Set
(i) y = 1, s = 0, t = 0,  (ii) y = 0, s = 1, t = 0,  (iii) y = 0, s = 0, t = 1
to obtain the respective solutions
v1 = (−2, 1, 0, 0, 0),  v2 = (5, 0, −2, 1, 0),  v3 = (−7, 0, 2, 0, 1)
The set {v1, v2, v3} is a basis of the solution space W.

5.26. Find a homogeneous system whose solution set W is generated by
{(1, −2, 0, 3), (1, −1, −1, 4), (1, 0, −2, 5)}

Method 1. Let v = (x, y, z, w). Form the matrix M whose first rows are the given vectors and whose last row is v; and then row reduce to echelon form:
( 1 −2  0 3 )      ( 1 −2  0 3      )      ( 1 −2  0 3            )
( 1 −1 −1 4 )  to  ( 0  1 −1 1      )  to  ( 0  1 −1 1            )
( 1  0 −2 5 )      ( 0  2 −2 2      )      ( 0  0  0 0            )
( x  y  z w )      ( 0 2x+y  z w−3x )      ( 0  0 2x+y+z −5x−y+w )
The original first three rows show that W has dimension 2. Thus v ∈ W if and only if the additional row does not increase the dimension of the row space. Hence we set the last two entries in the bottom row on the right equal to 0 to obtain the required homogeneous system
2x + y + z = 0
5x + y − w = 0

Method 2. We know that v = (x, y, z, w) ∈ W if and only if v is a linear combination of the generators of W:
(x, y, z, w) = r(1, −2, 0, 3) + s(1, −1, −1, 4) + t(1, 0, −2, 5)    (1)
The above vector equation in unknowns r, s and t is equivalent to the following system:
r + s + t = x                    r + s + t = x                r + s + t = x
−2r − s = y                      s + 2t = 2x + y              s + 2t = 2x + y
−s − 2t = z            or        −s − 2t = z          or      0 = 2x + y + z
3r + 4s + 5t = w                 s + 2t = w − 3x              0 = 5x + y − w
Thus v ∈ W if and only if the above system has a solution, i.e. if
2x + y + z = 0
5x + y − w = 0
This is the required homogeneous system.

Remark: Observe that the augmented matrix of the system (1) is the transpose of the matrix M used in the first method.

5.27. Let U and W be the following subspaces of R^4:
U = {(a, b, c, d) : b + c + d = 0},  W = {(a, b, c, d) : a + b = 0, c = 2d}
Find the dimension and a basis of (i) U, (ii) W, (iii) U ∩ W.

(i) We seek a basis of the set of solutions (a, b, c, d) of the equation
b + c + d = 0  or  0·a + b + c + d = 0
The free variables are a, c and d. Set
(1) a = 1, c = 0, d = 0,  (2) a = 0, c = 1, d = 0,  (3) a = 0, c = 0, d = 1
to obtain the respective solutions
v1 = (1, 0, 0, 0),  v2 = (0, −1, 1, 0),  v3 = (0, −1, 0, 1)
The set {v1, v2, v3} is a basis of U, and dim U = 3.

(ii) We seek a basis of the set of solutions (a, b, c, d) of the system
a + b = 0       or      a + b = 0
c = 2d                  c − 2d = 0
The free variables are b and d. Set
(1) b = 1, d = 0,  (2) b = 0, d = 1
to obtain the respective solutions
v1 = (−1, 1, 0, 0),  v2 = (0, 0, 2, 1)
The set {v1, v2} is a basis of W, and dim W = 2.

(iii) U ∩ W consists of those vectors (a, b, c, d) which satisfy the conditions defining U and the conditions defining W, i.e. the three equations
b + c + d = 0
a + b = 0
c − 2d = 0
The free variable is d. Set d = 1 to obtain the solution v = (3, −3, 2, 1). Thus {v} is a basis of U ∩ W, and dim (U ∩ W) = 1.

5.28. Find the dimension of the vector space W spanned by:
(i) (1, −2, 3, −1) and (1, 1, −2, 3)
(ii) (3, −6, 3, −9) and (−2, 4, −2, 6)
(iii) t^3 + 2t^2 + 3t + 1 and 2t^3 + 4t^2 + 6t + 2
(iv) t^3 − 2t^2 + 5 and t^2 + 3t − 4
(v), (vi), (vii) pairs of 2 × 2 matrices.

Two nonzero vectors span a space W of dimension 2 if they are independent, and of dimension 1 if they are dependent. Recall that two vectors are dependent if and only if one is a multiple of the other. Hence: (i) 2, (ii) 1, (iii) 1, (iv) 2, (v) 2, (vi) 1, (vii) 1.
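The free-variable recipe used in Problems 5.25–5.27 is easy to verify by machine. This sketch (mine) simply confirms that the three vectors produced in Problem 5.25 really do solve every equation of the system:

```python
from fractions import Fraction

# Problem 5.25: check that v1, v2, v3 solve
#   x + 2y + 2z - s + 3t = 0
#   x + 2y + 3z + s +  t = 0
#  3x + 6y + 8z + s + 5t = 0

A = [[1, 2, 2, -1, 3],
     [1, 2, 3, 1, 1],
     [3, 6, 8, 1, 5]]

def solves(v):
    return all(sum(Fraction(a) * Fraction(b) for a, b in zip(row, v)) == 0
               for row in A)

# Solutions obtained by setting one free variable (y, s or t) to 1:
v1 = (-2, 1, 0, 0, 0)
v2 = (5, 0, -2, 1, 0)
v3 = (-7, 0, 2, 0, 1)
all_solve = solves(v1) and solves(v2) and solves(v3)
print(all_solve)
```

Since each vector has a 1 where the others have 0 (in the free positions y, s, t), they are automatically independent, so they form a basis of the solution space.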

5.29. Let V be the vector space of 2 × 2 symmetric matrices over K. Show that dim V = 3. (Recall that A = (aij) is symmetric iff A = A^t or, equivalently, aij = aji.)

An arbitrary 2 × 2 symmetric matrix is of the form
A = ( a  b )
    ( b  c )
where a, b, c ∈ K. (Note that there are three "variables".) Setting
(i) a = 1, b = 0, c = 0,  (ii) a = 0, b = 1, c = 0,  (iii) a = 0, b = 0, c = 1
we obtain the respective matrices
E1 = ( 1 0 )    E2 = ( 0 1 )    E3 = ( 0 0 )
     ( 0 0 )         ( 1 0 )         ( 0 1 )
We show that {E1, E2, E3} is a basis of V, that is, that it (1) generates V and (2) is independent.

(1) For the above arbitrary matrix A in V, we have A = a·E1 + b·E2 + c·E3. Thus {E1, E2, E3} generates V.

(2) Suppose x·E1 + y·E2 + z·E3 = 0, where x, y, z are unknown scalars. That is, suppose
x( 1 0 ) + y( 0 1 ) + z( 0 0 ) = ( 0 0 ),  i.e.  ( x y ) = ( 0 0 )
 ( 0 0 )    ( 1 0 )    ( 0 1 )   ( 0 0 )         ( y z )   ( 0 0 )
Setting corresponding entries equal to each other, we obtain x = 0, y = 0, z = 0. In other words, x·E1 + y·E2 + z·E3 = 0 implies x = 0, y = 0, z = 0. Accordingly, {E1, E2, E3} is independent.

Thus {E1, E2, E3} is a basis of V and so the dimension of V is 3.

5.30. Let V be the space of polynomials in t of degree ≤ n. Show that each of the following is a basis of V:
(i) {1, t, t^2, ..., t^(n−1), t^n}
(ii) {1, 1 − t, (1 − t)^2, ..., (1 − t)^(n−1), (1 − t)^n}

(i) Clearly each polynomial in V is a linear combination of 1, t, ..., t^(n−1) and t^n. Furthermore, 1, t, ..., t^(n−1) and t^n are independent since none is a linear combination of the preceding polynomials. Thus {1, t, ..., t^n} is a basis of V.

(ii) (Note that by (i), dim V = n + 1; and so any n + 1 independent polynomials form a basis of V.) Now each polynomial in the sequence 1, 1 − t, ..., (1 − t)^n is of degree higher than the preceding ones and so is not a linear combination of the preceding ones. Thus the n + 1 polynomials 1, 1 − t, ..., (1 − t)^n are independent and so form a basis of V.
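Problem 5.29 can be mirrored numerically. In this sketch (mine; E1, E2, E3 as in the problem) an arbitrary symmetric example is reproduced from the basis, and the entries of x·E1 + y·E2 + z·E3 are literally (x, y; y, z), which is why the zero combination forces x = y = z = 0:

```python
# Problem 5.29 in miniature: {E1, E2, E3} generates the 2x2 symmetric matrices.

E1 = ((1, 0), (0, 0))
E2 = ((0, 1), (1, 0))
E3 = ((0, 0), (0, 1))

def comb(a, b, c):
    # a*E1 + b*E2 + c*E3, computed entrywise; the result is ((a, b), (b, c))
    return tuple(tuple(a * E1[i][j] + b * E2[i][j] + c * E3[i][j]
                       for j in range(2)) for i in range(2))

A = ((4, -7), (-7, 9))                  # an arbitrary symmetric example
generates = comb(4, -7, 9) == A         # A = 4*E1 - 7*E2 + 9*E3
independent = comb(0, 0, 0) == ((0, 0), (0, 0))
print(generates, independent)
```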

5.31. Let V be the vector space of ordered pairs of complex numbers over the real field R (see Problem 4.42). Show that V is of dimension 4.

We claim that the following is a basis of V:
B = {(1, 0), (i, 0), (0, 1), (0, i)}
Suppose v ∈ V. Then v = (z, w) where z, w are complex numbers, and so v = (a + bi, c + di) where a, b, c, d are real numbers. Then
v = a(1, 0) + b(i, 0) + c(0, 1) + d(0, i)
Thus B generates V.

The proof is complete if we show that B is independent. Suppose
x1(1, 0) + x2(i, 0) + x3(0, 1) + x4(0, i) = 0
where x1, x2, x3, x4 ∈ R. Then (x1 + x2·i, x3 + x4·i) = (0, 0) and so
x1 + x2·i = 0 and x3 + x4·i = 0
Accordingly x1 = 0, x2 = 0, x3 = 0, x4 = 0, and so B is independent.

5.32. Let V be the vector space of m × n matrices over a field K. Let Eij ∈ V be the matrix with 1 as the ij-entry and 0 elsewhere. Show that {Eij} is a basis of V. Thus dim V = mn.

We need to show that {Eij} generates V and is independent.

Let A = (aij) be any matrix in V. Then A = Σ aij·Eij, summed over all i and j. Hence {Eij} generates V.

Now suppose that Σ xij·Eij = 0, where the xij are scalars. The ij-entry of Σ xij·Eij is xij, and the ij-entry of 0 is 0. Thus xij = 0, for i = 1, ..., m and j = 1, ..., n. Accordingly the matrices Eij are independent.

Thus {Eij} is a basis of V.

Remark: Viewing a vector in K^n as a 1 × n matrix, we have shown by the above result that the usual basis defined in Example 5.3, page 88, is a basis of K^n and that dim K^n = n.
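A small-scale illustration of Problem 5.32 (my sketch, for m = 2, n = 3): rebuilding a matrix A from the unit matrices Eij entry by entry shows that {Eij} generates the space, and the count of unit matrices gives the dimension mn.

```python
# Problem 5.32 in miniature: for 2x3 matrices, the six unit matrices E_ij
# reproduce any A, so the dimension is 2*3 = 6.

m, n = 2, 3

def unit(i, j):
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(m)]

A = [[5, -1, 2], [0, 7, 3]]
B = [[0] * n for _ in range(m)]
for i in range(m):
    for j in range(n):
        e = unit(i, j)
        for r in range(m):
            for c in range(n):
                B[r][c] += A[i][j] * e[r][c]   # A = sum of a_ij * E_ij

reconstructed = B == A
dim = m * n
print(reconstructed, dim)
```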

SUMS AND INTERSECTIONS

5.33. Suppose U and W are distinct 4-dimensional subspaces of a vector space V of dimension 6. Find the possible dimensions of U ∩ W.

Since U and W are distinct, U + W properly contains U and W; hence dim (U + W) > 4. But dim (U + W) cannot be greater than 6, since dim V = 6. Hence we have two possibilities: (i) dim (U + W) = 5, or (ii) dim (U + W) = 6. Using Theorem 5.8, that dim (U + W) = dim U + dim W − dim (U ∩ W), we obtain
(i) 5 = 4 + 4 − dim (U ∩ W)  or  dim (U ∩ W) = 3
(ii) 6 = 4 + 4 − dim (U ∩ W)  or  dim (U ∩ W) = 2
That is, the dimension of U ∩ W must be either 2 or 3.

5.34. Let U and W be the subspaces of R^4 generated by
{(1, 1, 0, −1), (1, 2, 3, 0), (2, 3, 3, −1)} and {(1, 2, 2, −2), (2, 3, 2, −3), (1, 3, 4, −3)}
respectively. Find (i) dim (U + W), (ii) dim (U ∩ W).

(i) U + W is the space spanned by all six vectors. Hence form the matrix whose rows are the given six vectors, and then row reduce to echelon form. Since the echelon matrix has three nonzero rows, dim (U + W) = 3.

(ii) First find dim U and dim W. Form the two matrices whose rows are the generators of U and W respectively and then row reduce each to echelon form. Since each of the echelon matrices has two nonzero rows, dim U = 2 and dim W = 2. Using Theorem 5.8, that dim (U + W) = dim U + dim W − dim (U ∩ W), we have
3 = 2 + 2 − dim (U ∩ W)  or  dim (U ∩ W) = 1

5.35. Let U be the subspace of R^5 generated by
{(1, 3, −2, 2, 3), (1, 4, −3, 4, 2), (2, 3, −1, −2, 9)}
and let W be the subspace generated by
{(1, 3, 0, 2, 1), (1, 5, −6, 6, 3), (2, 5, 3, 2, 1)}
Find a basis and the dimension of (i) U + W, (ii) U ∩ W.

(i) U + W is the space generated by all six vectors. Hence form the matrix whose rows are the six vectors, and then row reduce to echelon form. The set of nonzero rows of the echelon matrix,
{(1, 3, −2, 2, 3), (0, 1, −1, 2, −1), (0, 0, 2, 0, −2)}
is a basis of U + W; thus dim (U + W) = 3.

(ii) First find homogeneous systems whose solution sets are U and W respectively. Form the matrix whose first rows are the generators of U and whose last row is (x, y, z, s, t), and then row reduce to echelon form. Set the entries of the bottom row equal to 0 to obtain the homogeneous system whose solution set is U:
−x + y + z = 0,  4x − 2y + s = 0,  −6x + y + t = 0
Now form the matrix whose first rows are the generators of W and whose last row is (x, y, z, s, t), and then row reduce to echelon form. Set the entries of the bottom row equal to 0 to obtain the homogeneous system whose solution set is W:
−9x + 3y + z = 0,  4x − 2y + s = 0,  2x − y + t = 0
Combining both systems, we obtain the homogeneous system whose solution set is U ∩ W:
−x + y + z = 0
4x − 2y + s = 0
−6x + y + t = 0
−9x + 3y + z = 0
2x − y + t = 0
Reducing this system to echelon form, there is one free variable, which is t; hence dim (U ∩ W) = 1. Setting t = 2, we obtain the solution x = 1, y = 4, z = −3, s = 4, t = 2. Thus {(1, 4, −3, 4, 2)} is a basis of U ∩ W.
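Problem 5.34 is a natural candidate for a machine check. This sketch (mine; `rank` is the same exact-elimination helper as earlier) computes dim U, dim W and dim (U + W) by rank, then extracts dim (U ∩ W) from Theorem 5.8:

```python
from fractions import Fraction

# Problem 5.34 by machine: dim(U ∩ W) = dim U + dim W - dim(U + W).

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    rk, col = 0, 0
    while rk < len(m) and col < len(m[0]):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk, col = rk + 1, col + 1
    return rk

U = [[1, 1, 0, -1], [1, 2, 3, 0], [2, 3, 3, -1]]
W = [[1, 2, 2, -2], [2, 3, 2, -3], [1, 3, 4, -3]]
dim_u, dim_w, dim_sum = rank(U), rank(W), rank(U + W)   # U + W concatenates rows
dim_int = dim_u + dim_w - dim_sum                       # Theorem 5.8
print(dim_u, dim_w, dim_sum, dim_int)
```

Note that `U + W` here is Python list concatenation, which conveniently stacks the six generators into one matrix.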

COORDINATE VECTORS

5.36. Find the coordinate vector of v relative to the basis {(1, 1, 1), (1, 1, 0), (1, 0, 0)} of R^3, where (i) v = (4, −3, 2), (ii) v = (a, b, c).

In each case set v as a linear combination of the basis vectors using unknown scalars x, y and z:
v = x(1, 1, 1) + y(1, 1, 0) + z(1, 0, 0)
and then solve for the solution vector (x, y, z). (The solution is unique since the basis vectors are linearly independent.)

(i) (4, −3, 2) = x(1, 1, 1) + y(1, 1, 0) + z(1, 0, 0)
             = (x, x, x) + (y, y, 0) + (z, 0, 0)
             = (x + y + z, x + y, x)
Set corresponding components equal to each other to obtain the system
x + y + z = 4,  x + y = −3,  x = 2
Substitute x = 2 into the second equation to obtain y = −5; then put x = 2, y = −5 into the first equation to obtain z = 7. Thus x = 2, y = −5, z = 7 is the unique solution to the system and so the coordinate vector of v relative to the given basis is [v] = (2, −5, 7).

(ii) (a, b, c) = x(1, 1, 1) + y(1, 1, 0) + z(1, 0, 0) = (x + y + z, x + y, x). Then
x + y + z = a,  x + y = b,  x = c
from which x = c, y = b − c, z = a − b. Thus [v] = (c, b − c, a − b). That is, [(a, b, c)] = (c, b − c, a − b).

5.37. Let V be the vector space of 2 × 2 matrices over R. Find the coordinate vector of the matrix A ∈ V relative to the basis ... Set A as a linear combination of the matrices in the basis using unknown scalars x, y, z and w, and solve for (x, y, z, w).
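Because the basis of Problem 5.36 is "triangular," the coordinates can be read off by back-substitution. A sketch (mine, not the text's layout):

```python
# Problem 5.36(i): coordinates of v = (4, -3, 2) relative to
# {(1,1,1), (1,1,0), (1,0,0)} via back-substitution.

v = (4, -3, 2)
x = v[2]                 # third components give  x = 2
y = v[1] - x             # second components give x + y = -3
z = v[0] - x - y         # first components give  x + y + z = 4
coord = (x, y, z)

# sanity check: rebuild v from x*(1,1,1) + y*(1,1,0) + z*(1,0,0)
check = tuple(x * a + y * b + z * c
              for a, b, c in zip((1, 1, 1), (1, 1, 0), (1, 0, 0)))
print(coord, check)
```

This reproduces [v] = (2, −5, 7) and confirms the combination equals v.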

5.62. Show that two vectors u = (a, b) and v = (c, d) are dependent if and only if ad − bc = 0.

5.63. Suppose v1, ..., vn are linearly independent vectors, and suppose w ... Show that v1, ..., v(k−1), w, v(k+1), ..., vn are also linearly independent.

5.68. Let W be the space generated by the polynomials
u = t^3 + 2t^2 − 2t + 1,  v = t^3 + 3t^2 − t + 4  and  w = 2t^3 + t^2 − 7t − 7
Find a basis and the dimension of W.

5.69. Find a basis and the dimension of the solution space W of each homogeneous system:
(i) x + 3y + 2z = 0
    x + 5y + z = 0
    3x + 5y + 8z = 0
(ii) ...

5.70. Find a basis and the dimension of the solution space W of each homogeneous system:
(i) x + 2y − 2z + 2s − t = 0
    x + 2y − z + 3s − 2t = 0
    2x + 4y − 7z + s + t = 0
(ii) x − 2y + z + 3s − 4t = 0
     2x − 4y + 2z − s + 5t = 0
     2x − 4y + 2z + 4s − 2t = 0

5.71. Let V and W be the following subspaces of R^4:
V = {(a, b, c, d) : b − 2c + d = 0},  W = {(a, b, c, d) : a = d, b = 2c}
Find a basis and the dimension of (i) V, (ii) W, (iii) V ∩ W.

5.72. Find a homogeneous system whose solution set W is generated by
{(1, −2, 0, 3, −1), (2, −3, 2, 5, −3), (1, −2, 1, 2, −2)}

5.73. Let V be the vector space of polynomials in t of degree ≤ n. Determine whether or not each of the following is a basis of V:
(i) {1, 1 + t, 1 + t + t^2, 1 + t + t^2 + t^3, ..., 1 + t + t^2 + ... + t^(n−1) + t^n}
(ii) {1 + t, t + t^2, t^2 + t^3, ..., t^(n−2) + t^(n−1), t^(n−1) + t^n}

SUMS AND INTERSECTIONS

5.74. Suppose U and W are 2-dimensional subspaces of R^3. Show that U ∩ W ≠ {0}.

5.75. Suppose U and W are subspaces of V and that dim U = 4, dim W = 5 and dim V = 7. Find the possible dimensions of U ∩ W.

5.76. Let U and W be subspaces of R^3 for which dim U = 1, dim W = 2 and U ⊄ W. Show that R^3 = U ⊕ W.

5.77. Let U be the subspace of R^5 generated by
{(1, 3, −3, −1, −4), (1, 4, −1, −2, −2), (2, 9, 0, −5, −2)}
and let W be the subspace generated by
{(1, 6, 2, −2, 3), (2, 8, −1, −6, −5), (1, 3, −1, −5, −6)}
Find (i) dim (U + W), (ii) dim (U ∩ W).

5.78. Let V be the vector space of polynomials over R. Let U and W be the subspaces generated by
{t^3 + 4t^2 − t + 3, t^3 + 5t^2 + 5, 3t^3 + 10t^2 − 5t + 5}
and
{t^3 + 4t^2 + 6, t^3 + 2t^2 − t + 5, 2t^3 + 2t^2 − 3t + 9}
respectively. Find (i) dim (U + W), (ii) dim (U ∩ W).

5.79. Let U be the subspace of R^5 generated by
{(1, −1, −1, −2, 0), (1, −2, −2, 0, −3), (1, −1, −3, 2, −4), (1, −1, −2, −2, 1)}
and let W be the subspace generated by
{(1, −2, −3, 0, −2), (1, −1, −2, 2, −5)}
(i) Find two homogeneous systems whose solution spaces are U and W, respectively. (ii) Find a basis and the dimension of U ∩ W.

COORDINATE VECTORS

5.80. Consider the following basis of R^2: {(2, 1), (1, −1)}. Find the coordinate vector of v ∈ R^2 relative to the above basis where: (i) v = (2, 3); (ii) v = (4, −1); (iii) v = (3, −3); (iv) v = (a, b).

5.81. In the vector space V of polynomials in t of degree ≤ 3, consider the following basis: {1, 1 − t, (1 − t)^2, (1 − t)^3}. Find the coordinate vector of v ∈ V relative to the above basis if: (i) v = 2 − 3t + t^2 + 2t^3; (ii) v = 3 − 2t − t^2; (iii) v = a + bt + ct^2 + dt^3.

5.82. In the vector space W of 2 × 2 symmetric matrices over R, consider the following basis: ... Find the coordinate vector of the matrix A ∈ W relative to the above basis, where A = ..., by writing A as a linear combination of the basis matrices.

5.83. ...

...under a mapping f(x), as illustrated in the preceding example.

Example 6.3: Consider the 2 × 3 matrix
A = ( 1 −3  5 )
    ( 2  4 −1 )
If we write the vectors in R^3 and R^2 as column vectors, then A determines the mapping T : R^3 → R^2 defined by v ↦ Av, that is, T(v) = Av. Thus if
v = (  3 )
    (  1 )
    ( −2 )
then
T(v) = Av = ( 1 −3  5 ) (  3 )  =  ( −10 )
            ( 2  4 −1 ) (  1 )     (  12 )
                        ( −2 )

Remark: Every m × n matrix A over a field K determines the mapping T : K^n → K^m defined by v ↦ Av, where the vectors in K^n and K^m are written as column vectors. For convenience we shall usually denote the above mapping by A, the same symbol used for the matrix.

matrix. Example

6.4:

Example

6.5:

real field R. be the vector space of polynomials in the variable t over the for any polynomial / G V, where, D:V-^V mapping a defines derivative the Then we let D(f) = df/dt. For example, D(3t^ - 5t + 2) = 6t - 5.

Let

V

Let

V

Then

preceding example). be the vector space of polynomials in t over R (as in the V -* R where, for any to 1 defines a mapping the integral from, say,

^

polynomial

f&V, we

^(/)

let

^(3(2-5* + Note that this

map Example

6.6:

map

= =

2)

r

(3t^-5t

from the vector space example is from V into

is

in the preceding

f:A^B

Consider two mappings

For example,

f(t) dt.

f

V

+ 2)dt

=

i

into the scalar field R,

illustrated below:

-©—^-©

0.

Let a G A; then /(a) G B, the domain of g. under the mapping g, that is, g(f{a))- This

a

K

Hence we can obtain the image of

g{f(a))

{9°f){a)

theorem

first

Our first theorem tells us that composition of mappings satisfies the associative law.

Theorem 6.1: Let f : A → B, g : B → C and h : C → D. Then h∘(g∘f) = (h∘g)∘f.

We prove this theorem now. If a ∈ A, then
(h∘(g∘f))(a) = h((g∘f)(a)) = h(g(f(a)))
and
((h∘g)∘f)(a) = (h∘g)(f(a)) = h(g(f(a)))
Thus (h∘(g∘f))(a) = ((h∘g)∘f)(a) for every a ∈ A, and so h∘(g∘f) = (h∘g)∘f.

Remark: Let F : A → B. Some texts write aF instead of F(a) for the image of a ∈ A under F. With this notation, the composition of functions F : A → B and G : B → C is denoted by F∘G and not by G∘F as used in this text.
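Theorem 6.1 can be illustrated pointwise on sample functions (a sketch; the particular f, g, h are mine, not from the text):

```python
# Theorem 6.1 on a finite sample: h∘(g∘f) and (h∘g)∘f agree pointwise.

def compose(p, q):
    return lambda a: p(q(a))

f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x * x

left = compose(h, compose(g, f))     # h∘(g∘f)
right = compose(compose(h, g), f)    # (h∘g)∘f
agree = all(left(a) == right(a) for a in range(-10, 11))
print(agree)
```

The proof of the theorem is exactly this pointwise calculation, carried out symbolically for an arbitrary a.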

We next introduce some special types of mappings.

Definition: A mapping f : A → B is said to be one-to-one (or one-one or 1-1) or injective if different elements of A have distinct images; that is,
if a ≠ a' implies f(a) ≠ f(a')
or, equivalently, if f(a) = f(a') implies a = a'.

Definition: A mapping f : A → B is said to be onto (or: f maps A onto B) or surjective if every b ∈ B is the image of at least one a ∈ A.

A mapping which is both one-one and onto is said to be bijective.

Example 6.7: Let f : R → R, g : R → R and h : R → R be defined by f(x) = 2^x, g(x) = x^3 − x and h(x) = x^2. The graphs of these mappings follow:

[Graphs of f(x) = 2^x, g(x) = x^3 − x and h(x) = x^2]

The mapping f is one-one; geometrically, this means that each horizontal line does not contain more than one point of f. The mapping g is onto; geometrically, this means that each horizontal line contains at least one point of g. The mapping h is neither one-one nor onto; for example, 2 and −2 have the same image 4, and −16 is not the image of any element of R.

The mapping / is one-one; geometrically, this means that each horizontal line does not contain more than one point of /. The mapping g is onto; geometrically, this means that each horizontal line contains at least one point of g. The mapping h is neither one-one nor onto; for example, 2 and —2 have the same image 4, and —16 is not the image of any element of R. Example

Example

6.8:

6.9:

Let A be any set. The mapping /:A-»A defined by f{a) = to each element in A itself, is called the identity mapping on 1^ or 1 or /. Let

f:A-*B.

We

call

g-.B^A f°g =

the inverse of

Ifl

/,

g°f =

and

a,

i.e.

A

and

written /~i,

which assigns is denoted by

if

1a

We

emphasize that / has an inverse if and only if / is both one-to-one and onto (Problem 6.9). Also, if then /"'(ft) = a where a is the unique element of A for which f(a) = 6.

6GB

LINEAR MAPPINGS Let V and U be vector

A

mapping F:V -* U is called a spaces over the same field K. linear mapping (or linear transformation or vector space komomorphism) if it satisfies the following two conditions: (1)

For any

v,wGV,

(2)

For any

kGK

+ w) =

+ F{w). and any vGV, F{kv) = kF{v). F{v

F{v)

In other words, F:V^ U is linear if it "preserves" the two basic operations of a vector space, that of vector addition and that of scalar multiplication. Substituting k = into (2) we obtain F{0) the zero vector into the zero vector.

=

0.

That

is,

every linear mapping takes
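Conditions (1) and (2) can be probed numerically. The sketch below is not from the text; the helper name and tolerance are our own choices. Sampling random vectors cannot prove linearity, but any failure disproves it:

```python
import random

def is_linear_on_samples(F, n, trials=100, tol=1e-9):
    """Test F(v + w) = F(v) + F(w) and F(k v) = k F(v) on random samples.
    A failure disproves linearity; passing only supports it."""
    rnd = random.Random(0)
    vec = lambda: [rnd.uniform(-10, 10) for _ in range(n)]
    add = lambda u, w: [a + b for a, b in zip(u, w)]
    scale = lambda k, u: [k * a for a in u]
    close = lambda u, w: all(abs(a - b) < tol for a, b in zip(u, w))
    for _ in range(trials):
        v, w, k = vec(), vec(), rnd.uniform(-10, 10)
        if not close(F(add(v, w)), add(F(v), F(w))):   # condition (1)
            return False
        if not close(F(scale(k, v)), scale(k, F(v))):  # condition (2)
            return False
    return True

# The projection (x, y, z) -> (x, y, 0) satisfies both conditions:
proj = lambda v: [v[0], v[1], 0.0]
print(is_linear_on_samples(proj, 3))  # True
```

A mapping that moves the zero vector, such as a translation, fails the test immediately, in line with the remark that every linear mapping takes 0 into 0.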

124    LINEAR MAPPINGS    [CHAP. 6

Now for any scalars a, b ∈ K and any vectors v, w ∈ V we obtain, by applying both conditions of linearity,
    F(av + bw) = F(av) + F(bw) = aF(v) + bF(w)
More generally, for any scalars aᵢ ∈ K and any vectors vᵢ ∈ V we obtain the basic property of linear mappings:
    F(a₁v₁ + a₂v₂ + ··· + aₙvₙ) = a₁F(v₁) + a₂F(v₂) + ··· + aₙF(vₙ)
We remark that the condition F(av + bw) = aF(v) + bF(w) completely characterizes linear mappings and is sometimes used as their definition.

Example 6.10: Let A be any m × n matrix over a field K. As noted previously, A determines a mapping T: Kⁿ → Kᵐ by the assignment v ↦ Av. (Here the vectors in Kⁿ and Kᵐ are written as columns.) We claim that T is linear. For, by properties of matrices,
    T(v + w) = A(v + w) = Av + Aw = T(v) + T(w)
and
    T(kv) = A(kv) = kAv = kT(v)
where v, w ∈ Kⁿ and k ∈ K.

We comment that the above type of linear mapping shall occur again and again. In fact, in the next chapter we show that every linear mapping from one finite-dimensional vector space into another can be represented as a linear mapping of the above type.
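Example 6.10 can be checked directly with NumPy. The matrix below is an arbitrary sample, not one from the text; the two assertions are exactly the matrix identities A(v + w) = Av + Aw and A(kv) = kAv:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(2, 3)).astype(float)  # any 2x3 matrix over R
T = lambda v: A @ v                                  # T(v) = Av, T: R^3 -> R^2

v = np.array([1.0, -2.0, 4.0])
w = np.array([0.5, 3.0, -1.0])
k = 2.5

# T(v + w) = Av + Aw = T(v) + T(w)  and  T(kv) = kAv = kT(v)
assert np.allclose(T(v + w), T(v) + T(w))
assert np.allclose(T(k * v), k * T(v))
```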

Example 6.11: Let F: R³ → R³ be the "projection" mapping into the xy plane: F(x, y, z) = (x, y, 0). We show that F is linear. Let v = (a, b, c) and w = (a', b', c'). Then
    F(v + w) = F(a + a', b + b', c + c') = (a + a', b + b', 0) = (a, b, 0) + (a', b', 0) = F(v) + F(w)
and, for any k ∈ R,
    F(kv) = F(ka, kb, kc) = (ka, kb, 0) = k(a, b, 0) = kF(v)
That is, F is linear.

Example 6.12: Let F: R² → R² be the "translation" mapping defined by F(x, y) = (x + 1, y + 2). Observe that F(0) = F(0, 0) = (1, 2) ≠ 0. That is, the zero vector is not mapped onto the zero vector. Hence F is not linear.

Example 6.13: Let F: V → U be the mapping which assigns 0 ∈ U to every v ∈ V. Then, for any v, w ∈ V and any k ∈ K, we have
    F(v + w) = 0 = 0 + 0 = F(v) + F(w)  and  F(kv) = 0 = k0 = kF(v)
Thus F is linear. We call F the zero mapping and shall usually denote it by 0.

Example 6.14: Consider the identity mapping I: V → V which maps each v ∈ V into itself. Then, for any v, w ∈ V and any a, b ∈ K, we have
    I(av + bw) = av + bw = aI(v) + bI(w)
Thus I is linear.

Example 6.15: Let V be the vector space of polynomials in the variable t over the real field R. Then the differential mapping D: V → V and the integral mapping J: V → R defined in Examples 6.4 and 6.5 are linear. For it is proven in calculus that for any u, v ∈ V and k ∈ R,
    d(u + v)/dt = du/dt + dv/dt  and  d(ku)/dt = k du/dt
that is, D(u + v) = D(u) + D(v) and D(ku) = kD(u); also,
    ∫₀¹ (u(t) + v(t)) dt = ∫₀¹ u(t) dt + ∫₀¹ v(t) dt  and  ∫₀¹ ku(t) dt = k ∫₀¹ u(t) dt
that is, J(u + v) = J(u) + J(v) and J(ku) = kJ(u).
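On polynomials of bounded degree, the differential mapping D of Example 6.15 acts on coefficient vectors as a matrix, which makes its linearity a matrix computation. This is a sketch of ours, not part of the text, using degree at most 3:

```python
import numpy as np

# Represent a0 + a1*t + a2*t^2 + a3*t^3 by the vector (a0, a1, a2, a3).
# D sends it to a1 + 2*a2*t + 3*a3*t^2, i.e. the matrix map below.
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])

u = np.array([1.0, -2.0, 0.0, 5.0])   # 1 - 2t + 5t^3
v = np.array([4.0, 1.0, 3.0, 0.0])    # 4 + t + 3t^2
k = 7.0

assert np.allclose(D @ (u + v), D @ u + D @ v)   # D(u + v) = D(u) + D(v)
assert np.allclose(D @ (k * u), k * (D @ u))     # D(ku) = k D(u)
# The derivative of 1 - 2t + 5t^3 is -2 + 15t^2:
print(D @ u)
```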

CHAP. 6]    LINEAR MAPPINGS    125

Example 6.16: Let F: V → U be a linear mapping which is both one-one and onto. Then an inverse mapping F⁻¹: U → V exists. We will show (Problem 6.17) that this inverse mapping is also linear.

When we investigated the coordinates of a vector relative to a basis, we also introduced the notion of two spaces being isomorphic. We now give a formal definition.

Definition: A linear mapping F: V → U is called an isomorphism if it is one-to-one. The vector spaces V, U are said to be isomorphic if there is an isomorphism of V onto U.

Example 6.17: Let V be a vector space over K of dimension n and let {e₁, ..., eₙ} be a basis of V. Then as noted previously the mapping v ↦ [v]ₑ, i.e. which maps each v ∈ V into its coordinate vector relative to the basis {eᵢ}, is an isomorphism of V onto Kⁿ.

Our next theorem gives us an abundance of examples of linear mappings; in particular, it tells us that a linear mapping is completely determined by its values on the elements of a basis.

Theorem 6.2: Let V and U be vector spaces over a field K. Let {v₁, v₂, ..., vₙ} be a basis of V and let u₁, u₂, ..., uₙ be any vectors in U. Then there exists a unique linear mapping F: V → U such that F(v₁) = u₁, F(v₂) = u₂, ..., F(vₙ) = uₙ.

We emphasize that the vectors u₁, ..., uₙ in the preceding theorem are completely arbitrary; they may be linearly dependent or they may even be equal to each other.

KERNEL AND IMAGE OF A LINEAR MAPPING

We begin by defining two concepts.

Definition: Let F: V → U be a linear mapping. The image of F, written Im F, is the set of image points in U:
    Im F = {u ∈ U : F(v) = u for some v ∈ V}
The kernel of F, written Ker F, is the set of elements in V which map into 0 ∈ U:
    Ker F = {v ∈ V : F(v) = 0}

The following theorem is easily proven (Problem 6.22).

Theorem 6.3: Let F: V → U be a linear mapping. Then the image of F is a subspace of U and the kernel of F is a subspace of V.

Example 6.18: Let F: R³ → R³ be the projection mapping into the xy plane: F(x, y, z) = (x, y, 0). Clearly the image of F is the entire xy plane:
    Im F = {(a, b, 0) : a, b ∈ R}
Note that the kernel of F is the z axis:
    Ker F = {(0, 0, c) : c ∈ R}
since these points and only these points map into the zero vector 0 = (0, 0, 0).
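For a matrix mapping, the image is the column space and the kernel is the null space, and both can be computed exactly. A sketch using SymPy (not part of the text) for the projection of Example 6.18:

```python
from sympy import Matrix

# Matrix of the projection F(x, y, z) = (x, y, 0) in the usual basis.
F = Matrix([[1, 0, 0],
            [0, 1, 0],
            [0, 0, 0]])

# Im F = column space (the xy plane), Ker F = null space (the z axis).
assert F.columnspace() == [Matrix([1, 0, 0]), Matrix([0, 1, 0])]
assert F.nullspace() == [Matrix([0, 0, 1])]
print(len(F.columnspace()), len(F.nullspace()))  # 2 1
```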

128    LINEAR MAPPINGS    [CHAP. 6

Theorem 5.11: The dimension of the solution space W of the homogeneous system of linear equations AX = 0 is n − r, where n is the number of unknowns and r is the rank of the coefficient matrix A.

OPERATIONS WITH LINEAR MAPPINGS

We are able to combine linear mappings in various ways to obtain new linear mappings. These operations are very important and shall be used throughout the text.

Suppose F: V → U and G: V → U are linear mappings of vector spaces over a field K. We define the sum F + G to be the mapping from V into U which assigns F(v) + G(v) to v ∈ V:
    (F + G)(v) = F(v) + G(v)
Furthermore, for any scalar k ∈ K, we define the product kF to be the mapping from V into U which assigns kF(v) to v ∈ V:
    (kF)(v) = kF(v)
We show that if F and G are linear, then F + G and kF are also linear. We have, for any vectors v, w ∈ V and any scalars a, b ∈ K,
    (F + G)(av + bw) = F(av + bw) + G(av + bw)
                     = aF(v) + bF(w) + aG(v) + bG(w)
                     = a(F(v) + G(v)) + b(F(w) + G(w))
                     = a(F + G)(v) + b(F + G)(w)
and
    (kF)(av + bw) = kF(av + bw) = k(aF(v) + bF(w)) = akF(v) + bkF(w) = a(kF)(v) + b(kF)(w)
Thus F + G and kF are linear.

Theorem 6.6: Let V and U be vector spaces over a field K. Then the collection of all linear mappings from V into U with the above operations of addition and scalar multiplication forms a vector space over K.

The space in the above theorem is usually denoted by Hom(V, U). Here Hom comes from the word homomorphism. In the case that V and U are of finite dimension, we have the following theorem.

Theorem 6.7: Suppose dim V = m and dim U = n. Then dim Hom(V, U) = mn.

Now suppose that V, U and W are vector spaces over the same field K, and that F: V → U and G: U → W are linear mappings. Recall that the composition function G∘F is the mapping from V into W defined by (G∘F)(v) = G(F(v)). We show that G∘F is linear whenever F and G are linear. We have, for any vectors v, w ∈ V and any scalars a, b ∈ K,
    (G∘F)(av + bw) = G(F(av + bw)) = G(aF(v) + bF(w))
                   = aG(F(v)) + bG(F(w)) = a(G∘F)(v) + b(G∘F)(w)
That is, G∘F is linear.
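For matrix mappings, composition corresponds to matrix multiplication: if F and G are given by matrices, the matrix of G∘F is the product GF. A small check with sample maps of our choosing:

```python
import numpy as np

F = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])   # F(x, y, z) = (2x, y + z), F: R^3 -> R^2
G = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # G(x, y) = (y, x), G: R^2 -> R^2

v = np.array([1.0, 2.0, 3.0])

# (G o F)(v) = G(F(v)), and the matrix of G o F is the product G F.
assert np.allclose(G @ (F @ v), (G @ F) @ v)
print((G @ F) @ v)   # (y + z, 2x) at (1, 2, 3), i.e. (5, 2)
```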

CHAP. 6]    LINEAR MAPPINGS    129

The composition of linear mappings and that of addition and scalar multiplication are related as follows:

Theorem 6.8: Let V, U and W be vector spaces over K. Let F, F' be linear mappings from V into U and G, G' linear mappings from U into W, and let k ∈ K. Then:
(i) G∘(F + F') = G∘F + G∘F'
(ii) (G + G')∘F = G∘F + G'∘F
(iii) k(G∘F) = (kG)∘F = G∘(kF).

ALGEBRA OF LINEAR OPERATORS

Let V be a vector space over a field K. We now consider the special case of linear mappings T: V → V, i.e. from V into itself. They are also called linear operators or linear transformations on V. We will write A(V), instead of Hom(V, V), for the space of all such mappings.

By Theorem 6.6, A(V) is a vector space over K; it is of dimension n² if V is of dimension n. Now if T, S ∈ A(V), then the composition S∘T exists and is also a linear mapping from V into itself, i.e. S∘T ∈ A(V). Thus we have a "multiplication" defined in A(V). (We shall write ST for S∘T in the space A(V).)

We remark that an algebra A over a field K is a vector space over K in which an operation of multiplication is defined satisfying, for every F, G, H ∈ A and every k ∈ K,
(i) F(G + H) = FG + FH
(ii) (G + H)F = GF + HF
(iii) k(GF) = (kG)F = G(kF).
If the associative law also holds for the multiplication, i.e. if for every F, G, H ∈ A,
(iv) (FG)H = F(GH)
then the algebra A is said to be associative. Thus by Theorems 6.8 and 6.1, A(V) is an associative algebra over K with respect to composition of mappings; hence it is frequently called the algebra of linear operators on V.

Observe that the identity mapping I: V → V belongs to A(V). Also, for any T ∈ A(V), we have TI = IT = T. We note that we can also form "powers" of T; we use the notation T² = T∘T, T³ = T∘T∘T, .... Furthermore, for any polynomial
    p(x) = a₀ + a₁x + a₂x² + ··· + aₙxⁿ,  aᵢ ∈ K
we can form the operator p(T) defined by
    p(T) = a₀I + a₁T + a₂T² + ··· + aₙTⁿ
(For a scalar k ∈ K, the operator kI is frequently denoted by simply k.) In particular, if p(T) = 0, the zero mapping, then T is said to be a zero of the polynomial p(x).

Example 6.21: Let T: R³ → R³ be defined by T(x, y, z) = (0, x, y). Now if (a, b, c) is any element of R³, then:
    (T + I)(a, b, c) = (0, a, b) + (a, b, c) = (a, a + b, b + c)
and
    T³(a, b, c) = T²(0, a, b) = T(0, 0, a) = (0, 0, 0)
Thus we see that T³ = 0, the zero mapping from V into itself. In other words, T is a zero of the polynomial p(x) = x³.
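Example 6.21 can be verified by computing with the matrix of T; the sketch below is ours, not the book's:

```python
import numpy as np

# Matrix of T(x, y, z) = (0, x, y) in the usual basis.
T = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
I = np.eye(3)

a, b, c = 1.0, 2.0, 3.0
v = np.array([a, b, c])

assert np.allclose((T + I) @ v, [a, a + b, b + c])   # (T + I)(a, b, c)
# T^3 = 0, so T is a zero of p(x) = x^3:
p_of_T = np.linalg.matrix_power(T, 3)
assert np.allclose(p_of_T, np.zeros((3, 3)))
```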

130    LINEAR MAPPINGS    [CHAP. 6

INVERTIBLE OPERATORS

A linear operator T: V → V is said to be invertible if it has an inverse, i.e. if there exists T⁻¹ ∈ A(V) such that TT⁻¹ = T⁻¹T = I.

Now T is invertible if and only if it is one-one and onto. Thus in particular, if T is invertible then only 0 ∈ V can map into 0, i.e. T is nonsingular. On the other hand, suppose T is nonsingular, i.e. Ker T = {0}. Recall (page 127) that T is then also one-one. Moreover, assuming V has finite dimension, we have, by Theorem 6.4,
    dim V = dim (Im T) + dim (Ker T) = dim (Im T) + dim ({0}) = dim (Im T) + 0 = dim (Im T)
and so Im T = V, i.e. the image of T is V; thus T is onto. Hence T is both one-one and onto and so is invertible. We have just proven

Theorem 6.9: A linear operator T: V → V on a vector space of finite dimension is invertible if and only if it is nonsingular.

Example 6.22: Let T be the operator on R² defined by T(x, y) = (y, 2x − y). The kernel of T is {(0, 0)}; hence T is nonsingular and, by the preceding theorem, invertible. We now find a formula for T⁻¹. Suppose (s, t) is the image of (x, y) under T; hence (x, y) is the image of (s, t) under T⁻¹: T(x, y) = (s, t) and T⁻¹(s, t) = (x, y). We have
    T(x, y) = (y, 2x − y) = (s, t)  and so  y = s, 2x − y = t
Solving for x and y in terms of s and t, we obtain x = ½s + ½t, y = s. Thus T⁻¹ is given by the formula
    T⁻¹(s, t) = (½s + ½t, s)

The finiteness of the dimensionality of V in the preceding theorem is necessary, as seen in the next example.

Example 6.23: Let V be the vector space of polynomials over K, and let T be the operator on V defined by
    T(a₀ + a₁t + ··· + aₙtⁿ) = a₀t + a₁t² + ··· + aₙtⁿ⁺¹
i.e. T increases the exponent of t in each term by 1. Now T is a linear mapping and is nonsingular. However, T is not onto and so is not invertible.
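The formula of Example 6.22 can be cross-checked against the matrix inverse; this numerical sketch is ours, not the book's:

```python
import numpy as np

# Matrix of T(x, y) = (y, 2x - y) in the usual basis.
T = np.array([[0.0, 1.0],
              [2.0, -1.0]])

# Nonsingular (nonzero determinant), hence invertible by Theorem 6.9.
assert abs(np.linalg.det(T)) > 1e-12

Tinv = np.linalg.inv(T)
s, t = 3.0, 5.0
# The formula derived in Example 6.22: T^{-1}(s, t) = (s/2 + t/2, s).
assert np.allclose(Tinv @ np.array([s, t]), [s / 2 + t / 2, s])
print(Tinv @ np.array([3.0, 5.0]))  # (4, 3)
```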

6.17. Suppose the linear mapping F: V → U is one-to-one and onto, so that the inverse mapping F⁻¹: U → V exists. Show that F⁻¹ is also linear.
Suppose u, u' ∈ U. Since F is one-to-one and onto, there exist unique vectors v, v' ∈ V for which F(v) = u and F(v') = u'. Since F is linear,
    F(v + v') = F(v) + F(v') = u + u'  and, for any k ∈ K,  F(kv) = kF(v) = ku
By definition of the inverse mapping,
    F⁻¹(u + u') = v + v' = F⁻¹(u) + F⁻¹(u')  and  F⁻¹(ku) = kv = kF⁻¹(u)
Thus F⁻¹ is linear.

6.18. Let F: R⁴ → R³ be the linear mapping defined by
    F(x, y, s, t) = (x − y + s + t, x + 2s − t, x + y + 3s − 3t)
Find a basis and the dimension of (i) the image U of F, (ii) the kernel W of F.
(i) The images of the following generators of R⁴ generate the image U of F:
    F(1, 0, 0, 0) = (1, 1, 1)      F(0, 0, 1, 0) = (1, 2, 3)
    F(0, 1, 0, 0) = (−1, 0, 1)     F(0, 0, 0, 1) = (1, −1, −3)
Form the matrix whose rows are the generators of U and row reduce to echelon form:
    ( 1  1  1 )      ( 1  1  1 )      ( 1  1  1 )
    (−1  0  1 )  to  ( 0  1  2 )  to  ( 0  1  2 )
    ( 1  2  3 )      ( 0  1  2 )      ( 0  0  0 )
    ( 1 −1 −3 )      ( 0 −2 −4 )      ( 0  0  0 )
Thus {(1, 1, 1), (0, 1, 2)} is a basis of U; hence dim U = 2.

CHAP. 6]    LINEAR MAPPINGS    137

(ii) We seek the set of (x, y, s, t) such that F(x, y, s, t) = (0, 0, 0), i.e.,
    F(x, y, s, t) = (x − y + s + t, x + 2s − t, x + y + 3s − 3t) = (0, 0, 0)
Set corresponding components equal to each other to form the following homogeneous system whose solution space is the kernel W of F:
    x − y +  s +  t = 0
    x     + 2s −  t = 0
    x + y + 3s − 3t = 0
Reducing the system yields
    x − y + s +  t = 0
        y + s − 2t = 0
The free variables are s and t; hence dim W = 2. Set
(a) s = −1, t = 0 to obtain the solution (2, 1, −1, 0);
(b) s = 0, t = 1 to obtain the solution (1, 2, 0, 1).
Thus {(2, 1, −1, 0), (1, 2, 0, 1)} is a basis of W. (Observe that dim U + dim W = 2 + 2 = 4, which is the dimension of the domain R⁴ of F.)
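The dimensions in Problem 6.18 can be confirmed mechanically; in this SymPy sketch of ours, the rank gives dim U and the null space gives dim W, and the vectors (2, 1, −1, 0) and (1, 2, 0, 1) are checked to lie in the kernel:

```python
from sympy import Matrix

# Matrix of F(x, y, s, t) = (x - y + s + t, x + 2s - t, x + y + 3s - 3t).
F = Matrix([[1, -1, 1,  1],
            [1,  0, 2, -1],
            [1,  1, 3, -3]])

assert F.rank() == 2            # dim U = 2
assert len(F.nullspace()) == 2  # dim W = 2, and 2 + 2 = dim R^4
assert F * Matrix([2, 1, -1, 0]) == Matrix([0, 0, 0])
assert F * Matrix([1, 2, 0, 1]) == Matrix([0, 0, 0])
```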

6.19. Let T: R³ → R³ be the linear mapping defined by
    T(x, y, z) = (x + 2y − z, y + z, x + y − 2z)
Find a basis and the dimension of (i) the image U of T, (ii) the kernel W of T.
(i) The images of generators of R³ generate the image U of T:
    T(1, 0, 0) = (1, 0, 1),  T(0, 1, 0) = (2, 1, 1),  T(0, 0, 1) = (−1, 1, −2)
Form the matrix whose rows are the generators of U and row reduce to echelon form:
    ( 1 0  1 )      ( 1 0  1 )      ( 1 0  1 )
    ( 2 1  1 )  to  ( 0 1 −1 )  to  ( 0 1 −1 )
    (−1 1 −2 )      ( 0 1 −1 )      ( 0 0  0 )
Thus {(1, 0, 1), (0, 1, −1)} is a basis of U, and so dim U = 2.
(ii) We seek the set of (x, y, z) such that T(x, y, z) = (0, 0, 0), i.e.,
    T(x, y, z) = (x + 2y − z, y + z, x + y − 2z) = (0, 0, 0)
Set corresponding components equal to each other to form the homogeneous system whose solution space is the kernel W of T:
    x + 2y −  z = 0        x + 2y − z = 0
        y +  z = 0   or        y + z = 0
    x +  y − 2z = 0
The only free variable is z; hence dim W = 1. Let z = 1; then y = −1 and x = 3. Thus {(3, −1, 1)} is a basis of W. (Observe that dim U + dim W = 2 + 1 = 3, which is the dimension of the domain R³ of T.)

6.20. Find a linear map F: R³ → R⁴ whose image is generated by (1, 2, 0, −4) and (2, 0, −1, −3).
Method 1.
Consider the usual basis of R³: e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1). Set F(e₁) = (1, 2, 0, −4), F(e₂) = (2, 0, −1, −3) and F(e₃) = (0, 0, 0, 0). By Theorem 6.2, such a linear map F exists and is unique. Furthermore, the image of F is generated by the F(eᵢ); hence F has the required property. We find a general formula for F(x, y, z):
    F(x, y, z) = F(xe₁ + ye₂ + ze₃) = xF(e₁) + yF(e₂) + zF(e₃)
               = x(1, 2, 0, −4) + y(2, 0, −1, −3) + z(0, 0, 0, 0)
               = (x + 2y, 2x, −y, −4x − 3y)

138    LINEAR MAPPINGS    [CHAP. 6

Method 2.
Form a 4 × 3 matrix A whose columns consist only of the given vectors; say,
    ( 1  2  2 )
    ( 2  0  0 )
    ( 0 −1 −1 )
    (−4 −3 −3 )
Recall that A determines a linear map A: R³ → R⁴ whose image is generated by the columns of A. Thus A satisfies the required condition.

6.21. Let V be the vector space of 2 × 2 matrices over R and let M = ( 1 2 ; 0 3 ). Let F: V → V be the linear map defined by F(A) = AM − MA. Find a basis and the dimension of the kernel W of F.
We seek the set of matrices A = ( x y ; s t ) such that F(A) = 0. Now
    F(A) = AM − MA = ( x y ; s t )( 1 2 ; 0 3 ) − ( 1 2 ; 0 3 )( x y ; s t )
         = ( x  2x + 3y ; s  2s + 3t ) − ( x + 2s  y + 2t ; 3s  3t )
         = ( −2s  2x + 2y − 2t ; −2s  2s )
Thus F(A) = 0 if and only if 2x + 2y − 2t = 0 and 2s = 0, that is, s = 0 and x + y − t = 0. The free variables are y and t; hence dim W = 2. To obtain a basis of W set
(a) y = 1, t = 0 to obtain the solution x = −1, s = 0, i.e. the matrix ( −1 1 ; 0 0 );
(b) y = 0, t = 1 to obtain the solution x = 1, s = 0, i.e. the matrix ( 1 0 ; 0 1 ).
Thus { ( −1 1 ; 0 0 ), ( 1 0 ; 0 1 ) } is a basis of W.

6.22. Prove Theorem 6.3: Let F: V → U be a linear mapping. Then (i) the image of F is a subspace of U, and (ii) the kernel of F is a subspace of V.
(i) Since F(0) = 0, 0 ∈ Im F. Now suppose u, u' ∈ Im F and a, b ∈ K. Since u and u' belong to the image of F, there exist vectors v, v' ∈ V such that F(v) = u and F(v') = u'. Then
    F(av + bv') = aF(v) + bF(v') = au + bu' ∈ Im F
Thus the image of F is a subspace of U.
(ii) Since F(0) = 0, 0 ∈ Ker F. Now suppose v, w ∈ Ker F and a, b ∈ K. Since v and w belong to the kernel of F, F(v) = 0 and F(w) = 0. Thus
    F(av + bw) = aF(v) + bF(w) = a0 + b0 = 0  and so  av + bw ∈ Ker F
Thus the kernel of F is a subspace of V.

6.23. Prove Theorem 6.4: Let V be of finite dimension, and let F: V → U be a linear mapping with image U' and kernel W. Then dim U' + dim W = dim V.
Suppose dim V = n. Since W is a subspace of V, its dimension is finite; say, dim W = r. Thus we need prove that dim U' = n − r.
Let {w₁, ..., wᵣ} be a basis of W. We extend {wᵢ} to a basis of V:
    {w₁, ..., wᵣ, v₁, ..., vₙ₋ᵣ}
Let
    B = {F(v₁), F(v₂), ..., F(vₙ₋ᵣ)}
The theorem is proved if we show that B is a basis of the image U' of F.
Proof that B generates U'. Let u ∈ U'. Then there exists v ∈ V such that F(v) = u. Since {wᵢ, vⱼ} generates V,
    v = a₁w₁ + ··· + aᵣwᵣ + b₁v₁ + ··· + bₙ₋ᵣvₙ₋ᵣ
where the aᵢ, bⱼ are scalars. Note that F(wᵢ) = 0 since the wᵢ belong to the kernel of F. Thus
    u = F(v) = F(a₁w₁ + ··· + aᵣwᵣ + b₁v₁ + ··· + bₙ₋ᵣvₙ₋ᵣ)
             = a₁F(w₁) + ··· + aᵣF(wᵣ) + b₁F(v₁) + ··· + bₙ₋ᵣF(vₙ₋ᵣ)
             = a₁0 + ··· + aᵣ0 + b₁F(v₁) + ··· + bₙ₋ᵣF(vₙ₋ᵣ)
             = b₁F(v₁) + ··· + bₙ₋ᵣF(vₙ₋ᵣ)
Accordingly, the F(vⱼ) generate the image of F.
Proof that B is linearly independent. Suppose
    a₁F(v₁) + a₂F(v₂) + ··· + aₙ₋ᵣF(vₙ₋ᵣ) = 0
Then F(a₁v₁ + a₂v₂ + ··· + aₙ₋ᵣvₙ₋ᵣ) = 0, and so a₁v₁ + ··· + aₙ₋ᵣvₙ₋ᵣ belongs to the kernel W of F. Since {wᵢ} generates W, there exist scalars b₁, ..., bᵣ such that
    a₁v₁ + a₂v₂ + ··· + aₙ₋ᵣvₙ₋ᵣ = b₁w₁ + b₂w₂ + ··· + bᵣwᵣ
or
    a₁v₁ + ··· + aₙ₋ᵣvₙ₋ᵣ − b₁w₁ − ··· − bᵣwᵣ = 0    (*)
Since {wᵢ, vⱼ} is a basis of V, it is linearly independent; hence the coefficients of the wᵢ and vⱼ in (*) are all 0. In particular, a₁ = 0, ..., aₙ₋ᵣ = 0. Accordingly, the F(vⱼ) are linearly independent.
Thus B is a basis of U', and so dim U' = n − r and the theorem is proved.

6.24. Suppose f: V → U is linear with kernel W, and that f(v) = u. Show that the "coset" v + W = {v + w : w ∈ W} is the preimage of u, that is, f⁻¹(u) = v + W.
We must prove that (i) f⁻¹(u) ⊆ v + W and (ii) v + W ⊆ f⁻¹(u).
We first prove (i). Suppose v' ∈ f⁻¹(u). Then f(v') = u and so f(v' − v) = f(v') − f(v) = u − u = 0, that is, v' − v ∈ W. Thus v' = v + (v' − v) ∈ v + W and hence f⁻¹(u) ⊆ v + W.
Now we prove (ii). Suppose v' ∈ v + W. Then v' = v + w where w ∈ W. Since W is the kernel of f, f(w) = 0. Accordingly, f(v') = f(v + w) = f(v) + f(w) = f(v) + 0 = f(v) = u. Thus v' ∈ f⁻¹(u) and so v + W ⊆ f⁻¹(u).

SINGULAR AND NONSINGULAR MAPPINGS

6.25. Suppose F: V → U is linear and that V is of finite dimension. Show that V and the image of F have the same dimension if and only if F is nonsingular. Determine all nonsingular mappings T: R⁴ → R³.
By Theorem 6.4, dim V = dim (Im F) + dim (Ker F). Hence V and Im F have the same dimension if and only if dim (Ker F) = 0, or Ker F = {0}, i.e. if and only if F is nonsingular.
Since the dimension of R³ is less than the dimension of R⁴, so is the dimension of the image of T. Accordingly, no linear mapping T: R⁴ → R³ can be nonsingular.

6.26. Prove that a linear mapping F: V → U is nonsingular if and only if the image of an independent set is independent.
Suppose F is nonsingular and suppose {v₁, ..., vₙ} is an independent subset of V. We claim that the F(vᵢ) are linearly independent. Suppose a₁F(v₁) + a₂F(v₂) + ··· + aₙF(vₙ) = 0. Then F(a₁v₁ + a₂v₂ + ··· + aₙvₙ) = 0 and so, since F is nonsingular, a₁v₁ + a₂v₂ + ··· + aₙvₙ = 0. Since the vᵢ are independent, all the aᵢ = 0. Accordingly, the F(vᵢ) are linearly independent. In other words, the image of the independent set {v₁, ..., vₙ} is independent.
On the other hand, suppose the image of any independent set is independent. If v ∈ V is nonzero, then {v} is independent. Then {F(v)} is independent and so F(v) ≠ 0. Accordingly, F is nonsingular.

OPERATIONS WITH LINEAR MAPPINGS

6.27. Let F: R³ → R² and G: R³ → R² be defined by F(x, y, z) = (2x, y + z) and G(x, y, z) = (x − z, y). Find formulas defining the mappings F + G, 3F and 2F − 5G.
    (F + G)(x, y, z) = F(x, y, z) + G(x, y, z) = (2x, y + z) + (x − z, y) = (3x − z, 2y + z)
    (3F)(x, y, z) = 3F(x, y, z) = 3(2x, y + z) = (6x, 3y + 3z)
    (2F − 5G)(x, y, z) = 2F(x, y, z) − 5G(x, y, z) = (4x, 2y + 2z) + (−5x + 5z, −5y) = (−x + 5z, −3y + 2z)

6.28. Let F: R³ → R² and G: R² → R² be defined by F(x, y, z) = (2x, y + z) and G(x, y) = (y, x). Derive formulas defining the mappings G∘F and F∘G.
    (G∘F)(x, y, z) = G(F(x, y, z)) = G(2x, y + z) = (y + z, 2x)
The mapping F∘G is not defined since the image of G is not contained in the domain of F.

6.29. Show: (i) the zero mapping 0, defined by 0(v) = 0 for every v ∈ V, is the zero element of Hom(V, U); (ii) the negative of F ∈ Hom(V, U) is the mapping (−1)F, i.e. −F = (−1)F.
(i) Let F ∈ Hom(V, U). Then, for every v ∈ V,
    (F + 0)(v) = F(v) + 0(v) = F(v) + 0 = F(v)
Since (F + 0)(v) = F(v) for every v ∈ V, F + 0 = F.
(ii) For every v ∈ V,
    (F + (−1)F)(v) = F(v) + (−1)F(v) = F(v) − F(v) = 0 = 0(v)
Since (F + (−1)F)(v) = 0(v) for every v ∈ V, F + (−1)F = 0. Thus (−1)F is the negative of F.

6.30. Show that for F₁, ..., Fₙ ∈ Hom(V, U) and a₁, ..., aₙ ∈ K, and for any v ∈ V,
    (a₁F₁ + a₂F₂ + ··· + aₙFₙ)(v) = a₁F₁(v) + a₂F₂(v) + ··· + aₙFₙ(v)
By definition of the mapping a₁F₁, (a₁F₁)(v) = a₁F₁(v); hence the theorem holds for n = 1. Thus by induction,
    (a₁F₁ + a₂F₂ + ··· + aₙFₙ)(v) = (a₁F₁)(v) + (a₂F₂ + ··· + aₙFₙ)(v) = a₁F₁(v) + a₂F₂(v) + ··· + aₙFₙ(v)

6.31. Let F, G, H ∈ Hom(R³, R²) be defined by F(x, y, z) = (x + y + z, x + y), G(x, y, z) = (2x + z, x + y) and H(x, y, z) = (2y, x). Show that F, G and H are linearly independent.
Suppose, for scalars a, b, c ∈ K,
    aF + bG + cH = 0    (*)
(Here 0 is the zero mapping.) For e₁ = (1, 0, 0) ∈ R³, we have
    (aF + bG + cH)(e₁) = aF(1, 0, 0) + bG(1, 0, 0) + cH(1, 0, 0)
                       = a(1, 1) + b(2, 1) + c(0, 1) = (a + 2b, a + b + c)

CHAP. 6]    LINEAR MAPPINGS    141

and 0(e₁) = (0, 0). Thus
    a + 2b = 0  and  a + b + c = 0    (1)
Similarly for e₂ = (0, 1, 0) ∈ R³, we have
    (aF + bG + cH)(e₂) = a(1, 1) + b(0, 1) + c(2, 0) = (a + 2c, a + b)  and  0(e₂) = (0, 0)
Thus
    a + 2c = 0  and  a + b = 0    (2)
Using (1) and (2) we obtain
    a = 0,  b = 0,  c = 0    (3)
Since (*) implies (3), the mappings F, G and H are linearly independent.

6.32. Prove Theorem 6.7: Suppose dim V = m and dim U = n. Then dim Hom(V, U) = mn.
Suppose {v₁, ..., vₘ} is a basis of V and {u₁, ..., uₙ} is a basis of U. By Theorem 6.2, a linear mapping in Hom(V, U) is uniquely determined by arbitrarily assigning elements of U to the basis elements vᵢ of V. We define
    Fᵢⱼ ∈ Hom(V, U),  i = 1, ..., m,  j = 1, ..., n
to be the linear mapping for which Fᵢⱼ(vᵢ) = uⱼ and Fᵢⱼ(vₖ) = 0 for k ≠ i. That is, Fᵢⱼ maps vᵢ into uⱼ and the other v's into 0. Observe that {Fᵢⱼ} contains exactly mn elements; hence the theorem is proved if we show that it is a basis of Hom(V, U).
Proof that {Fᵢⱼ} generates Hom(V, U). Let F ∈ Hom(V, U). Suppose F(v₁) = w₁, F(v₂) = w₂, ..., F(vₘ) = wₘ. Since wₖ ∈ U, it is a linear combination of the u's; say,
    wₖ = aₖ₁u₁ + aₖ₂u₂ + ··· + aₖₙuₙ,  k = 1, ..., m    (1)
Consider the linear mapping G = ΣᵢΣⱼ aᵢⱼFᵢⱼ. Since G is a linear combination of the Fᵢⱼ, the proof that {Fᵢⱼ} generates Hom(V, U) is complete if we show that F = G.
We now compute G(vₖ), k = 1, ..., m. Since Fᵢⱼ(vₖ) = 0 for k ≠ i and Fₖⱼ(vₖ) = uⱼ,
    G(vₖ) = ΣᵢΣⱼ aᵢⱼFᵢⱼ(vₖ) = Σⱼ aₖⱼFₖⱼ(vₖ) = Σⱼ aₖⱼuⱼ = aₖ₁u₁ + aₖ₂u₂ + ··· + aₖₙuₙ
Thus by (1), G(vₖ) = wₖ = F(vₖ) for each k. Accordingly, by Theorem 6.2, F = G; hence {Fᵢⱼ} generates Hom(V, U).
Proof that {Fᵢⱼ} is linearly independent. Suppose, for scalars aᵢⱼ ∈ K,
    ΣᵢΣⱼ aᵢⱼFᵢⱼ = 0
For vₖ, k = 1, ..., m,
    0 = 0(vₖ) = ΣᵢΣⱼ aᵢⱼFᵢⱼ(vₖ) = Σⱼ aₖⱼFₖⱼ(vₖ) = aₖ₁u₁ + aₖ₂u₂ + ··· + aₖₙuₙ
But the uⱼ are linearly independent; hence for k = 1, ..., m we have aₖ₁ = 0, aₖ₂ = 0, ..., aₖₙ = 0. In other words, all the aᵢⱼ = 0, and so {Fᵢⱼ} is linearly independent.
Thus {Fᵢⱼ} is a basis of Hom(V, U); hence dim Hom(V, U) = mn.

6.33. Prove Theorem 6.8: Let V, U and W be vector spaces over K. Let F, F' be linear mappings from V into U and G, G' linear mappings from U into W, and let k ∈ K. Then: (i) G∘(F + F') = G∘F + G∘F'; (ii) (G + G')∘F = G∘F + G'∘F; (iii) k(G∘F) = (kG)∘F = G∘(kF).

142    LINEAR MAPPINGS    [CHAP. 6

(i) For every v ∈ V,
    (G∘(F + F'))(v) = G((F + F')(v)) = G(F(v) + F'(v)) = G(F(v)) + G(F'(v))
                    = (G∘F)(v) + (G∘F')(v) = (G∘F + G∘F')(v)
Since (G∘(F + F'))(v) = (G∘F + G∘F')(v) for every v ∈ V, G∘(F + F') = G∘F + G∘F'.
(ii) For every v ∈ V,
    ((G + G')∘F)(v) = (G + G')(F(v)) = G(F(v)) + G'(F(v))
                    = (G∘F)(v) + (G'∘F)(v) = (G∘F + G'∘F)(v)
Since ((G + G')∘F)(v) = (G∘F + G'∘F)(v) for every v ∈ V, (G + G')∘F = G∘F + G'∘F.
(iii) For every v ∈ V,
    (k(G∘F))(v) = k(G∘F)(v) = k(G(F(v))) = (kG)(F(v)) = ((kG)∘F)(v)
and
    (k(G∘F))(v) = k(G∘F)(v) = k(G(F(v))) = G(kF(v)) = G((kF)(v)) = (G∘(kF))(v)
Accordingly, k(G∘F) = (kG)∘F = G∘(kF). (We emphasize that two mappings are shown to be equal by showing that they assign the same image to each point in the domain.)

6.34.

F:V^V

Let (i)

G.U^W

rank {GoF)

rank (GoF)

By Theorem

(ii)

be linear.

rank (GoF)

(ii)

F{V) c U, we also have

Since

(i)

and ^ rank G,

= dim

Hence {GoF):V^W ^ rank F.

G(F{V)) c G(U)

and so

= dim

((GoF)(y))

is linear.

Show

that

^ dim G(V). Then G(?7) = rank G

dim G(F{V))

(G(F(y)))

^ dim

dim (G(F(y))) ^ dim F(y). Hence

6.4,

rank (GoF)

=

=

dim ((Go F)(y))

dim (G(F(y)))



dim F(y)

=

rank

F

ALGEBRA OF LINEAR OPERATORS

6.35. Let S and T be the linear operators on R² defined by S(x, y) = (y, x) and T(x, y) = (0, x). Find formulas defining the operators S + T, 2S − 3T, ST, TS, S² and T².
    (S + T)(x, y) = S(x, y) + T(x, y) = (y, x) + (0, x) = (y, 2x)
    (2S − 3T)(x, y) = 2S(x, y) − 3T(x, y) = 2(y, x) − 3(0, x) = (2y, −x)
    (ST)(x, y) = S(T(x, y)) = S(0, x) = (x, 0)
    (TS)(x, y) = T(S(x, y)) = T(y, x) = (0, y)
    S²(x, y) = S(S(x, y)) = S(y, x) = (x, y).  Note S² = I, the identity mapping.
    T²(x, y) = T(T(x, y)) = T(0, x) = (0, 0).  Note T² = 0, the zero mapping.
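The computations of Problem 6.35 translate directly into 2 × 2 matrix arithmetic; this check is ours, not the book's. Note that the matrix of ST = S∘T is the product of the matrix of S with that of T:

```python
import numpy as np

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # S(x, y) = (y, x)
T = np.array([[0.0, 0.0],
              [1.0, 0.0]])   # T(x, y) = (0, x)

x, y = 3.0, 4.0
v = np.array([x, y])

assert np.allclose((S + T) @ v, [y, 2 * x])   # (S + T)(x, y) = (y, 2x)
assert np.allclose((S @ T) @ v, [x, 0.0])     # (ST)(x, y) = (x, 0)
assert np.allclose((T @ S) @ v, [0.0, y])     # (TS)(x, y) = (0, y)
assert np.allclose(S @ S, np.eye(2))          # S^2 = I
assert np.allclose(T @ T, np.zeros((2, 2)))   # T^2 = 0
```

Incidentally, ST ≠ TS here, illustrating that the multiplication in A(V) is not commutative.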

6.36. Let T be the linear operator on R² defined by T(3, 1) = (2, −4) and T(1, 1) = (0, 2). (By Theorem 6.2, such a linear operator exists and is unique.) Find T(a, b). In particular, find T(7, 4).
First write (a, b) as a linear combination of (3, 1) and (1, 1) using unknown scalars x and y:
    (a, b) = x(3, 1) + y(1, 1)    (1)
Hence (a, b) = (3x, x) + (y, y) = (3x + y, x + y) and so
    3x + y = a,  x + y = b    (2)
Solving for x and y in terms of a and b,
    x = ½a − ½b  and  y = −½a + (3/2)b    (3)
Now using (1), (2) and (3),
    T(a, b) = xT(3, 1) + yT(1, 1) = x(2, −4) + y(0, 2)
            = (2x, −4x) + (0, 2y) = (2x, −4x + 2y) = (a − b, 5b − 3a)
Thus T(7, 4) = (7 − 4, 20 − 21) = (3, −1).

CHAP. 6]    LINEAR MAPPINGS    143

6.37. Let T be the operator on R³ defined by T(x, y, z) = (2x, 4x − y, 2x + 3y − z). (i) Show that T is invertible. (ii) Find a formula for T⁻¹.
(i) The kernel W of T is the set of all (x, y, z) such that T(x, y, z) = (0, 0, 0), i.e.,
    T(x, y, z) = (2x, 4x − y, 2x + 3y − z) = (0, 0, 0)
Thus W is the solution space of the homogeneous system
    2x = 0,  4x − y = 0,  2x + 3y − z = 0
which has only the trivial solution (0, 0, 0). Thus W = {0}; hence T is nonsingular and so by Theorem 6.9 is invertible.
(ii) Let (r, s, t) be the image of (x, y, z) under T; then (x, y, z) is the image of (r, s, t) under T⁻¹: T(x, y, z) = (r, s, t) and T⁻¹(r, s, t) = (x, y, z). We will find the values of x, y and z in terms of r, s and t, and then substitute in the above formula for T⁻¹. From
    T(x, y, z) = (2x, 4x − y, 2x + 3y − z) = (r, s, t)
we find
    x = ½r,  y = 2r − s,  z = 7r − 3s − t
Thus T⁻¹ is given by
    T⁻¹(r, s, t) = (½r, 2r − s, 7r − 3s − t)
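The formula for T⁻¹ in Problem 6.37 can be cross-checked against the matrix inverse; this is a numerical sketch of ours:

```python
import numpy as np

# Matrix of T(x, y, z) = (2x, 4x - y, 2x + 3y - z).
T = np.array([[2.0,  0.0,  0.0],
              [4.0, -1.0,  0.0],
              [2.0,  3.0, -1.0]])

Tinv = np.linalg.inv(T)
r, s, t = 2.0, 1.0, 3.0
# Formula from the solution: T^{-1}(r, s, t) = (r/2, 2r - s, 7r - 3s - t).
assert np.allclose(Tinv @ np.array([r, s, t]),
                   [r / 2, 2 * r - s, 7 * r - 3 * s - t])
```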

6.38. Let V be of finite dimension and let T be a linear operator on V. Recall that T is invertible if and only if T is nonsingular or one-to-one. Show that T is invertible if and only if T is onto.
By Theorem 6.4, dim V = dim (Im T) + dim (Ker T). Hence the following statements are equivalent: (i) T is onto, (ii) Im T = V, (iii) dim (Im T) = dim V, (iv) dim (Ker T) = 0, (v) Ker T = {0}, (vi) T is nonsingular, (vii) T is invertible.

6.39. Let V be of finite dimension and let T be a linear operator on V for which TS = I, for some operator S on V. (We call S a right inverse of T.) (i) Show that T is invertible. (ii) Show that S = T⁻¹. (iii) Give an example showing that the above need not hold if V is of infinite dimension.
(i) Let dim V = n. By the preceding problem, T is invertible if and only if T is onto; hence T is invertible if and only if rank T = n. We have n = rank I = rank TS ≤ rank T ≤ n. Hence rank T = n and T is invertible.
(ii) TT⁻¹ = T⁻¹T = I. Then S = IS = (T⁻¹T)S = T⁻¹(TS) = T⁻¹I = T⁻¹.
(iii) Let V be the vector space of polynomials in t over K; say, p(t) = a₀ + a₁t + a₂t² + ··· + aₙtⁿ. Let T and S be the operators on V defined by
    T(p(t)) = a₁ + a₂t + ··· + aₙtⁿ⁻¹  and  S(p(t)) = a₀t + a₁t² + ··· + aₙtⁿ⁺¹
We have
    (TS)(p(t)) = T(S(p(t))) = T(a₀t + a₁t² + ··· + aₙtⁿ⁺¹) = a₀ + a₁t + ··· + aₙtⁿ = p(t)
and so TS = I, the identity mapping. On the other hand, if k ∈ K and k ≠ 0, then (ST)(k) = S(T(k)) = S(0) = 0 ≠ k. Accordingly, ST ≠ I.

6.43. Suppose E: V → V is linear and E² = E. Let U be the image of E and W the kernel. Show that V = U ⊕ W.
(i) We first show that E(u) = u for every u ∈ U. Let u ∈ U, the image of E. Then E(v) = u for some v ∈ V. Hence
    E(u) = E(E(v)) = E²(v) = E(v) = u
(ii) We next show that V = U + W. Let v ∈ V. Then v = E(v) + (v − E(v)). Now E(v) ∈ U, the image of E. We show that v − E(v) belongs to W, the kernel of E:
    E(v − E(v)) = E(v) − E²(v) = E(v) − E(v) = 0
Thus V = U + W.
We next show that U ∩ W = {0}. Let v ∈ U ∩ W. Since v ∈ U, E(v) = v by (i). Since v ∈ W, E(v) = 0. Thus v = E(v) = 0 and so U ∩ W = {0}.
The above two properties imply that V = U ⊕ W.

CHAP. 6]    LINEAR MAPPINGS    145

6.44. Show that a square matrix A is invertible if and only if it is nonsingular. (Compare with Theorem 6.9, page 130.)
Recall that A is invertible if and only if A is row equivalent to the identity matrix I. Thus the following statements are equivalent: (i) A is invertible. (ii) A and I are row equivalent. (iii) The equations AX = 0 and IX = 0 have the same solution space. (iv) AX = 0 has only the zero solution. (v) A is nonsingular.

                                Supplementary Problems
MAPPINGS
6.45. State whether each diagram defines a mapping from {1, 2, 3} into {4, 5, 6}.

6.46. Define each of the following mappings f : R → R by a formula:
      (i)   To each number let f assign its square plus 3.
      (ii)  To each number let f assign its cube plus twice the number.
      (iii) To each number ≥ 3 let f assign the number squared, and to each number < 3 let f assign the number −2.

6.47. Let f : R → R be defined by f(x) = x² − 4x + 3. Find (i) f(4), (ii) f(−3), (iii) f(y − 2x), (iv) f(x − 2).

6.48. Determine the number of different mappings from {a, b} into {1, 2, 3}.

6.49. Let the mapping g assign to each name in the set {Betty, Martin, David, Alan, Rebecca} the number of different letters needed to spell the name. Find (i) the graph of g, (ii) the image of g.

6.50. Sketch the graph of each mapping: (i) f(x) = ½x − 1, (ii) g(x) = 2x² − 4x − 3.

6.51. The mappings f : A → B, g : B → A, h : C → B, F : B → C and G : A → C are illustrated in the diagram below. Determine whether each of the following defines a composition mapping and, if it does, find its domain and co-domain: (i) g∘f, (ii) h∘f, (iii) F∘f, (iv) G∘f, (v) g∘h, (vi) h∘G∘g.

6.52. Let f : R → R and g : R → R be defined by f(x) = x² + 3x + 1 and g(x) = 2x − 3. Find formulas defining the composition mappings (i) f∘g, (ii) g∘f, (iii) g∘g, (iv) f∘f.

6.53. For any mapping f : A → B, show that 1_B ∘ f = f = f ∘ 1_A.
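Composition formulas of the Problem 6.52 type can be spot-checked mechanically (our sketch, plain Python):

```python
# f(x) = x^2 + 3x + 1, g(x) = 2x - 3 (Problem 6.52).
f = lambda x: x**2 + 3*x + 1
g = lambda x: 2*x - 3

compose = lambda p, q: (lambda x: p(q(x)))

f_g = compose(f, g)   # (f o g)(x) = 4x^2 - 6x + 1
g_f = compose(g, f)   # (g o f)(x) = 2x^2 + 6x - 1
g_g = compose(g, g)   # (g o g)(x) = 4x - 9

for x in range(-5, 6):
    assert f_g(x) == 4*x**2 - 6*x + 1
    assert g_f(x) == 2*x**2 + 6*x - 1
    assert g_g(x) == 4*x - 9
```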

146                                  LINEAR MAPPINGS                          [CHAP. 6

6.54. For each of the following mappings f : R → R find a formula for the inverse mapping: (i) f(x) = 3x − 7, (ii) f(x) = x³ + 2.

LINEAR MAPPINGS
6.55. Show that the following mappings F are linear:
      (i)   F : R² → R² defined by F(x, y) = (2x − y, x).
      (ii)  F : R³ → R² defined by F(x, y, z) = (z, x + y).
      (iii) F : R → R² defined by F(x) = (2x, 3x).
      (iv)  F : R² → R² defined by F(x, y) = (ax + by, cx + dy), where a, b, c, d ∈ R.

6.56. Show that the following mappings F are not linear:
      (i)   F : R² → R² defined by F(x, y) = (x², y²).
      (ii)  F : R³ → R² defined by F(x, y, z) = (x + 1, y + z).
      (iii) F : R → R² defined by F(x) = (x, 1).
      (iv)  F : R² → R defined by F(x, y) = |x − y|.

6.57. Let V be the vector space of polynomials in t over K. Show that the mappings T : V → V and S : V → V defined below are linear:

          T(a₀ + a₁t + ··· + aₙtⁿ) = a₀t + a₁t² + ··· + aₙtⁿ⁺¹
          S(a₀ + a₁t + a₂t² + ··· + aₙtⁿ) = a₁ + a₂t + ··· + aₙtⁿ⁻¹

6.58. Let V be the vector space of n × n matrices over K, and let M be an arbitrary matrix in V. Show that the first two mappings T : V → V are linear, but the third is not linear (unless M = 0): (i) T(A) = MA, (ii) T(A) = MA − AM, (iii) T(A) = M + A.

6.59. Find T(a, b) where T : R² → R² is defined by T(1, 2) = (3, −1) and T(0, 1) = (2, 1).

6.60. Find T(a, b, c) where T : R³ → R is defined by T(1, 1, 1) = 3, T(0, 1, −2) = 1 and T(0, 0, 1) = −2.

6.61. Suppose F : V → U is linear. Show that, for any v ∈ V, F(−v) = −F(v).

6.62. Let W be a subspace of V. Show that the inclusion map of W into V, denoted by i : W ⊂ V and defined by i(w) = w, is linear.
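Problems of the 6.59–6.60 type reduce to solving a linear system for the coefficients of T. A sketch for 6.60, using the data as reconstructed above (assuming NumPy):

```python
import numpy as np

# Writing T(a, b, c) = c1*a + c2*b + c3*c, the values of T on the basis
# (1,1,1), (0,1,-2), (0,0,1) give three linear equations for (c1, c2, c3).
basis_vectors = np.array([[1, 1, 1],
                          [0, 1, -2],
                          [0, 0, 1]], dtype=float)
values = np.array([3.0, 1.0, -2.0])

coeffs = np.linalg.solve(basis_vectors, values)
assert np.allclose(coeffs, [8.0, -3.0, -2.0])   # T(a, b, c) = 8a - 3b - 2c

# Spot-check against the defining data:
for v, t in zip(basis_vectors, values):
    assert np.isclose(coeffs @ v, t)
```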

KERNEL AND IMAGE OF LINEAR MAPPINGS
6.63. For each of the following linear mappings F, find a basis and the dimension of (a) its image U and (b) its kernel W:
      (i)   F : R³ → R³ defined by F(x, y, z) = (x + 2y, y − z, x + 2z).
      (ii)  F : R² → R² defined by F(x, y) = (x + y, x + y).
      (iii) F : R³ → R² defined by F(x, y, z) = (x + y, y + z).

6.64. Let V be the vector space of 2 × 2 matrices over R and let M be a fixed matrix in V. Let F : V → V be the linear map defined by F(A) = MA. Find a basis and the dimension of (i) the kernel W of F and (ii) the image U of F.

6.65. Find a linear mapping F : R³ → R³ whose image is generated by (1, 2, 3) and (4, 5, 6).

6.66. Find a linear mapping F : R⁴ → R³ whose kernel is generated by (1, 2, 3, 4) and (0, 1, 1, 1).

6.67. Let V be the vector space of polynomials in t over R. Let D : V → V be the differential operator: D(f) = df/dt. Find the kernel and image of D.

6.68. Let F : V → U be linear. Show that (i) the image of any subspace of V is a subspace of U and (ii) the preimage of any subspace of U is a subspace of V.
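Kernel and image dimensions of the Problem 6.63 type can be confirmed numerically (our sketch, assuming NumPy); here for 6.63(i):

```python
import numpy as np

# F(x, y, z) = (x + 2y, y - z, x + 2z) as a matrix acting on columns.
F = np.array([[1, 2, 0],
              [0, 1, -1],
              [1, 0, 2]], dtype=float)

rank = np.linalg.matrix_rank(F)
assert rank == 2                  # dim U = 2
assert F.shape[1] - rank == 1     # dim W = 1, by rank-nullity

# (2, -1, -1) spans the kernel:
assert np.allclose(F @ np.array([2.0, -1.0, -1.0]), np.zeros(3))
```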

CHAP. 6]                               LINEAR MAPPINGS                               147

6.69. Each of the following matrices determines a linear map from R⁴ into R³:

          A  =  ( 1   2   0   1 )          B  =  (  1   0   2  −1 )
                ( 2  −1   2  −1 )                (  2   3  −1   1 )
                ( 1  −3   2  −2 )                ( −2   0  −5   3 )

      Find a basis and the dimension of the image U and the kernel W of each map.

6.70. Let T : C → C be the conjugate mapping on the complex field C. That is, T(z) = z̄ where z ∈ C, or T(a + bi) = a − bi where a, b ∈ R. (i) Show that T is not linear if C is viewed as a vector space over itself. (ii) Show that T is linear if C is viewed as a vector space over the real field R.

OPERATIONS WITH LINEAR MAPPINGS
6.71. Let F : R³ → R² and G : R³ → R² be defined by F(x, y, z) = (y, x + z) and G(x, y, z) = (2z, x − y). Find formulas defining the mappings F + G and 3F − 2G.

6.72. Let H : R² → R² be defined by H(x, y) = (y, 2x). Using the mappings F and G in the preceding problem, find formulas defining the mappings: (i) H∘F and H∘G, (ii) F∘H and G∘H, (iii) H∘(F + G) and H∘F + H∘G.
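Sums and compositions of linear maps (Problems 6.71–6.72) correspond to matrix addition and multiplication; a quick check of ours (assuming NumPy):

```python
import numpy as np

# F(x,y,z) = (y, x+z), G(x,y,z) = (2z, x-y), H(x,y) = (y, 2x) as matrices.
F = np.array([[0, 1, 0],
              [1, 0, 1]], dtype=float)
G = np.array([[0, 0, 2],
              [1, -1, 0]], dtype=float)
H = np.array([[0, 1],
              [2, 0]], dtype=float)

v = np.array([1.0, 2.0, 3.0])   # arbitrary test vector (x, y, z)

# (F + G)(x, y, z) = (y + 2z, 2x - y + z)
assert np.allclose((F + G) @ v, [v[1] + 2*v[2], 2*v[0] - v[1] + v[2]])

# (H o F)(x, y, z) = (x + z, 2y): composition is matrix multiplication.
assert np.allclose((H @ F) @ v, [v[0] + v[2], 2*v[1]])

# H o (F + G) = H o F + H o G: composition distributes over addition.
assert np.allclose(H @ (F + G), H @ F + H @ G)
```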

6.73. Show that the following mappings F, G and H are linearly independent:
      (i)  F, G, H ∈ Hom(R², R²) defined by F(x, y) = (x, 2y), G(x, y) = (y, x + y), H(x, y) = (0, x).
      (ii) F, G, H ∈ Hom(R³, R) defined by F(x, y, z) = x + y + z, G(x, y, z) = y + z, H(x, y, z) = x − z.

6.74. For F, G ∈ Hom(V, U), show that rank(F + G) ≤ rank F + rank G. (Here V has finite dimension.)

6.75. Let F : V → U and G : U → V be linear. Show that if F and G are nonsingular then G∘F is nonsingular. Give an example where G∘F is nonsingular but G is not.

6.76. Prove that Hom(V, U) does satisfy all the required axioms of a vector space. That is, prove Theorem 6.6, page 128.
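Linear independence in Hom(R², R²) (Problem 6.73(i)) can be tested by flattening each map's matrix into a vector and computing a rank — our sketch, assuming NumPy:

```python
import numpy as np

F = np.array([[1, 0], [0, 2]], dtype=float)   # F(x, y) = (x, 2y)
G = np.array([[0, 1], [1, 1]], dtype=float)   # G(x, y) = (y, x + y)
H = np.array([[0, 0], [1, 0]], dtype=float)   # H(x, y) = (0, x)

# Stack the flattened matrices as rows; full rank means independence.
stack = np.vstack([F.ravel(), G.ravel(), H.ravel()])
assert np.linalg.matrix_rank(stack) == 3
```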

ALGEBRA OF LINEAR OPERATORS
6.77. Let S and T be the linear operators on R² defined by S(x, y) = (x + y, 0) and T(x, y) = (−y, x). Find formulas defining the operators S + T, 5S − 3T, ST, TS, S² and T².

6.78. Let T be the linear operator on R² defined by T(x, y) = (x + 2y, 3x + 4y). Find p(T) where p(t) = t² − 5t − 2.

6.79. Show that each of the following operators T on R³ is invertible, and find a formula for T⁻¹:
      (i) T(x, y, z) = (x − 3y − 2z, y − 4z, z),    (ii) T(x, y, z) = (x + z, x − z, y).

6.80. Suppose S and T are linear operators on V and that S is nonsingular. Assume V has finite dimension. Show that rank(ST) = rank(TS) = rank T.

6.81. Suppose V = U ⊕ W. Let E₁ and E₂ be the linear operators on V defined by E₁(v) = u, E₂(v) = w, where v = u + w, u ∈ U, w ∈ W. Show that: (i) E₁² = E₁ and E₂² = E₂, i.e. that E₁ and E₂ are "projections"; (ii) E₁ + E₂ = I, the identity mapping; (iii) E₁E₂ = 0 and E₂E₁ = 0.

6.82. Let E₁ and E₂ be linear operators on V satisfying (i), (ii) and (iii) of Problem 6.81. Show that V is the direct sum of the image of E₁ and the image of E₂: V = Im E₁ ⊕ Im E₂.

6.83. Show that if the linear operators S and T are invertible, then ST is invertible and (ST)⁻¹ = T⁻¹S⁻¹.

148                                  LINEAR MAPPINGS                          [CHAP. 6

6.84. Let V have finite dimension, and let T be a linear operator on V such that rank(T²) = rank T. Show that Ker T ∩ Im T = {0}.
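The operator algebra of Problems 6.77–6.78 is pure matrix arithmetic; a check of ours (assuming NumPy):

```python
import numpy as np

# S(x, y) = (x + y, 0) and T(x, y) = (-y, x) as matrices (Problem 6.77).
S = np.array([[1, 1], [0, 0]], dtype=float)
T = np.array([[0, -1], [1, 0]], dtype=float)
I = np.eye(2)

assert np.allclose(S @ S, S)                     # S^2 = S
assert np.allclose(T @ T + I, np.zeros((2, 2)))  # T^2 + I = 0: T is a zero of t^2 + 1
assert np.allclose(S @ T, [[1, -1], [0, 0]])     # (ST)(x, y) = (x - y, 0)
assert np.allclose(T @ S, [[0, 0], [1, 1]])      # (TS)(x, y) = (0, x + y)

# Problem 6.78: T2(x, y) = (x + 2y, 3x + 4y) satisfies p(t) = t^2 - 5t - 2.
T2 = np.array([[1, 2], [3, 4]], dtype=float)
assert np.allclose(T2 @ T2 - 5*T2 - 2*I, np.zeros((2, 2)))   # p(T2) = 0
```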

MISCELLANEOUS PROBLEMS
6.85. Suppose T : Kⁿ → Kᵐ is a linear mapping. Let {e₁, ..., eₙ} be the usual basis of Kⁿ and let A be the m × n matrix whose columns are the vectors T(e₁), ..., T(eₙ) respectively. Show that, for every vector v ∈ Kⁿ, T(v) = Av, where v is written as a column vector.

6.86. Suppose F : V → U is linear and k is a nonzero scalar. Show that the maps F and kF have the same kernel and the same image.

6.87. Show that if F : V → U is onto, then dim U ≤ dim V. Determine all linear maps T : R³ → R⁴ which are onto.

6.88. Find those theorems of Chapter 3 which prove that the space of n-square matrices over K is an associative algebra over K.

6.89. Let T : V → U be linear and let W be a subspace of V. The restriction of T to W is the map T_W : W → U defined by T_W(w) = T(w), for every w ∈ W. Prove the following. (i) T_W is linear. (ii) Ker T_W = Ker T ∩ W. (iii) Im T_W = T(W).

6.90. Two operators S, T ∈ A(V) are said to be similar if there exists an invertible operator P ∈ A(V) for which S = P⁻¹TP. Prove the following. (i) Similarity of operators is an equivalence relation. (ii) Similar operators have the same rank (when V has finite dimension).

                        Answers to Supplementary Problems
6.45. (i) No, (ii) Yes, (iii) No.

6.46. (i) f(x) = x² + 3, (ii) f(x) = x³ + 2x, (iii) f(x) = x² if x ≥ 3, and f(x) = −2 if x < 3.

6.47. (i) 3, (ii) 24, (iii) y² − 4xy + 4x² − 4y + 8x + 3, (iv) x² − 8x + 15.

6.48. Nine.

6.49. (i) {(Betty, 4), (Martin, 6), (David, 4), (Alan, 3), (Rebecca, 5)}. (ii) Image of g = {3, 4, 5, 6}.

6.51. (i) (g∘f) : A → A, (ii) No, (iii) (F∘f) : A → C, (iv) No, (v) (g∘h) : C → A, (vi) (h∘G∘g) : B → B.

6.52. (i) (f∘g)(x) = 4x² − 6x + 1, (ii) (g∘f)(x) = 2x² + 6x − 1, (iii) (g∘g)(x) = 4x − 9, (iv) (f∘f)(x) = x⁴ + 6x³ + 14x² + 15x + 5.

6.54. (i) f⁻¹(x) = (x + 7)/3, (ii) f⁻¹(x) = ∛(x − 2).

6.59. T(a, b) = (−a + 2b, −3a + b).

6.60. T(a, b, c) = 8a − 3b − 2c.

6.61. F(v) + F(−v) = F(v + (−v)) = F(0) = 0; hence F(−v) = −F(v).

6.63. (i)   (a) {(1, 0, 1), (0, 1, −2)}, dim U = 2;  (b) {(2, −1, −1)}, dim W = 1.
      (ii)  (a) {(1, 1)}, dim U = 1;  (b) {(1, −1)}, dim W = 1.
      (iii) (a) {(1, 0), (0, 1)}, dim U = 2;  (b) {(1, −1, 1)}, dim W = 1.

CHAP. 6]                               LINEAR MAPPINGS                               149

6.64. dim(Ker F) = 2 and dim(Im F) = 2.

6.65. F(x, y, z) = (x + 4y, 2x + 5y, 3x + 6y).

6.66. F(x, y, z, w) = (x + y − z, 2x + y − w, 0).

6.67. The kernel of D is the set of constant polynomials. The image of D is the entire space V.

6.69. (i)  (a) {(1, 2, 1), (0, 1, 1)} is a basis of Im A; dim(Im A) = 2.
           (b) {(4, −2, −5, 0), (1, −3, 0, 5)} is a basis of Ker A; dim(Ker A) = 2.
      (ii) (a) Im B = R³;  (b) {(−1, 2/3, 1, 1)} is a basis of Ker B; dim(Ker B) = 1.

6.71. (F + G)(x, y, z) = (y + 2z, 2x − y + z),  (3F − 2G)(x, y, z) = (3y − 4z, x + 2y + 3z).

6.72. (i)   (H∘F)(x, y, z) = (x + z, 2y),  (H∘G)(x, y, z) = (x − y, 4z).
      (ii)  Not defined.
      (iii) (H∘(F + G))(x, y, z) = (H∘F + H∘G)(x, y, z) = (2x − y + z, 2y + 4z).

6.77. (S + T)(x, y) = (x, x);  (5S − 3T)(x, y) = (5x + 8y, −3x);  (ST)(x, y) = (x − y, 0);  (TS)(x, y) = (0, x + y);  S²(x, y) = (x + y, 0), i.e. S² = S;  T²(x, y) = (−x, −y), i.e. T² + I = 0, hence T is a zero of t² + 1.

6.78. p(T) = 0.

6.79. (i) T⁻¹(r, s, t) = (r + 3s + 14t, s + 4t, t),  (ii) T⁻¹(r, s, t) = (½r + ½s, t, ½r − ½s).

6.87. There are no linear maps from R³ into R⁴ which are onto.
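An answer like 6.79(i) is easy to verify by multiplying the two matrices (our own check, assuming NumPy):

```python
import numpy as np

# Problem 6.79(i): T(x, y, z) = (x - 3y - 2z, y - 4z, z).
T = np.array([[1, -3, -2],
              [0, 1, -4],
              [0, 0, 1]], dtype=float)

# Answer key's formula: T^{-1}(r, s, t) = (r + 3s + 14t, s + 4t, t).
T_inv = np.array([[1, 3, 14],
                  [0, 1, 4],
                  [0, 0, 1]], dtype=float)

assert np.allclose(T @ T_inv, np.eye(3))
assert np.allclose(T_inv @ T, np.eye(3))
assert np.allclose(np.linalg.inv(T), T_inv)
```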

chapter 7

                         Matrices and Linear Operators
INTRODUCTION
Suppose {e₁, ..., eₙ} is a basis of a vector space V over a field K and, for v ∈ V, suppose v = a₁e₁ + a₂e₂ + ··· + aₙeₙ. Then the coordinate vector of v relative to {eᵢ}, which we write as a column vector unless otherwise specified or implied, is

                        ( a₁ )
              [v]ₑ  =   ( a₂ )
                        ( ⋮  )
                        ( aₙ )

Recall that the mapping v ↦ [v]ₑ, determined by the basis {eᵢ}, is an isomorphism from V onto the space Kⁿ. In this chapter we show that there is also an isomorphism, determined by the basis {eᵢ}, from the algebra A(V) of linear operators on V onto the algebra 𝒜 of n-square matrices over K. A similar result also holds for linear mappings F : V → U, from one space into another.

K

en} is and suppose (ei, a linear operator on a vector space V over a field of combination is linear each a so r(e„) vectors in V and are Now T{ei), the elements of the basis {e,}:

Let

r be

a basis of V.

.

.

.

r(e2)

= =

T{en)

=

T(ei)

The following Definition:

Example

.

.

.

,

,

01262

02161

+ +

Oniei

+

an2e2

anCi

+ 02262 +













+





+ + •

+

oi^en a2n6n

o„„e„

definition applies.

The transpose of the above matrix of called the

matrix representation of

matrix of

T

7.1

in the basis

T

coefficients,

denoted by

[T]e or [T], is

relative to the basis {ei} or simply the

{et}:

(On

021

...

Onl

012

022

.

.

fln2

Om

a2n

.

...

'

0,

Example 7.1:  Let V be the vector space of polynomials in t over R of degree ≤ 3, and let D : V → V be the differential operator defined by D(p(t)) = d(p(t))/dt. We compute the matrix of D in the basis {1, t, t², t³}. We have:

                  D(1)  = 0   = 0 + 0t + 0t² + 0t³
                  D(t)  = 1   = 1 + 0t + 0t² + 0t³
                  D(t²) = 2t  = 0 + 2t + 0t² + 0t³
                  D(t³) = 3t² = 0 + 0t + 3t² + 0t³

150

CHAP. 7]                     MATRICES AND LINEAR OPERATORS                         151

              Accordingly,

                            ( 0  1  0  0 )
                  [D]   =   ( 0  0  2  0 )
                            ( 0  0  0  3 )
                            ( 0  0  0  0 )

Example 7.2:  Let T be the linear operator on R² defined by T(x, y) = (4x − 2y, 2x + y). We compute the matrix of T in the basis {f₁ = (1, 1), f₂ = (−1, 0)}. We have

                  T(f₁) = T(1, 1) = (2, 3) = 3(1, 1) + (−1, 0) = 3f₁ + f₂
                  T(f₂) = T(−1, 0) = (−4, −2) = −2(1, 1) + 2(−1, 0) = −2f₁ + 2f₂

              Accordingly,

                  [T]f  =  ( 3  −2 )
                           ( 1   2 )

Remark:  Recall that any n-square matrix A over K defines a linear operator on Kⁿ by the map v ↦ Av (where v is written as a column vector). We show (Problem 7.7) that the matrix representation of this operator is precisely the matrix A if we use the usual basis of Kⁿ.

Our first theorem tells us that the "action" of an operator T on a vector v is preserved by its matrix representation:

Theorem 7.1:  Let {e₁, ..., eₙ} be a basis of V and let T be any operator on V. Then, for any vector v ∈ V,

                  [T]ₑ[v]ₑ = [T(v)]ₑ

That is, if we multiply the coordinate vector of v by the matrix representation of T, then we obtain the coordinate vector of T(v).

Example 7.3:  Consider the differential operator D : V → V of Example 7.1, and let

                   p(t) = a + bt + ct² + dt³,   so that   D(p(t)) = b + 2ct + 3dt²

              Hence, relative to the basis {1, t, t², t³}, [p(t)] = (a, b, c, d)ᵀ and [D(p(t))] = (b, 2c, 3d, 0)ᵀ. We show that Theorem 7.1 holds here:

                                ( 0 1 0 0 ) ( a )     ( b  )
                  [D][p(t)]  =  ( 0 0 2 0 ) ( b )  =  ( 2c )  =  [D(p(t))]
                                ( 0 0 0 3 ) ( c )     ( 3d )
                                ( 0 0 0 0 ) ( d )     ( 0  )

              Now consider the operator T of Example 7.2, T(x, y) = (4x − 2y, 2x + y), in the basis {f₁ = (1, 1), f₂ = (−1, 0)}. Let v = (5, 7). Then

                  v = (5, 7) = 7(1, 1) + 2(−1, 0) = 7f₁ + 2f₂
                  T(v) = (6, 17) = 17(1, 1) + 11(−1, 0) = 17f₁ + 11f₂

              that is, [v]f = (7, 2)ᵀ and [T(v)]f = (17, 11)ᵀ. Using the matrix [T]f of Example 7.2, we verify that Theorem 7.1 holds here:

                  [T]f [v]f  =  ( 3 −2 ) ( 7 )  =  ( 17 )  =  [T(v)]f
                                ( 1  2 ) ( 2 )     ( 11 )

Similarly, a linear mapping F : V → U determines, from the coefficient n-tuples F(e₁) = (a₁₁, a₁₂, ..., a₁ₙ), ..., F(eₘ) = (aₘ₁, aₘ₂, ..., aₘₙ), the m × n matrix of coefficients

                  ( a₁₁  a₁₂  ...  a₁ₙ )
                  ( a₂₁  a₂₂  ...  a₂ₙ )
                  ( .................. )
                  ( aₘ₁  aₘ₂  ...  aₘₙ )

7.22. Find the matrix representation of each of the following linear mappings relative to the usual bases of Rⁿ:
      (i)   F : R² → R³ defined by F(x, y) = (3x − y, 2x + 4y, 5x − 6y)
      (ii)  F : R⁴ → R² defined by F(x, y, s, t) = (3x − 4y + 2s − 4t, 5x + 7y − s − 2t)
      (iii) F : R³ → R⁴ defined by F(x, y, z) = (2x + 3y − 8z, x + y + z, 4x − 5z, 6y)

      By Problem 7.21, we need only look at the coefficients of the unknowns in F(x, y, ...). Thus

          (i)  ( 3 −1 )       (ii)  ( 3 −4  2 −4 )       (iii)  ( 2  3 −8 )
               ( 2  4 )             ( 5  7 −1 −2 )              ( 1  1  1 )
               ( 5 −6 )                                         ( 4  0 −5 )
                                                                ( 0  6  0 )

T:R2^R2

Let

the bases

{ei

(We can view T

own

= =

Tie^)

A =

Let fined

/

(0, 1)}

mapping from one space

Show

W

r(l,0)

r(0,l)

—3

A

Recall that

.

\1 -4

=

= (26-5o)/i + (3a-6)/2. = (2,1) = -8/1+ 5/2 = (-3,4) ^ 23/1-13/2

{a,b)

5

2

(

by F(v)

of (ii)

(1, 0), 62

as a linear

7.2,

r(ei)

(i)

=

the matrix of

7/ Av where v

in

into another, each having its

U=

(1, 1, 1),

f '

^^^ ^'^

^ /-8 \

5

F-.W^B?

de-

F

F

relative to the usual basis of R^

and

relative to the following bases of R^

(1, 1, 0),

h = (1, 0, 0)},

(1 _4

^)

(j _J

^)

{^1

=

(1, 3),

g^

=

(i)

F(1,0,0)

=

F(0,1,0)

=

/

from which

By Problem

W\ = 7.2,

{

(a, 6)

2

5

_.

-,

=

(26

23

"13

written as a column vector.

is

Find the matrix representation of

=

Then

determines a linear mapping

that the matrix representation of is the matrix A itself: [F] = A.

{/i

(ii)

T

of R^ respectively.

basis.)

By Problem

7.24.

= (2x-Zy,x + Ay). Find and {A = (1,3), ^ = (2,5)}

be defined by T{x,y)

=

3\



)

=

-A-

- 5a)flri +

1

= (1)

=

=

- 561-462

(_J)

(Compare with Problem

(Za-V^g^.

Then

261

+

7.7.)

162

(2,

5)}

and

R*.

CHAP. 7]                     MATRICES AND LINEAR OPERATORS                         167

          F(f₁) = ( 2  5 −3 ) (1, 1, 1)ᵀ = ( 4 )  =  −12g₁ + 8g₂
                  ( 1 −4  7 )              ( 4 )

          F(f₂) = ( 2  5 −3 ) (1, 1, 0)ᵀ = (  7 )  =  −41g₁ + 24g₂
                  ( 1 −4  7 )              ( −3 )

          F(f₃) = ( 2  5 −3 ) (1, 0, 0)ᵀ = ( 2 )  =  −8g₁ + 5g₂
                  ( 1 −4  7 )              ( 1 )

      Thus
          [F]  =  ( −12  −41  −8 )
                  (   8   24   5 )

7.25. Prove Theorem 7.12: Let F : V → U be linear. Suppose dim V = m and dim U = n. Then there exists a basis of V and a basis of U such that the matrix representation A of F has the form

          A  =  ( I  0 )
                ( 0  0 )

      where I is the r-square identity matrix and r is the rank of F.

      Suppose dim V = m and dim U = n. Let W be the kernel of F and U' the image of F. We are given that rank F = r; hence the dimension of the kernel of F is m − r. Let {w₁, ..., w_{m−r}} be a basis of the kernel of F and extend this to a basis of V:

          {v₁, ..., v_r, w₁, ..., w_{m−r}}
      Set
          u₁ = F(v₁),  u₂ = F(v₂),  ...,  u_r = F(v_r)

      We note that {u₁, ..., u_r} is a basis of U', the image of F. Extend this to a basis {u₁, ..., u_r, u_{r+1}, ..., u_n} of U. Observe that

          F(v₁) = u₁ = 1u₁ + 0u₂ + ··· + 0u_r + 0u_{r+1} + ··· + 0uₙ
          F(v₂) = u₂ = 0u₁ + 1u₂ + ··· + 0u_r + 0u_{r+1} + ··· + 0uₙ
          ....................................................
          F(v_r) = u_r = 0u₁ + 0u₂ + ··· + 1u_r + 0u_{r+1} + ··· + 0uₙ
          F(w₁) = 0
          ....................................................
          F(w_{m−r}) = 0

      Thus the matrix of F in the above bases has the required form.
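The base-change computation of Problem 7.24(ii) can be done in one line as Q⁻¹AP, where the new bases are the columns of P and Q — a check of ours, assuming NumPy:

```python
import numpy as np

A = np.array([[2, 5, -3],
              [1, -4, 7]], dtype=float)

# New bases as columns: f1, f2, f3 of R^3 and g1, g2 of R^2.
P = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float).T
Q = np.array([[1, 2],
              [3, 5]], dtype=float)

F_rep = np.linalg.inv(Q) @ A @ P    # matrix of F relative to {f_i}, {g_j}
assert np.allclose(F_rep, [[-12, -41, -8],
                           [  8,  24,  5]])
```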

7.49. Recall that two linear operators F and G are similar if there exists an invertible operator T on V such that G = T⁻¹FT.
      (i)  Show that linear operators F and G are similar if and only if their matrix representations [F]ₑ and [G]ₑ (relative to any basis {eᵢ} of V) are similar matrices.
      (ii) Show that if an operator F is diagonalizable, then any operator G similar to F is also diagonalizable.

7.50. Let U and W each be invariant under a linear operator T : V → V, with V = U ⊕ W, dim U = m and dim V = n. Show that T has a matrix representation of the form

          ( A  0 )
          ( 0  B )

      where A and B are m × m and (n − m) × (n − m) submatrices.

7.51. Two matrices A and B are said to be equivalent if there exists an m-square invertible matrix Q and an n-square invertible matrix P such that B = QAP.
      (i)   Show that equivalence of matrices is an equivalence relation.
      (ii)  Show that A and B can be matrix representations of the same linear mapping F : V → U if and only if A and B are equivalent.
      (iii) Show that every matrix A is equivalent to a matrix of the form ( I 0 / 0 0 ), where I is the r-square identity matrix and r = rank A.

7.52. Two algebras A and B over a field K are said to be isomorphic (as algebras) if there exists a bijective mapping f : A → B such that for u, v ∈ A and k ∈ K, (i) f(u + v) = f(u) + f(v), (ii) f(ku) = kf(u), (iii) f(uv) = f(u)f(v). (That is, f preserves the three operations of an algebra: vector addition, scalar multiplication, and vector multiplication.) The mapping f is then called an isomorphism of A onto B. Show that the relation of algebra isomorphism is an equivalence relation.

7.53. Let 𝒜 be the algebra of n-square matrices over K, and let P be an invertible matrix in 𝒜. Show that the map A ↦ P⁻¹AP, where A ∈ 𝒜, is an algebra isomorphism of 𝒜 onto itself.

MATRICES AND LINEAR OPERATORS

170

Answers /2 -3 7.26.

(i)

7.27.

Here

7.28.

Here

7.29.

(i)

1

=

(a, 6)

(26

=

(a, 6)

1

-2

(4a

- 3a)/i +

- h)gi +

10 10

(6

Supplementary Problems

to

(2a

- b)!^.

- Za)g2.

14

3

6-8

1

2

(i)

^..^

/-23 -39

^"^

(

(")

V-27 -32

-32 -45

35/

15

26

35

41

(iii)

,0

'0

101 5

(iii)

(ii)

2,

,0

25 \

-11 -15 j

« (25

5

1

,18

«

(

0'

1

^.,

-7 -4I

'2

0,

,0

7.30.

6 3

(ii)

\1

[CHAP.

2

o\

1

(iv)

0-3

5, 3

iO

1 7.31.

(i)

(ii)

-1

-3 4 -2 -3 ^c

-6 7.32.

(i)

(iii)

(ii)

P =

7.35.

P =

8 7.36.

7.37.

7.41.

-3

Q =

Q =

2

2

5

-1 -3

11

-2 -1

P =

-4

9

3

-2

(i)

3 7.42.

5

3

-1 -2

(i)

11

-1 -8

2

-1

-1

1

(iii)

(2,3,-7,-1)

6

d—a

c

7.34.

h

a-d

(iv)



0/

7

chapter 8

                                 Determinants
INTRODUCTION
To every square matrix A over a field K there is assigned a specific scalar called the determinant of A; it is usually denoted by

          det(A)    or    |A|

This determinant function was first discovered in the investigation of systems of linear equations. We shall see in the succeeding chapters that the determinant is an indispensable tool in investigating and obtaining properties of a linear operator.

We comment that the definition of the determinant and most of its properties also apply in the case where the entries of a matrix come from a ring (see Appendix B).

We shall begin the chapter with a discussion of permutations, which is necessary for the definition of the determinant.

PERMUTATIONS
A one-to-one mapping σ of the set {1, 2, ..., n} onto itself is called a permutation. We denote the permutation σ by σ = j₁j₂···jₙ, where jᵢ = σ(i).

Theorem:  Let v₁, ..., vₙ be nonzero eigenvectors of a linear operator T : V → V belonging to distinct eigenvalues λ₁, ..., λₙ. Then v₁, ..., vₙ are linearly independent.

The proof is by induction on n. If n = 1, then v₁ is linearly independent since v₁ ≠ 0. Assume n > 1. Suppose

          a₁v₁ + a₂v₂ + ··· + aₙvₙ = 0

where the aᵢ are scalars. Applying T to the above relation, we obtain

          a₁T(v₁) + a₂T(v₂) + ··· + aₙT(vₙ) = T(0) = 0

But by hypothesis T(vᵢ) = λᵢvᵢ; hence

          a₁λ₁v₁ + a₂λ₂v₂ + ··· + aₙλₙvₙ = 0

Here
          A  =  ( a₁₁  a₁₂ )          and          B  =  ( b₁₁  b₁₂  b₁₃ )
                ( a₂₁  a₂₂ )                             ( b₂₁  b₂₂  b₂₃ )
                                                         ( b₃₁  b₃₂  b₃₃ )

are matrix representations of T₁ and T₂ respectively, where {u₁, u₂, w₁, w₂, w₃} is a basis of V. Since T(uᵢ) = T₁(uᵢ) and T(wⱼ) = T₂(wⱼ), by the above theorem the matrix of T in this basis is the block diagonal matrix

          ( A  0 )
          ( 0  B )

A generalization of the above argument gives us the following theorem.

Theorem 10.5:  Suppose T : V → V is linear and V is the direct sum of T-invariant subspaces W₁, ..., W_r. If Aᵢ is a matrix representation of the restriction of T to Wᵢ, then T can be represented by the block diagonal matrix

          M  =  ( A₁  0   ...  0   )
                ( 0   A₂  ...  0   )
                ( ................ )
                ( 0   0   ...  A_r )

The block diagonal matrix M with diagonal entries A₁, ..., A_r is sometimes called the direct sum of the matrices A₁, ..., A_r and denoted by M = A₁ ⊕ A₂ ⊕ ··· ⊕ A_r.
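The direct sum of matrices can be built with a small helper (our own sketch, assuming NumPy; the name `direct_sum` is ours):

```python
import numpy as np

def direct_sum(*blocks):
    """Block diagonal matrix A1 (+) A2 (+) ... (+) Ar."""
    n = sum(b.shape[0] for b in blocks)
    m = sum(b.shape[1] for b in blocks)
    M = np.zeros((n, m))
    i = j = 0
    for b in blocks:
        r, c = b.shape
        M[i:i+r, j:j+c] = b
        i, j = i + r, j + c
    return M

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
M = direct_sum(A, B)

assert M.shape == (4, 4)
assert np.allclose(M[:2, :2], A) and np.allclose(M[2:, 2:], B)
assert np.allclose(M[:2, 2:], 0) and np.allclose(M[2:, :2], 0)

# M maps each summand into itself: a vector in the first summand stays there.
v = np.array([1.0, -1.0, 0.0, 0.0])
assert np.allclose((M @ v)[2:], 0)
```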

CHAP. 10]                           CANONICAL FORMS                                225

PRIMARY DECOMPOSITION
The following theorem shows that any operator T : V → V is decomposable into operators whose minimal polynomials are powers of irreducible polynomials. This is the first step in obtaining a canonical form for T.

Primary Decomposition Theorem 10.6:  Let T : V → V be a linear operator with minimal polynomial

          m(t) = f₁(t)^n₁ f₂(t)^n₂ ··· f_r(t)^n_r

where the fᵢ(t) are distinct monic irreducible polynomials. Then V is the direct sum of T-invariant subspaces W₁, ..., W_r where Wᵢ is the kernel of fᵢ(T)^nᵢ. Moreover, fᵢ(t)^nᵢ is the minimal polynomial of the restriction of T to Wᵢ.

Since the polynomials fᵢ(t)^nᵢ are relatively prime, the above fundamental result follows (Problem 10.11) from the next two theorems.

Theorem 10.7:  Suppose T : V → V is linear, and suppose f(t) = g(t)h(t) are polynomials such that f(T) = 0 and g(t) and h(t) are relatively prime. Then V is the direct sum of the T-invariant subspaces U and W, where U = Ker g(T) and W = Ker h(T).

Theorem 10.8:  In Theorem 10.7, if f(t) is the minimal polynomial of T [and g(t) and h(t) are monic], then g(t) and h(t) are the minimal polynomials of the restrictions of T to U and W respectively.

We will also use the primary decomposition theorem to prove the following useful characterization of diagonalizable operators.

Theorem 10.9:  A linear operator T : V → V has a diagonal matrix representation if and only if its minimal polynomial m(t) is a product of distinct linear polynomials.

Alternate Form of Theorem 10.9:  A matrix A is similar to a diagonal matrix if and only if its minimal polynomial is a product of distinct linear polynomials.

Example 10.4:  Suppose A ≠ I is a square matrix for which A³ = I. Determine whether or not A is similar to a diagonal matrix if A is a matrix over (i) the real field R, (ii) the complex field C.

              Since A³ = I, A is a zero of the polynomial f(t) = t³ − 1 = (t − 1)(t² + t + 1). The minimal polynomial m(t) of A cannot be t − 1, since A ≠ I. Hence

                  m(t) = t² + t + 1    or    m(t) = t³ − 1

              Since neither polynomial is a product of linear polynomials over R, A is not diagonalizable over R. On the other hand, each of the polynomials is a product of distinct linear polynomials over C. Hence A is diagonalizable over C.
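A concrete instance of Example 10.4 is rotation of the plane by 120° (our own illustration, assuming NumPy):

```python
import numpy as np

# Rotation by 120 degrees: A^3 = I but A != I.
c, s = np.cos(2*np.pi/3), np.sin(2*np.pi/3)
A = np.array([[c, -s],
              [s,  c]])

assert np.allclose(np.linalg.matrix_power(A, 3), np.eye(2))
assert not np.allclose(A, np.eye(2))

# Eigenvalues are the nonreal cube roots of unity (real part -1/2, modulus 1),
# so A is diagonalizable over C but has no real eigenvalues at all.
eig = np.linalg.eigvals(A)
assert np.allclose(sorted(eig.real), [-0.5, -0.5])
assert np.allclose(sorted(abs(eig)), [1.0, 1.0])
```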

NILPOTENT OPERATORS
A linear operator T : V → V is termed nilpotent if Tⁿ = 0 for some positive integer n; we call k the index of nilpotency of T if T^k = 0 but T^(k−1) ≠ 0. Analogously, a square matrix A is termed nilpotent if Aⁿ = 0 for some positive integer n, and of index k if A^k = 0 but A^(k−1) ≠ 0. Clearly the minimum polynomial of a nilpotent operator (matrix) of index k is m(t) = t^k; hence 0 is its only eigenvalue.

The fundamental result on nilpotent operators follows.

Theorem 10.10:  Let T : V → V be a nilpotent operator of index k. Then T has a block diagonal matrix representation whose diagonal entries are of the form

226                              CANONICAL FORMS                        [CHAP. 10

          N  =  ( 0  1  0  ...  0  0 )
                ( 0  0  1  ...  0  0 )
                ( ................... )
                ( 0  0  0  ...  0  1 )
                ( 0  0  0  ...  0  0 )

(i.e. all entries of N are 0 except those just above the main diagonal, where they are 1). There is at least one N of order k, and all other N are of orders ≤ k. The number of N of each possible order is uniquely determined by T. Moreover, the total number of N of all orders is equal to the nullity of T.

In the proof of the above theorem, we shall show that the number of N of order i is 2mᵢ − mᵢ₊₁ − mᵢ₋₁, where mᵢ is the nullity of Tⁱ. We remark that the above matrix N is itself nilpotent and that its index of nilpotency is equal to its order (Problem 10.13). Note that the N of order 1 is just the 1 × 1 zero matrix (0).
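The block-counting formula 2mᵢ − mᵢ₊₁ − mᵢ₋₁ can be exercised on a concrete nilpotent matrix (our sketch, assuming NumPy; `nilpotent_block` and `nullity` are our names):

```python
import numpy as np

def nilpotent_block(k):
    """k x k matrix with 1s just above the main diagonal."""
    return np.eye(k, k, 1)

N = nilpotent_block(4)
assert not np.allclose(np.linalg.matrix_power(N, 3), 0)
assert np.allclose(np.linalg.matrix_power(N, 4), 0)   # index = order

# Build T = N4 (+) N2 (+) N2 and recover the block counts from nullities.
T = np.zeros((8, 8))
T[:4, :4] = nilpotent_block(4)
T[4:6, 4:6] = nilpotent_block(2)
T[6:8, 6:8] = nilpotent_block(2)

def nullity(M, i):
    return M.shape[0] - np.linalg.matrix_rank(np.linalg.matrix_power(M, i))

for order, expected in [(1, 0), (2, 2), (3, 0), (4, 1)]:
    count = 2*nullity(T, order) - nullity(T, order + 1) - nullity(T, order - 1)
    assert count == expected
```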

JORDAN CANONICAL FORM
An operator T can be put into Jordan canonical form if its characteristic and minimal polynomials factor into linear polynomials. This is always true if K is the complex field C. In any case, we can always extend the base field K to a field in which the characteristic and minimum polynomials do factor into linear factors; thus in a broad sense every operator has a Jordan canonical form. Analogously, every matrix is similar to a matrix in Jordan canonical form.

Theorem 10.11:  Let T : V → V be a linear operator whose characteristic and minimum polynomials are respectively

          Δ(t) = (t − λ₁)^n₁ ··· (t − λ_r)^n_r    and    m(t) = (t − λ₁)^m₁ ··· (t − λ_r)^m_r

where the λᵢ are distinct scalars. Then T has a block diagonal matrix representation J whose diagonal entries are of the form

          Jᵢⱼ  =  ( λᵢ  1   ...  0   0  )
                  ( 0   λᵢ  ...  0   0  )
                  ( ..................  )
                  ( 0   0   ...  λᵢ  1  )
                  ( 0   0   ...  0   λᵢ )

For each λᵢ the corresponding blocks Jᵢⱼ have the following properties:
(i)   There is at least one Jᵢⱼ of order mᵢ; all other Jᵢⱼ are of order ≤ mᵢ.
(ii)  The sum of the orders of the Jᵢⱼ is nᵢ.
(iii) The number of Jᵢⱼ equals the geometric multiplicity of λᵢ.
(iv)  The number of Jᵢⱼ of each possible order is uniquely determined by T.

The matrix J appearing in the above theorem is called the Jordan canonical form of the operator T. A block Jᵢⱼ is called a Jordan block belonging to the eigenvalue λᵢ.

CHAP. 10]                           CANONICAL FORMS                                227

That is, Jᵢⱼ = λᵢI + N, where N is the nilpotent block appearing in Theorem 10.10. In fact, we prove the above theorem (Problem 10.18) by showing that T can be decomposed into operators, each the sum of a scalar operator and a nilpotent operator.

where is the nilpotent block appearing in Theorem 10.10. In fact, we prove the above theorem (Problem 10.18) by showing that T can be decomposed into operators, each the sum of a scalar and a nilpotent operator. Example 105:

Suppose the characteristic and minimum polynomials of an operator T are respectively A(«)

=

(f-2)4(t-3)3

and

Then the Jordan canonical form of T

m{t)

=

(«-2)2(t-3)2

one of the following matrices:

is

or

The first matrix occurs if T has two independent eigenvectors belonging to its eigenvalue 2; and the second matrix occurs if T has three independent eigenvectors belonging to 2.
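The first possibility of Example 10.5 can be built and probed numerically (our sketch, assuming NumPy; `jordan_block` is our helper name):

```python
import numpy as np

def jordan_block(lam, k):
    """k x k Jordan block belonging to the eigenvalue lam."""
    return lam * np.eye(k) + np.eye(k, k, 1)

# J = J(2;2) (+) J(2;2) (+) J(3;2) (+) J(3;1), total order 4 + 3 = 7.
blocks = [jordan_block(2, 2), jordan_block(2, 2), jordan_block(3, 2), jordan_block(3, 1)]
J = np.zeros((7, 7))
i = 0
for b in blocks:
    k = b.shape[0]
    J[i:i+k, i:i+k] = b
    i += k

I7 = np.eye(7)
# m(t) = (t-2)^2 (t-3)^2 annihilates J ...
m_of_J = np.linalg.matrix_power(J - 2*I7, 2) @ np.linalg.matrix_power(J - 3*I7, 2)
assert np.allclose(m_of_J, 0)
# ... but (t-2)(t-3)^2 does not, so m(t) really is minimal here.
assert not np.allclose((J - 2*I7) @ np.linalg.matrix_power(J - 3*I7, 2), 0)

# Number of Jordan blocks for 2 = geometric multiplicity of 2 = 2.
assert 7 - np.linalg.matrix_rank(J - 2*I7) == 2
```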

CYCLIC SUBSPACES
Let T be a linear operator on a vector space V of finite dimension over K. Suppose v ∈ V and v ≠ 0. The set of all vectors of the form f(T)(v), where f(t) ranges over all polynomials over K, is a T-invariant subspace of V called the T-cyclic subspace of V generated by v; we denote it by Z(v, T) and denote the restriction of T to Z(v, T) by T_v. We could equivalently define Z(v, T) as the intersection of all T-invariant subspaces of V containing v.

Now consider the sequence

          v, T(v), T²(v), T³(v), ...

of powers of T acting on v. Let k be the lowest integer such that T^k(v) is a linear combination of those vectors which precede it in the sequence; say,

          T^k(v) = −a_{k−1}T^{k−1}(v) − ··· − a₁T(v) − a₀v
Then
          m_v(t) = t^k + a_{k−1}t^{k−1} + ··· + a₁t + a₀

is the unique monic polynomial of lowest degree for which m_v(T)(v) = 0. We call m_v(t) the T-annihilator of v and Z(v, T).

The following theorem applies.

Theorem 10.12:  Let Z(v, T), T_v and m_v(t) be defined as above. Then:
(i)   The set {v, T(v), ..., T^{k−1}(v)} is a basis of Z(v, T); hence dim Z(v, T) = k.
(ii)  The minimal polynomial of T_v is m_v(t).
(iii) The matrix representation of T_v in the above basis is

228                              CANONICAL FORMS                        [CHAP. 10

          C  =  ( 0  0  ...  0  −a₀      )
                ( 1  0  ...  0  −a₁      )
                ( 0  1  ...  0  −a₂      )
                ( ...................... )
                ( 0  0  ...  1  −a_{k−1} )

The above matrix C is called the companion matrix of the polynomial m_v(t).
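A companion matrix is a zero of its own polynomial, which makes (ii) of Theorem 10.12 easy to sanity-check (our sketch, assuming NumPy; `companion` is our helper name):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of t^k + a_{k-1} t^{k-1} + ... + a_0,
    where coeffs = [a_0, a_1, ..., a_{k-1}]."""
    k = len(coeffs)
    C = np.zeros((k, k))
    C[1:, :-1] = np.eye(k - 1)        # 1s below the main diagonal
    C[:, -1] = -np.asarray(coeffs)    # last column: -a_0, ..., -a_{k-1}
    return C

# Companion matrix of t^2 - t + 3 (cf. Example 10.6): a_0 = 3, a_1 = -1.
C = companion([3, -1])
assert np.allclose(C, [[0, -3],
                       [1,  1]])

# C satisfies its own polynomial: C^2 - C + 3I = 0.
assert np.allclose(C @ C - C + 3*np.eye(2), 0)
```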

RATIONAL CANONICAL FORM
In this section we present the rational canonical form for a linear operator T : V → V. We emphasize that this form exists even when the minimal polynomial cannot be factored into linear polynomials. (Recall that this is not the case for the Jordan canonical form.)

Lemma 10.13:  Let T : V → V be a linear operator whose minimal polynomial is f(t)ⁿ where f(t) is a monic irreducible polynomial. Then V is the direct sum

          V = Z(v₁, T) ⊕ ··· ⊕ Z(v_r, T)

of T-cyclic subspaces Z(vᵢ, T) with corresponding T-annihilators

          f(t)^n₁, f(t)^n₂, ..., f(t)^n_r,        n = n₁ ≥ n₂ ≥ ··· ≥ n_r

Any other decomposition of V into T-cyclic subspaces has the same number of components and the same set of T-annihilators.

We emphasize that the above lemma does not say that the vectors vᵢ or the T-cyclic subspaces Z(vᵢ, T) are uniquely determined by T; but it does say that the set of T-annihilators is uniquely determined by T. Thus T has a unique block diagonal matrix representation

          ( C₁           )
          (    C₂        )
          (       ...    )
          (          C_r )

where the Cᵢ are companion matrices. In fact, the Cᵢ are the companion matrices of the polynomials f(t)^nᵢ.

Using the primary decomposition theorem and the above lemma, we obtain the following fundamental result.

Theorem 10.14:  Let T : V → V be a linear operator with minimal polynomial

          m(t) = f₁(t)^m₁ f₂(t)^m₂ ··· f_s(t)^m_s

where the fᵢ(t) are distinct monic irreducible polynomials. Then T has a unique block diagonal matrix representation

          ( C₁₁                )
          (     ...            )
          (         C₁ᵣ₁       )
          (             ...    )
          (                C_srₛ )

where the Cᵢⱼ are companion matrices. In particular, the Cᵢⱼ are the companion matrices of the polynomials fᵢ(t)^nᵢⱼ where

          m₁ = n₁₁ ≥ n₁₂ ≥ ··· ≥ n₁ᵣ₁,   ...,   m_s = nₛ₁ ≥ nₛ₂ ≥ ··· ≥ nₛᵣₛ

CHAP. 10]                           CANONICAL FORMS                                229

The above matrix representation of T is called its rational canonical form. The polynomials fᵢ(t)^nᵢⱼ are called the elementary divisors of T.

Example 10.6:  Let V be a vector space of dimension 6 over R, and let T be a linear operator whose minimal polynomial is m(t) = (t² − t + 3)(t − 2)². Then the rational canonical form of T is one of the following direct sums of companion matrices:

                   (i)   C(t² − t + 3) ⊕ C(t² − t + 3) ⊕ C((t − 2)²)
                   (ii)  C(t² − t + 3) ⊕ C((t − 2)²) ⊕ C((t − 2)²)
                   (iii) C(t² − t + 3) ⊕ C((t − 2)²) ⊕ C(t − 2) ⊕ C(t − 2)

               where C(f(t)) is the companion matrix of f(t).

are bases for Z(v₁, T) and Z(v₂, T) respectively, as required. It remains to show that the exponents n₁, ..., n_r are uniquely determined by T.

Since d denotes the degree of f(t), dim Zᵢ = d·nᵢ and dim V = d(n₁ + ··· + n_r). Any vector v ∈ V can be written uniquely in the form v = w₁ + ··· + w_r where wᵢ ∈ Zᵢ; hence any vector in f(T)^s(V) can be written uniquely in the form

          f(T)^s(v) = f(T)^s(w₁) + ··· + f(T)^s(w_r)
and so
          f(T)^s(V) = f(T)^s(Z₁) ⊕ ··· ⊕ f(T)^s(Z_r)

Also, if s is any positive integer then (Problem 10.59) f(T)^s(Zᵢ) is a cyclic subspace generated by f(T)^s(vᵢ), and it has dimension d(nᵢ − s) if nᵢ > s and dimension 0 if nᵢ ≤ s. Let t be the integer, dependent on s, for which

          n₁ > s, ..., n_t > s,    n_{t+1} ≤ s
Then
          dim(f(T)^s(V)) = d[(n₁ − s) + ··· + (n_t − s)]                    (1)

Since f(T)^s(V) depends only on T and s, formula (1) shows that the number of the nᵢ exceeding s is determined by T for every s; hence the set of exponents n₁ ≥ n₂ ≥ ··· ≥ n_r is uniquely determined by T.

On the other hand, since f(t)^n₁ is the minimal polynomial of T, the T-annihilator of the coset v̄₂ is f(t)^n₂, and the T-annihilator of any vector of the coset is a multiple of the T-annihilator of the coset. Writing such a vector as w − h(T)(v₁) for a suitable polynomial h(t), one shows that f(T)^n₂(w − h(T)(v₁)) = 0; that is, there is a vector v₂ in the coset v̄₂ whose T-annihilator is f(t)^n₂. Consequently, by induction, the decomposition can be chosen as asserted.

Find the basis {phi1, phi2, phi3} which is dual to the following basis of R^3:

    {v1 = (1, -1, 3), v2 = (0, 1, -1), v3 = (0, 3, -2)}

We set

    phi1(x, y, z) = a1 x + a2 y + a3 z,  phi2(x, y, z) = b1 x + b2 y + b3 z,  phi3(x, y, z) = c1 x + c2 y + c3 z

By definition of the dual basis, phi_i(vj) = 0 for i != j and phi_i(vi) = 1.

We find phi1 from
    phi1(v1) = phi1(1, -1, 3) = a1 - a2 + 3a3 = 1
    phi1(v2) = phi1(0, 1, -1) = a2 - a3 = 0
    phi1(v3) = phi1(0, 3, -2) = 3a2 - 2a3 = 0
Solving the system of equations, we obtain a1 = 1, a2 = 0, a3 = 0. Thus phi1(x, y, z) = x.

We next find phi2 from
    phi2(v1) = phi2(1, -1, 3) = b1 - b2 + 3b3 = 0
    phi2(v2) = phi2(0, 1, -1) = b2 - b3 = 1
    phi2(v3) = phi2(0, 3, -2) = 3b2 - 2b3 = 0
Solving the system, we obtain b1 = 7, b2 = -2, b3 = -3. Hence phi2(x, y, z) = 7x - 2y - 3z.

Finally, we find phi3 from
    phi3(v1) = phi3(1, -1, 3) = c1 - c2 + 3c3 = 0
    phi3(v2) = phi3(0, 1, -1) = c2 - c3 = 0
    phi3(v3) = phi3(0, 3, -2) = 3c2 - 2c3 = 1
Solving the system, we obtain c1 = -2, c2 = 1, c3 = 1. Thus phi3(x, y, z) = -2x + y + z.
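The three linear systems above amount to inverting the matrix whose rows are the vi. A quick numerical check (numpy assumed; the identity Phi @ M.T = I used below is just the dual-basis condition phi_i(vj) = delta_ij):

```python
import numpy as np

# basis of R^3 from the problem, one vector per row
M = np.array([[1.0, -1.0,  3.0],
              [0.0,  1.0, -1.0],
              [0.0,  3.0, -2.0]])

# Row i of Phi holds the coefficients of phi_i, so the dual-basis
# condition phi_i(v_j) = delta_ij reads Phi @ M.T = I.
Phi = np.linalg.inv(M.T)
```

The rows of Phi reproduce the functionals found by hand: x, 7x - 2y - 3z, and -2x + y + z.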

Recall that W0 = {phi in V* : phi(w) = 0 for every w in W} is the annihilator of W. Two basic properties:

(i) S is contained in S00. For, let v be in S. Then for every phi in S0, v-hat(phi) = phi(v) = 0; hence v-hat lies in (S0)0. Identifying v with v-hat, we get S contained in S00.

(ii) If S1 is contained in S2, then S20 is contained in S10. For, suppose phi is in S20. Then phi annihilates every element of S2. But S1 is contained in S2; hence phi annihilates every element of S1, i.e. phi is in S10.

Now suppose V has finite dimension n and W is a subspace of V. We want to show that (i) dim W + dim W0 = n and (ii) W00 = W. Choose a basis {w1, ..., wr} of W and extend it to the following basis of V:

    {w1, ..., wr, v1, ..., v(n-r)}

Consider the dual basis

    {phi1, ..., phir, sigma1, ..., sigma(n-r)}

By definition of the dual basis, each sigma_i annihilates each wj; hence sigma1, ..., sigma(n-r) belong to W0. One shows that {sigma_i} is a basis of W0, so that dim W0 = n - r = n - dim W, which proves (i). Then (ii) follows by applying (i) twice, since W is contained in W00 and dim W00 = n - dim W0 = dim W.

Hence nullity(T^t) = m - r. It then follows that, as claimed (Theorem 11.5),

    rank(T^t) = m - nullity(T^t) = m - (m - r) = r = rank(T)

Prove Theorem 11.7: Let T: V -> U be linear and let A be the matrix representation of T relative to bases {v1, ..., vm} of V and {u1, ..., un} of U. Then the transpose matrix A^t is the matrix representation of T^t: U* -> V* relative to the bases dual to {ui} and {vj}.

Suppose
    T(v1) = a11 u1 + a12 u2 + ... + a1n un
    T(v2) = a21 u1 + a22 u2 + ... + a2n un
    ..........................................
    T(vm) = am1 u1 + am2 u2 + ... + amn un

258  LINEAR FUNCTIONALS AND THE DUAL SPACE  [CHAP. 11

We want to prove that
    T^t(sigma1) = a11 phi1 + a21 phi2 + ... + am1 phim
    T^t(sigma2) = a12 phi1 + a22 phi2 + ... + am2 phim
    ....................................................
    T^t(sigman) = a1n phi1 + a2n phi2 + ... + amn phim
where {sigma_i} and {phi_j} are the bases dual to {ui} and {vj} respectively. Let v be in V and suppose v = k1 v1 + ... + km vm. Then, for j = 1, ..., n,

    T^t(sigmaj)(v) = sigmaj(T(v)) = sigmaj(k1 T(v1) + ... + km T(vm)) = k1 a1j + k2 a2j + ... + km amj

On the other hand,

    (a1j phi1 + a2j phi2 + ... + amj phim)(v) = k1 a1j + k2 a2j + ... + km amj

Since v in V was arbitrary, T^t(sigmaj) = a1j phi1 + ... + amj phim for each j; that is, the matrix of T^t in the dual bases is A^t.

In the next chapter we will see how a real quadratic form q transforms when the transition matrix P is "orthogonal". If no condition is placed on P, then q can be represented in diagonal form with only 1's and -1's as nonzero coefficients. Specifically,

Corollary 12.6: Any real quadratic form q has a unique representation in the form

    q(x1, ..., xn) = x1^2 + ... + xs^2 - x(s+1)^2 - ... - xr^2

The above result for real quadratic forms is sometimes referred to as the law of inertia or Sylvester's theorem.

268  BILINEAR, QUADRATIC AND HERMITIAN FORMS  [CHAP. 12

Let f be the bilinear form on R^2 defined by

    f((x1, x2), (y1, y2)) = 2 x1 y1 - 3 x1 y2 + x2 y2

(i) Find the matrix A of f in the basis {u1 = (1, 0), u2 = (1, 1)}.
(ii) Find the matrix B of f in the basis {v1 = (2, 1), v2 = (1, -1)}.
(iii) Find the transition matrix P from the basis {ui} to the basis {vi}, and verify that B = P^t A P.

(i) Set A = (aij) where aij = f(ui, uj):
    a11 = f((1,0), (1,0)) = 2        a12 = f((1,0), (1,1)) = 2 - 3 = -1
    a21 = f((1,1), (1,0)) = 2        a22 = f((1,1), (1,1)) = 2 - 3 + 1 = 0
Thus
    A = [ 2  -1 ]
        [ 2   0 ]
is the matrix of f in the basis {u1, u2}.

(ii) Compute f on the vi:
    f(v1, v1) = f((2,1), (2,1))  = 8 - 6 + 1 = 3     f(v1, v2) = f((2,1), (1,-1))  = 4 + 6 - 1 = 9
    f(v2, v1) = f((1,-1), (2,1)) = 4 - 3 - 1 = 0     f(v2, v2) = f((1,-1), (1,-1)) = 2 + 3 + 1 = 6
Thus
    B = [ 3  9 ]
        [ 0  6 ]
is the matrix of f in the basis {v1, v2}.

(iii) We must write v1 and v2 in terms of the ui:
    v1 = (2, 1)  = (1, 0) + (1, 1)  = u1 + u2
    v2 = (1, -1) = 2(1, 0) - (1, 1) = 2 u1 - u2
Then
    P = [ 1   2 ]     and     P^t = [ 1   1 ]
        [ 1  -1 ]                   [ 2  -1 ]
and so
    P^t A P = [ 1   1 ] [ 2  -1 ] [ 1   2 ]  =  [ 3  9 ]  =  B
              [ 2  -1 ] [ 2   0 ] [ 1  -1 ]     [ 0  6 ]
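The matrices in (i)-(iii) are easy to re-derive numerically. A sketch (numpy assumed; `bf` is an ad hoc name for the form):

```python
import numpy as np

def bf(x, y):
    # f((x1, x2), (y1, y2)) = 2 x1 y1 - 3 x1 y2 + x2 y2
    return 2*x[0]*y[0] - 3*x[0]*y[1] + x[1]*y[1]

u = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]   # basis {u1, u2}
v = [np.array([2.0, 1.0]), np.array([1.0, -1.0])]  # basis {v1, v2}

A = np.array([[bf(a, b) for b in u] for a in u])   # matrix of f in {ui}
B = np.array([[bf(a, b) for b in v] for a in v])   # matrix of f in {vi}

# transition matrix: columns are the coordinates of v1, v2 in {u1, u2}
P = np.array([[1.0,  2.0],
              [1.0, -1.0]])
```

The congruence relation B = P^t A P is exactly Theorem 12.2 applied to this change of basis.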

12.4. Prove Theorem 12.1: Let V be a vector space of dimension n over K. Let {phi1, ..., phin} be a basis of the dual space V*. Then {fij : i, j = 1, ..., n} is a basis of B(V), where fij is defined by fij(u, v) = phi_i(u) phi_j(v). Thus, in particular, dim B(V) = n^2.

Let {e1, ..., en} be the basis of V dual to {phi_i}. We first show that {fij} spans B(V). Let f belong to B(V) and suppose f(ei, ej) = aij. We claim that f = SUM aij fij. It suffices to show that

    f(es, et) = (SUM aij fij)(es, et)     for s, t = 1, ..., n

We have

    (SUM aij fij)(es, et) = SUM aij fij(es, et) = SUM aij phi_i(es) phi_j(et) = SUM aij delta_is delta_jt = ast = f(es, et)

as required. Hence {fij} spans B(V). It remains to show that {fij} is linearly independent. Suppose SUM aij fij = 0. Then for s, t = 1, ..., n,

    0 = (SUM aij fij)(es, et) = ast

where the last step follows as above. Thus {fij} is independent and hence is a basis of B(V).

12.5. Let [f] denote the matrix representation of a bilinear form f on V relative to a basis {e1, ..., en} of V. Show that the mapping f |-> [f] is an isomorphism of B(V) onto the vector space of n-square matrices.

Since f is completely determined by the scalars f(ei, ej), the mapping f |-> [f] is one-to-one and onto. It suffices to show that the mapping is a homomorphism; that is, that

    [af + bg] = a[f] + b[g]        (*)

However, for i, j = 1, ..., n,

    (af + bg)(ei, ej) = a f(ei, ej) + b g(ei, ej)

which is a restatement of (*). Thus the result is proved.

12.6. Prove Theorem 12.2: Let P be the transition matrix from one basis {ei} to another basis {e'i}. If A is the matrix of f in the original basis {ei}, then B = P^t A P is the matrix of f in the new basis {e'i}.

Let u, v belong to V. Since P is the transition matrix from {ei} to {e'i}, we have P[u]e' = [u]e and P[v]e' = [v]e; hence [u]e^t = [u]e'^t P^t. Thus

    f(u, v) = [u]e^t A [v]e = [u]e'^t P^t A P [v]e'

Since u and v are arbitrary elements of V, P^t A P is the matrix of f in the new basis {e'i}.

SYMMETRIC BILINEAR FORMS. QUADRATIC FORMS

12.7. Find the symmetric matrix which corresponds to each of the following quadratic polynomials:
    (i)  q(x, y) = 4x^2 - 6xy - 7y^2        (iii) q(x, y, z) = 3x^2 + 4xy - y^2 + 8xz - 6yz + z^2
    (ii) q(x, y) = xy + y^2                 (iv)  q(x, y, z) = x^2 - 2yz + xz

The symmetric matrix A = (aij) representing q(x1, ..., xn) has the diagonal entry aii equal to the coefficient of xi^2 and has the entries aij and aji each equal to half the coefficient of xi xj. Thus

    (i)  A = [ 4  -3 ]      (ii) A = [ 0   1/2 ]
             [ -3 -7 ]               [ 1/2  1  ]

    (iii) A = [ 3   2   4 ]      (iv) A = [ 1    0   1/2 ]
              [ 2  -1  -3 ]               [ 0    0   -1  ]
              [ 4  -3   1 ]               [ 1/2  -1   0  ]
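The rule just stated is mechanical and easy to automate. A sketch (numpy assumed; `sym_matrix` is an illustrative helper with the cross terms given as a dict keyed by index pairs):

```python
import numpy as np

def sym_matrix(diag, cross):
    """Symmetric matrix of a quadratic polynomial: diag[i] is the
    coefficient of x_i^2; cross[(i, j)] with i < j is the coefficient
    of x_i x_j, split in half between entries (i, j) and (j, i)."""
    A = np.diag([float(d) for d in diag])
    for (i, j), c in cross.items():
        A[i, j] = A[j, i] = c / 2.0
    return A

# (i)   q(x, y) = 4x^2 - 6xy - 7y^2
A1 = sym_matrix([4, -7], {(0, 1): -6})
# (iii) q(x, y, z) = 3x^2 + 4xy - y^2 + 8xz - 6yz + z^2
A3 = sym_matrix([3, -1, 1], {(0, 1): 4, (0, 2): 8, (1, 2): -6})
```

Evaluating x^t A x at any point reproduces the original polynomial, which is a convenient sanity check on the entries.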

12.8. For each of the following real symmetric matrices A, find a nonsingular matrix P such that P^t A P is diagonal, and also find its signature:

    (i)  A = [ 1  -3   2 ]      (ii) A = [ 0   1   1 ]
             [ -3  7  -5 ]               [ 1  -2   2 ]
             [ 2  -5   8 ]               [ 1   2  -1 ]

(i) First form the block matrix (A, I):

    (A, I) = [ 1  -3   2 | 1  0  0 ]
             [ -3  7  -5 | 0  1  0 ]
             [ 2  -5   8 | 0  0  1 ]

Apply the row operations R2 -> 3R1 + R2 and R3 -> -2R1 + R3 to (A, I), and then the corresponding column operations C2 -> 3C1 + C2 and C3 -> -2C1 + C3 to A, to obtain

    [ 1  -3   2 | 1  0  0 ]            [ 1   0   0 | 1  0  0 ]
    [ 0  -2   1 | 3  1  0 ]   and then [ 0  -2   1 | 3  1  0 ]
    [ 0   1   4 | -2 0  1 ]            [ 0   1   4 | -2 0  1 ]

Next apply the row operation R3 -> R2 + 2R3 and then the corresponding column operation C3 -> C2 + 2C3 to obtain

    [ 1   0   0 | 1  0  0 ]            [ 1   0   0 | 1  0  0 ]
    [ 0  -2   1 | 3  1  0 ]   and then [ 0  -2   0 | 3  1  0 ]
    [ 0   0   9 | -1 1  2 ]            [ 0   0  18 | -1 1  2 ]

Now A has been diagonalized. Set

    P = [ 1  3  -1 ]      then  P^t A P = diag(1, -2, 18)
        [ 0  1   1 ]
        [ 0  0   2 ]

The signature S of A is S = 2 - 1 = 1.

(ii) First form the block matrix (A, I):

    (A, I) = [ 0   1   1 | 1  0  0 ]
             [ 1  -2   2 | 0  1  0 ]
             [ 1   2  -1 | 0  0  1 ]

In order to bring the nonzero diagonal entry -1 into the first diagonal position, apply the row operation R1 <-> R3 and then the corresponding column operation C1 <-> C3 to obtain

    [ 1   2  -1 | 0  0  1 ]            [ -1  2   1 | 0  0  1 ]
    [ 1  -2   2 | 0  1  0 ]   and then [ 2  -2   1 | 0  1  0 ]
    [ 0   1   1 | 1  0  0 ]            [ 1   1   0 | 1  0  0 ]

Apply the row operations R2 -> 2R1 + R2 and R3 -> R1 + R3, and then the corresponding column operations C2 -> 2C1 + C2 and C3 -> C1 + C3, to obtain

    [ -1  2   1 | 0  0  1 ]            [ -1  0   0 | 0  0  1 ]
    [ 0   2   3 | 0  1  2 ]   and then [ 0   2   3 | 0  1  2 ]
    [ 0   3   1 | 1  0  1 ]            [ 0   3   1 | 1  0  1 ]

Apply the row operation R3 -> -3R2 + 2R3 and then the corresponding column operation C3 -> -3C2 + 2C3 to obtain

    [ -1  0   0 | 0   0   1 ]            [ -1  0    0 | 0   0   1 ]
    [ 0   2   3 | 0   1   2 ]   and then [ 0   2    0 | 0   1   2 ]
    [ 0   0  -7 | 2  -3  -4 ]            [ 0   0  -14 | 2  -3  -4 ]

Now A has been diagonalized. Set

    P = [ 0  0   2 ]      then  P^t A P = diag(-1, 2, -14)
        [ 0  1  -3 ]
        [ 1  2  -4 ]

The signature S of A is S = 1 - 2 = -1.

12.9. Suppose 1 + 1 != 0 in K. Give a formal algorithm to diagonalize (under congruence) a symmetric matrix A = (aij) over K.

Case I: a11 != 0. Apply the row operations Ri -> -a_i1 R1 + a11 Ri, i = 2, ..., n, and then the corresponding column operations Ci -> -a_1i C1 + a11 Ci, to reduce A to the form

    [ a11  0 ]
    [ 0    B ]

Case II: a11 = 0 but aii != 0, for some i > 1. Apply the row operation R1 <-> Ri and then the corresponding column operation C1 <-> Ci to bring aii into the first diagonal position. This reduces the matrix to Case I.

Case III: All diagonal entries aii = 0. Choose i, j such that aij != 0, and apply the row operation Ri -> Rj + Ri and the corresponding column operation Ci -> Cj + Ci to bring 2aij != 0 into the ith diagonal position. This reduces the matrix to Case II.

In each of the cases, we can finally reduce A to the form above where B is a symmetric matrix of order less than A. By induction we can finally bring A into diagonal form.

Remark: The hypothesis that 1 + 1 != 0 in K is used in Case III where we state that 2aij != 0.
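The three cases translate directly into code. A sketch over the reals (numpy assumed; `congruence_diagonalize` is an illustrative name, and Case I is applied in the scaled form Ri -> Ri - (a_ik/a_kk) Rk rather than the integer form used in the text, which changes the diagonal entries but not the signature):

```python
import numpy as np

def congruence_diagonalize(A):
    """Diagonalize a real symmetric matrix under congruence:
    returns (D, P) with P nonsingular and P.T @ A @ P = D diagonal."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    P = np.eye(n)
    k = 0
    while k < n:
        if A[k, k] == 0:
            i = next((i for i in range(k + 1, n) if A[i, i] != 0), None)
            if i is not None:
                # Case II: swap to bring a nonzero diagonal entry to (k, k)
                A[[k, i]] = A[[i, k]]; A[:, [k, i]] = A[:, [i, k]]
                P[:, [k, i]] = P[:, [i, k]]
            else:
                pair = next(((i, j) for i in range(k, n)
                             for j in range(i + 1, n) if A[i, j] != 0), None)
                if pair is None:      # remaining block is zero: done
                    break
                # Case III: R_i -> R_j + R_i, C_i -> C_j + C_i puts
                # 2*a_ij != 0 on the diagonal; then retry this k
                i, j = pair
                A[i] += A[j]; A[:, i] += A[:, j]
                P[:, i] += P[:, j]
            continue
        # Case I: clear row and column k using the pivot A[k, k]
        for i in range(k + 1, n):
            if A[i, k] != 0:
                m = A[i, k] / A[k, k]
                A[i] -= m * A[k]; A[:, i] -= m * A[:, k]
                P[:, i] -= m * P[:, k]
        k += 1
    return np.diag(np.diag(A)), P

# Problem 12.8(i): congruent to a diagonal matrix of signature 1
A = np.array([[1.0, -3.0,  2.0],
              [-3.0, 7.0, -5.0],
              [2.0, -5.0,  8.0]])
D, P = congruence_diagonalize(A)
```

Each step applies a column operation E on the right and the corresponding E^t on the left, so P accumulates the product of the column operations and P^t A P = D.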

12.10. Let q be the quadratic form associated with the symmetric bilinear form f. Verify the following polar form of f (assume that 1 + 1 != 0):

    f(u, v) = (1/2)(q(u + v) - q(u) - q(v))

We have

    q(u + v) - q(u) - q(v) = f(u + v, u + v) - f(u, u) - f(v, v)
                           = f(u, u) + f(u, v) + f(v, u) + f(v, v) - f(u, u) - f(v, v)
                           = 2 f(u, v)

If 1 + 1 != 0, we can divide by 2 to obtain the required identity.
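The polar identity is easy to spot-check numerically for any symmetric matrix (numpy assumed; the matrix and test vectors below are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, -1.5],
              [-1.5, 5.0]])          # any symmetric matrix

def f(u, v):                          # symmetric bilinear form
    return u @ A @ v

def q(v):                             # associated quadratic form
    return f(v, v)

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
polar = 0.5 * (q(u + v) - q(u) - q(v))
```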

12.11. Prove Theorem 12.4: Let f be a symmetric bilinear form on V over K (in which 1 + 1 != 0). Then V has a basis {v1, ..., vn} in which f is represented by a diagonal matrix, i.e. f(vi, vj) = 0 for i != j.

Method 1. If f = 0 or dim V = 1, then the theorem clearly holds. Hence we can suppose f != 0 and dim V = n > 1. If q(v) = f(v, v) = 0 for every v in V, then the polar form of f (see Problem 12.10) implies that f = 0. Hence we can assume there is a vector v1 in V such that f(v1, v1) != 0. Let U be the subspace spanned by v1 and let W consist of those vectors v in V for which f(v1, v) = 0. We claim that V = U (+) W.

(i) Proof that U n W = {0}: Suppose u lies in U n W. Since u is in U, u = k v1 for some scalar k in K. Since u is in W, 0 = f(u, u) = f(k v1, k v1) = k^2 f(v1, v1). But f(v1, v1) != 0; hence k = 0 and therefore u = k v1 = 0. Thus U n W = {0}.

(ii) Proof that V = U + W: Let v be in V. Set

    w = v - (f(v1, v) / f(v1, v1)) v1        (1)

Then

    f(v1, w) = f(v1, v) - (f(v1, v) / f(v1, v1)) f(v1, v1) = 0

Thus w is in W. By (1), v is the sum of an element of U and an element of W; thus V = U + W. By (i) and (ii), V = U (+) W.

Now f restricted to W is a symmetric bilinear form on W. But dim W = n - 1; hence by induction there is a basis {v2, ..., vn} of W such that f(vi, vj) = 0 for i != j and 2 <= i, j <= n. But by the very definition of W, f(v1, vj) = 0 for j = 2, ..., n. Therefore the basis {v1, ..., vn} of V has the required property that f(vi, vj) = 0 for i != j.

Method 2. The algorithm in Problem 12.9 shows that every symmetric matrix over K is congruent to a diagonal matrix. This is equivalent to the statement that f has a diagonal matrix representation.

12.12. Let A = diag(a1, ..., an) be a diagonal matrix over K. Show that:
(i) for any nonzero scalars k1, ..., kn in K, A is congruent to the diagonal matrix with diagonal entries ai ki^2;
(ii) if K is the complex field C, then A is congruent to a diagonal matrix with only 1's and 0's as diagonal entries;
(iii) if K is the real field R, then A is congruent to a diagonal matrix with only 1's, -1's and 0's as diagonal entries.

(i) Let P be the diagonal matrix with diagonal entries ki. Then

    P^t A P = diag(k1, ..., kn) diag(a1, ..., an) diag(k1, ..., kn) = diag(a1 k1^2, ..., an kn^2)

(ii) Let P be the diagonal matrix with diagonal entries bi = 1/sqrt(ai) if ai != 0 and bi = 1 if ai = 0. Then P^t A P has the required form.

(iii) Let P be the diagonal matrix with diagonal entries bi = 1/sqrt(|ai|) if ai != 0 and bi = 1 if ai = 0. Then P^t A P has the required form.

Remark. We emphasize that (ii) is no longer true if congruence is replaced by Hermitian congruence (see Problems 12.40 and 12.41).
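Part (iii) is the scaling step used to reach Sylvester's normal form; a sketch for real diagonal matrices (numpy assumed; `unit_scaling` is an illustrative name):

```python
import numpy as np

def unit_scaling(D):
    """For real diagonal D, return diagonal P such that P.T @ D @ P
    has only +1, -1, 0 on the diagonal (scale each entry by 1/sqrt|d|)."""
    d = np.diag(D)
    scale = [1.0 / np.sqrt(abs(x)) if x != 0 else 1.0 for x in d]
    return np.diag(scale)

D = np.diag([4.0, -9.0, 0.0])
P = unit_scaling(D)
```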

12.13. Prove Theorem 12.5: Let f be a symmetric bilinear form on V over R. Then there is a basis of V in which f is represented by a diagonal matrix, and every other diagonal representation of f has the same number of positive entries and the same number of negative entries.

By Theorem 12.4, there is a basis {u1, ..., un} of V in which f is represented by a diagonal matrix, say, with P positive and N negative entries. Now suppose {w1, ..., wn} is another basis of V in which f is represented by a diagonal matrix, say, with P' positive and N' negative entries. We can assume without loss in generality that the positive entries in each matrix appear first. Since rank(f) = P + N = P' + N', it suffices to prove that P = P'.

Let U be the linear span of u1, ..., uP and let W be the linear span of w(P'+1), ..., wn. Then f(v, v) > 0 for every nonzero v in U, and f(v, v) <= 0 for every nonzero v in W. Hence U n W = {0}. Note that dim U = P and dim W = n - P'. Thus

    dim(U + W) = dim U + dim W - dim(U n W) = P + (n - P') - 0 = P - P' + n

But dim(U + W) <= dim V = n; hence P - P' + n <= n, or P <= P'. Similarly P' <= P, and therefore P = P', as required.

Remark. The above theorem and proof depend only on the concept of positivity. Thus the theorem is true for any subfield K of the real field R.
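Sylvester's theorem says the counts of positive and negative diagonal entries (the inertia) are congruence invariants; for a real symmetric matrix they can be read off the signs of its eigenvalues. A numerical spot-check (numpy assumed; Q below is an arbitrary nonsingular matrix):

```python
import numpy as np

def inertia(A):
    """Counts (p, n, z) of positive, negative, zero eigenvalues of a
    real symmetric matrix -- a congruence invariant by Sylvester's
    law of inertia."""
    w = np.linalg.eigvalsh(A)
    tol = 1e-9 * max(1.0, float(np.abs(w).max()))
    return (int((w > tol).sum()), int((w < -tol).sum()),
            int((np.abs(w) <= tol).sum()))

A = np.array([[1.0, -3.0,  2.0],
              [-3.0, 7.0, -5.0],
              [2.0, -5.0,  8.0]])   # from Problem 12.8(i)
Q = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])     # an arbitrary nonsingular matrix
B = Q.T @ A @ Q                     # congruent to A
```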

12.14. An n x n real symmetric matrix A is said to be positive definite if X^t A X > 0 for every nonzero (column) vector X in R^n, i.e. if A is positive definite viewed as a bilinear form. Let B be any real nonsingular matrix. Show that (i) B^t B is symmetric and (ii) B^t B is positive definite.

(i) (B^t B)^t = B^t (B^t)^t = B^t B; hence B^t B is symmetric.

(ii) Since B is nonsingular, B X != 0 for any nonzero X in R^n. Hence X^t (B^t B) X = (B X)^t (B X) = ||B X||^2 > 0.

Conversely, suppose A is a real symmetric positive definite matrix. Show that there exists a nonsingular matrix P such that A = P^t P.

12.35. Consider a real quadratic polynomial q(x1, ..., xn) = SUM aij xi xj, where aij = aji.

(i) If a11 != 0, show that the substitution

    x1 = y1 - (1/a11)(a12 y2 + ... + a1n yn),  x2 = y2,  ...,  xn = yn

yields the equation q(x1, ..., xn) = a11 y1^2 + q'(y2, ..., yn), where q' is also a quadratic polynomial.

(ii) If a11 = 0 but, say, a12 != 0, show that the substitution

    x1 = y1 + y2,  x2 = y1 - y2,  x3 = y3,  ...,  xn = yn

yields the equation q(x1, ..., xn) = SUM bij yi yj, where b11 != 0, i.e. reduces this case to case (i).

This method of diagonalizing q is known as "completing the square".

12.36. Use steps of the type in the preceding problem to reduce each quadratic polynomial in Problem 12.29 to diagonal form. Find the rank and signature in each case.

HERMITIAN FORMS

12.37. For any complex matrices A, B and any k in C, show that (with a bar denoting complex conjugation): (i) conj(A + B) = conj(A) + conj(B), (ii) conj(kA) = conj(k) conj(A), (iii) conj(AB) = conj(A) conj(B), (iv) conj(A^t) = (conj(A))^t.

12.38. For each of the following Hermitian matrices H, find a nonsingular matrix P such that P* H P is diagonal, and find the rank and signature in each case.

12.39. Let A be any complex nonsingular matrix. Show that H = A* A is Hermitian and positive definite.

12.40. We say that B is Hermitian congruent to A if there exists a nonsingular matrix Q such that B = Q* A Q. Show that Hermitian congruence is an equivalence relation.

12.41. Prove Theorem 12.7: Let f be a Hermitian form on V. Then there exists a basis {e1, ..., en} of V in which f is represented by a diagonal matrix, i.e. f(ei, ej) = 0 for i != j. Moreover, every diagonal representation of f has the same number P of positive entries and the same number N of negative entries. (Note that the second part of the theorem does not hold for complex symmetric bilinear forms, as seen by Problem 12.12(ii). However, the proof of Theorem 12.5 in Problem 12.13 does carry over to the Hermitian case.)

MISCELLANEOUS PROBLEMS

12.42. Consider the following elementary row operations: ...

DIAGONALIZATION AND CANONICAL FORMS IN EUCLIDEAN SPACES

Let T be a linear operator on a finite dimensional inner product space V over K. Representing T by a diagonal matrix depends upon the eigenvectors and eigenvalues of T, and hence upon the roots of the characteristic polynomial D(t) of T (Theorem 9.6). Now D(t) always factors into linear polynomials over the complex field C, but may not have any linear polynomials over the real field R. Thus the situation for Euclidean spaces (where K = R) is inherently different than that for unitary spaces (where K = C); hence we treat them separately. We investigate Euclidean spaces below, and unitary spaces in the next section.

Theorem 13.14: Let T be a symmetric (self-adjoint) operator on a real finite dimensional inner product space V. Then there exists an orthonormal basis of V consisting of eigenvectors of T; that is, T can be represented by a diagonal matrix relative to an orthonormal basis.

We give the corresponding statement for matrices.

Alternate Form of Theorem 13.14: Let A be a real symmetric matrix. Then there exists an orthogonal matrix P such that B = P^(-1) A P = P^t A P is diagonal.

We can choose the columns of the above matrix P to be normalized orthogonal eigenvectors of A; then the diagonal entries of B are the corresponding eigenvalues.

CHAP. 13]  INNER PRODUCT SPACES  289

Example 13.18: Let

    A = [ 2  -2 ]
        [ -2  5 ]

We find an orthogonal matrix P such that P^t A P is diagonal. The characteristic polynomial D(t) of A is

    D(t) = |tI - A| = det [ t-2   2  ] = (t - 6)(t - 1)
                          [ 2    t-5 ]

The eigenvalues of A are 6 and 1. Substitute t = 6 into the matrix tI - A to obtain the corresponding homogeneous system of linear equations

    4x + 2y = 0,  2x + y = 0

A nonzero solution is v1 = (1, -2). Next substitute t = 1 into the matrix tI - A to find the corresponding homogeneous system

    -x + 2y = 0,  2x - 4y = 0

A nonzero solution is v2 = (2, 1). As expected by Problem 13.31, v1 and v2 are orthogonal. Normalize v1 and v2 to obtain the orthonormal basis

    {u1 = (1/sqrt(5), -2/sqrt(5)), u2 = (2/sqrt(5), 1/sqrt(5))}

Finally let P be the matrix whose columns are u1 and u2 respectively. Then

    P = [ 1/sqrt(5)   2/sqrt(5) ]      and      P^(-1) A P = P^t A P = [ 6  0 ]
        [ -2/sqrt(5)  1/sqrt(5) ]                                      [ 0  1 ]

As expected, the diagonal entries of P^t A P are the eigenvalues corresponding to the columns of P.
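The example can be verified in a few lines (numpy assumed):

```python
import numpy as np

A = np.array([[2.0, -2.0],
              [-2.0, 5.0]])

s5 = np.sqrt(5.0)
P = np.column_stack([[1/s5, -2/s5],    # u1, eigenvalue 6
                     [2/s5,  1/s5]])   # u2, eigenvalue 1
```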

We observe that the matrix B = P^(-1) A P = P^t A P is also congruent to A. Now if q is a real quadratic form represented by the matrix A, then the above method can be used to diagonalize q under an orthogonal change of coordinates. This is illustrated in the next example.

Example 13.19: Find an orthogonal transformation of coordinates which diagonalizes the quadratic form q(x, y) = 2x^2 - 4xy + 5y^2.

The symmetric matrix representing q is

    A = [ 2  -2 ]
        [ -2  5 ]

In the preceding example we obtained the orthogonal matrix

    P = [ 1/sqrt(5)   2/sqrt(5) ]      for which      P^t A P = [ 6  0 ]
        [ -2/sqrt(5)  1/sqrt(5) ]                                [ 0  1 ]

(Here 6 and 1 are the eigenvalues of A.) Thus the required orthogonal transformation of coordinates is

    (x, y)^t = P (x', y')^t,  that is,  x = x'/sqrt(5) + 2y'/sqrt(5),  y = -2x'/sqrt(5) + y'/sqrt(5)

Under this change of coordinates q is transformed into the diagonal form

    q(x', y') = 6x'^2 + y'^2

Note that the diagonal entries of q are the eigenvalues of A.

An orthogonal operator T need not be symmetric, and so it may not be represented by a diagonal matrix relative to an orthonormal basis. However, such an operator T does have a simple canonical representation, as described in the next theorem.

Theorem 13.15: Let T be an orthogonal operator on a real inner product space V. Then there is an orthonormal basis with respect to which T has the block diagonal form

    diag( 1, ..., 1, -1, ..., -1, [ cos t1  -sin t1 ], ..., [ cos tr  -sin tr ] )
                                  [ sin t1   cos t1 ]       [ sin tr   cos tr ]

The reader may recognize the above 2 by 2 diagonal blocks as representing rotations in the corresponding two-dimensional subspaces.

DIAGONALIZATION AND CANONICAL FORMS IN UNITARY SPACES

We now present the fundamental diagonalization theorem for complex inner product spaces, i.e. for unitary spaces. Recall that an operator T is said to be normal if it commutes with its adjoint, i.e. if TT* = T*T. Analogously, a complex matrix A is said to be normal if it commutes with its conjugate transpose, i.e. if AA* = A*A.

Example 13.20: Let

    A = [ 1    1    ]
        [ i  3 + 2i ]

Then

    AA* = [ 1    1    ] [ 1    -i   ]  =  [ 2       3 - 3i ]
          [ i  3 + 2i ] [ 1  3 - 2i ]     [ 3 + 3i    14   ]

    A*A = [ 1    -i   ] [ 1    1    ]  =  [ 2       3 - 3i ]
          [ 1  3 - 2i ] [ i  3 + 2i ]     [ 3 + 3i    14   ]

Thus AA* = A*A and A is a normal matrix.

The following theorem applies.

Theorem 13.16: Let T be a normal operator on a complex finite dimensional inner product space V. Then there exists an orthonormal basis of V consisting of eigenvectors of T; that is, T can be represented by a diagonal matrix relative to an orthonormal basis.

We give the corresponding statement for matrices.

Alternate Form of Theorem 13.16: Let A be a normal matrix. Then there exists a unitary matrix P such that B = P^(-1) A P = P* A P is diagonal.

The next theorem shows that even non-normal operators on unitary spaces have a relatively simple form.

Theorem 13.17: Let T be an arbitrary operator on a complex finite dimensional inner product space V. Then T can be represented by a triangular matrix relative to an orthonormal basis of V.

Alternate Form of Theorem 13.17: Let A be an arbitrary complex matrix. Then there exists a unitary matrix P such that B = P^(-1) A P = P* A P is triangular.

SPECTRAL THEOREM

The Spectral Theorem is a reformulation of the diagonalization Theorems 13.14 and 13.16.

Theorem 13.18 (Spectral Theorem): Let T be a normal (symmetric) operator on a complex (real) finite dimensional inner product space V. Then there exist orthogonal projections E1, ..., Er on V and scalars k1, ..., kr such that
    (i)   T = k1 E1 + k2 E2 + ... + kr Er
    (ii)  E1 + E2 + ... + Er = I
    (iii) Ei Ej = 0 for i != j

The next example shows the relationship between a diagonal matrix representation and the corresponding orthogonal projections.

Example 13.21: Consider a diagonal matrix, say A = diag(2, 3, 5), and set

    E1 = diag(1, 0, 0),  E2 = diag(0, 1, 0),  E3 = diag(0, 0, 1)

The reader can verify that (i) A = 2E1 + 3E2 + 5E3, (ii) E1 + E2 + E3 = I, (iii) the Ei are projections, i.e. Ei^2 = Ei, and (iv) Ei Ej = 0 for i != j.
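Both the normality check of Example 13.20 and the spectral identities of Example 13.21 can be confirmed numerically (numpy assumed):

```python
import numpy as np

# Example 13.20: A commutes with its conjugate transpose
A = np.array([[1, 1],
              [1j, 3 + 2j]])
A_star = A.conj().T
normal = np.allclose(A @ A_star, A_star @ A)

# Example 13.21: spectral decomposition of a diagonal matrix
M = np.diag([2.0, 3.0, 5.0])
E = [np.diag([1.0, 0.0, 0.0]),
     np.diag([0.0, 1.0, 0.0]),
     np.diag([0.0, 0.0, 1.0])]
S = 2*E[0] + 3*E[1] + 5*E[2]
```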

Solved Problems

INNER PRODUCTS

13.1. Verify the relation <u, a v1 + b v2> = conj(a)<u, v1> + conj(b)<u, v2>.

Using [I2], [I1] and then [I2] again, we find

    <u, a v1 + b v2> = conj<a v1 + b v2, u> = conj(a<v1, u> + b<v2, u>)
                     = conj(a) conj<v1, u> + conj(b) conj<v2, u> = conj(a)<u, v1> + conj(b)<u, v2>

13.2. Verify that the following is an inner product in R^2:

    <u, v> = x1 y1 - x1 y2 - x2 y1 + 3 x2 y2,  where u = (x1, x2), v = (y1, y2)

Method 1. We verify the three axioms of an inner product. Letting w = (z1, z2), we find

    a u + b w = a(x1, x2) + b(z1, z2) = (a x1 + b z1, a x2 + b z2)

Thus

    <a u + b w, v> = (a x1 + b z1) y1 - (a x1 + b z1) y2 - (a x2 + b z2) y1 + 3(a x2 + b z2) y2
                   = a(x1 y1 - x1 y2 - x2 y1 + 3 x2 y2) + b(z1 y1 - z1 y2 - z2 y1 + 3 z2 y2)
                   = a<u, v> + b<w, v>

and so axiom [I1] is satisfied. Also,

    <v, u> = y1 x1 - y1 x2 - y2 x1 + 3 y2 x2 = <u, v>

and axiom [I2] is satisfied. Finally,

    <u, u> = x1^2 - 2 x1 x2 + 3 x2^2 = (x1 - x2)^2 + 2 x2^2 >= 0

and <u, u> = 0 if and only if x1 = x2 = 0, i.e. u = 0; so axiom [I3] is satisfied.

13.3. Let v = (3, 4) in R^2. Find ||v|| with respect to (i) the usual inner product, (ii) the inner product of Problem 13.2.

(i) ||v||^2 = <v, v> = 9 + 16 = 25; hence ||v|| = 5.
(ii) ||v||^2 = 9 - 12 - 12 + 48 = 33; hence ||v|| = sqrt(33).

To normalize a vector such as w = (1/2, 2/3, -1/4) in R^3 (usual inner product), we may first multiply by 12 to clear denominators: 12w = (6, 8, -3). Since w/||w|| = 12w/||12w|| and ||12w|| = sqrt(36 + 64 + 9) = sqrt(109), the required unit vector is

    (6/sqrt(109), 8/sqrt(109), -3/sqrt(109))

13.4. Let V be the vector space of polynomials with inner product <f, g> = INT(0 to 1) f(t) g(t) dt. Let f(t) = t + 2 and g(t) = t^2 - 2t - 3. Find (i) <f, g> and (ii) ||f||.

(i) <f, g> = INT(0 to 1) (t + 2)(t^2 - 2t - 3) dt = INT(0 to 1) (t^3 - 7t - 6) dt = [t^4/4 - 7t^2/2 - 6t] from 0 to 1 = -37/4.
(ii) <f, f> = INT(0 to 1) (t + 2)^2 dt = 19/3; hence ||f|| = sqrt(19/3) = (1/3) sqrt(57).
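The integrals in Problem 13.4 can be computed exactly with rational arithmetic; a sketch (Python standard library only; `poly_mul` and `integral01` are illustrative helpers, with polynomials as coefficient lists [a0, a1, ...]):

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [a0, a1, ...]."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def integral01(p):
    """Exact integral of the polynomial p over [0, 1]."""
    return sum(Fraction(a) / (k + 1) for k, a in enumerate(p))

f = [2, 1]        # f(t) = t + 2
g = [-3, -2, 1]   # g(t) = t^2 - 2t - 3

inner_fg = integral01(poly_mul(f, g))    # <f, g>
norm_f_sq = integral01(poly_mul(f, f))   # <f, f> = ||f||^2
```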

13.6. Prove Theorem 13.1 (Cauchy-Schwarz): |<u, v>| <= ||u|| ||v||.

If v = 0, the inequality reduces to 0 <= 0 and hence is valid. Now suppose v != 0. Using z conj(z) = |z|^2 (for any complex number z), we have, for any real value of t,

    0 <= ||u - <u, v> t v||^2 = <u - <u, v> t v, u - <u, v> t v>
       = <u, u> - t conj<u, v><u, v> - t<u, v><v, u> + t^2 <u, v> conj<u, v> <v, v>
       = ||u||^2 - 2t|<u, v>|^2 + t^2 |<u, v>|^2 ||v||^2

Set t = 1/||v||^2 to find 0 <= ||u||^2 - |<u, v>|^2/||v||^2, from which |<u, v>|^2 <= ||u||^2 ||v||^2. Taking the square root of both sides, we obtain the required inequality.

13.7. Prove that the norm in an inner product space satisfies the following axioms:
    [N1]: ||v|| >= 0; and ||v|| = 0 if and only if v = 0.
    [N2]: ||kv|| = |k| ||v||.
    [N3]: ||u + v|| <= ||u|| + ||v||.

By [I3], <v, v> >= 0; hence ||v|| = sqrt<v, v> >= 0. Furthermore, ||v|| = 0 if and only if <v, v> = 0, and this holds if and only if v = 0. Thus [N1] is valid.

We find ||kv||^2 = <kv, kv> = k conj(k) <v, v> = |k|^2 ||v||^2. Taking the square root of both sides gives [N2].

Using the Cauchy-Schwarz inequality, we obtain

    ||u + v||^2 = <u + v, u + v> = <u, u> + <u, v> + <v, u> + <v, v>
               <= ||u||^2 + 2||u|| ||v|| + ||v||^2 = (||u|| + ||v||)^2

Taking the square root of both sides yields [N3].

Remark: [N3] is frequently called the triangle inequality because, viewing u + v as a side of the triangle formed with sides u and v, it states that the length of a side of a triangle cannot exceed the sum of the lengths of the other two sides.

Suppose {u1, ..., ur} is an orthonormal set of vectors in V. Show that {u1, ..., ur} is linearly independent and that, for any v in V, the vector

    w = v - <v, u1> u1 - <v, u2> u2 - ... - <v, ur> ur

is orthogonal to each of the ui.

Suppose a1 u1 + a2 u2 + ... + ar ur = 0. Taking the inner product of both sides with u1,

    0 = <a1 u1 + ... + ar ur, u1> = a1<u1, u1> + a2<u2, u1> + ... + ar<ur, u1>
      = a1 * 1 + a2 * 0 + ... + ar * 0 = a1

Thus a1 = 0. Similarly ai = 0 for i = 2, ..., r; accordingly {u1, ..., ur} is linearly independent. Also, for i = 1, ..., r,

    <w, ui> = <v, ui> - <v, u1><u1, ui> - ... - <v, ur><ur, ui> = <v, ui> - <v, ui> = 0

as required.

Suppose {e1, ..., en} is an orthonormal basis of V. Prove:
(i)   any u in V has the form u = <u, e1> e1 + <u, e2> e2 + ... + <u, en> en;
(ii)  <a1 e1 + ... + an en, b1 e1 + ... + bn en> = a1 conj(b1) + a2 conj(b2) + ... + an conj(bn);
(iii) for any u, v in V, <u, v> = <u, e1> conj<v, e1> + ... + <u, en> conj<v, en>;
(iv)  if T: V -> V is linear, then <T(ej), ei> is the ij-entry of the matrix A representing T in the basis {ei}.

(i) Write u = k1 e1 + ... + kn en. Taking the inner product of both sides with ej,

    <u, ej> = k1<e1, ej> + ... + kn<en, ej> = kj

(ii) <SUM ai ei, SUM bj ej> = SUM ai conj(bj) <ei, ej> = a1 conj(b1) + ... + an conj(bn).

(iii) This follows from (i) and (ii) with ai = <u, ei> and bi = <v, ei>.

(iv) By (i),

    T(e1) = <T(e1), e1> e1 + <T(e1), e2> e2 + ... + <T(e1), en> en
    ............................................................
    T(en) = <T(en), e1> e1 + <T(en), e2> e2 + ... + <T(en), en> en

The matrix A representing T in the basis {ei} is the transpose of the above matrix of coefficients; hence the ij-entry of A is <T(ej), ei>.

Let W be a subspace of V, where V has finite dimension. Show that W is contained in W-perp-perp and, in fact, W = W-perp-perp.

First observe that W n W-perp = {0}: if w lies in both, then <w, w> = 0 and hence w = 0. Now let w be in W. Then <w, v> = 0 for every v in W-perp; hence w lies in W-perp-perp. Thus W is contained in W-perp-perp. Since V has finite dimension, Theorem 13.2 gives V = W (+) W-perp and also V = W-perp (+) W-perp-perp; hence

    dim W = dim V - dim W-perp = dim W-perp-perp

But W is contained in W-perp-perp; therefore W = W-perp-perp.

Note that we have proved the theorem only for the case that V has finite dimension; we remark that the theorem also holds for spaces of arbitrary dimension.

ADJOINTS

13.16. Let T be the linear operator on C^3 defined by

    T(x, y, z) = (2x + (1 - i)y, (3 + 2i)x - 4iz, 2ix + (4 - 3i)y - 3z)

Find T*(x, y, z).

First find the matrix A representing T in the usual basis of C^3 (see Problem 7.3):

    A = [ 2       1 - i    0   ]
        [ 3 + 2i  0       -4i  ]
        [ 2i      4 - 3i  -3   ]

Form the conjugate transpose A* of A:

    A* = [ 2      3 - 2i  -2i    ]
         [ 1 + i  0        4 + 3i]
         [ 0      4i      -3     ]

Thus

    T*(x, y, z) = (2x + (3 - 2i)y - 2iz, (1 + i)x + (4 + 3i)z, 4iy - 3z)

13.17. Prove Theorem 13.5: Let phi be a linear functional on an n-dimensional inner product space V. Then there exists a unique vector u in V such that phi(v) = <v, u> for every v in V.

Let {e1, ..., en} be an orthonormal basis of V. Set

    u = conj(phi(e1)) e1 + conj(phi(e2)) e2 + ... + conj(phi(en)) en

Let u-hat be the linear functional on V defined by u-hat(v) = <v, u> for every v in V. Then for i = 1, ..., n,

    u-hat(ei) = <ei, u> = <ei, conj(phi(e1)) e1 + ... + conj(phi(en)) en> = phi(ei)

Since u-hat and phi agree on each basis vector, u-hat = phi; that is, phi(v) = <v, u> for every v in V. Now suppose u' is another vector for which phi(v) = <v, u'> for every v in V. Then <v, u - u'> = 0 for every v; in particular, <u - u', u - u'> = 0. Hence u - u' = 0 and u is unique.

Prove Theorem 13.6: Let T be a linear operator on a finite dimensional inner product space V. Then there exists a unique linear operator T* on V such that <T(u), v> = <u, T*(v)> for every u, v in V.

For each fixed v in V, the map u |-> <T(u), v> is a linear functional on V; hence by Theorem 13.5 there exists a unique vector, denoted T*(v), such that <T(u), v> = <u, T*(v)> for every u in V. One then verifies that the map v |-> T*(v) is linear.

Show that:
(i)   if <T(v), w> = 0 for every v, w in V, then T = 0;
(ii)  if V is a complex space and <T(v), v> = 0 for every v in V, then T = 0;
(iii) if T is self-adjoint on a real space V and <T(v), v> = 0 for every v in V, then T = 0.

(i) Setting w = T(v) gives <T(v), T(v)> = 0; hence T(v) = 0 for every v in V, i.e. T = 0.

(ii) By hypothesis, <T(v + w), v + w> = 0 for any v, w in V. Expanding, and setting <T(v), v> = 0 and <T(w), w> = 0,

    <T(v), w> + <T(w), v> = 0        (1)

Substituting iw for w in (1), and using <T(iw), v> = i<T(w), v> and <T(v), iw> = -i<T(v), w>,

    -i<T(v), w> + i<T(w), v> = 0

Dividing through by i and adding the result to (1), we obtain 2<T(w), v> = 0, i.e. <T(w), v> = 0 for any v, w in V. By (i), T = 0.

(iii) By (ii), the result holds for the complex case; hence we need only consider the real case. Expanding <T(v + w), v + w> = 0, we again obtain (1). Since T is self-adjoint and since the space is real, <T(w), v> = <w, T(v)> = <T(v), w>. Substituting this into (1), we obtain 2<T(v), w> = 0, i.e. <T(v), w> = 0 for any v, w in V. By (i), T = 0.

For an example showing that (ii) fails in the real case, consider the linear operator T on R^2 defined by T(x, y) = (y, -x). Then <T(u), u> = 0 for every u in V, but T != 0.
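The adjoint in Problem 13.16 is just the conjugate transpose of the representing matrix, and the defining identity <T(u), v> = <u, T*(v)> can be spot-checked numerically (numpy assumed; note that np.vdot(y, x) computes SUM x_i conj(y_i), i.e. <x, y>):

```python
import numpy as np

A = np.array([[2, 1 - 1j, 0],
              [3 + 2j, 0, -4j],
              [2j, 4 - 3j, -3]])     # matrix of T in the usual basis
A_star = A.conj().T                  # matrix of the adjoint T*

u = np.array([1 + 1j, 2.0, -1j])     # arbitrary test vectors
v = np.array([0.5, -1j, 3.0])

lhs = np.vdot(v, A @ u)              # <T(u), v>
rhs = np.vdot(A_star @ v, u)         # <u, T*(v)>
```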

ORTHOGONAL AND UNITARY OPERATORS AND MATRICES 13,23.

Prove Theorem

=

C/-i;

every v

&V.

(i)C/*

Suppose

The following conditions on an operator {Uiv),Uiw)} = {v,w}, for every v.wGV; (iii)

13.9:

(ii)

=

{Uiv),U(w))

Thus

(i)

implies

(ii).

Now

if

(ii)

implies

Suppose

(iii)

(iii).

=

Then for every

Hence

13.24.

and

implies



U

w.

is

=

{U{v), V(v))

U*U =

so

U{W) — W; that

nonsingular,

Now

let

V

G W^. Then

Let

A

to

W

.

for

w)

=

Therefore

W

(U(y),

Thus U{v) belongs

{v,vo)

\\v\\

(i).

I.

is,

=

{V, v)

t7*t7

-/

Thus

17*

=

t7-i

let

wG

for any

{I{v), V)

is self -adjoint

C7 be a unitary (orthogonal) operator on V, and under U. Show that W^ is also invariant under U.

Since

13.25.

(iii)

Let

U{w')

-

{v,I(w))

V,

ve.V. But

for every

V*U-I-Q

S

i;

=

(U*U{v), V)

((U*V - I)(v),v) =

=

= V(^> =

V 1. By the preceding problem, there exists a nonzero eigenvector v^ of T. Let be the space spanned by v-^, and let Mj be a unit vector in W, e.g. let Mj = i'i/||vi||.

The proof

Now

W

CHAP.

INNER PRODUCT SPACES

13]

Since v^

W^

an eigenvector of T, the subspace TF of

is

Theorem

an orthonormal basis

exists

=

t

A = (

=

.

.

{u^,

.

Thus the restriction T of T to W^ is Hence dim TF"*- = m - 1 since dim W^ ,

.

.

W^

m„} of

Find a

.

2

)

^

.

The characteristic polynomial

=

A(t)

|f/-A|

A

A(t) of

t-

=

-2

1

t-

-2

2x-2y nonzero solution

is

Next substitute

t



Vi

(1, 1).

— —l

Finally

As

13.34.

P

let

v^

is

=

(1,

—1).

A —

2t

-

-

=

2j/

P^AP

is

set

diagonal.

=

{2

-

=

3

(t

- 3)(t +

1)

=

t

-2x +

into the matrix tl

3

—A

-

to obtain the

=

2j/

=

to obtain the corresponding

-2a;

0,

—A

(ll\2, l/v2).

homogeneous system

=

2^/

Normalize V2 to find the unit solution u^

=

(1/a/2, —1/^/2).

be the matrix whose columns are Mj and Mj respectively; then

expected, the diagonal entries of

Let

for which

But

1

0,

into the matrix tl

-2x nonzero solution

P

of T.

an orthonormal

is

By

induction, there

T and hence

Normalize v^ to find the unit solution Mi

of linear equations

A

By

1.

is

and thus the eigenvalues of A are 3 and —1. Substitute corresponding homogeneous system of linear equations

A

=

consisting of eigenvectors of

orthogonal matrix

(real)

13.21,

a symmetric operator.

.

,

By Problem

invariant under T.

is

m because Mj G PF-*- Accordingly {%, %,...,«„} 2, eigenvectors of T. Thus the theorem is proved.

for

and consists of

13^3. Let

T.

=W ®W^.

V

13.2,

=

T*

invariant under

is

y

INNER PRODUCT SPACES    [CHAP. 13

13.34. Let

    A  =  [ 2  1  1 ]
          [ 1  2  1 ]
          [ 1  1  2 ]

Find a (real) orthogonal matrix P for which P^tAP is diagonal.

First find the characteristic polynomial Δ(t) of A:

    Δ(t)  =  |tI − A|  =  | t−2   −1    −1  |  =  (t − 1)²(t − 4)
                          | −1   t−2    −1  |
                          | −1    −1   t−2  |

Thus the eigenvalues of A are 1 (with multiplicity two) and 4 (with multiplicity one). Substitute t = 1 into the matrix tI − A to obtain the corresponding homogeneous system −x − y − z = 0, −x − y − z = 0, −x − y − z = 0. That is, x + y + z = 0. The system has two independent solutions. One such solution is v1 = (1, −1, 0). We seek a second solution v2 = (a, b, c) which is also orthogonal to v1; that is, such that a + b + c = 0 and also a − b = 0. For example, v2 = (1, 1, −2). Next we normalize v1 and v2 to obtain the unit orthogonal solutions

    u1 = (1/√2, −1/√2, 0),      u2 = (1/√6, 1/√6, −2/√6)

Now substitute t = 4 into the matrix tI − A to find the corresponding homogeneous system 2x − y − z = 0, −x + 2y − z = 0, −x − y + 2z = 0. Find a nonzero solution such as v3 = (1, 1, 1), and normalize v3 to obtain the unit solution u3 = (1/√3, 1/√3, 1/√3). Finally, let P be the matrix whose columns are the ui respectively; then

    P  =  [  1/√2    1/√6   1/√3 ]      and      P^tAP  =  [ 1  0  0 ]
          [ −1/√2    1/√6   1/√3 ]                         [ 0  1  0 ]
          [   0     −2/√6   1/√3 ]                         [ 0  0  4 ]
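As a numerical check of Problem 13.34 (a plain-Python sketch, not part of the text; the helpers `matmul` and `transpose` are our own), the code below verifies both that P is orthogonal, P^tP = I, and that P^tAP is the diagonal matrix with entries 1, 1, 4.

```python
from math import sqrt, isclose

def matmul(X, Y):
    # naive product of small dense matrices given as lists of rows
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
a, b, c = 1 / sqrt(2), 1 / sqrt(6), 1 / sqrt(3)
P = [[a,       b, c],
     [-a,      b, c],
     [0, -2 * b, c]]           # columns are u1, u2, u3

# P is orthogonal: P^t P = I
I3 = matmul(transpose(P), P)
for i in range(3):
    for j in range(3):
        assert isclose(I3[i][j], 1.0 if i == j else 0.0, abs_tol=1e-12)

# P^t A P is diagonal with the eigenvalues 1, 1, 4 on the diagonal
D = matmul(transpose(P), matmul(A, P))
for i, ev in enumerate([1, 1, 4]):
    assert isclose(D[i][i], ev)
```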

13.35. Find an orthogonal change of coordinates which diagonalizes the real quadratic form q(x, y) = 2x² + 2xy + 2y².

First find the symmetric matrix A representing q and then its characteristic polynomial Δ(t):

    A  =  [ 2  1 ]      and      Δ(t)  =  |tI − A|  =  | t−2   −1  |  =  (t − 1)(t − 3)
          [ 1  2 ]                                     | −1   t−2  |

The eigenvalues of A are 1 and 3; hence the diagonal form of q is

    q(x', y')  =  x'² + 3y'²

We find the corresponding transformation of coordinates by obtaining a corresponding orthonormal set of eigenvectors of A.

Set t = 1 into the matrix tI − A to obtain the corresponding homogeneous system −x − y = 0, −x − y = 0. A nonzero solution is v1 = (1, −1). Next set t = 3 into the matrix tI − A to find the corresponding homogeneous system x − y = 0, −x + y = 0. A nonzero solution is v2 = (1, 1). As expected by Problem 13.31, v1 and v2 are orthogonal. Normalize v1 and v2 to obtain the orthonormal basis {u1 = (1/√2, −1/√2), u2 = (1/√2, 1/√2)}.

The transition matrix P and the required transformation of coordinates follow:

    P  =  [  1/√2   1/√2 ]      and      x = (x' + y')/√2,   y = (−x' + y')/√2
          [ −1/√2   1/√2 ]

Note that the columns of P are u1 and u2. We can also express x' and y' in terms of x and y by using P⁻¹ = P^t; that is,

    x' = (x − y)/√2,      y' = (x + y)/√2
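The change of coordinates can be spot-checked numerically (a plain-Python sketch; the sample points are arbitrary choices, not from the text): for any (x, y), the value 2x² + 2xy + 2y² should equal x'² + 3y'² under the substitution x' = (x − y)/√2, y' = (x + y)/√2.

```python
from math import sqrt, isclose

def q(x, y):
    # the quadratic form of Problem 13.35
    return 2 * x * x + 2 * x * y + 2 * y * y

checked = 0
for x, y in [(1.0, 0.0), (0.5, -2.0), (3.0, 4.0)]:
    xp = (x - y) / sqrt(2)   # x' in terms of x and y
    yp = (x + y) / sqrt(2)   # y' in terms of x and y
    assert isclose(q(x, y), xp * xp + 3 * yp * yp)
    checked += 1
```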

13.36. Prove Theorem 13.15: Let T be an orthogonal operator on a real inner product space V. Then there is an orthonormal basis with respect to which T has the following form:

    1
      ·
        1
          −1
             ·
               −1
                  cos θ1   −sin θ1
                  sin θ1    cos θ1
                                ·
                                  cos θr   −sin θr
                                  sin θr    cos θr

(all entries off the diagonal blocks are zero).

Let S = T + T⁻¹ = T + T*. Then S* = (T + T*)* = T* + T = S. Thus S is a symmetric operator on V. By Theorem 13.14, there exists an orthonormal basis of V consisting of eigenvectors of S. If λ1, ..., λm denote the distinct eigenvalues of S, then V can be decomposed into the direct sum V = V1 ⊕ V2 ⊕ ··· ⊕ Vm where Vi consists of the eigenvectors of S belonging to λi. We claim that each Vi is invariant under T. For suppose v ∈ Vi; then S(v) = λi v and

    S(T(v))  =  (T + T⁻¹)T(v)  =  T(T + T⁻¹)(v)  =  TS(v)  =  T(λi v)  =  λi T(v)

That is, T(v) ∈ Vi. Hence Vi is invariant under T. Since the Vi are orthogonal to each other, we can restrict our investigation to the way that T acts on each individual Vi.

On a given Vi, (T + T⁻¹)v = S(v) = λi v. Multiplying by T,

    (T² − λi T + I)(v)  =  0

We consider the cases λi = ±2 and λi ≠ ±2 separately. If λi = ±2, then (T ∓ I)²(v) = 0 which leads to (T ∓ I)(v) = 0 or T(v) = ±v. Thus T restricted to this Vi is either I or −I.

If λi ≠ ±2, then T has no eigenvectors in Vi since by Theorem 13.8 the only eigenvalues of T are 1 or −1. Accordingly, for v ≠ 0 the vectors v and T(v) are linearly independent. Let W be the subspace spanned by v and T(v). Then W is invariant under T, since

    T(T(v))  =  T²(v)  =  λi T(v) − v

By Theorem 13.2, Vi = W ⊕ W⊥. Furthermore, by Problem 13.24, W⊥ is also invariant under T. Thus we can decompose Vi into the direct sum of two-dimensional subspaces Wj where the Wj are orthogonal to each other and each Wj is invariant under T. Thus we can now restrict our investigation to the way T acts on each individual Wj.

Since T² − λi T + I = 0, the characteristic polynomial Δ(t) of T acting on Wj is Δ(t) = t² − λi t + 1. Thus the determinant of T acting on Wj is 1, the constant term in Δ(t). By Problem 13.30, the matrix A representing T acting on Wj relative to any orthonormal basis of Wj must be of the form

    [ cos θ   −sin θ ]
    [ sin θ    cos θ ]

The union of the bases of the Wj gives an orthonormal basis of Vi, and the union of the bases of the Vi gives an orthonormal basis of V in which the matrix representing T is of the desired form.
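The two identities that drive the proof can be illustrated on a single rotation block (a plain-Python sketch; the angle 0.7 is an arbitrary choice, not from the text): for T = (cos θ, −sin θ; sin θ, cos θ), the symmetric operator S = T + T⁻¹ = T + T^t equals (2 cos θ)I, and det T = 1, the constant term of Δ(t) = t² − λt + 1.

```python
from math import cos, sin, isclose

theta = 0.7                      # arbitrary angle for the check
T = [[cos(theta), -sin(theta)],
     [sin(theta),  cos(theta)]]

# For an orthogonal T, T^{-1} = T^t; here S = T + T^t = (2 cos theta) I
S = [[T[0][0] + T[0][0], T[0][1] + T[1][0]],
     [T[1][0] + T[0][1], T[1][1] + T[1][1]]]
lam = 2 * cos(theta)
assert isclose(S[0][0], lam) and isclose(S[1][1], lam)
assert abs(S[0][1]) < 1e-12 and abs(S[1][0]) < 1e-12

# det T = cos^2 + sin^2 = 1, the constant term of t^2 - lam*t + 1
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
assert isclose(det, 1.0)
```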

NORMAL OPERATORS AND CANONICAL FORMS

13.37. Determine which matrix
