Mathematical Methods for Physicists, Weber and Arfken: Ch. 3 Selected Solutions


Physics 451, Fall 2004

Homework Assignment #2 — Solutions

Textbook problems: Ch. 3: 3.3.1, 3.3.12, 3.3.13, 3.5.4, 3.5.6, 3.5.9, 3.5.30

Chapter 3

3.3.1 Show that the product of two orthogonal matrices is orthogonal.

Suppose matrices $A$ and $B$ are orthogonal. This means that $A\tilde A = I$ and $B\tilde B = I$. We now denote the product of $A$ and $B$ by $C = AB$. To show that $C$ is orthogonal, we compute $C\tilde C$ and see what happens. Recalling that the transpose of a product is the reversed product of the transposes, we have
$$C\tilde C = (AB)\widetilde{(AB)} = AB\tilde B\tilde A = A\tilde A = I$$
This is a key step in showing that the orthogonal matrices form a group, because one of the requirements of being a group is that the product of any two elements ($A$ and $B$) in the group yields a result ($C$) that is also in the group. This is known as closure. Along with closure, we also need to show associativity (okay for matrices), the existence of an identity element (also okay for matrices), and the existence of an inverse (okay for orthogonal matrices). Since all four conditions are satisfied, the set of $n \times n$ orthogonal matrices forms the orthogonal group, denoted $O(n)$. While general orthogonal matrices have determinant $\pm 1$, the subgroup of matrices with determinant $+1$ forms the "special orthogonal" group $SO(n)$.
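To see closure concretely, here is a minimal numerical check (my addition, not part of the original solution). It uses rotation matrices as a convenient source of orthogonal matrices; the angle values are arbitrary.

```python
import numpy as np

def rotation(theta):
    """A 2x2 rotation matrix, a convenient source of orthogonal matrices."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

A = rotation(0.3)  # arbitrary angles
B = rotation(1.1)
C = A @ B

# C is orthogonal precisely when C C~ = I
print(np.allclose(C @ C.T, np.eye(2)))  # True
```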

3.3.12 $A$ is $2 \times 2$ and orthogonal. Find the most general form of
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
Compare with two-dimensional rotation.

Since $A$ is orthogonal, it must satisfy the condition $A\tilde A = I$, or
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} a & c \\ b & d \end{pmatrix} = \begin{pmatrix} a^2 + b^2 & ac + bd \\ ac + bd & c^2 + d^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
This gives three conditions

i) $a^2 + b^2 = 1$,
ii) $c^2 + d^2 = 1$,
iii) $ac + bd = 0$

These are three equations for four unknowns, so there will be a free parameter left over. There are many ways to solve the equations. However, one nice way is to notice that $a^2 + b^2 = 1$ is the equation for a unit circle in the $a$–$b$ plane. This means we can write $a$ and $b$ in terms of an angle $\theta$
$$a = \cos\theta, \qquad b = \sin\theta$$
Similarly, $c^2 + d^2 = 1$ can be solved by setting
$$c = \cos\phi, \qquad d = \sin\phi$$

Of course, we have one more equation to solve, $ac + bd = 0$, which becomes
$$\cos\theta\cos\phi + \sin\theta\sin\phi = \cos(\theta - \phi) = 0$$
This means that $\theta - \phi = \pi/2$ or $\theta - \phi = 3\pi/2$. We must consider both cases separately.

$\phi = \theta - \pi/2$: This gives $c = \cos(\theta - \pi/2) = \sin\theta$ and $d = \sin(\theta - \pi/2) = -\cos\theta$, or
$$A_1 = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} \qquad (1)$$
This looks almost like a rotation, but not quite (since the minus sign is in the wrong place).

$\phi = \theta - 3\pi/2$: This gives $c = \cos(\theta - 3\pi/2) = -\sin\theta$ and $d = \sin(\theta - 3\pi/2) = \cos\theta$, or
$$A_2 = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \qquad (2)$$

which is exactly a rotation. Note that we can tell the difference between matrices of type (1) and (2) by computing the determinant. We see that $\det A_1 = -1$ while $\det A_2 = 1$. In fact, the $A_2$ type of matrices form the $SO(2)$ group, which is exactly the group of rotations in the plane. On the other hand, the $A_1$ type of matrices represent rotations followed by a mirror reflection $y \to -y$. This can be seen by writing
$$A_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$$
Note that the set of $A_1$ matrices by itself does not form a group (it does not contain the identity, and it does not close under multiplication). However, the set of all orthogonal matrices $\{A_1, A_2\}$ forms the $O(2)$ group, which is the group of rotations and mirror reflections in two dimensions.
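A short numerical illustration (my addition, not part of the assignment): build $A_1(\theta)$ and $A_2(\theta)$ as derived above and check the determinants and the reflection-times-rotation decomposition. The angle is an arbitrary choice.

```python
import numpy as np

def A1(theta):
    """Rotation followed by the reflection y -> -y; det = -1."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def A2(theta):
    """Pure rotation; det = +1."""
    return np.array([[ np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

theta = 0.7  # arbitrary angle
reflection = np.diag([1.0, -1.0])

assert np.isclose(np.linalg.det(A1(theta)), -1.0)
assert np.isclose(np.linalg.det(A2(theta)),  1.0)
# A1 = (reflection) x (rotation), as claimed in the text
assert np.allclose(A1(theta), reflection @ A2(theta))
```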

3.3.13 Here $|\vec x\rangle$ and $|\vec y\rangle$ are column vectors. Under an orthogonal transformation $S$, $|\vec x^{\,\prime}\rangle = S|\vec x\rangle$, $|\vec y^{\,\prime}\rangle = S|\vec y\rangle$. Show that the scalar product $\langle\vec x|\vec y\rangle$ is invariant under this orthogonal transformation.

To prove the invariance of the scalar product, we compute
$$\langle\vec x^{\,\prime}|\vec y^{\,\prime}\rangle = \langle\vec x|\tilde S S|\vec y\rangle = \langle\vec x|\vec y\rangle$$
where we used $\tilde S S = I$ for an orthogonal matrix $S$. This demonstrates that the scalar product is invariant (same in primed and unprimed frame).
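Numerically (a sketch of my own, with arbitrary vectors and rotation angle): apply an orthogonal $S$ to two vectors and confirm the dot product is unchanged.

```python
import numpy as np

theta = 1.3  # arbitrary rotation angle
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal

x = np.array([2.0, -1.0])
y = np.array([0.5,  3.0])

# <x'|y'> should equal <x|y> since S~ S = I
print(np.isclose((S @ x) @ (S @ y), x @ y))  # True
```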

3.5.4 Show that a real matrix that is not symmetric cannot be diagonalized by an orthogonal similarity transformation.

We take the hint and start by denoting the real non-symmetric matrix by $A$. Assuming that $A$ can be diagonalized by an orthogonal similarity transformation means there exists an orthogonal matrix $S$ such that
$$\Lambda = S A \tilde S \qquad\text{where } \Lambda \text{ is diagonal}$$
We can 'invert' this relation by multiplying both sides on the left by $\tilde S$ and on the right by $S$. This yields
$$A = \tilde S \Lambda S$$
Taking the transpose of $A$, we find
$$\tilde A = \widetilde{(\tilde S \Lambda S)} = \tilde S \tilde\Lambda \tilde{\tilde S}$$
However, the transpose of a transpose is the original matrix, $\tilde{\tilde S} = S$, and the transpose of a diagonal matrix is the original matrix, $\tilde\Lambda = \Lambda$. Hence
$$\tilde A = \tilde S \Lambda S = A$$
Since the matrix $A$ is equal to its transpose, $A$ has to be a symmetric matrix. However, recall that $A$ is supposed to be non-symmetric. Hence we run into a contradiction. As a result, we must conclude that $A$ cannot be diagonalized by an orthogonal similarity transformation.
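As an illustration (my own, with an arbitrary nonsymmetric example matrix): numpy can still diagonalize such a matrix, but the diagonalizing matrix of eigenvectors is not orthogonal, consistent with the proof above.

```python
import numpy as np

A = np.array([[1.0,  1.0],
              [0.25, 1.0]])  # real but not symmetric (example choice)

eigvals, V = np.linalg.eig(A)  # columns of V are eigenvectors

# V diagonalizes A...
print(np.allclose(V @ np.diag(eigvals) @ np.linalg.inv(V), A))  # True
# ...but V is NOT orthogonal: V V^T != I
print(np.allclose(V @ V.T, np.eye(2)))                          # False
```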

3.5.6 $A$ has eigenvalues $\lambda_i$ and corresponding eigenvectors $|\vec x_i\rangle$. Show that $A^{-1}$ has the same eigenvectors but with eigenvalues $\lambda_i^{-1}$.

If $A$ has eigenvalues $\lambda_i$ and eigenvectors $|\vec x_i\rangle$, that means
$$A|\vec x_i\rangle = \lambda_i|\vec x_i\rangle$$
Multiplying both sides by $A^{-1}$ on the left, we find
$$A^{-1}A|\vec x_i\rangle = \lambda_i A^{-1}|\vec x_i\rangle \qquad\text{or}\qquad |\vec x_i\rangle = \lambda_i A^{-1}|\vec x_i\rangle$$
Rewriting this as
$$A^{-1}|\vec x_i\rangle = \lambda_i^{-1}|\vec x_i\rangle$$
it is now obvious that $A^{-1}$ has the same eigenvectors, but eigenvalues $\lambda_i^{-1}$.
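As a numerical check (my addition; the matrix below is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # arbitrary invertible example

vals, vecs = np.linalg.eig(A)
inv_vals, _ = np.linalg.eig(np.linalg.inv(A))

# Eigenvalues of A^-1 are the reciprocals of those of A
print(np.sort(inv_vals), np.sort(1.0 / vals))
# And each eigenvector of A is an eigenvector of A^-1, with eigenvalue 1/lambda
for i in range(2):
    v = vecs[:, i]
    print(np.allclose(np.linalg.inv(A) @ v, (1.0 / vals[i]) * v))  # True
```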

3.5.9 Two Hermitian matrices $A$ and $B$ have the same eigenvalues. Show that $A$ and $B$ are related by a unitary similarity transformation.

Since both $A$ and $B$ have the same eigenvalues, they can both be diagonalized according to
$$\Lambda = U A U^\dagger, \qquad \Lambda = V B V^\dagger$$
where $\Lambda$ is the same diagonal matrix of eigenvalues. This means
$$U A U^\dagger = V B V^\dagger \qquad\Rightarrow\qquad B = V^\dagger U A U^\dagger V$$
If we let $W = V^\dagger U$, its Hermitian conjugate is $W^\dagger = (V^\dagger U)^\dagger = U^\dagger V$. This means that
$$B = W A W^\dagger \qquad\text{where } W = V^\dagger U$$
and $W W^\dagger = V^\dagger U U^\dagger V = I$. Hence $A$ and $B$ are related by a unitary similarity transformation.
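A numerical sketch of this construction (my own example, using real symmetric matrices, for which "unitary" reduces to "orthogonal"): build two matrices with the same spectrum, form $W$ from their eigenvector matrices, and check $B = W A W^\dagger$. The eigenvalue orderings must match, which numpy's `eigh` guarantees by sorting eigenvalues in ascending order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two symmetric matrices with the same (distinct) eigenvalues: conjugate
# one diagonal matrix by two different random orthogonal matrices.
L = np.diag([1.0, 2.0, 5.0])
Q1, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Q2, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = Q1 @ L @ Q1.T
B = Q2 @ L @ Q2.T

# Diagonalize both; eigh sorts eigenvalues, so the orderings agree
_, Ua = np.linalg.eigh(A)   # columns are eigenvectors: A = Ua L Ua^T
_, Ub = np.linalg.eigh(B)

W = Ub @ Ua.T               # plays the role of W = V^dagger U
assert np.allclose(W @ W.T, np.eye(3))  # W is unitary (real: orthogonal)
assert np.allclose(B, W @ A @ W.T)      # B = W A W^dagger
```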

3.5.30

a) Determine the eigenvalues and eigenvectors of
$$\begin{pmatrix} 1 & \epsilon \\ \epsilon & 1 \end{pmatrix}$$
Note that the eigenvalues are degenerate for $\epsilon = 0$ but the eigenvectors are orthogonal for all $\epsilon \ne 0$ and $\epsilon \to 0$.

We first find the eigenvalues through the secular equation
$$\begin{vmatrix} 1-\lambda & \epsilon \\ \epsilon & 1-\lambda \end{vmatrix} = (1-\lambda)^2 - \epsilon^2 = 0$$
This is easily solved
$$(1-\lambda)^2 - \epsilon^2 = 0 \quad\Rightarrow\quad (\lambda - 1)^2 = \epsilon^2 \quad\Rightarrow\quad \lambda - 1 = \pm\epsilon \qquad (3)$$
Hence the two eigenvalues are $\lambda_+ = 1 + \epsilon$ and $\lambda_- = 1 - \epsilon$.

For the eigenvectors, we start with $\lambda_+ = 1 + \epsilon$. Substituting this into the eigenvalue problem $(A - \lambda I)|x\rangle = 0$, we find
$$\begin{pmatrix} -\epsilon & \epsilon \\ \epsilon & -\epsilon \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 0 \quad\Rightarrow\quad \epsilon(a - b) = 0 \quad\Rightarrow\quad a = b$$
Since the problem did not ask us to normalize the eigenvectors, we can simply take
$$\lambda_+ = 1 + \epsilon: \qquad |x_+\rangle = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
For $\lambda_- = 1 - \epsilon$, we obtain instead
$$\begin{pmatrix} \epsilon & \epsilon \\ \epsilon & \epsilon \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 0 \quad\Rightarrow\quad \epsilon(a + b) = 0 \quad\Rightarrow\quad a = -b$$
This gives
$$\lambda_- = 1 - \epsilon: \qquad |x_-\rangle = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

Note that the eigenvectors $|x_+\rangle$ and $|x_-\rangle$ are orthogonal and independent of $\epsilon$. In a way, we are just lucky that they are independent of $\epsilon$ (they did not have to turn out that way). However, orthogonality is guaranteed so long as the eigenvalues are distinct (i.e. $\epsilon \ne 0$). This was something we proved in class.
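A quick check with numpy (my addition; $\epsilon = 0.3$ is an arbitrary choice):

```python
import numpy as np

eps = 0.3  # arbitrary nonzero epsilon
A = np.array([[1.0, eps],
              [eps, 1.0]])

vals, vecs = np.linalg.eig(A)
print(np.sort(vals))  # [1 - eps, 1 + eps]
# The two eigenvectors are orthogonal (A is symmetric)
print(np.isclose(vecs[:, 0] @ vecs[:, 1], 0.0))  # True
```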

b) Determine the eigenvalues and eigenvectors of
$$\begin{pmatrix} 1 & 1 \\ \epsilon^2 & 1 \end{pmatrix}$$
Note that the eigenvalues are degenerate for $\epsilon = 0$ and that for this (nonsymmetric) matrix the eigenvectors ($\epsilon = 0$) do not span the space.

In this nonsymmetric case, the secular equation is
$$\begin{vmatrix} 1-\lambda & 1 \\ \epsilon^2 & 1-\lambda \end{vmatrix} = (1-\lambda)^2 - \epsilon^2 = 0$$
Interestingly enough, this equation is the same as (3), even though the matrix is different. Hence this matrix has the same eigenvalues $\lambda_+ = 1 + \epsilon$ and $\lambda_- = 1 - \epsilon$.

For $\lambda_+ = 1 + \epsilon$, the eigenvector equation is
$$\begin{pmatrix} -\epsilon & 1 \\ \epsilon^2 & -\epsilon \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 0 \quad\Rightarrow\quad -\epsilon a + b = 0 \quad\Rightarrow\quad b = \epsilon a$$
Up to normalization, this gives
$$\lambda_+ = 1 + \epsilon: \qquad |x_+\rangle = \begin{pmatrix} 1 \\ \epsilon \end{pmatrix} \qquad (4)$$
For the other eigenvalue, $\lambda_- = 1 - \epsilon$, we find
$$\begin{pmatrix} \epsilon & 1 \\ \epsilon^2 & \epsilon \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 0 \quad\Rightarrow\quad \epsilon a + b = 0 \quad\Rightarrow\quad b = -\epsilon a$$
Hence, we obtain
$$\lambda_- = 1 - \epsilon: \qquad |x_-\rangle = \begin{pmatrix} 1 \\ -\epsilon \end{pmatrix} \qquad (5)$$

In this nonsymmetric case, the eigenvectors do depend on $\epsilon$. Furthermore, when $\epsilon = 0$ it is easy to see that both eigenvectors degenerate into the same vector $\binom{1}{0}$, so they no longer span the two-dimensional space.
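The same numerical check for the nonsymmetric matrix (again my addition, with an arbitrary $\epsilon$):

```python
import numpy as np

eps = 0.3  # arbitrary nonzero epsilon
B = np.array([[1.0,       1.0],
              [eps * eps, 1.0]])  # the nonsymmetric matrix of part b)

vals, vecs = np.linalg.eig(B)
print(np.sort(vals))  # still [1 - eps, 1 + eps]

# Now the eigenvectors are NOT orthogonal...
print(vecs[:, 0] @ vecs[:, 1])  # nonzero
# ...and at eps = 0 they collapse onto the single direction (1, 0)
print(np.linalg.eig(np.array([[1.0, 1.0], [0.0, 1.0]]))[1])
```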

c) Find the cosine of the angle between the two eigenvectors as a function of $\epsilon$ for $0 \le \epsilon \le 1$.

The eigenvectors of part a) are orthogonal, so the angle between them is $90^\circ$. Thus this part really refers to the eigenvectors of part b). Recalling that the angle can be defined through the inner product, we have
$$\langle x_+|x_-\rangle = |x_+|\,|x_-|\cos\theta$$
or
$$\cos\theta = \frac{\langle x_+|x_-\rangle}{\langle x_+|x_+\rangle^{1/2}\,\langle x_-|x_-\rangle^{1/2}}$$
Using the eigenvectors of (4) and (5), we find
$$\cos\theta = \frac{1 - \epsilon^2}{\sqrt{1 + \epsilon^2}\sqrt{1 + \epsilon^2}} = \frac{1 - \epsilon^2}{1 + \epsilon^2}$$
Recall that the Cauchy-Schwarz inequality guarantees that $\cos\theta$ lies between $-1$ and $+1$. When $\epsilon = 0$ we find $\cos\theta = 1$, so the eigenvectors are collinear (and degenerate), while for $\epsilon = 1$ we find instead $\cos\theta = 0$, so the eigenvectors are orthogonal.
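Finally, a sketch comparing the closed form $(1 - \epsilon^2)/(1 + \epsilon^2)$ against the angle computed from the numerical eigenvectors (my addition; the sample values of $\epsilon$ are arbitrary):

```python
import numpy as np

for eps in [0.1, 0.5, 1.0]:
    B = np.array([[1.0, 1.0], [eps**2, 1.0]])
    _, vecs = np.linalg.eig(B)
    v1, v2 = vecs[:, 0], vecs[:, 1]
    # abs() absorbs the arbitrary sign numpy may assign each eigenvector
    cos_numeric = abs(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    cos_formula = (1 - eps**2) / (1 + eps**2)
    print(eps, cos_numeric, cos_formula)  # the two cosines agree
```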
