# Solution Manual to Mathematics for Physical Science and Engineering


Mathematics for Physical Science and Engineering: Symbolic Computing Applications in Maple and Mathematica

Instructor's Manual

Frank E. Harris
University of Utah, Salt Lake City, UT, and University of Florida, Gainesville, FL

Contents

0 Introduction

1 Computers, Science, and Engineering
  1.1 Computing: Historical Note
  1.2 Basics of Symbolic Computing
  1.3 Symbolic Computation Programs
  1.4 Procedures
  1.5 Graphs and Tables
  1.6 Summary: Symbolic Computing

2 Infinite Series
  2.1 Definition of Series
  2.2 Tests for Convergence
  2.3 Alternating Series
  2.4 Operations on Series
  2.5 Series of Functions
  2.6 Binomial Theorem
  2.7 Some Important Series
  2.8 Some Applications of Series
  2.9 Bernoulli Numbers
  2.10 Asymptotic Series
  2.11 Euler-Maclaurin Formula

3 Complex Numbers and Functions
  3.1 Introduction
  3.2 Functions in the Complex Domain
  3.3 The Complex Plane
  3.4 Circular and Hyperbolic Functions
  3.5 Multiple-Valued Functions

4 Vectors and Matrices
  4.1 Basics of Vector Algebra
  4.2 Dot Product
  4.3 Symbolic Computing, Vectors
  4.4 Matrices
  4.5 Symbolic Computing, Matrices
  4.6 Systems of Linear Equations
  4.7 Determinants
  4.8 Applications of Determinants

5 Matrix Transformations
  5.1 Vectors in Rotated Systems
  5.2 Vectors under Coordinate Reflections
  5.3 Transforming Matrix Equations
  5.4 Gram-Schmidt Orthogonalization
  5.5 Matrix Eigenvalue Problems
  5.6 Hermitian Eigenvalue Problems
  5.7 Matrix Diagonalization
  5.8 Matrix Invariants

6 Multidimensional Problems
  6.1 Partial Differentiation
  6.2 Extrema and Saddle Points
  6.3 Curvilinear Coordinate Systems
  6.4 Multiple Integrals
  6.5 Line and Surface Integrals
  6.6 Rearrangement of Double Series
  6.7 Dirac Delta Function

7 Vector Analysis
  7.1 Vector Algebra
  7.2 Vector Differential Operators
  7.3 Vector Differential Operators: Further Properties
  7.4 Integral Theorems
  7.5 Potential Theory
  7.6 Vectors in Curvilinear Coordinates

8 Tensor Analysis
  8.1 Cartesian Tensors
  8.2 Pseudotensors and Dual Tensors
  8.3 Non-Cartesian Tensors
  8.4 Symbolic Computation

9 Gamma Function
  9.1 Definition and Properties
  9.2 Digamma and Polygamma Functions
  9.3 Stirling's Formula
  9.4 Beta Function
  9.5 Error Function
  9.6 Exponential Integral

10 Ordinary Differential Equations
  10.1 Introduction
  10.2 Symbolic Computing
  10.3 First-Order Equations
  10.4 ODEs with Constant Coefficients
  10.5 More General Second-Order Equations
  10.6 General Processes for Linear Equations
  10.7 Green's Functions

11 General Vector Spaces
  11.1 Vectors in Function Spaces
  11.2 Gram-Schmidt Orthogonalization
  11.3 Operators
  11.4 Eigenvalue Equations
  11.5 Hermitian Eigenvalue Problems
  11.6 Sturm-Liouville Theory

12 Fourier Series
  12.1 Periodic Functions
  12.2 Fourier Expansions
  12.3 Symbolic Computing
  12.4 Properties of Expansions
  12.5 Applications

13 Integral Transforms
  13.1 Introduction
  13.2 Fourier Transform
  13.3 Fourier Transform: Symbolic Computation
  13.4 Fourier Transform: Solving ODEs
  13.5 Fourier Convolution Theorem
  13.6 Laplace Transform
  13.7 Laplace Transform: Solving ODEs

14 Series Solutions: Important ODEs
  14.1 Series Solutions of ODEs
  14.2 Legendre Equation
  14.3 Associated Legendre Functions
  14.4 Bessel Equation
  14.5 Other Bessel Functions
  14.6 Hermite Equation
  14.7 Laguerre Equation
  14.8 Chebyshev Polynomials

15 Partial Differential Equations
  15.1 Introduction
  15.2 Separation of Variables
  15.3 Separation of Variables in Cylindrical Coordinates
  15.4 Spherical-Coordinate Problems
  15.5 Inhomogeneous PDEs
  15.6 Integral Transform Methods

16 Calculus of Variations
  16.1 Introduction
  16.2 Euler Equation
  16.3 More General Problems
  16.4 Variation with Constraints

17 Complex Variable Theory
  17.1 Analytic Functions
  17.2 Singularities
  17.3 Power Series Expansions
  17.4 Contour Integrals
  17.5 The Residue Theorem
  17.6 Evaluation of Definite Integrals
  17.7 Evaluation of Infinite Series
  17.8 Cauchy Principal Value
  17.9 Inverse Laplace Transform

18 Probability and Statistics
  18.1 Definitions and Simple Properties
  18.2 Random Variables
  18.3 Binomial Distribution
  18.4 Poisson Distribution
  18.5 Gauss Normal Distribution
  18.6 Statistics and Applications to Experiment

Appendices
  A Methods for Making Plots
  B Printing Tables of Function Values
  C Data Structures for Symbolic Computing
  D Symbolic Computing of Recurrence Formulas
  E Partial Fractions
  F Mathematical Induction
  G Constrained Extrema
  H Symbolic Computing for Vector Analysis
  I Maple Tensor Utilities
  J Wronskians in ODE Theory
  K Maple Code for P_n^m and Y_l^m

Chapter 0

INTRODUCTION

PREPARING TO USE THIS TEXT

The scientific world is presently in a transitional state with regard to symbolic computing, and there are still many physical scientists (and even engineers) who are not yet fluent in any symbolic computing language. This population even includes members of the instructional faculties of distinguished educational institutions. Despite that situation, it is important to include the use of symbolic computing tools in the education of the next generation of scientists, engineers, and their teachers.

Fortunately, it is not necessary that the instructor in a course using this text be truly expert in symbolic computation, though it is of course assumed that he or she has an appropriate background in the mathematics the text is designed to teach. To assist instructors in preparing for the tasks they face, this manual provides, in addition to problem answers, complete coding for Exercises that are to be completed using symbolic computation.

An important decision that will need to be made before using this text is the choice of the computer language to be used. One could leave this choice to individual students or present both languages, but the instructor may wish to consider issues such as language availability and cost, as well as the preparation effort required.

An instructor who uses maple should become familiar with its help facility (use the command ?topic). The maple help pages are rather complete and, though sometimes opaque to students, can provide answers to vexing problems. It is also important to learn the editing features needed to make corrections to Worksheet input, namely the split and join options of the Edit pull-down menu. An attempt to use Enter to open space in previously written code is interpreted by maple as a command to execute it, so these editing techniques are important for avoiding that action. Without good command of the editing features it is difficult to build easily understood code.

Instructors who use mathematica might want to obtain (from its free on-line documentation) the two books Core Language and Mathematics and Algorithms. These sources contain examples of the use of almost all the features relevant to the present book. No special technique is needed to correct errors or make insertions in Notebooks: because mathematica uses Shift-Enter as its command-termination signal, multi-line insertions can just be typed where they are needed.


GETTING STUDENTS ON-LINE

The text contains no information describing how students can initiate a symbolic computing session or obtain stored or printed copies of their work. The instructor will need to provide, externally to the text, all the information that is needed in that area. The ideal situation is for students to have access to symbolic computing using their own computers and with an appropriate group license. An alternative can be access through an institution's computer lab, but in that case it is important that the access hours and availability be adequate.

Chapter 1

COMPUTERS, SCIENCE, AND ENGINEERING

1.1 COMPUTING: HISTORICAL NOTE

1.2 BASICS OF SYMBOLIC COMPUTING

Exercises

Carry out these exercises using the symbolic computation system of your choice.

1.2.1. Reduce the following polynomial in x and z to fully factored form:

$$3x^2 + 3x^4 - 4xz + 3x^2z + 2x^3z + 6x^5z + z^2 - 4xz^2 - 7x^2z^2 + 6x^3z^2 - 8x^4z^2 + z^3 + 2xz^3 - 5x^2z^3 + 2x^3z^3 - 2xz^4 + 6x^3z^4 + z^5 - 8x^2z^5 + 2xz^6$$

Solution: In maple,

```
ZZ := 3*x^2+3*x^4-4*x*z+3*x^2*z+2*x^3*z+6*x^5*z+z^2-4*x*z^2
      -7*x^2*z^2+6*x^3*z^2-8*x^4*z^2+z^3+2*x*z^3-5*x^2*z^3
      +2*x^3*z^3-2*x*z^4+6*x^3*z^4+z^5-8*x^2*z^5+2*x*z^6;
factor(ZZ);
```

In mathematica (the parentheses keep the multi-line expression together),

```
ZZ = (3*x^2+3*x^4-4*x*z+3*x^2*z+2*x^3*z+6*x^5*z+z^2-4*x*z^2
      -7*x^2*z^2+6*x^3*z^2-8*x^4*z^2+z^3+2*x*z^3-5*x^2*z^3
      +2*x^3*z^3-2*x*z^4+6*x^3*z^4+z^5-8*x^2*z^5+2*x*z^6);
Factor[ZZ]
```

Both symbolic systems give an answer equivalent to

$$(x - z)(3x - z)(2xz + 1)(x^2 + z^3 + z + 1)\,.$$

1.2.2. Because the polynomial in Exercise 1.2.1 can be factored, symbolic computer systems can easily find the values of x that are its roots. Calling the polynomial in its original form poly, use one of the following:

```
solve(poly=0,x)
```

or

```
Solve[poly==0,x]
```

Note. Less convenient forms than that found here are usually obtained as the solutions to more general root-finding problems.

Solution: Letting ZZ be as defined in the solution to Exercise 1.2.1, execute one of

```
solve(ZZ = 0, x);
```

$$z,\quad \frac{z}{3},\quad -\frac{1}{2z},\quad \sqrt{-z^3-z-1}\,,\quad -\sqrt{-z^3-z-1}$$

```
Solve[ZZ == 0, x]
```

$$\left\{\left\{x \to -\tfrac{1}{2z}\right\},\ \left\{x \to \tfrac{z}{3}\right\},\ \{x \to z\},\ \left\{x \to -\sqrt{-1-z-z^3}\right\},\ \left\{x \to \sqrt{-1-z-z^3}\right\}\right\}$$

1.2.3.

(a) Starting with f(x) = 1 + 2x + 5x² − 3x³, obtain an expansion of f(x) in powers of x + 1 by carrying out the following steps: (1) Define a variable s and assign to it the value of the polynomial representing f(x); (2) Define x to have the value z − 1 and recompute s; (3) Expand s in powers of z; (4) Define z to have the value x + 1 and recompute s. You will need to be careful to have x and/or z undefined at various points in this process. (b) Expand your final expression for s and verify that it is correct. Note. Your algebra system may combine the linear and constant terms in a way that causes (x + 1) not to be explicitly visible.

Solution: In maple, these steps correspond to this coding (the final semicolon makes the result display):

```
s := 1+2*x+5*x^2-3*x^3:
x := z-1: s:
ss := expand(s):
x := 'x': z := x+1: ss;
```

In mathematica,

```
s = 1 + 2*x + 5*x^2 - 3*x^3;
x = z - 1; s;
ss = Expand[s];
Clear[x]; z = x + 1; ss
```

Both symbolic systems give a result equivalent to

$$7 - 17(x + 1) + 14(x + 1)^2 - 3(x + 1)^3$$

1.2.4.

Verify the trigonometric identity sin(x + y) = sin x cos y + cos x sin y by simplifying sin(x + y) − sin x cos y − cos x sin y to the ﬁnal result zero. Use the commands for simplifying and expanding (try both even if the ﬁrst works). These two commands do not always give the same results.

Solution: Both commands work in maple; in mathematica, Expand does not reduce the expression to zero (Simplify does).
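The identity can also be spot-checked numerically outside a CAS; a minimal Python sketch (Python is not one of the manual's languages):

```python
import math
import random

# Numerical spot-check of sin(x+y) = sin x cos y + cos x sin y
# at many random arguments; agreement should be at machine precision.
random.seed(1)
for _ in range(100):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    lhs = math.sin(x + y)
    rhs = math.sin(x)*math.cos(y) + math.cos(x)*math.sin(y)
    assert abs(lhs - rhs) < 1e-12
```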

1.2.5. Obtain numerical values, precise to 30 decimal digits, for π² and sin 0.1π. Note. Obtain these 30-digit results even if your computer does not normally compute at this precision.

Solution: Execute one of the following (illustrated only for sin 0.1π):

```
evalf(sin(0.1*Pi), 30);
```

```
N[Sin[0.1*Pi], 30]
```

Both give sin 0.1π = 0.309016994374947424102293417183. The result for π² is 9.86960440108935861883449099988.

1.2.6.

(a) Verify that your computer system knows the special function corresponding to the following integral:

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt\,.$$

In maple, it is erf(x); in mathematica, Erf[x]. (b) Write symbolic code to evaluate the integral defining erf(x), and check your code by comparing its output with calls to the function in your symbolic algebra system. Check values for x = 0, x = 1, and x = ∞. Note. Infinity is infinity in maple, Infinity in mathematica.

Solution: erf(0) = 0, erf(1) = 0.842701, erf(∞) = 1. The check at x = 1 can be made with one of

```
evalf(2/sqrt(Pi)*int(exp(-t^2), t = 0 .. 1));
```

```
N[2/Sqrt[Pi]*Integrate[E^(-t^2), {t, 0, 1}]]
```
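As an independent check outside maple/mathematica, the defining integral can be evaluated with composite Simpson quadrature in Python and compared with the standard library's erf (the helper name erf_quad is ours):

```python
import math

# Evaluate erf(x) = (2/sqrt(pi)) * integral of exp(-t^2) on [0, x]
# by the composite Simpson rule with n (even) panels.
def erf_quad(x, n=200):
    if x == 0.0:
        return 0.0
    h = x / n
    s = math.exp(0.0) + math.exp(-x*x)          # endpoint weights 1
    for k in range(1, n):
        s += (4 if k % 2 else 2) * math.exp(-(k*h)**2)
    return (2.0/math.sqrt(math.pi)) * s * h / 3.0

assert abs(erf_quad(1.0) - math.erf(1.0)) < 1e-10
assert abs(erf_quad(1.0) - 0.842701) < 1e-6     # value quoted above
```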

1.2.7. Plot the function defined in Exercise 1.2.6 for various ranges of x. Use enough ranges to get some experience in using your algebra system's plotting utility.

Solution: We illustrate for the range (0 .. 2):

```
plot(erf(x), x = 0 .. 2);
```

```
Plot[Erf[x], {x, 0, 2}]
```

1.2.8. Obtain the value of erf(π) to 30 decimal places and print out the message "erf(pi) = · · · ". Note. The value of erf must be supplied directly by the computer; you should not obtain an answer by typing it in.

Solution: In maple:

```
VAL := evalf(erf(Pi), 30):
print("erf(pi) = ", VAL);
```

"erf(pi) = ", 0.999991123853632358394731620781

or

```
printf("erf(pi) = %33.30f\n", VAL);
```

erf(pi) = 0.999991123853632358394731620781

In mathematica,

```
VAL = N[Erf[Pi], 30];
Print["erf(pi) = ", VAL]
```

erf(pi) = 0.999991123853632358394731620781

1.2.9. Obtain the sixth derivative with respect to x of 1/√(x² + z²). Evaluate it numerically for x = 0, z = 1.

Solution: maple code for this problem can be

```
d6 := diff(1/sqrt(x^2+z^2), x,x,x,x,x,x);
```

In mathematica,

```
d6 = D[1/Sqrt[x^2+z^2], x,x,x,x,x,x]
```

Both symbolic programs give results equivalent to

$$\frac{10395x^6}{(x^2+z^2)^{13/2}} - \frac{14175x^4}{(x^2+z^2)^{11/2}} + \frac{4725x^2}{(x^2+z^2)^{9/2}} - \frac{225}{(x^2+z^2)^{7/2}}$$

At x = 0, z = 1 this expression evaluates to −225. To obtain this result symbolically, execute one of

```
x := 0: z := 1: d6;
```

```
x = 0; z = 1; d6
```
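The value −225 can be confirmed without a CAS: since (1 + x²)^(−1/2) = 1 − x²/2 + 3x⁴/8 − 5x⁶/16 + · · ·, the sixth derivative at x = 0, z = 1 must be 6!·(−5/16) = −225. A finite-difference check in Python (the stencil and step size are our choices):

```python
import math

# Sixth derivative of f(x) = 1/sqrt(x^2 + 1) at x = 0 (z fixed at 1),
# via the central 6th-difference stencil [1,-6,15,-20,15,-6,1]/h^6.
def f(x):
    return 1.0 / math.sqrt(x*x + 1.0)

h = 0.01
w = [1, -6, 15, -20, 15, -6, 1]
d6 = sum(c * f((k - 3)*h) for k, c in enumerate(w)) / h**6

# Truncation + roundoff keep the error well under 1 here.
assert abs(d6 - (-225.0)) < 1.0
```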

1.2.10. (a) Evaluate the indefinite integral

$$\int \frac{dz}{(z^2 + 4)^{7/2}}\,.$$

(b) Evaluate the above integral numerically for the range (0, π).

Solution: (a) Use either of the following coding statements; both give essentially the same result:

```
int(1/(z^2+4)^(7/2), z);
```

```
Integrate[1/(z^2 + 4)^(7/2), z]
```

$$\frac{z(30 + 10z^2 + z^4)}{120(4 + z^2)^{5/2}}$$

(b) Use one of

```
int(1/(z^2+4)^(7/2), z = 0 .. Pi);
```

```
Integrate[1/(z^2 + 4)^(7/2), {z,0,Pi}]
```

Either of these yields 0.00826264 · · · .
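A quick Python cross-check of part (b), comparing Simpson quadrature of the integrand with the antiderivative quoted above (the helper names F and g, and the panel count, are ours):

```python
import math

# Antiderivative found by the CAS: z(30 + 10z^2 + z^4) / (120 (4+z^2)^(5/2))
def F(z):
    return z*(30 + 10*z*z + z**4) / (120.0*(4 + z*z)**2.5)

# Integrand 1/(z^2 + 4)^(7/2)
def g(z):
    return (z*z + 4.0)**-3.5

# Composite Simpson rule on (0, pi) with n even panels
n, a, b = 200, 0.0, math.pi
h = (b - a)/n
s = g(a) + g(b) + sum((4 if k % 2 else 2)*g(a + k*h) for k in range(1, n))
quad = s*h/3.0

assert abs(quad - (F(b) - F(a))) < 1e-10   # quadrature matches antiderivative
assert abs(quad - 0.00826264) < 1e-6       # value quoted above
```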

1.2.11. The amplitude A of the light of wavelength λ that is transmitted through a slit of width d is, in the approximation of Fraunhofer diffraction, given as A = A₀ sin x/x, where x = πd sin θ/λ, with θ the angle from the normal to the slit; see Fig. 1.2. (a) Plot A over a range of x that is symmetric about x = 0 and contains 11 extrema. (b) The places where A = 0 occur are at x = ±π, ±2π, · · · , and the extrema lie between (but not exactly half-way between) these zeros. Using the plot facilities of your symbolic computing system, change the range of your plot in ways that permit each extremum to be located to an accuracy of 0.001 in x. (c) Returning now to the formula for A, compute the integral of A² for each region bounded by adjacent zeros of A, and thereby determine the ratio of the intensity for the region containing x = 0 (referred to as the principal maximum) to the intensity in one of the immediately adjacent regions.

Solution: (a) Use one of the following statements:

```
plot(sin(x)/x, x = -6*Pi .. 6*Pi);
```

```
Plot[Sin[x]/x, {x, -6*Pi, 6*Pi}]
```

(b) The extrema are at x = 0, ±4.493, ±7.725, ±10.904, ±14.066, ±17.221. Illustrating for the extremum near 4.5, we can change the plot range repeatedly until reaching something similar to one of the following:

```
plot(sin(x)/x, x = 4.493 .. 4.494);
```

```
Plot[Sin[x]/x, {x, 4.493, 4.494}]
```

(c) The intensity of the principal maximum is given by one of

```
evalf(int(sin(x)^2/x^2, x = -Pi .. Pi));
```

```
N[Integrate[Sin[x]^2/x^2, {x,-Pi,Pi}]]
```

Either of these yields 2.836. The intensities of the other regions are given (for nonzero n) by one of

```
evalf(int(sin(x)^2/x^2, x = n*Pi .. (n+1)*Pi));
```

```
N[Integrate[Sin[x]^2/x^2, {x, n*Pi, (n+1)*Pi}]]
```

For successive n values these integrals have the values 0.074, 0.026, 0.013, 0.008, 0.005. The intensity of the principal maximum is 2.836/0.074 ≈ 38 times that of either adjacent region.

1.2.12.

Consider the function f(x) = x⁶ − 2x⁵ + x⁴ − 1.783x³ + 1.234x² + 0.877x − 0.434. Find all the roots of f(x) to three decimal places by making plot(s) over a suitable range or ranges.

Solution: The roots are most easily found by executing, in maple,

```
f := x^6-2*x^5+x^4-1.783*x^3+1.234*x^2+0.877*x-0.434;
solve(f);
```

or, in mathematica,

```
f = x^6-2*x^5+x^4-1.783*x^3+1.234*x^2+0.877*x-0.434
Solve[f == 0, x]
```

There are real roots near −0.5566, 0.3913, 0.9416, and 1.7040. There is a pair of complex roots near −0.2402 ± 1.0882i. The real roots can also be found graphically by making plots limited to the vicinity of individual roots. One way to find the complex roots graphically starts by defining a function f₁(x + iy) obtained by dividing f(x + iy) by the product of the factors x − xₙ for the real roots. We can then minimize |f₁| graphically with respect to x (for fixed y) and then with respect to y (for fixed x), iterating until |f₁| has been made acceptably small.
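The quoted roots can be verified directly; in Python (outside the manual's languages), evaluating f by Horner's rule at each quoted root gives a near-zero residual:

```python
# f(x) = x^6 - 2x^5 + x^4 - 1.783x^3 + 1.234x^2 + 0.877x - 0.434,
# evaluated by Horner's rule (works for real and complex arguments).
def f(x):
    return ((((((x - 2)*x + 1)*x - 1.783)*x + 1.234)*x + 0.877)*x - 0.434)

# Real roots quoted in the solution
for r in (-0.5566, 0.3913, 0.9416, 1.7040):
    assert abs(f(r)) < 1e-3

# One member of the complex pair quoted in the solution
val = f(complex(-0.2402, 1.0882))
assert abs(val) < 1e-2
```

As a sanity check, the six quoted roots sum to about 2, matching minus the coefficient of x⁵.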

1.2.13. Given the function f(x) = a³x³ − x² + 3x − 4a, find by making suitable plots a value of a such that f(2) = 0. Is your value of a unique?

Solution: Setting x = 2, we require 8a³ − 4a + 2 = 0. The plot shows that there is only one crossing of the x-axis, so there is one real value of a and two conjugate complex values. Refining the plot range, the real value of a is approximately −0.8846.
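A short bisection sketch in Python confirms the real root (the bracket [−1, 0] follows from g(−1) = −2 and g(0) = 2):

```python
# Locate the real root of g(a) = 8a^3 - 4a + 2 by bisection.
def g(a):
    return 8*a**3 - 4*a + 2

lo, hi = -1.0, 0.0          # g(lo) < 0 < g(hi)
for _ in range(60):
    mid = 0.5*(lo + hi)
    if g(lo)*g(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5*(lo + hi)

assert abs(root - (-0.8846)) < 5e-4   # value quoted above
assert abs(g(root)) < 1e-12
```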

1.3 SYMBOLIC COMPUTATION PROGRAMS

Exercises

Carry out these exercises using the symbolic computation system of your choice.

1.3.1. Code the problem of Example 1.3.2 and execute it for a variety of values of m and x. Verify that your code performs as expected.

Solution: Run the codes given in the Example.

1.3.2.

Write code that examines the values of two numerical quantities x and y, and produces as output one of the strings Case I or Case II, whichever applies. Case I is when x + y > 10 and 0 < xy < 3; Case II occurs otherwise.

Solution: In maple (can test with various values of x and y):

```
x := 20: y := 0.1:
if (x+y > 10 and x*y > 0 and x*y < 3) then
   print("Case I")
else
   print("Case II")
end if;
```

"Case I"

In mathematica (can test with various values of x and y):

```
x = 20; y = -0.1;
If[x + y > 10 && x*y > 0 && x*y < 3,
   Print["Case I"], Print["Case II"]]
```

Case II
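The same logic translates directly into other languages; a Python sketch (the function name case is ours):

```python
# Case I when x + y > 10 and 0 < x*y < 3; Case II otherwise.
def case(x, y):
    return "Case I" if (x + y > 10 and 0 < x*y < 3) else "Case II"

assert case(20, 0.1) == "Case I"    # matches the maple run above
assert case(20, -0.1) == "Case II"  # matches the mathematica run above
```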

1.3.3. Code the first two problems of Example 1.3.3 and execute them with a variety of input parameters (n and t). For the first problem, verify that your code yields the exact values given by the formula that was presented as its solution. For the second problem, see how rapidly the finite summations converge toward the analytical result. Note. The fact that individual terms get small is not a guarantee that a series converges. In the present case, the series is convergent.

Solution: Run the codes given in the Example.

1.3.4.

Determine what happens if, in the second problem of Example 1.3.3, the decimal point is removed from the 1./n^4 in the fourth line of the coding. Explain the reason for what you observe.

Solution: The result is then expressed as a fraction whose numerator and denominator are both large integers, because the summation is then carried out in exact rational arithmetic rather than in floating point.

1.3.5.

Write code which adds members of the series

$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$$

until the sum has reached a value exceeding 10, then giving as output the number of addends used.

Solution: In maple:

```
s := 0.: n := 1:
while (s <= 10) do s := s + 1./n: n := n + 1 end do:
print(n - 1);
```

Only a fragment of the next solution survives in this copy; it counts the integers n from 1 to 1000 for which |sin(0.27n)| exceeds 0.4. In maple:

```
k := 0:
for n from 1 to 1000 do
   if (abs(sin(0.27*n)) > 0.4) then k := k + 1 end if
end do:
k;
```

1.4 PROCEDURES

In mathematica, a procedure that sums the series of terms 1/n⁴, stopping when a term falls below a tolerance t:

```
inv4powerSum[t_] := Module[ {s = 0., n = 1},
   While[1./n^4 >= t, s = s + 1./n^4; n = n + 1];
   Print["Largest n used = ", n - 1];
   s ]
```

Calling this procedure:

```
inv4powerSum[.0001]
```

Largest n used = 10
1.08204
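For Exercise 1.3.5, a brute-force count in Python (not one of the manual's languages) confirms the number of addends needed:

```python
# Add harmonic-series terms 1 + 1/2 + 1/3 + ... until the sum exceeds 10,
# then report how many addends were used.
s, n = 0.0, 0
while s <= 10:
    n += 1
    s += 1.0/n

print(n)   # the partial sum first exceeds 10 after 12367 terms
```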

1.4.3. Enter the code for the procedure legenP of Example 1.4.5. (a) Verify that it gives correct results by checking that

$$P_4(x) = \frac{35}{8}x^4 - \frac{15}{4}x^2 + \frac{3}{8}$$

and that (for several n, some even and some odd) Pₙ(1) = 1. (b) Remove the simplifying command expand or Expand and notice how the results are changed.

Solution: (a) Run P4 := legenP(4,x) or P4 = legenP[4,x], obtaining P4 = 35x⁴/8 − 15x²/4 + 3/8. (b) Without expand or Expand the recursion produces results that need simplification.

1.4.4.

To see how the use of recursion in procedure definitions can make the coding cleaner, consider a procedure that computes the Legendre polynomial Pₙ(x) without using the recursive strategy illustrated in Example 1.4.5. Referring to that example for definitions, write a procedure altLegP that computes the Legendre polynomial Pₙ(x) in the following way:

1. Letting P be the polynomial Pⱼ in Step j, and assuming that Q and R contain the polynomials of the two previous steps, write for P the formula P = [(2j − 1)xQ − (j − 1)R]/j.
2. After making P, update Q and R by moving the contents of Q into R and those of P into Q.
3. You will need starting values of Q and R to initiate this process and must organize the coding to deal with special cases and control the number of steps to be taken.

Solution: In maple,

```
altLegP := proc(n,x) local P,Q,R,j;
   P := 1;
   if (n > 0) then Q := P; P := x end if;
   for j from 2 to n do
      R := Q;  Q := P;
      P := expand( ((2*j-1)*x*Q - (j-1)*R)/j )
   end do;
   P
end proc:
```

In mathematica,

```
altLegP[n_, x_] := Module[ {p, q, r, j},
   p = 1;
   If[n > 0, q = p; p = x];
   Do[ r = q; q = p;
       p = Expand[((2*j-1)*x*q - (j-1)*r)/j], {j,2,n} ];
   p ]
```
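A Python analogue of altLegP, iterating the same recurrence numerically with exact Fraction arithmetic (our choice, so the checks of Exercise 1.4.3 come out exact):

```python
from fractions import Fraction

# P_j(x) = ((2j-1) x P_{j-1}(x) - (j-1) P_{j-2}(x)) / j,
# keeping only the two previous values, as in altLegP.
def altLegP(n, x):
    if n == 0:
        return Fraction(1)
    r, q = Fraction(1), Fraction(x)     # P_0 and P_1
    for j in range(2, n + 1):
        r, q = q, ((2*j - 1)*Fraction(x)*q - (j - 1)*r) / j
    return q

# Checks from Exercise 1.4.3: P_4(x) = 35x^4/8 - 15x^2/4 + 3/8, P_n(1) = 1.
x = Fraction(1, 2)
assert altLegP(4, x) == Fraction(35, 8)*x**4 - Fraction(15, 4)*x**2 + Fraction(3, 8)
assert all(altLegP(n, 1) == 1 for n in range(8))
```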

1.4.5. Generate formulas that give sin nx and cos nx in terms of sin x and cos x by defining procedures based on the formulas

sin(n + 1)x = sin nx cos x + cos nx sin x ,
cos(n + 1)x = cos nx cos x − sin nx sin x .

Solution: maple procedures:

```
ssin := proc(n,x) local S;
   if n=1 then S := sin(x) end if;
   if n>1 then S := ssin(n-1,x)*cos(x) + ccos(n-1,x)*sin(x) end if;
   expand(S)
end proc;

ccos := proc(n,x) local C;
   if n=1 then C := cos(x) end if;
   if n>1 then C := ccos(n-1,x)*cos(x) - ssin(n-1,x)*sin(x) end if;
   expand(C)
end proc;
```

mathematica procedures:

```
ssin[n_, x_] := Module[ {s},
   If[n == 1, s = Sin[x] ];
   If[n > 1, s = ssin[n - 1, x]*Cos[x] + ccos[n - 1, x]*Sin[x] ];
   Expand[s] ]

ccos[n_, x_] := Module[ {c},
   If[n == 1, c = Cos[x] ];
   If[n > 1, c = ccos[n - 1, x]*Cos[x] - ssin[n - 1, x]*Sin[x] ];
   Expand[c] ]
```
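The recursion can be checked numerically; a Python sketch replaces the symbolic expansion with floating-point evaluation and compares against sin nx and cos nx directly:

```python
import math

# Mutually recursive evaluation of sin(nx) and cos(nx)
# from the addition formulas, mirroring ssin/ccos.
def ssin(n, x):
    if n == 1:
        return math.sin(x)
    return ssin(n - 1, x)*math.cos(x) + ccos(n - 1, x)*math.sin(x)

def ccos(n, x):
    if n == 1:
        return math.cos(x)
    return ccos(n - 1, x)*math.cos(x) - ssin(n - 1, x)*math.sin(x)

for n in range(1, 8):
    for x in (0.3, 1.1, 2.7):
        assert abs(ssin(n, x) - math.sin(n*x)) < 1e-10
        assert abs(ccos(n, x) - math.cos(n*x)) < 1e-10
```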

1.5 GRAPHS AND TABLES

Exercises

Carry out these exercises using the symbolic computation system of your choice.

1.5.1. Using the procedure for generating a table of function values (Example 1.5.2), make a table giving values of the function erfc(x) for 0 ≤ x ≤ 10 in unit steps. This function is defined as 1 − erf(x) and is a known function in both maple and mathematica.

Solution: Using the maple procedure MakeTable,

> MakeTable(erfc, 0, 1, 10);

Values of erfc(x) for x from 0 to 10 in steps of 1

      x        erfc(x)
    0.000   1.000000E+00
    1.000   1.572992E-01
    2.000   4.677735E-03
    3.000   2.209050E-05
    4.000   1.541726E-08
    5.000   1.537460E-12
    6.000   2.151974E-17
    7.000   4.183826E-23
    8.000   1.122430E-29
    9.000   4.137032E-37
   10.000   2.088488E-45

Using the same procedure in mathematica, MakeTable[Erfc, 0, 10, 1]:

Values of Erfc[x] for x from 0 to 10 in steps of 1

      x        Erfc[x]
    0.000   1.000000
    1.000   1.572992*10^(-1)
    2.000   4.677735*10^(-3)
    3.000   2.209050*10^(-5)
    4.000   1.541726*10^(-8)
    5.000   1.537460*10^(-12)
    6.000   2.151974*10^(-17)
    7.000   4.183826*10^(-23)
    8.000   1.122430*10^(-29)
    9.000   4.137032*10^(-37)
   10.000   2.088488*10^(-45)
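The tabulated values can be reproduced outside the symbolic systems; Python's math.erfc covers the same range (make_table here is a hypothetical stand-in for the text's MakeTable procedure):

```python
import math

def make_table(f, lo, hi, step):
    """Return (x, f(x)) pairs; a minimal stand-in for the MakeTable procedure."""
    rows = []
    x = lo
    while x <= hi:
        rows.append((x, f(x)))
        x += step
    return rows

rows = make_table(math.erfc, 0, 10, 1)
for x, v in rows:
    print(f"{x:7.3f}  {v:.6E}")
```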

1.5.2. Using the procedure dfac developed in Example 1.4.3, make a table of double factorials for the range −1 ≤ n ≤ 20.

Solution:

MakeTable(dfac, -1, 20, 1):

Values of dfac(x) for x from -1 to 20 in steps of 1

      x        dfac
   -1.000   1.000000E+00
    0.000   1.000000E+00
    1.000   1.000000E+00
    2.000   2.000000E+00
    3.000   3.000000E+00
    4.000   8.000000E+00
    5.000   1.500000E+01
    6.000   4.800000E+01
    7.000   1.050000E+02
    8.000   3.840000E+02
    9.000   9.450000E+02
   10.000   3.840000E+03
   11.000   1.039500E+04
   12.000   4.608000E+04
   13.000   1.351350E+05
   14.000   6.451200E+05
   15.000   2.027025E+06
   16.000   1.032192E+07
   17.000   3.445942E+07
   18.000   1.857946E+08
   19.000   6.547291E+08
   20.000   3.715891E+09
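An equivalent double-factorial routine in Python (a sketch; the text's dfac is defined in Example 1.4.3) reproduces the table, including the conventions dfac(−1) = dfac(0) = 1:

```python
def dfac(n):
    """Double factorial n!!, with dfac(-1) = dfac(0) = 1."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result
```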

In mathematica, a similar table is produced by the command MakeTable[dfac, -1, 20, 1].

1.5.3. By graphical methods, find the three smallest positive roots of the transcendental equation cot x − 0.1x = 0 and estimate the precision of your answers.

Solution: Plot cot(x) − 0.1x for 0 ≤ x ≤ 8 and identify roots near 1.5, 4.3, and 7.25. Then refine your results (near 1.5), eventually reaching one of

   plot(cot(x)-0.1*x, x = 1.4288 .. 1.4290);
   Plot[Cot[x] - 0.1*x, {x, 1.4288, 1.4290}]

showing a root at 1.42887 ± 0.000001. The next two roots are at 4.30580 and 7.22811.

1.5.4. Figure out how to plot three functions on the same graph, and then make a plot of sin x, 2 sin 2x, and 3 sin 3x on the range 0 ≤ x ≤ π.

Note. Including the coefficients 2 and 3 makes it easier to tell which plot corresponds to which function. Most versions of maple and mathematica by default plot different functions in different colors, but there is nothing universal about the color assignment.

Solution:

> plot([sin(x), 2*sin(2*x), 3*sin(3*x)], x = 0 .. Pi);
Plot[{Sin[x], 2*Sin[2*x], 3*Sin[3*x]}, {x, 0, Pi}]
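The roots located graphically in Exercise 1.5.3 can also be confirmed by a short numerical bisection; this Python sketch is an illustration, not part of the original solution:

```python
import math

def f(x):
    # cot x - 0.1 x, the function whose roots are sought in Exercise 1.5.3
    return math.cos(x) / math.sin(x) - 0.1 * x

def bisect(g, a, b, tol=1e-10):
    """Bisection on a sign-changing bracket [a, b]."""
    ga = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if ga * g(m) <= 0:
            b = m
        else:
            a, ga = m, g(m)
    return 0.5 * (a + b)

roots = [bisect(f, 1.4, 1.5), bisect(f, 4.2, 4.4), bisect(f, 7.1, 7.3)]
```

The three values reproduce 1.42887, 4.30580, and 7.22811.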

1.6 SUMMARY: SYMBOLIC COMPUTING

Exercises

1.6.1. Using the symbolic computing system of your choice, open a new worksheet or notebook and, going through the earlier sections of this chapter, enter pieces of code that you anticipate might have future use. For each piece of code you add to your workspace, include comments that remind you of its purpose. When the code is that of a complete procedure, your comments should identify each procedure argument and provide any information needed to understand the output. Once the correctness and appropriateness of a piece of code has been verified and documented, preserve it for future use. It is recommended that you do this by using copy-and-paste techniques to place the coding into a text file that can be loaded as input into a symbolic computing session. You may wish to keep a set of these text files together in a single folder (directory), each with a name that makes its content easily identifiable.

It is also possible to preserve these codings in workspaces. The main limitation of that approach is that, because workspaces are written in the proprietary format of the symbolic system, they are available only when you are working within that system.

1.6.2. If you have not already included it in response to Exercise 1.6.1, add the procedure dfac to your preserved collection of symbolic code.

1.6.3. If you have not already included it in response to Exercise 1.6.1, add the procedure MakeTable to your preserved collection of symbolic code.

Chapter 2

INFINITE SERIES

2.1 DEFINITION OF SERIES

Exercises

2.1.1. A repeating decimal is equivalent to a geometric series. For example,

   0.909090··· = 0.90 + (0.01)(0.90) + (0.01)²(0.90) + ··· .

Using this information, find the fractions equivalent to

   (a) 0.909090···   (b) 0.243243243···   (c) 0.538461538461···

Solution:

   (a) 90/99 = 10/11 ,   (b) 243/999 = 9/37 ,   (c) 538461/999999 = 7/13 .

2.1.2. We may encounter series that are summable to an analytical result but that we do not recognize as known summations. Sometimes a symbolic computation system can evaluate the sum. Write code that will form Σ_{n=1}^{∞} u_n, and see which of the following summations your symbolic computation system can evaluate.

   (a) 1 − 1/3 + 1/5 − 1/7 + ···      (d) 1 − 1/2 + 1/3 − 1/4 + ···
   (b) 1 + 1/4 + 1/9 + 1/16 + ···     (e) 1 + 1/3 + 1/5 + 1/7 + ···
   (c) 1 − 1/4 + 1/9 − 1/16 + ···     (f) 1 − 1/4 + 1/7 − 1/10 + ···

Solution: Define a function whose values for integer arguments are the summands; for part (a), one of

   u := proc(n); (-1)^(n-1)/(2*n-1) end proc:
   u[n_] := (-1)^(n-1)/(2*n-1)

and then use the summation command, one of

   sum( u(n), n = 1 .. infinity);
   Sum[ u[n], {n, 1, Infinity} ]

Repeating for each of the series, the results are: (a) π/4, (b) π²/6, (c) π²/12, (d) ln 2, (e) divergent,

   (f) LerchPhi(−1, 1, 1/3)/3 = π√3/9 + (ln 2)/3 .

LerchPhi is defined as a sum of the type encountered here, so maple did not really evaluate this sum. One can verify that the two expressions for the sum evaluate to the same decimal value.

2.1.3. Obtain (to six significant figures) a numerical value for

   Σ_{n=1}^{∞} (sin n)/n³ .

Note. Here (and in most mathematical contexts) angles are in radians. Hint. Do not panic if your answer is an unfamiliar function or something that is not useful. If all else fails, approach your answer through large finite N.

Solution: Both symbolic systems can evaluate this sum formally. In maple,

> sum(sin(n)/n^3, n = 1 .. infinity);

   −(I/2) ( polylog(3, e^I) − polylog(3, e^(−I)) )

In mathematica,

Sum[Sin[n]/n^3, {n, 1, Infinity}]

   (i/2) ( PolyLog[3, e^(−i)] − PolyLog[3, e^i] )

Making numerical evaluations to six figures, we get 0.942869.

2.1.4. Partial fraction decompositions (see Appendix E) are sometimes helpful in evaluating summations. Apply the technique to show that the following summation converges and has the indicated value:

   Σ_{n=1}^{∞} 1/(n(n+1)) = 1 .

Solution: We find

   Σ_{n=1}^{∞} 1/(n(n+1)) = Σ_{n=1}^{∞} [ 1/n − 1/(n+1) ] = (1 − 1/2) + (1/2 − 1/3) + ··· .

The second half of each term cancels against the first half of the next term, leaving only the first half of the first term, 1.

2.1.5. Another technique for establishing the value of a summation (most useful if you know or suspect the value and need to confirm it) is the method of mathematical induction (see Appendix F). Prove that

   Σ_{n=1}^{∞} 1/((2n−1)(2n+1)) = 1/2

in the following way: First, use mathematical induction to show that the partial sum (through n = m) has the value s_m = m/(2m+1). Then take the limit of s_m as m → ∞.
Note. The result is easily obtained from a partial fraction decomposition, but this problem is specifically an exercise in the use of mathematical induction.

Solution: To use mathematical induction, assume s_m is as given above. Then

   s_{m+1} = m/(2m+1) + 1/((2m+1)(2m+3)) = (m(2m+3) + 1)/((2m+1)(2m+3))
           = ((m+1)(2m+1))/((2m+1)(2m+3)) = (m+1)/(2m+3) .

The above shows that if s_m has the assumed form for any particular m, so does s_{m+1}. If we also note that s₁ = 1/(1·3) is consistent with our assumed formula, the formula is then proved for all positive integer m. The large-m limit of s_m clearly exists and is 1/2.
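Both telescoping results (Exercises 2.1.4 and 2.1.5) are easy to confirm numerically; a short Python check (illustrative, not part of the original solutions):

```python
# partial sum of 1/(n(n+1)) through N = 1000: telescopes to 1 - 1/1001
s1 = sum(1.0 / (n * (n + 1)) for n in range(1, 1001))

# partial sum of 1/((2n-1)(2n+1)) through m = 100: should equal m/(2m+1)
s2 = sum(1.0 / ((2 * n - 1) * (2 * n + 1)) for n in range(1, 101))
```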

2.2 TESTS FOR CONVERGENCE

Exercises

2.2.1. Prove that

   (a) If a series Σ_{n=1}^{∞} u_n converges, so also does Σ_{n=1}^{∞} k u_n, where k is any constant.
   (b) If a series Σ_{n=1}^{∞} u_n diverges, so also does Σ_{n=1}^{∞} k u_n, where k is any constant.
   (c) If a series Σ_{n=n0}^{∞} u_n converges, so also does Σ_{n=n1}^{∞} u_n, where n1 > n0.

Solution: To prove parts (a) and (b), bring the common factor k outside the sum. For part (c), note that the first sum is equal to the second sum plus the finite set of terms u_{n0} + ··· + u_{n1−1}.

2.2.2.

Prove the following limit tests for a series of terms u_n:

   (a) If, for some p > 1, lim_{n→∞} n^p u_n < ∞, then Σ_n u_n converges.
   (b) If lim_{n→∞} n u_n > 0, then Σ_n u_n diverges.

Solution: For (a), let the limit be C. Then, for large n, u_n < C/n^p. If p > 1 the series Σ C/n^p is convergent, and by the comparison test, so is the u_n series. For (b), let the limit be C, so u_n > C/n at large n. The series Σ C/n is a multiple of the harmonic series, which is divergent. By the comparison test, the u_n series also diverges.

2.2.3. (a) Show that the series

   Σ_{n=2}^{∞} 1/(n (ln n)²)

converges.
(b) Write a symbolic computation program that will evaluate

   Σ_{n=2}^{1000} 1/(n (ln n)²)

and, using Eq. (2.11), make an estimate of the sum of the infinite series. Determine the accuracy you can guarantee for your estimate.

Solution: (a) Apply the Cauchy integral test, for which

   ∫_x^∞ dt/(t (ln t)²) = 1/ln x .

This is a convergent integral, so the series also converges.
(b) Here are the results for the sum to n = 1000:

   S1000 := 1.964988446                                     (maple)
   Sum[1./n/Log[n]^2, {n, 2, 1000}]   1.9649884501113843    (mathematica)

The infinite series has a value that is between S1000 + 1/ln 1001 and S1000 + 1/ln 1000. These bounds are 2.10973 and 2.10976.
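The partial sum and the integral-test bounds can be checked in ordinary floating point (a Python illustration, not from the text):

```python
import math

# partial sum through n = 1000
s = sum(1.0 / (n * math.log(n) ** 2) for n in range(2, 1001))

# integral-test bounds on the infinite sum:
# S1000 + 1/ln 1001 < S < S1000 + 1/ln 1000
lower = s + 1 / math.log(1001)
upper = s + 1 / math.log(1000)
```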

2.2.4. Determine whether each of the following series converges.

   (a) Σ_{n=2}^{∞} (ln n)^{−1}         (b) Σ_{n=0}^{∞} 1/(n+3)             (c) Σ_{n=0}^{∞} 1/(2n−1)
   (d) Σ_{n=2}^{∞} [n(n−1)]^{−1/2}     (e) Σ_{n=2}^{∞} √(n²+1)/(n²−1)      (f) Σ_{n=1}^{∞} 1/(2n(2n−1))
   (g) Σ_{n=1}^{∞} n!/10^{n+1}         (h) Σ_{n=0}^{∞} 2^{5n}/5^{2n}       (i) Σ_{n=1}^{∞} arctan(n)/n²

Solution: (a) through (e), divergent (can compare with a harmonic series or multiple thereof), (f) convergent (compare with 1/(2n−1)²), (g) divergent (terms do not go to zero), (h) divergent (term is (32/25)^n), (i) convergent (arctan(n) < π/2).

2.2.5. Determine whether each of the following series converges.

   (a) Σ_{n=1}^{∞} 1/(n(n+1))       (b) Σ_{n=1}^{∞} 1/(n 2^n)          (c) Σ_{n=0}^{∞} e^{−n}
   (d) Σ_{n=2}^{∞} 1/(n ln n)       (e) Σ_{n=1}^{∞} 1/(n · n^{1/n})    (f) Σ_{n=0}^{∞} (2n)! n!/(3n)!
   (g) Σ_{n=1}^{∞} ln(1 + 1/n)      (h) Σ_{n=1}^{∞} arccot(n)/n^{1/2}  (i) Σ_{n=0}^{∞} arccot(n)

Solution: (a) convergent (compare with 1/n²), (b) convergent (compare with 1/2^n), (c) convergent (geometric series r^n with r = 1/e), (d) divergent (Cauchy integral test diverges), (e) divergent (n^{1/n} < 2 for all integer n; compare with the harmonic series), (f) convergent (after canceling (2n)! there remains a product of n fractions, each less than 1/2), (g) divergent (expand each term, then sum, obtaining a harmonic series plus other series that all converge), (h) Write arccot(n) = arctan(1/n) = 1/n − 1/(3n³) + ···, so the terms of arccot(n)/n^{1/2} behave as n^{−3/2}; convergent. (i) See the discussion under (h); like the harmonic series, divergent.

2.2.6.

Try to evaluate the summations of the two preceding exercises using maple or mathematica. This approach will confirm that some of the divergent series are divergent and will give values of some of the convergent series. Notice that symbolic computation is not yet as accomplished as skilled humans at identifying series that are divergent.

2.2.7. Write a symbolic computation program to evaluate Σ_{n=1}^{1000} n^{−1}, and use the result to obtain upper and lower bounds on the Euler-Mascheroni constant.

Solution: Here are symbolic computation programs:

   S1000 := add(1./n, n = 1 .. 1000);        S1000 := 7.485470861
   S1000 = Sum[1./n, {n, 1, 1000}];

Using limits obtained from the Cauchy integral test, the Euler-Mascheroni constant γ is bounded by S1000 − ln 1001 and S1000 − ln 1000. These bounds are 0.5767 and 0.5777.
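A quick floating-point version of the same bound (illustrative Python, not from the text):

```python
import math

h = sum(1.0 / n for n in range(1, 1001))   # harmonic number H_1000
lower = h - math.log(1001)                  # lower bound on gamma
upper = h - math.log(1000)                  # upper bound on gamma
```

The true value γ = 0.5772156649... lies between the two bounds.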

2.3 ALTERNATING SERIES

Exercises

2.3.1. Determine whether each of these series is convergent, and if so, whether it is absolutely convergent:

   (a) ln 2/2 − ln 3/3 + ln 4/4 − ln 5/5 + ln 6/6 − ··· ,
   (b) 1/1 + 1/2 − 1/3 − 1/4 + 1/5 + 1/6 − 1/7 − 1/8 + ··· ,
   (c) 1 − 1/2 − 1/3 + 1/4 + 1/5 + 1/6 − 1/7 − 1/8 − 1/9 − 1/10 + 1/11 + ··· + 1/15 − 1/16 − ··· − 1/21 + ··· .

Solution: None of these series is absolutely convergent, as the sums of the absolute values of the terms are all at least as divergent as the harmonic series. The terms of (a) alternate in sign and approach zero as a limit, so this series converges. (b) One cannot rearrange the order of the terms in this series, but they can be grouped into adjacent pairs, which decrease to zero and alternate in sign. This series therefore converges. (c) The terms of this series can also be grouped. The nth set of adjacent terms of the same sign starts with 1/m, with m = 1 + n(n−1)/2, and contains n members. This set therefore sums to a value less than n/m, which for large n approaches zero as a limit. This series therefore converges.

2.3.2. Catalan’s constant, sometimes designated β(2), is defined by the series

   β(2) = Σ_{k=0}^{∞} (−1)^k (2k+1)^{−2} = 1/1² − 1/3² + 1/5² − ··· .

Using symbolic computations, calculate β(2) to six-digit accuracy.
Hint. The rate of convergence is enhanced by pairing the terms:

   (4k−1)^{−2} − (4k+1)^{−2} = 16k/(16k²−1)² .

If you have carried enough digits in your summation Σ_{1≤k≤N} 16k/(16k²−1)², additional significant figures may be obtained by setting upper and lower bounds on the tail of the series, Σ_{k=N+1}^{∞}. These bounds may be set by comparison with integrals, as in the Cauchy integral test. Check your work against the built-in value of Catalan’s constant in your symbolic computation system; its name is Catalan (the same in both maple and mathematica).

Solution: By making the term pairing shown above, we can use the Cauchy integral test to obtain bounds. These bounds (for the k series through k = N) are

   1/(2(16N² − 1)) > Σ_{k=N+1}^{∞} 16k/(16k²−1)² > 1/(2(16(N+1)² − 1)) .

Comparing the k series with β(2), we have

   β(2) = 1 − Σ_{k=1}^{N} 16k/(16k²−1)² − Remainder ,

where the remainder is bounded as shown. Taking N = 50, we find the explicit part of β(2) to be 0.91597785, with the remainder between 0.00001201 and 0.00001250. The remainder estimate gives us two more significant digits of accuracy; overall, we have 0.9159653 < β(2) < 0.9159658. The accurate value of β(2) is 0.9159656. Here are some relevant symbolic computations. In maple,

> 1.-add(16.*k/(16*k^2-1)^2, k = 1 .. 50);
                    0.9159778470
> 1.-add(16.*k/(16*k^2-1)^2, k = 1 .. 200);
                    0.9159663716
> evalf(Catalan);
                    0.9159655942

In mathematica (after positioning the cursor on the answer and pressing Enter),

1.-Sum[16.*k/(16*k^2-1)^2, {k,1,50}]
                    0.9159778469771014
1.-Sum[16.*k/(16*k^2-1)^2, {k,1,200}]
                    0.915966371531943
N[Catalan]
                    0.915965594177219
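The bracketing argument of Exercise 2.3.2 can be reproduced in plain Python (an illustration; the symbolic systems are not required for this check):

```python
N = 50
# explicit part: 1 minus the paired series through k = N
explicit = 1 - sum(16.0 * k / (16 * k * k - 1) ** 2 for k in range(1, N + 1))

# integral-test bounds on the remainder (tail of the paired series)
rem_hi = 0.5 / (16 * N * N - 1)
rem_lo = 0.5 / (16 * (N + 1) ** 2 - 1)
lower, upper = explicit - rem_hi, explicit - rem_lo
```

The interval (lower, upper) contains the accurate value 0.9159656 of Catalan's constant.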

2.4 OPERATIONS ON SERIES

Exercises

2.4.1. The convergence improvement of Example 2.4.2 may be carried out more effectively if we rewrite α₂, from Eq. (2.21), in a more symmetric form. Replacing n by n − 1 and changing the lower summation limit to n = 2, we get

   α′₂ = Σ_{n=2}^{∞} 1/((n−1)n(n+1)) = 1/4 .

Show that by combining ζ(3) and α′₂ we can obtain convergence as n^{−5}.

Solution: Start from

   ζ(3) − α′₂ = 1 + Σ_{n=2}^{∞} [ 1/n³ − 1/(n(n²−1)) ] .

Inserting the value of α′₂ and simplifying the summand, we reach

   ζ(3) = 1 + 1/4 − Σ_{n=2}^{∞} 1/(n³(n²−1)) ,

which converges as n^{−5}.
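A quick float check of the accelerated series (illustrative Python; ζ(3) = 1.2020569032...):

```python
# 1 + 1/4 minus the rapidly converging correction series, summed through n = 200
zeta3 = 1 + 0.25 - sum(1.0 / (n ** 3 * (n * n - 1)) for n in range(2, 201))
```

With only 199 terms of the n⁻⁵ series the result already agrees with ζ(3) to about ten digits.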

2.4.2. The formula for α_p, Eq. (2.21), is a summation of the form Σ_{n=1}^{∞} u_n(p), with

   u_n(p) = 1/(n(n+1) ··· (n+p)) .

Applying a partial fraction decomposition to the first and last factors of the denominator, i.e.,

   1/(n(n+p)) = (1/p) [ 1/n − 1/(n+p) ] ,

show that

   u_n(p) = [ u_n(p−1) − u_{n+1}(p−1) ]/p    and that    Σ_{n=1}^{∞} u_n(p) = 1/(p · p!) .

Hint. Consider using mathematical induction. In that connection it is useful to note that u₁(p−1) = 1/p!.

Solution: Using the partial fraction decomposition we write u_n(p) as two terms that correspond to the recurrence formula. If we sum both sides of the recurrence formula for n from 1 to ∞, the only terms that do not cancel give

   Σ_{n=1}^{∞} u_n(p) = u₁(p−1)/p = 1/(p · p!) .

2.5 SERIES OF FUNCTIONS

Exercises

2.5.1.

Show that

   (a) sin x = Σ_{n=0}^{∞} (−1)^n x^{2n+1}/(2n+1)! ,
   (b) cos x = Σ_{n=0}^{∞} (−1)^n x^{2n}/(2n)! .

Solution: Use the formula for Maclaurin series with

   d^{2n} sin x/dx^{2n} |_{x=0} = 0 ,         d^{2n+1} sin x/dx^{2n+1} |_{x=0} = (−1)^n ,
   d^{2n} cos x/dx^{2n} |_{x=0} = (−1)^n ,    d^{2n+1} cos x/dx^{2n+1} |_{x=0} = 0 .

2.5.2.

(a) Show that f(x) = x^{1/2} has no Maclaurin expansion.
(b) Show that the Taylor series for x^{1/2} about x = a > 0 is

   x^{1/2} = a^{1/2} + Σ_{n=1}^{∞} u_n (x−a)^n ,   with   u_n = (−1)^{n−1} (2n−2)!/(2^{2n−1} n! (n−1)! a^{n−1/2}) .

(c) Determine the range over which this Taylor series converges.

Solution: (a) The derivatives of x^{1/2} become infinite at x = 0.
(b) The derivatives of x^{1/2} at x = a are

   d^n x^{1/2}/dx^n |_{x=a} = (1/2)(1/2 − 1)(1/2 − 2) ··· (1/2 − n + 1) a^{−n+1/2}
      = (−1)^{n−1} (2n−3)!!/(2^n a^{n−1/2}) = (−1)^{n−1} (2n−2)!/(2^{2n−1} (n−1)! a^{n−1/2}) .

These can be inserted into the formula for the Taylor series.
(c) Applying the ratio test, we obtain convergence if, for large n,

   | u_{n+1}(x−a)^{n+1} / [u_n(x−a)^n] | = | (2n−1)/(2(n+1)) · (x−a)/a | < 1 .

This condition reduces to |(x−a)/a| < 1, which corresponds to convergence for x within the range (0, 2a) and divergence outside that range. We do not discuss convergence at the endpoints x = 0 and x = 2a.

2.5.3.

(a) Use maple or mathematica to obtain the power series expansions of tan x and tan² x. Arrange to keep terms at least as far as x¹⁰. (b) Differentiate the series for tan x term by term; to what trigonometric function does your new series correspond? (c) Verify that the power series found in parts (a) and (b) are consistent with the trigonometric identity 1 + tan² x = sec² x.

Solution: (a) Use one of series or Series:

> ttan := series(tan(x),x,10):
> ttansq := series(tan(x)^2,x,12):
ttan = Series[Tan[x], {x,0,10}];
ttansq = Series[Tan[x]^2, {x,0,10}];

Either of the above code sequences produces output equivalent to

   tan x = x + x³/3 + 2x⁵/15 + 17x⁷/315 + 62x⁹/2835 + ··· ,
   tan² x = x² + 2x⁴/3 + 17x⁶/45 + 62x⁸/315 + 1382x¹⁰/14175 + ··· .

(b) The operators diff and D can operate on series. Use one of:

> diff(ttan,x);
D[ttan,x]

Either of these commands produces results equivalent to

   d tan x/dx = sec² x = 1 + x² + 2x⁴/3 + 17x⁶/45 + 62x⁸/315 + ··· .

(c) The above series combine in a way consistent with 1 + tan² x = sec² x.

2.5.4.

(a) Use maple or mathematica to obtain the Taylor series expansion of 1/sin x about the point x = π/2. Keep terms at least as far as (x − π/2)¹⁰.
(b) Integrate this series term by term. Notice that the variable of integration should be specified as x, not x−Pi/2. Thus, the form of the termwise integration command is

   int(series, x);   (maple)    or    Integrate[series, x]   (mathematica).

(c) The integral of 1/sin x from x = π/2 to x can be written in the form ln tan(x/2). Note that at x = π/2 we have tan(x/2) = 1, giving the required value of zero for this reference value of x. Expand ln tan(x/2) about x = π/2, and compare your result with that obtained in part (b).

Solution: (a) Using series or Series, we get

   T = 1 + (1/2)(x − π/2)² + (5/24)(x − π/2)⁴ + (61/720)(x − π/2)⁶
         + (277/8064)(x − π/2)⁸ + (50521/3628800)(x − π/2)¹⁰ + ···

(b) Termwise integration of T, carried out by symbolic computation, yields

   TT = (x − π/2) + (1/6)(x − π/2)³ + (1/24)(x − π/2)⁵ + (61/5040)(x − π/2)⁷ + (277/72576)(x − π/2)⁹ + ···

(c) The expression TT is in exact agreement with the series expansion of ln tan(x/2) about π/2. Both symbolic systems write the indefinite integral of a series in a form such that all occurrences of x are in the combination x − x₀.

2.5.5. (a) Using the expansion of ln(1 + x) given in Eq. (2.41), show that, for sufficiently large n,

   (i)  1/n + ln((n−1)/n) < 0 ,      (ii)  1/n − ln((n+1)/n) > 0 .

(b) Use these inequalities to show that the limit defining the Euler-Mascheroni constant, Eq. (2.13), is finite.

Solution: (a) Write these logarithms as ln(1 ± n^{−1}) and then expand. We get

   ln(1 − 1/n) = −1/n − 1/(2n²) − ··· ,      ln(1 + 1/n) = 1/n − 1/(2n²) + ··· ,

and therefore at large n the leading term in (i) is −1/(2n²) and that in (ii) is +1/(2n²).
(b) Write the expression for the Euler-Mascheroni constant γ as

   γ = lim_{N→∞} [ 1 + Σ_{n=2}^{N} T_n ] ,   with T_n = n^{−1} + ln(n−1) − ln n .

Alternatively,

   γ = lim_{N→∞} [ N^{−1} + Σ_{n=1}^{N−1} U_n ] ,   with U_n = n^{−1} − ln(n+1) + ln n .

In part (a) it was shown that T_n < 0 and U_n > 0. These inequalities indicate that 0 < γ < 1, so the series describing γ cannot diverge.

2.5.6. Show that

   lim_{x→0} [ (sin(tan x) − tan(sin x))/x⁷ ] = −1/30 .

Solution: Expand sin(tan x) and tan(sin x) in Maclaurin series and subtract the second from the first. The leading term of this difference is −x⁷/30. In mathematica you can directly take the difference of the two series expansions. In maple, you can do so meaningfully only if both are first converted to polynomials, using convert(expr, polynom).

2.5.7. A power series T converges for −R < x < R.
(a) Show that a series obtained by termwise integration of T converges for any range −S < x < S such that |S| < |R|.
(b) Show that termwise differentiation of T also produces a series that converges on all ranges interior to the convergence range of T.

Solution: Applying the ratio test to the series T = Σ_n a_n x^n at |x| = S:

   lim_{n→∞} | (a_n/a_{n−1}) S | < 1 .

The series that are integrated or differentiated termwise will converge for x such that

   lim_{n→∞} | (n/(n+1)) (a_n/a_{n−1}) x | < 1    and    lim_{n→∞} | (n/(n−1)) (a_n/a_{n−1}) x | < 1 .

These inequalities will be satisfied for |x| = S because n/(n+1) and n/(n−1) both approach unity at large n.

2.5.8. Show that the integral

   ∫₀¹ (tan^{−1} t / t) dt

has a value equal to Catalan’s constant.
Note. The definition and numerical computation of Catalan’s constant was addressed in Exercise 2.3.2.

Solution: Use the series expansion of tan^{−1} t, Eq. (2.44), and integrate tan^{−1} t/t termwise over 0 < t < 1:

   ∫₀¹ (tan^{−1} t / t) dt = ∫₀¹ [ 1 − t²/3 + t⁴/5 − ··· ] dt = 1 − 1/3² + 1/5² − ··· .

This series is a representation of Catalan’s constant.

2.6 BINOMIAL THEOREM

Exercises

2.6.1. Write (a + b)^n as a binomial series that converges for all n when |a| < |b| and as another binomial series that converges when |a| > |b|. Which of these series can be used when |a| > |b| and n is a positive integer?

Solution: Binomial series for (1 + x)^n converge when |x| < 1, and are closed, finite expressions (for any x) when n is a positive integer. Therefore, the series

   (a + b)^n = b^n (1 + a/b)^n = b^n Σ_{m=0}^{∞} binomial(n, m) (a/b)^m

converges when |a| < |b|, and

   (a + b)^n = a^n (1 + b/a)^n = a^n Σ_{m=0}^{∞} binomial(n, m) (b/a)^m

converges when |b| < |a|. Either series may be used when n is a positive integer.

2.6.2.

Using maple or mathematica,
(a) Generate (as exact fractions) the binomial coefficients binomial(1/2, 0), binomial(1/2, 1), and binomial(1/2, 2), and then generate a list (as decimal quantities) of binomial(1/2, n) for n = 0 through n = 10.

Solution: The three listed binomial coefficients have the values 1, 1/2, and −1/8. Since MakeTable requires as its first argument a function of a single variable, we define (in maple)

binom1 := proc(n); binomial(1/2, n) end proc:

and then call MakeTable(binom1, 0, 10, 1):

Values of binom1(x) for x from 0 to 10 in steps of 1

      x        binom1
    0.000   1.000000E+00
    1.000   5.000000E-01
    2.000  -1.250000E-01
    3.000   6.250000E-02
    4.000  -3.906250E-02
    5.000   2.734375E-02
    6.000  -2.050781E-02
    7.000   1.611328E-02
    8.000  -1.309204E-02
    9.000   1.091003E-02
   10.000  -9.273529E-03

In mathematica, use MakeTable[binom1, 0, 10, 1] to get a similar table after first defining binom1[n_] := Binomial[1/2, n].

2.6.3. Show that for integral n ≥ 0,

   1/(1 − x)^{n+1} = Σ_{m=n}^{∞} binomial(m, n) x^{m−n} .

Solution: Write the binomial expansion

   (1 − x)^{−n−1} = Σ_{j=0}^{∞} [ (−n−1)(−n−2) ··· (−n−j)/j! ] (−x)^j
                  = Σ_{j=0}^{∞} [ (n+1)(n+2) ··· (n+j)/j! ] x^j = Σ_{j=0}^{∞} binomial(n+j, n) x^j .

Then change the summation index from j to m = n + j and change the lower limit of the summation from j = 0 to m = n.
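The identity of Exercise 2.6.3 is easy to spot-check numerically; a Python illustration (not part of the original solution), truncating the series where its terms are negligible:

```python
import math

n, x = 3, 0.4
lhs = (1 - x) ** (-(n + 1))
# sum binomial(m, n) x^(m-n); terms decay geometrically, so m < 200 is ample
rhs = sum(math.comb(m, n) * x ** (m - n) for m in range(n, 200))
```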

2.6.4. For positive integer m, show that

   (1 + x)^{−m/2} = Σ_{n=0}^{∞} (−1)^n [ (m+2n−2)!!/(2^n n! (m−2)!!) ] x^n .

Solution: Write the binomial expansion

   (1 + x)^{−m/2} = Σ_{n=0}^{∞} [ (−m/2)(−m/2 − 1) ··· (−m/2 − n + 1)/n! ] x^n
                  = Σ_{n=0}^{∞} (−1)^n [ m(m+2) ··· (m+2n−2)/(2^n n!) ] x^n
                  = Σ_{n=0}^{∞} (−1)^n [ (m+2n−2)!!/((m−2)!! 2^n n!) ] x^n .

2.6.5. Write the function (1 − x)^{−n−1} as a series (keeping terms through x⁵). Use maple or mathematica to verify that the coefficient of each power of x corresponds to the formula in Exercise 2.6.3.

Solution: Here is code that compares the formula of Exercise 2.6.3 with results obtained from the series or Series command. In both languages it is necessary to convert the series into an ordinary expression (via convert or Normal). In maple,

A := convert(series((1-x)^(-n-1), x, 6), polynom);
B := 0;
for m from 0 to 5 do B := B + expand(binomial(m+n,n))*x^m end do:
simplify(A - B)

In mathematica,

A = Normal[Series[(1 - x)^(-n - 1), {x, 0, 5}]]
B = Sum[Binomial[m, n]*x^(m - n), {m, n, n + 5}]
Simplify[A - B]

2.6.6. Use symbolic computation to verify correctness of the coefficients in the formula of Exercise 2.6.4. It suffices to check four values of n for each of three m values.

Solution: Here is code that compares the formula of Exercise 2.6.4 with results obtained from the series or Series command. The code uses the procedure dfac (found in Example 1.4.3) for the double factorials in the formula, and works only when a numeric value of m has previously been specified. In maple,

m := 3;
A := convert(series((1+x)^(-m/2), x, 6), polynom);
B := 0:
for n from 0 to 5 do B := B + dfac(m+2*n-2)/dfac(m-2)*(-1)^n/2^n/n!*x^n end do:
B;
simplify(A - B)

In mathematica,

m = 3
A = Normal[Series[(1 + x)^(-m/2), {x, 0, 5}]]
B = 0;
Do[ B = B + dfac[m+2*n-2]/dfac[m-2]*(-1)^n/2^n/n!*x^n, {n, 0, 5}];
B
Simplify[A - B]

2.6.7. Prove the identities (for positive integer n)

   (a) Σ_{m=0}^{n} binomial(n, m) = 2^n ,      (b) Σ_{m=0}^{n} (−1)^m binomial(n, m) = 0 .

Solution: The series in part (a) is the binomial expansion of (1 + 1)^n; the series in part (b) is the expansion of (1 − 1)^n.

2.6.8.

Show that

   ln((1+x)/(1−x)) = 2 ( x + x³/3 + x⁵/5 + ··· ) ,   −1 < x < 1 .

Solution: Write the left-hand side as ln(1 + x) − ln(1 − x), and evaluate each logarithm as a series of the type given in Eq. (2.68). Then subtract the second series from the first.

2.7 SOME IMPORTANT SERIES

2.8 SOME APPLICATIONS OF SERIES

Exercises

Find the following limits. Use symbolic computation to check any limits you find by hand.

2.8.1. lim_{x→0} (cos² x − 1)/x²

Solution: Apply l’Hôpital’s rule:

   lim_{x→0} (cos² x − 1)/x² = lim_{x→0} (−2 sin x cos x)/(2x) = lim_{x→0} [ −(sin x/x) cos x ] = −1 .

If you are not sure about the limit of sin x/x, you can either insert the Maclaurin series for sin x or apply l’Hôpital’s rule a second time.

2.8.2.

lim_{x→π} x² sin x/(x − π)

Solution: Replace sin x by sin(π − x), to which it is equal, and note that the limit (at x = π) of sin(π − x)/(x − π) is −1. So the limit we seek is that of −x², which evaluates to −π².

2.8.3. lim_{x→0} (ln x)/x

Solution: The numerator approaches (from positive x) the value −∞ while the denominator approaches zero. Technically the limit does not exist, but if the approach is restricted to positive x (where the logarithm is real) we usually say the limit is −∞.

2.8.4.

lim_{x→0} x^p ln x.

Here p is an arbitrary positive real number; give the limit as a function of p.

Solution: If we note that ln x = (1/p) ln x^p, we see that our current problem is equivalent to the limit of (y ln y)/p as y = x^p approaches zero. Since the logarithm becomes infinite at y = 0 more slowly than y approaches zero, this limit is zero for any positive value of p.

2.9 BERNOULLI NUMBERS

Exercises

2.9.1. Write a symbolic computing procedure bern(p) or bern[p] that will generate the Bernoulli number B_p (as an exact fraction) recursively, using one of Eqs. (2.78). Your code should give correct results for all nonnegative integers p, both even and odd.
Hint. You may want to consult Appendix D to see how to manage the data for this problem (here, intermediate Bernoulli numbers).

Solution: maple code:

bern := proc(n) local S,M,B,j;
   if (n=0) then S := 1 end if;
   if (n=1) then S := -1/2 end if;
   if (n > 1) then
      S := 0;
      if (n/2 = floor(n/2)) then
         M := n/2;
         for j from 1 to M-1 do
            B := bern(2*j);
            S := S + B * binomial(n+2,2*j)
         end do;
         S := (M-S)/binomial(n+2,n)
      end if
   end if;
   S
end proc:

mathematica code:

bern[n_] := Module[ {s, m, b, j},
   If[n == 0, s = 1];
   If[n == 1, s = -1/2];
   If[n == 2, s = 1/6];
   If[n > 2,
      s = 0;
      If[n/2 == Floor[n/2],
         m = n/2;
         Do[ b = bern[2*j]; s = s + b * Binomial[n + 2, 2*j], {j, 1, m - 1}];
         s = (m - s)/Binomial[n + 2, n] ] ];
   s ]
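The same recursion can be expressed in Python with exact fractions (an illustrative sketch mirroring the procedures above, using the B₁ = −1/2 convention):

```python
from fractions import Fraction
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def bern(n):
    """Bernoulli number B_n as an exact fraction, via the same recursion:
    for even n = 2M,  B_n = (M - sum_{j<M} B_{2j} C(n+2, 2j)) / C(n+2, n)."""
    if n == 0:
        return Fraction(1)
    if n == 1:
        return Fraction(-1, 2)
    if n % 2:
        return Fraction(0)          # odd Bernoulli numbers vanish for n > 1
    M = n // 2
    s = sum(bern(2 * j) * math.comb(n + 2, 2 * j) for j in range(1, M))
    return (M - s) / Fraction(math.comb(n + 2, n))
```

For example, bern(12) returns the exact fraction −691/2730.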

2.9.2. Here are some series expansions that use Bernoulli numbers. Write procedures that evaluate these series (keeping terms through x¹⁵), using your symbolic system’s Bernoulli number function. Then compare the accuracy of your procedures with the results of direct function evaluations.

   (a) cot x = Σ_{n=0}^{∞} (−1)^n 2^{2n} B_{2n} x^{2n−1}/(2n)! ,
   (b) tan x = Σ_{n=1}^{∞} (−1)^{n−1} 2^{2n} (2^{2n} − 1) B_{2n} x^{2n−1}/(2n)! ,
   (c) csc x = Σ_{n=0}^{∞} (−1)^{n−1} 2 (2^{2n−1} − 1) B_{2n} x^{2n−1}/(2n)! ,
   (d) ln cos z = Σ_{n=1}^{∞} (−1)^n 2^{2n−1} (2^{2n} − 1) B_{2n} z^{2n}/(n (2n)!) .

Solution: Sample coding, maple:

Bcot := proc(x) local n,S;
   S := 0;
   for n from 0 to 8 do
      S := S + (-1)^n*2^(2*n)*bernoulli(2*n)/(2*n)! * x^(2*n-1)
   end do;
   S
end proc:

Sample coding, mathematica:

bTan[x_, nmax_] := Module[{n, s},
   s = 0;
   Do[s = s + (-1)^(n-1) * 2^(2*n) * (2^(2*n) - 1) *
          BernoulliB[2*n]/(2*n)! * x^(2*n-1), {n, 1, nmax}];
   s ]

2.10 ASYMPTOTIC SERIES

Exercises

2.10.1. The complementary Gauss error function is defined as

   erfc(x) = (2/√π) ∫_x^∞ e^{−t²} dt .

Show, applying repeated integrations by parts, that erfc(x) has the asymptotic expansion

   erfc(x) ∼ (e^{−x²}/(√π x)) [ 1 − 1/(2x²) + 1·3/(2x²)² − 1·3·5/(2x²)³ + ··· + (−1)^n (2n−1)!!/(2x²)^n ] .

See if you can obtain this asymptotic expansion from your symbolic computing system. The complementary error function is called erfc(x) or Erfc[x].

Solution: To prepare to answer this Exercise, consider the related integrals

   I_n(x) = (2/√π) ∫_x^∞ (e^{−t²}/t^n) dt .

Integrate I_n by parts, integrating t e^{−t²} and differentiating 1/t^{n+1}. The result is

   I_n = e^{−x²}/(√π x^{n+1}) − ((n+1)/2) I_{n+2} .

The integral we require is I₀. Using the above result,

   I₀ = e^{−x²}/(√π x) − (1/2) I₂ = e^{−x²}/(√π x) − (1/2) [ e^{−x²}/(√π x³) − (3/2) I₄ ]
      = e^{−x²}/(√π x) − e^{−x²}/(√π (2x³)) + (1·3/2²) [ e^{−x²}/(√π x⁵) − (5/2) I₆ ] .

We continue this process until we have I_{2n+2} on the right-hand side, then make the approximation that I_{2n+2} is negligible. When simplified, we get the asymptotic expansion for erfc(x). This asymptotic expansion can be accessed by calling one of

   series(erfc(x), x = infinity, 10);
   Series[Erfc[x], {x, Infinity, 10}]
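The quality of the truncated asymptotic expansion is easy to see numerically; an illustrative Python sketch (not part of the original solution):

```python
import math

def erfc_asym(x, nmax):
    """Partial sum of the asymptotic expansion of erfc(x) derived above."""
    s, term = 1.0, 1.0
    for n in range(1, nmax + 1):
        term *= -(2 * n - 1) / (2 * x * x)   # builds (-1)^n (2n-1)!!/(2x^2)^n
        s += term
    return math.exp(-x * x) / (math.sqrt(math.pi) * x) * s
```

At x = 3 the five-term sum already matches math.erfc(3) to about four significant figures.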

2.11 EULER-MACLAURIN FORMULA

Exercises

2.11.1. (a) Verify the algebraic steps leading to Eq. (2.88).
(b) Enter maple or mathematica code for Example 2.11.1 into your computer and test its operation for a wide variety of values of the arguments s, k, and P (or p).

Solution: Check your code against values of the zeta function in Table 2.1 or by symbolic computation of Zeta( ) or Zeta[ ].

2.11.2.

The Euler-Maclaurin integration formula may be used for the evaluation of finite series:
$$\sum_{m=1}^{n} f(m) = \int_1^n f(x)\, dx + \frac{1}{2}\, f(1) + \frac{1}{2}\, f(n) + \frac{B_2}{2!} \left[ f'(n) - f'(1) \right] + \cdots .$$
Show that

(a) $\displaystyle \sum_{m=1}^{n} m = \frac{1}{2}\, n(n+1)$.

(b) $\displaystyle \sum_{m=1}^{n} m^2 = \frac{1}{6}\, n(n+1)(2n+1)$.

(c) $\displaystyle \sum_{m=1}^{n} m^3 = \frac{1}{4}\, n^2 (n+1)^2$.

(d) $\displaystyle \sum_{m=1}^{n} m^4 = \frac{1}{30}\, n(n+1)(2n+1)(3n^2+3n-1)$.

Solution:

(a) $f(m) = m$, $f'(m) = 1$, $\int f(m)\, dm = m^2/2$. Thus,
$$\sum_{m=1}^{n} m = \left( \frac{n^2}{2} - \frac{1}{2} \right) + \frac{1}{2}\,(n+1) + \frac{B_2}{2!}\,(1-1) = \frac{n^2}{2} + \frac{n}{2} .$$

(b) $f(m) = m^2$, $f'(m) = 2m$, $\int f(m)\, dm = m^3/3$. Also, $B_2 = +1/6$. Thus,
$$\sum_{m=1}^{n} m^2 = \left( \frac{n^3}{3} - \frac{1}{3} \right) + \frac{1}{2}\,(n^2+1) + \frac{1/6}{2!}\,(2n-2) = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6} .$$

(c) $f(m) = m^3$, $f'(m) = 3m^2$, $f^{(3)}(m) = 6$, $\int f(m)\, dm = m^4/4$. Thus,
$$\sum_{m=1}^{n} m^3 = \left( \frac{n^4}{4} - \frac{1}{4} \right) + \frac{1}{2}\,(n^3+1) + \frac{1/6}{2!}\,(3n^2-3) + \frac{B_4}{4!}\,(6-6) = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4} .$$

(d) $f(m) = m^4$, $f'(m) = 4m^3$, $f^{(3)}(m) = 24m$, $\int f(m)\, dm = m^5/5$. Also $B_4 = -1/30$. Thus,
$$\sum_{m=1}^{n} m^4 = \left( \frac{n^5}{5} - \frac{1}{5} \right) + \frac{1}{2}\,(n^4+1) + \frac{1/6}{2!}\,(4n^3-4) + \frac{-1/30}{4!}\,(24n-24) = \frac{n^5}{5} + \frac{n^4}{2} + \frac{n^3}{3} - \frac{n}{30} .$$
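The four closed forms can also be confirmed by brute-force summation; this Python sketch (an addition, not from the original manual) checks them exactly with integer arithmetic:

```python
def power_sum(n, p):
    """Direct evaluation of sum_{m=1}^{n} m^p."""
    return sum(m ** p for m in range(1, n + 1))

n = 50
# Closed forms derived via the Euler-Maclaurin formula; all divisions are exact.
assert power_sum(n, 1) == n * (n + 1) // 2
assert power_sum(n, 2) == n * (n + 1) * (2 * n + 1) // 6
assert power_sum(n, 3) == n ** 2 * (n + 1) ** 2 // 4
assert power_sum(n, 4) == n * (n + 1) * (2 * n + 1) * (3 * n ** 2 + 3 * n - 1) // 30
```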

All these expressions can be factored to yield the forms given in the Exercise.

2.11.3.

Use the Euler-Maclaurin formula to compute the Euler-Mascheroni constant as follows:

(a) Show that
$$\gamma = \sum_{s=1}^{n} s^{-1} - \int_1^n \frac{dx}{x} + \lim_{N \to \infty} \left[ \sum_{s=n+1}^{N} s^{-1} - \int_n^N \frac{dx}{x} \right] .$$

(b) Using Eq. (2.85), show that the equation of part (a) can be rearranged to
$$\gamma \approx \sum_{s=1}^{n} s^{-1} - \ln n - \frac{1}{2n} + \sum_{k=1}^{k_{\max}} \frac{B_{2k}}{(2k)\, n^{2k}} .$$

(c) Write a symbolic procedure EulerMasch that will compute approximations to γ as a function of n and kmax and find values of these parameters that will give a value of γ that is accurate to 15 decimal places. Note. Remember to do your symbolic computations at sufficiently high accuracy, and compare your results to an authentic value of γ obtained as a built-in constant of your symbolic computing system.

Solution: Part (a) is simply a partitioning of the limit that defines γ. To obtain the result in part (b), replace the integral from 1 to n by its value (ln n) and develop the limit quantity starting from
$$\sum_{s=n}^{N} s^{-1} = \int_n^N \frac{dx}{x} + \frac{1}{2} \left( \frac{1}{N} + \frac{1}{n} \right) + \sum_{k=1}^{k_{\max}} \frac{B_{2k}}{(2k)!} \left[ f^{(2k-1)}(N) - f^{(2k-1)}(n) \right] ,$$
where $f(s) = s^{-1}$. To match our current requirements, we must change the lower limit of the s summation to $n+1$ (which removes $f(n) = 1/n$ and converts the $+1/(2n)$ into $-1/(2n)$); we also need to insert the derivative $f^{(2k-1)}(n) = -(2k-1)!/n^{2k}$. We then get (in the limit of large N)
$$\sum_{s=n+1}^{N} s^{-1} - \int_n^N \frac{dx}{x} \;\longrightarrow\; -\frac{1}{2n} + \sum_{k=1}^{k_{\max}} \frac{B_{2k}}{(2k)\, n^{2k}} .$$

A slight rearrangement brings us to the required result. maple code for part (c) can be the following:

Digits := 20:
EulerMasch := proc(n, kmax) local k, s;
  sum(1/s, s = 1 .. n) - log(1.*n) - 0.5/n
    + sum(bernoulli(2*k)/(2*k*n^(2*k)), k = 1 .. kmax)
end proc:

mathematica code can be

EulerMasch[n_, kmax_] := Module[{k, s},
  N[ Sum[1/s, {s, 1, n}] - Log[n] - 1/(2*n)
       + Sum[BernoulliB[2*k]/(2*k*n^(2*k)), {k, 1, kmax}], 20 ]
]

In both these codes we have arranged for 20 digits of decimal precision to allow for the fact that the Exercise requested a 15-digit answer. The desired precision can be reached with n = 10; looking at different kmax values, we see that the result is stable to 15 figures when kmax = 8:

EulerMasch(10,7);
    0.57721566490153290218
EulerMasch[10,8]
    0.57721566490153285779
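The same formula can be reproduced outside a computer-algebra system. This Python sketch (an addition; the Bernoulli numbers through B₁₆ are entered by hand) evaluates the part (b) approximation with exact rational arithmetic before the final float conversion:

```python
import math
from fractions import Fraction

# Bernoulli numbers B_2, B_4, ..., B_16 (enough for kmax = 8)
B2K = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42), Fraction(-1, 30),
       Fraction(5, 66), Fraction(-691, 2730), Fraction(7, 6), Fraction(-3617, 510)]

def euler_masch(n, kmax):
    """gamma ~ H_n - ln n - 1/(2n) + sum_k B_2k / (2k n^{2k})."""
    h_n = sum(Fraction(1, s) for s in range(1, n + 1))
    tail = sum(B2K[k - 1] / (2 * k * n ** (2 * k)) for k in range(1, kmax + 1))
    return float(h_n + tail) - math.log(n) - 1 / (2 * n)

gamma = 0.5772156649015329  # Euler-Mascheroni constant to double precision
print(abs(euler_masch(10, 8) - gamma))
```

With n = 10 and kmax = 8 the result agrees with γ to the limit of double precision, consistent with the 20-digit symbolic results above.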

Chapter 3

COMPLEX NUMBERS AND FUNCTIONS 3.1

INTRODUCTION

Exercises 3.1.1.

Verify the formula for 1/z, Eq. (3.9).

Solution: Factor the denominator and cancel:
$$\frac{1}{z} = \frac{1}{x+iy} = \frac{x-iy}{(x+iy)(x-iy)} = \frac{x-iy}{x^2+y^2} .$$

3.1.2.

Find the complex conjugates of
$$\text{(a)}\ \frac{z+z^*}{2i} , \qquad \text{(b)}\ \frac{z-z^*}{2} .$$

Solution:

(a) $\displaystyle \left[ \frac{z+z^*}{2i} \right]^* = \frac{z^*+z}{-2i} = -\frac{z+z^*}{2i}$.

(b) $\displaystyle \left[ \frac{z-z^*}{2} \right]^* = \frac{z^*-z}{2} = -\frac{z-z^*}{2}$.

3.1.3.

Find the magnitude of (a) 3 + 4i, (b) 1 − i.

Solution:

(a) $|3+4i| = \sqrt{(3+4i)(3-4i)} = \sqrt{3^2+4^2} = 5$.

(b) $|1-i| = \sqrt{(1-i)(1+i)} = \sqrt{1+1} = \sqrt{2}$.

3.1.4.

Find both the roots of z 2 + 2z + 5 = 0. Show that they are complex conjugates.

Solution: Use the quadratic formula.
$$z = \frac{-2 \pm \sqrt{2^2 - 4\cdot 5}}{2} = -1 \pm \frac{1}{2}\sqrt{-16} = -1 \pm 2i .$$
$(-1+2i)^* = -1-2i$.

3.1.5.

(a) Find the real and imaginary parts of
$$W = \frac{(3+i)^2 (3-2i)(1-7i)}{(2-i)^3 (5-i)(4+i)^2} .$$

(b) Find the magnitude of W. (c) Verify that if W*, the complex conjugate of W, is obtained by changing the sign of i everywhere in the original expression for W, the result reduces to Re W − i Im W.

Solution: Using symbolic computation, execute one of:

(a)
W := (3+I)^2*(3-2*I)*(1-7*I)/(2-I)^3/(5-I)/(4+I)^2;
W = (3+I)^2*(3-2*I)*(1-7*I)/((2-I)^3*(5-I)*(4+I)^2)

From either of the above commands, $W = \dfrac{4756}{18785} - \dfrac{1342}{18785}\, i$.

(b) Both commands give $\dfrac{2\sqrt{5}}{17}$:
abs(W);
Abs[W]

(c) Write input for WC, an expression with -I substituted for I in W. WC simplifies to $\dfrac{4756}{18785} + \dfrac{1342}{18785}\, i$.
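The reduced value of W is easy to confirm with ordinary complex floating-point arithmetic; this Python check (an addition to the manual's maple/mathematica commands) evaluates the original expression directly:

```python
import math

# The expression for W from Exercise 3.1.5, in Python complex notation
W = (3 + 1j) ** 2 * (3 - 2j) * (1 - 7j) / ((2 - 1j) ** 3 * (5 - 1j) * (4 + 1j) ** 2)

# Compare against the exact rational components and the magnitude 2*sqrt(5)/17
assert abs(W.real - 4756 / 18785) < 1e-12
assert abs(W.imag + 1342 / 18785) < 1e-12
assert abs(abs(W) - 2 * math.sqrt(5) / 17) < 1e-12
print(W)
```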

3.2

FUNCTIONS IN THE COMPLEX DOMAIN

Exercises 3.2.1.

Separate into two real equations (but do not try to solve them):

(a) $z^3 = \dfrac{(7-4i)^3}{(1-i)^2}$,  (b) $e^{1/z} = 1 - 5i$,  (c) $z^7 = 1$.

Solution: The object of this and the next exercise is to illustrate the complexity that may be generated by converting functions of a complex variable to the equivalent real-variable forms.

(a) Converting left and right sides of this equation into their real and imaginary parts (using symbolic computation),

Left:
evalc((x+I*y)^3);
ComplexExpand[(x + I*y)^3]
Both are $x^3 - 3xy^2 + i(3x^2 y - y^3)$.

Right:
evalc((7-4*I)^3/(1-I)^2);
ComplexExpand[(7 - 4*I)^3/(1 - I)^2]
Both are $262 + \dfrac{7i}{2}$.

Then equating real and imaginary parts:
$$x^3 - 3xy^2 = 262 \qquad \text{and} \qquad 3x^2 y - y^3 = \frac{7}{2} .$$

(b) A process similar to that of part (a) yields
$$e^{x/(x^2+y^2)} \cos\!\left( \frac{y}{x^2+y^2} \right) = 1 , \qquad -e^{x/(x^2+y^2)} \sin\!\left( \frac{y}{x^2+y^2} \right) = -5 .$$

(c) A process similar to that of part (a) yields
$$x^7 - 21x^5 y^2 + 35x^3 y^4 - 7xy^6 = 1 , \qquad 7x^6 y - 35x^4 y^3 + 21x^2 y^5 - y^7 = 0 .$$

3.2.2.

Separate into real and imaginary parts:

(a) $e^{z^5}$,  (b) $\cos^2(1/z)$.

Solution:

(a)
evalc(exp((x+I*y)^(5)));
ComplexExpand[E^((x + I*y)^5)]
Both yield
$$e^{x^5 - 10x^3 y^2 + 5xy^4} \cos(5x^4 y - 10x^2 y^3 + y^5) + i\, e^{x^5 - 10x^3 y^2 + 5xy^4} \sin(5x^4 y - 10x^2 y^3 + y^5) .$$

(b)
evalc(cos(1/(x+I*y))^2);
ComplexExpand[Cos[1/(x + I*y)]^2]
Both yield
$$\cos^2\!\left( \frac{x}{x^2+y^2} \right) \cosh^2\!\left( \frac{y}{x^2+y^2} \right) - \sin^2\!\left( \frac{x}{x^2+y^2} \right) \sinh^2\!\left( \frac{y}{x^2+y^2} \right)$$
$$+\; 2i \cos\!\left( \frac{x}{x^2+y^2} \right) \cosh\!\left( \frac{y}{x^2+y^2} \right) \sin\!\left( \frac{x}{x^2+y^2} \right) \sinh\!\left( \frac{y}{x^2+y^2} \right) .$$

3.2.3.

(a) Separate sin z = sin(x + iy) into real and imaginary parts. (b) Find the two real equations equivalent to sin z = 0. (c) Plot cosh y and sinh y over a range suﬃcient to convince yourself that there is no real value of y such that cosh y = 0, and that the only real value of y for which sinh y = 0 is y = 0. (d) Based on your solutions to parts (b) and (c), determine all the values of z = x + iy for which sin z = 0. Note. This exercise shows how judicious use of a symbolic computing system can help you identify properties of the mathematical quantities involved in a problem.

Solution: (a) sin z = sin x cosh y + i cos x sinh y (b) sin x cosh y = 0,

cos x sinh y = 0 .

(c) [Plots of cosh y and sinh y omitted. They show that cosh y ≥ 1 for all real y, and that sinh y = 0 only at y = 0.]

(d) From part (b), using the fact that sin x and cos x are not both zero for the same real x, we see that sin z cannot be zero unless sinh y or cosh y is zero. From part (c), the only real y for which either sinh y or cosh y vanishes is y = 0, where sinh y = 0. For sin z to vanish, we must therefore have y = 0 and sin x = 0, i.e., z = nπ, n = 0, ±1, ±2, · · · . All the zeros of sin z are on the real axis.
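The conclusion can be spot-checked numerically; this Python sketch (an addition, not from the original solution) verifies that sin z vanishes at z = nπ and is nonzero off the real axis:

```python
import cmath
import math

# sin z vanishes at z = n*pi on the real axis
for n in range(-3, 4):
    assert abs(cmath.sin(n * math.pi)) < 1e-12

# Off the real axis, |sin z|^2 = sin^2 x + sinh^2 y >= sinh^2 y > 0 when y != 0
z = 1.0 + 0.5j
assert abs(cmath.sin(z)) ** 2 >= math.sinh(0.5) ** 2
print("all zeros of sin z lie on the real axis (spot-checked)")
```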

3.3

THE COMPLEX PLANE

Exercises 3.3.1.

Locate the following quantities on the complex plane.

(a) $1+i$  (b) $(1+i)^*$  (c) $-(1+i)$  (d) $4e^{i\pi/4}$  (e) $4e^{-i\pi/2}$  (f) $e^{i\pi/4}/4$.

Solution: [Plot of the six points in the complex plane omitted.]

3.3.2.

Locate the following quantities on the complex plane, given that z = 1 + i.

(a) $2/z$  (b) $z/(z+2)$  (c) $1/(z-2)$  (d) $1/(z+i)$  (e) $e^{\pi z/5}$  (f) $e^{-i\pi z/5}$.


Solution: [Plot of the six points omitted.]

3.3.3.

For each of the following complex numbers, compute its absolute value and then multiply it by a real number such that the resulting complex number has absolute value unity.

(a) $3+4i$  (b) $\dfrac{1}{4-3i}$  (c) $\dfrac{1+3i}{3-i}$.

Solution: (a) $\dfrac{3+4i}{5}$,  (b) $\dfrac{4+3i}{5}$,  (c) $i$.

3.3.4.

Find the reciprocal of x + iy, working in polar form but expressing the final result in Cartesian form.

Solution: In polar form, $x+iy = z = re^{i\theta}$, with $r = \sqrt{x^2+y^2}$ and θ such that $\cos\theta = x/r$, $\sin\theta = y/r$. Then
$$z^{-1} = r^{-1} e^{-i\theta} = r^{-1}(\cos\theta - i \sin\theta) = \frac{1}{r}\left( \frac{x}{r} - i\,\frac{y}{r} \right) = \frac{x-iy}{r^2} = \frac{x-iy}{x^2+y^2} .$$

3.3.5.

Let $z_1 = 3-4i$ and $z_2 = 4+3i$.

(a) Find the polar forms of z₁ and z₂. Express the arguments as angles given both in degrees and in radians.
(b) Form z₁ + z₂ and z₁ − z₂, expressing each of these results both in Cartesian and in polar form.
(c) Form z₁z₂ and z₁/z₂, expressing each of these results both in Cartesian and in polar form.

Solution:

(a) $z_1 = 5e^{i\theta_1}$, $\tan\theta_1 = -4/3$, $\theta_1 = -0.9273 = -53.13°$. $z_2 = 5e^{i\theta_2}$, $\tan\theta_2 = 3/4$, $\theta_2 = 0.6435 = 36.87°$.

(b) $z_1 + z_2 = 7 - i = \sqrt{50}\, e^{i\theta_3}$, $\tan\theta_3 = -1/7$, $\theta_3 = -0.1419 = -8.13°$. $z_1 - z_2 = -1 - 7i = \sqrt{50}\, e^{i\theta_4}$, $\tan\theta_4 = 7$ (in third quadrant), $\theta_4 = 4.5705 = 261.87°$.

(c) $z_1 z_2 = 25\, e^{i\theta_5}$, $\theta_5 = -0.9273 + 0.6435 = -0.2838 = -16.26°$; $z_1 z_2 = 25(\cos\theta_5 + i \sin\theta_5) = 24 - 7i$. $z_1/z_2 = e^{i\theta_6}$, $\theta_6 = -0.9273 - 0.6435 = -1.5708 = -\pi/2$; $z_1/z_2 = e^{-i\pi/2} = -i$.

3.3.6.

Plot the function e(i−a)t in the complex plane for a = 0.05, 0.10, and 0.15, in each case for the range 0 ≤ t ≤ 10. Hint. Make parametric plots.

Solution: An example of the plot coding is: In maple,

> a := 0.05; plot([exp(-a*t)*cos(t), exp(-a*t)*sin(t), t = 0 .. 10]);

In mathematica,

a = 0.05; ParametricPlot[{E^(-a*t)*Cos[t], E^(-a*t)*Sin[t]}, {t, 0, 10}]

Here are the plots for a = 0.05, 0.10, and 0.15. [Plots omitted; each is a spiral decaying toward the origin.]

3.4

CIRCULAR AND HYPERBOLIC FUNCTIONS

Exercises 3.4.1.

Evaluate, using symbolic computation if helpful:

(a) sin(0.1), sin(1), sin 3π/2, sin πi, sin 8π,
(b) sinh(0.1), sinh(1), sinh πi/3, sinh π, sinh 8π,
(c) cos(0), cos(0.1), cos(½π + 0.1), cos πi, cos 2π/3, cos 8π,
(d) cosh(0), cosh(0.1), cosh(½π + 0.1), cosh πi, cosh 2π/3, cosh 8π,
(e) tan(0), cot(0), tan π/4, cot π/4, tan 8πi, cot 8πi,
(f) tanh(0), coth(0), tanh π, coth π, tanh πi/4, coth πi/4,
(g) tanh(∞), coth(∞), tanh(−∞), coth(−∞).

Solution:

(a) 0.0998334, 0.841471, −1, i sinh(π) = 11.5487 i, 0
(b) 0.100167, 1.17520, i sin π/3 = i√3/2, 11.5487, 4.111315785 × 10¹⁰
(c) 1, 0.995004, −0.0998334, cosh(π) = 11.5920, −1/2, 1
(d) 1, 1.00500, 2.75225, cos(π) = −1, 4.12184, 4.111315785 × 10¹⁰
(e) 0, ∞, 1, 1, i tanh(8π) ≈ i, −i coth(8π) ≈ −i
(f) 0, ∞, tanh(π) = 0.996272, coth(π) = 1.00374, i tan(π/4) = i, −i cot(π/4) = −i
(g) 1, 1, −1, −1

3.4.2.

Assume that the trigonometric functions and the hyperbolic functions are defined for complex argument by the appropriate power series. If you cannot remember these power series, use maple or mathematica to obtain them. Then show that
$$i \sin z = \sinh iz , \quad \sin iz = i \sinh z , \quad \cos z = \cosh iz , \quad \cos iz = \cosh z .$$

Solution: These formulas follow directly if z or iz (as appropriate) is inserted into the following series expansions:
$$\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} , \qquad \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} ,$$
$$\sinh x = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!} , \qquad \cosh x = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} .$$

3.4.3.

Using the identities
$$\cos z = \frac{e^{iz} + e^{-iz}}{2} , \qquad \sin z = \frac{e^{iz} - e^{-iz}}{2i} ,$$
show that

(a) $\sin(x+iy) = \sin x \cosh y + i \cos x \sinh y$,  $\cos(x+iy) = \cos x \cosh y - i \sin x \sinh y$,

(b) $|\sin z|^2 = \sin^2 x + \sinh^2 y$,  $|\cos z|^2 = \cos^2 x + \sinh^2 y$.

These results show that | sin z| and | cos z| are not bounded by unity when z is permitted to assume complex values.

Solution:

(a) Using the identities of this and the previous exercise, write the right-hand side of the first equation of part (a) entirely in terms of exponentials. Initially we have
$$\frac{(e^{ix} - e^{-ix})(e^y + e^{-y})}{4i} + i\, \frac{(e^{ix} + e^{-ix})(e^y - e^{-y})}{4} ,$$
which simplifies to
$$\frac{1}{2i} \left[ e^{ix-y} - e^{-ix+y} \right] = \frac{e^{i(x+iy)} - e^{-i(x+iy)}}{2i} = \sin(x+iy) .$$
The other formula reduces in a similar fashion.

(b) We need
$$|\sin z|^2 = \sin^2 x \cosh^2 y + \cos^2 x \sinh^2 y .$$
Use the identities $\sin^2 x + \cos^2 x = 1$ and $\cosh^2 y - \sinh^2 y = 1$ (to prove the latter write both terms in exponential form). Then we have
$$|\sin z|^2 = \sin^2 x\, (1 + \sinh^2 y) + (1 - \sin^2 x) \sinh^2 y = \sin^2 x + \sinh^2 y ,$$
as claimed. The formula for $|\cos z|^2$ can be proved in a similar fashion.
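The modulus identity is easy to spot-check numerically; this short Python addition (not part of the original solution) compares both sides at a few points:

```python
import cmath
import math

# Spot-check |sin z|^2 = sin^2 x + sinh^2 y at several complex points
for x, y in [(0.3, 0.7), (1.2, -0.4), (2.5, 1.1)]:
    lhs = abs(cmath.sin(complex(x, y))) ** 2
    rhs = math.sin(x) ** 2 + math.sinh(y) ** 2
    assert abs(lhs - rhs) < 1e-12
print("identity verified at sample points")
```

Note that |sin z| exceeds unity whenever |sinh y| > 1, illustrating the remark that sin z is unbounded off the real axis.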

3.4.4.

From the identities in Exercises 3.4.2 and 3.4.3 show that

(a) $\sinh(x+iy) = \sinh x \cos y + i \cosh x \sin y$,  $\cosh(x+iy) = \cosh x \cos y + i \sinh x \sin y$,

(b) $|\sinh z|^2 = \sinh^2 x + \sin^2 y$,  $|\cosh z|^2 = \sinh^2 x + \cos^2 y$.

Solution: These formulas can be obtained by the method used for Exercise 3.4.3.

3.4.5.

Show that
$$\text{(a)}\ \tanh\frac{z}{2} = \frac{\sinh x + i \sin y}{\cosh x + \cos y} , \qquad \text{(b)}\ \coth\frac{z}{2} = \frac{\sinh x - i \sin y}{\cosh x - \cos y} .$$

Solution: Use the formulas of Exercise 3.4.4 for sinh(z/2) and cosh(z/2) and separate the expressions for sinh(z/2)/cosh(z/2) and cosh(z/2)/sinh(z/2) into real and imaginary parts. The numerators can be simplified using the identities
$$\cosh^2(x/2) - \sinh^2(x/2) = 1 \qquad \text{and} \qquad \cos^2(y/2) + \sin^2(y/2) = 1 ,$$
and we reach
$$\tanh(z/2) = \frac{\sinh(x/2)\cosh(x/2) + i \sin(y/2)\cos(y/2)}{\cosh^2(x/2)\cos^2(y/2) + \sinh^2(x/2)\sin^2(y/2)} ,$$
$$\coth(z/2) = \frac{\sinh(x/2)\cosh(x/2) - i \sin(y/2)\cos(y/2)}{\sinh^2(x/2)\cos^2(y/2) + \cosh^2(x/2)\sin^2(y/2)} .$$
We further simplify using the formulas
$$2\sinh(x/2)\cosh(x/2) = \sinh x , \qquad 2\sin(y/2)\cos(y/2) = \sin y ,$$
$$2\sinh^2(x/2) = \cosh x - 1 , \qquad 2\sin^2(y/2) = 1 - \cos y ,$$
$$2\cosh^2(x/2) = \cosh x + 1 , \qquad 2\cos^2(y/2) = 1 + \cos y ,$$
obtaining the required final results.

3.4.6.

By inserting Euler's formula for the trigonometric functions and applying Eq. (3.33), show that
$$\text{(a)}\ \int_0^{2\pi} \cos 2x \cos 3x\, dx = 0 , \qquad \text{(b)}\ \int_0^{2\pi} \sin 2x \sin 3x\, dx = 0 ,$$
$$\text{(c)}\ \int_0^{2\pi} \sin 2x \cos 3x\, dx = 0 , \qquad \text{(d)}\ \int_0^{2\pi} \sin 2x \cos 2x\, dx = 0 ,$$
$$\text{(e)}\ \int_0^{2\pi} \cos^2 2x\, dx = \pi , \qquad \text{(f)}\ \int_0^{2\pi} \sin^2 3x\, dx = \pi .$$

Solution:

(a) This integral becomes
$$\int_0^{2\pi} \left( \frac{e^{2ix} + e^{-2ix}}{2} \right) \left( \frac{e^{3ix} + e^{-3ix}}{2} \right) dx = \int_0^{2\pi} \frac{e^{5ix} + e^{ix} + e^{-ix} + e^{-5ix}}{4}\, dx = 0 .$$
Every term vanishes because $\int_0^{2\pi} e^{inx}\, dx = 0$ for all nonzero integers n.

Similar processes yield zero for integrals (b), (c), and (d).

(e) This integral becomes
$$\int_0^{2\pi} \left( \frac{e^{2ix} + e^{-2ix}}{2} \right)^2 dx = \int_0^{2\pi} \frac{e^{4ix} + 2 + e^{-4ix}}{4}\, dx .$$
Here the term "2" leads to a nonvanishing integral. We get $\dfrac{2 \cdot 2\pi}{4} = \pi$.

A similar result is obtained for integral (f).

3.4.7.

Using Eq. (3.34), show that
$$\text{(a)}\ \int e^{ax} \cos bx\, dx = \frac{(a \cos bx + b \sin bx)\, e^{ax}}{a^2+b^2} , \qquad \text{(b)}\ \int e^{ax} \sin bx\, dx = \frac{(a \sin bx - b \cos bx)\, e^{ax}}{a^2+b^2} .$$

Solution: These integrals are the real and imaginary parts of
$$\int e^{(a+ib)x}\, dx = \frac{1}{a+ib}\, e^{(a+ib)x} = \frac{a-ib}{a^2+b^2}\, e^{ax} (\cos bx + i \sin bx) .$$
Separating real and imaginary parts, we obtain the listed formulas.
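The antiderivative in part (a) can be confirmed against a simple quadrature; this Python sketch (an addition, with a and b chosen arbitrarily) uses a composite Simpson rule:

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a, b = 0.7, 1.3  # arbitrary test values

# Antiderivative from part (a): (a cos bx + b sin bx) e^{ax} / (a^2 + b^2)
def F(x):
    return (a * math.cos(b * x) + b * math.sin(b * x)) * math.exp(a * x) / (a * a + b * b)

numeric = simpson(lambda x: math.exp(a * x) * math.cos(b * x), 0.0, 2.0)
assert abs(numeric - (F(2.0) - F(0.0))) < 1e-8
print(numeric)
```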

3.5

MULTIPLE-VALUED FUNCTIONS

Exercises 3.5.1.

Show that complex numbers have square roots and that the square roots are contained in the complex plane. What are the square roots of i?

Solution: Writing i in polar form as $e^{i\pi/2}$, we identify its square roots as $\pm e^{i\pi/4}$, i.e., $e^{i\pi/4}$ and $e^{5i\pi/4}$.

3.5.2.

Explain why eln z always equals z, but ln ez does not always equal z.

Solution: The function ln z is defined by the equation $z = e^{\ln z}$. This equation is still satisfied if $\ln z \to \ln z + 2n\pi i$ for any integer n; we get the same value of z because $e^{2n\pi i} = 1$. Now consider $\ln e^z$, which is defined by $e^z = e^{\ln e^z}$, which has a solution $z = \ln e^z$. But this equation is also satisfied if $\ln e^z \to \ln e^z + 2n\pi i$, showing that $\ln e^z$ is not uniquely z, but is $z - 2n\pi i$ (with n any positive or negative integer).

3.5.3.

By comparing series expansions, show that
$$\tan^{-1} z = \frac{i}{2} \ln\!\left( \frac{1-iz}{1+iz} \right) .$$

Solution: The relevant expansions are
$$S_1 = \tan^{-1} z = z - \frac{z^3}{3} + \frac{z^5}{5} - \frac{z^7}{7} + \cdots$$
$$S_2 = \ln(1-iz) = (-iz) - \frac{(-iz)^2}{2} + \frac{(-iz)^3}{3} - \frac{(-iz)^4}{4} + \frac{(-iz)^5}{5} - \frac{(-iz)^6}{6} + \cdots$$
$$S_3 = \ln(1+iz) = (iz) - \frac{(iz)^2}{2} + \frac{(iz)^3}{3} - \frac{(iz)^4}{4} + \frac{(iz)^5}{5} - \frac{(iz)^6}{6} + \cdots$$
Now form $S_2 - S_3$; the even powers of z cancel, and the odd powers combine to yield
$$S_2 - S_3 = \ln\!\left( \frac{1-iz}{1+iz} \right) = -2 \left[ (iz) + \frac{(iz)^3}{3} + \frac{(iz)^5}{5} + \cdots \right] .$$
We see that $S_2 - S_3 = -2i S_1 = (2/i) S_1$, which is what we want to prove.

3.5.4.

Find the Cartesian form for all values of (a) (−8)1/3 , (b) i1/4 , (c) eiπ/4 .

Solution:

(a) First write in polar form: $-8 = 8\, e^{(2n+1)\pi i}$. Take the 1/3 power: $2\, e^{(2n+1)\pi i/3}$. There are three distinct values: $2e^{\pi i/3}$, $2e^{\pi i} = -2$, and $2e^{5\pi i/3}$. In Cartesian form these are
$$2[\cos \pi/3 + i \sin \pi/3] = 1 + i\sqrt{3} , \qquad -2 , \qquad 2[\cos 5\pi/3 + i \sin 5\pi/3] = 1 - i\sqrt{3} .$$


(b) First write $i = e^{\pi i/2 + 2n\pi i}$. Then take the 1/4 power. There are four distinct values: $e^{\pi i/8}$, $e^{5\pi i/8}$, $e^{9\pi i/8}$, and $e^{13\pi i/8}$. In Cartesian form, these are
$$\cos \pi/8 + i \sin \pi/8 = 0.923880 + 0.382683\, i ,$$
$$\cos 5\pi/8 + i \sin 5\pi/8 = -0.382683 + 0.923880\, i ,$$
$$\cos 9\pi/8 + i \sin 9\pi/8 = -0.923880 - 0.382683\, i ,$$
$$\cos 13\pi/8 + i \sin 13\pi/8 = 0.382683 - 0.923880\, i .$$

(c) This expression is single-valued; it is $\cos \pi/4 + i \sin \pi/4 = \dfrac{1+i}{\sqrt{2}}$.

3.5.5.

Find the polar form for all values of (a) (1 + i)3 , (b) (−1)1/5 .

Solution:

(a) This function is single-valued. Start with $1+i = \sqrt{2}\, e^{i\pi/4}$; its cube is $2^{3/2}\, e^{3i\pi/4}$.

(b) Write $(-1) = e^{(2n+1)i\pi}$. Take the 1/5 power: $e^{(2n+1)i\pi/5}$. This expression has five different values with arguments in the range $(0, 2\pi)$: $e^{i\pi/5}$, $e^{3i\pi/5}$, $e^{\pi i} = -1$, $e^{7i\pi/5}$, and $e^{9i\pi/5}$.

3.5.6.

Bring the following to x + iy form. It suffices to identify a principal value for each of these quantities. Check your work using maple or mathematica.

(a) $\cos^{-1}(3i)$  (b) $\cosh^{-1}(i)$  (c) $\cosh^{-1}\pi$
(d) $\sin^{-1}(i)$  (e) $\sin^{-1} i\pi$  (f) $\sinh^{-1}(i)$
(g) $\tan^{-1}(i-1)$  (h) $\tanh^{-1}(2i)$  (i) $\cot^{-1}(2i)$.

Solution:

(a) $\dfrac{\pi}{2} - i \sinh^{-1} 3$,  (b) $\ln(1+\sqrt{2}) + \dfrac{i\pi}{2}$,  (c) $\cosh^{-1}\pi$,
(d) $i \ln(1+\sqrt{2})$,  (e) $i \sinh^{-1}\pi$,  (f) $\dfrac{i\pi}{2}$,
(g) $-\dfrac{1}{2}\tan^{-1}(1/2) - \dfrac{\pi}{4} + \dfrac{i}{4}\ln 5$,  (h) $i \tan^{-1} 2$,  (i) $-i \coth^{-1} 2$.

3.5.7.

For each of the multiple-valued quantities in Exercise 3.5.6, indicate how all its other values are related to the principal value found as a solution to that exercise.

Solution: If S is a value of sin−1 z, other values of sin−1 z are S + 2nπ and (2n + 1)π − S. If S is a value found for sinh−1 z, other values are S + 2niπ and (2n + 1)iπ − S. If S is a value of cos−1 z, other values are 2nπ ± S. If S is a value of cosh−1 z, other values are 2niπ ± S. If S is a value of tan−1 z or cot−1 z, other values are S + nπ. If S is a value of tanh−1 z or coth−1 z, other values are S + niπ.

3.5.8.

In Exercise 3.2.1 we decomposed Eq. (3.43),
$$z^3 = \frac{(7-4i)^3}{(1-i)^2} ,$$

into two real equations which appeared diﬃcult to solve. We can now obtain solutions simply by taking the cube root of the right-hand side of the equation. Obtain all solutions to Eq. (3.43). Note. This shows how the use of complex variables can sometimes cause an apparently diﬃcult problem to assume a simpler form.

Solution: The expression for $z^3$ reduces to $262 + 7i/2$. In polar form, this is
$$z^3 = R\, e^{i\Theta} , \qquad \text{with } R = \frac{65^{3/2}}{2} \text{ and } \Theta = \tan^{-1}\!\left( \frac{7}{524} \right) .$$
The equation for z has three solutions; in polar form they are $R^{1/3} e^{i\Theta/3}$, $R^{1/3} e^{i(\Theta+2\pi)/3}$, and $R^{1/3} e^{i(\Theta+4\pi)/3}$.
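The three roots can be verified numerically; this Python sketch (an addition to the manual's symbolic approach) builds them from the polar form and cubes each:

```python
import cmath

w = 262 + 3.5j  # the reduced right-hand side of Eq. (3.43)
R, Theta = abs(w), cmath.phase(w)

# The three cube roots R^{1/3} e^{i(Theta + 2 pi k)/3}, k = 0, 1, 2
roots = [R ** (1 / 3) * cmath.exp(1j * (Theta + 2 * cmath.pi * k) / 3)
         for k in range(3)]

for z in roots:
    assert abs(z ** 3 - w) < 1e-9      # each root cubes back to w
assert abs(R - 65 ** 1.5 / 2) < 1e-9   # confirms R = 65^{3/2}/2
print(roots)
```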

Chapter 4

VECTORS AND MATRICES 4.1

BASICS OF VECTOR ALGEBRA

Exercises 4.1.1.

Given the two vectors A + B and A − B, show (using a drawing) how to ﬁnd the individual vectors A and B.

Solution: At left is the construction of 2A; at right that of 2B. [Figure omitted: 2A is obtained by adding A + B and A − B head to tail; 2B by adding A + B and −(A − B).]

4.1.2.

Two airplanes are moving (relative to the ground) at the respective velocities vA and vB . Their relative velocity, vrel , is deﬁned by the equation vrel = vA − vB . Determine vrel if vA = 150 mi/hr west vB = 200 mi/hr north.

Solution:

The velocity has magnitude $v_{\rm rel} = \sqrt{150^2 + 200^2} = 250$; the heading will be (relative to north) at $\tan^{-1}(150/200)$ (in radians). To convert to degrees, multiply by 180/π. Including the units, the answer is $v_{\rm rel}$ = 250 mi/hr, 37° west of north.

If a motorboat can travel at a speed of 10 mi/hr (relative to the water) and is in a river that ﬂows due north at 5 mi/hr, what compass heading should the boat take to make its net travel due east? What will be the speed of the boat relative to the river banks?

Solution: If the boat has compass heading an angle θ south of east, we choose θ such that 10 sin θ = 5, to compensate the northward flow of the river. Solving, we find sin θ = 0.5, equivalent to θ = 30°. The east component of the boat's velocity is $10 \cos\theta = 10(\sqrt{3}/2) = 8.66$. Summarizing: compass heading 120° (30° south of east); speed relative to river banks: 8.66 mi/hr.

4.1.4.

The vertices A, B, and C of a triangle are given by the points (−2, 0, 1), (0, 2, 2), and (1, −2, 0), respectively. Find the point D that makes the ﬁgure ABCD a plane parallelogram.

Solution: Letting A = (−2, 0, 1), B = (0, 2, 2), C = (1, −2, 0), we have a parallelogram if B − A = C − D. If in doubt about this, draw and label a ﬁgure. Then D = A − B + C, so D = (−1, −4, −1). 4.1.5.

A triangle is deﬁned by the points of three vectors A, B and C that start at the origin. Show that the vector sum of the successive sides of the triangle (AB + BC + CA) is zero, where the side AB stands for the vector B − A, etc.

Solution: AB + BC + CA = (B − A) + (C − B) + (A − C) = 0 . 4.1.6.

A sphere of radius a is centered at a point r1 . (a) Give the algebraic equation for the sphere. (b) Using vectors, describe the locus of the points of the sphere.

Solution: Letting r1 = (x1 , y1 , z1 ), (a) (x − x1 )2 + (y − y1 )2 + (z − z1 )2 = a2 (b) r = r1 + a, where a is a vector of magnitude a in an arbitrary direction. 4.1.7.

Using formulas for vector components, show that the diagonals of a parallelogram bisect each other.

Solution: Deﬁning the parallelogram by vectors A, B, C, and D, deﬁning vertices relative to a common origin, the condition that ABCD be a parallelogram is that D = A − B + C. (See Exercise 4.1.4). One diagonal is the line connecting A and C, with midpoint at (A+C)/2. The other diagonal connects B and D, with midpoint (B+D)/2. The diagonals bisect each other if these midpoints coincide, equivalent to the equation A + C = B + D This equation is satisﬁed because it is equivalent to the parallelogram condition. 4.1.8.

A corner reﬂector is formed by three mutually perpendicular reﬂecting surfaces. Show that a ray of light incident upon the corner reﬂector (and suﬃciently near the intersection of its three surfaces) is reﬂected back along a line parallel to the line of incidence. Hint. A reﬂection reverses the sign of the vector component perpendicular to the reﬂecting surface.


Solution: Use a coordinate system such that the reﬂecting surfaces are the xy-, xz-, and yz-planes, and with the incident ray in the ﬁrst octant, at velocity v, of components (vx , vy , vz ). Assume initially that all of vx , vy , and vz are negative, and that the ray’s ﬁrst reﬂection is from the xy-plane when z has been reduced to zero. This encounter changes vz to −vz (a positive quantity) and the ray continues (in the ﬁrst octant) to smaller values of x and y. The ray continues to zero values of each of x and y, with reﬂections that cause vy and vx to change in sign, so v is now (−vx , −vy , −vz ), a velocity parallel to, but in the opposite direction from that of its original incidence. If any of vx , vy , or vz are zero, there will be reﬂection from the corresponding reﬂector (respectively the yz-, xz-, or xy-plane), but that component of v is already equal to its negative. Thus our conclusion applies to all incident orientations, provided only that the reﬂection planes extend to suﬃciently large coordinate values (the reason the incident ray is speciﬁed as aimed at a point suﬃciently close to the intersection of the reﬂector planes). 4.1.9.

Consider a unit cube with one corner at the origin and its three sides lying along Cartesian coordinate axes.

(a) Show that there are four body diagonals, each with length $\sqrt{3}$. Representing these as vectors, find their components.

(b) Show that the faces of the cube have diagonals of length $\sqrt{2}$ and determine their components.

Solution: Let the vertices of the cube be designated A through H, and represented by vectors of components A = (0, 0, 0), B = (1, 0, 0), C = (1, 1, 0), D = (0, 1, 0), E = (0, 0, 1), F = (1, 0, 1), G = (1, 1, 1), H = (0, 1, 1). With these choices, A, B, C, and D are in the plane z = 0, and are encountered in that order as one follows the cube edges in that plane. The remaining vertices are in the plane z = 1, with E directly above A, F above B, etc. The body diagonals are therefore AG, BH, CE, and DF, and the face diagonals are AC, BD, EG, FH, AF, AH, BE, BG, CF, CH, DE, and DG.

(a) Body diagonal AG can be described as AG = G − A = (1, 1, 1) − (0, 0, 0) = (1, 1, 1), length $\sqrt{3}$. The other body diagonals are BH = (−1, 1, 1), CE = (−1, −1, 1), and DF = (1, −1, 1). These also have length $\sqrt{3}$. All these diagonals are indeterminate as to overall sign; e.g., AG can be (−1, −1, −1).

(b) Face diagonal AC can be described as AC = C − A = (1, 1, 0) − (0, 0, 0) = (1, 1, 0), length $\sqrt{2}$. The other face diagonals are BD = (−1, 1, 0), EG = (1, 1, 0), FH = (−1, 1, 0), AF = (1, 0, 1), AH = (0, 1, 1), BE = (−1, 0, 1), BG = (0, 1, 1), CF = (0, −1, 1), CH = (−1, 0, 1), DE = (0, −1, 1), and DG = (1, 0, 1). All have length $\sqrt{2}$. They are also indeterminate as to overall sign. Each face diagonal vector appears twice; those on opposite faces are equal.

4.1.10.

Calculate the components of a unit vector that makes equal angles with the positive directions of the x-, y-, and z-axes.

Solution: This vector, $\hat{\mathbf{U}} = u_x \hat{\mathbf{e}}_x + u_y \hat{\mathbf{e}}_y + u_z \hat{\mathbf{e}}_z$, has $u_x = \hat{\mathbf{U}} \cdot \hat{\mathbf{e}}_x = \cos\alpha$, where cos α is a direction cosine. We also have $u_y = \cos\beta$ and $u_z = \cos\gamma$. We require α = β = γ. Since $\cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1$, we have $\cos\alpha = \cos\beta = \cos\gamma = 1/\sqrt{3}$, so $\hat{\mathbf{U}} = [\hat{\mathbf{e}}_x + \hat{\mathbf{e}}_y + \hat{\mathbf{e}}_z]/\sqrt{3}$.

4.2

DOT PRODUCT

Exercises 4.2.1.

Find the angle between the vectors $3\hat{\mathbf{e}}_x - 2\hat{\mathbf{e}}_y + \hat{\mathbf{e}}_z$ and $\hat{\mathbf{e}}_x + 2\hat{\mathbf{e}}_y - 3\hat{\mathbf{e}}_z$.

Solution: These vectors have magnitudes $A = B = \sqrt{14}$ and dot product $\mathbf{A} \cdot \mathbf{B} = -4$. Writing $\mathbf{A} \cdot \mathbf{B} = AB \cos\theta$, we have $14 \cos\theta = -4$, with solution $\cos\theta = -2/7$; $\theta = 1.8605 = 106.60°$.

Given $\mathbf{A} = 2\hat{\mathbf{e}}_x - 3\hat{\mathbf{e}}_z$ and $\mathbf{B} = 3\hat{\mathbf{e}}_x - \hat{\mathbf{e}}_z$, find $\mathbf{A} \cdot \mathbf{B}$.

Solution: $\mathbf{A} \cdot \mathbf{B} = (2)(3) + (-3)(-1) = 9$.

4.2.3.

Show that $\hat{\mathbf{e}}_x - 2\hat{\mathbf{e}}_y - 4\hat{\mathbf{e}}_z$ and $2\hat{\mathbf{e}}_x + 5\hat{\mathbf{e}}_y - 2\hat{\mathbf{e}}_z$ are perpendicular, and find a third vector perpendicular to both.

Solution: Calling these vectors A and B, we check that they are perpendicular by computing A · B = (1)(2) + (−2)(5) + (−4)(−2) = 0. A vector C perpendicular to both must satisfy Cx − 2Cy − 4Cz = 0

and

2Cx + 5Cy − 2Cz = 0.

The vector C can be of arbitrary scale; one way to proceed is to set Cz = 1, leaving the two simultaneous equations Cx − 2Cy = 4

and

2Cx + 5Cy = 2.

These equations have solution Cx = 8/3, Cy = −2/3, so a possibility for C is (8/3, − 2/3, 1). Any positive or negative multiple of this will also be perpendicular to A and B. 4.2.4.

Show that |B|A + |A|B is perpendicular to |B|A − |A|B.

Solution: Form and expand the dot product: ( ) ( ) |B|A + |A|B · |B|A − |A|B = B 2 A · A − A2 B · B = 0 . The terms −|B| |A|A · B and |A| |B|B · A cancel.

4.2.5.

The vector r, starting at the origin, terminates at and speciﬁes the point in space (x, y, z). If a is an arbitrary constant vector and r is allowed to take on all values that satisfy one of the following conditions, characterize the surface deﬁned by the tip of r if (a) (r − a) · a = 0 , (b) (r − a) · r = 0 .

Solution:

(a) A plane through r = a with its normal in the direction of a.

(b) Write $(\mathbf{r} - \mathbf{a}) \cdot \mathbf{r} = (\mathbf{r} - \mathbf{a}/2)^2 - a^2/4 = 0$. The possible values of r lie on a sphere of radius a/2 centered at a/2.

A pipe comes diagonally down the south wall of a building, making an angle of φ with the vertical. Coming into a corner, the pipe turns and continues diagonally down a west-facing wall, still making an angle of φ with the vertical. Given that the angle between the south-wall and west-wall sections of the pipe is 120◦ , ﬁnd the angle φ.

Solution: Use coordinates in which z is vertical, x is horizontal in the plane of the south wall, and y is horizontal, in the plane of the west wall. The portion of the pipe on the south wall is in a direction corresponding to a unit vector with z component cos φ, x component sin φ, y component zero. The unit vector describing the west-wall orientation has z component cos φ, x component zero, and y component sin φ. The dot product of these vectors is therefore cos²φ, and is also −cos 120° (minus because 120° is the interior angle between the two directions). Thus cos²φ = 1/2, so φ = 45°.
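The geometry can be confirmed numerically; this Python addition (not part of the original solution) forms the two unit vectors at φ = 45° and checks the interior angle:

```python
import math

phi = math.radians(45)
south = (math.sin(phi), 0.0, math.cos(phi))  # pipe direction on the south wall
west = (0.0, math.sin(phi), math.cos(phi))   # pipe direction on the west wall

dot = sum(s * w for s, w in zip(south, west))
# The dot product cos^2(phi) must equal -cos(120 deg) = 1/2
assert abs(dot - (-math.cos(math.radians(120)))) < 1e-12
print(dot)
```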

4.3

SYMBOLIC COMPUTING, VECTORS

Exercises 4.3.1.

Using a symbolic computing system, define vectors $\mathbf{A} = A_x \hat{\mathbf{e}}_x + A_y \hat{\mathbf{e}}_y + A_z \hat{\mathbf{e}}_z$ and $\mathbf{B} = B_x \hat{\mathbf{e}}_x + B_y \hat{\mathbf{e}}_y + B_z \hat{\mathbf{e}}_z$ and then form $(\mathbf{A}+\mathbf{B}) \cdot (\mathbf{A}-\mathbf{B})$. Show that your result is equal to $A^2 - B^2$. Note. In maple, the components of A and B are assumed complex, but this exercise assumes them to be real. Tell maple this with assume(Ax,real) and similar commands for all other components of A and B. This issue does not arise in mathematica.

Solution: In maple,

assume(Ax,real); assume(Ay,real); assume(Az,real);
assume(Bx,real); assume(By,real); assume(Bz,real);
A := Vector([Ax,Ay,Az]):
B := Vector([Bx,By,Bz]):
C := (A + B) . (A - B):
expand(C - (A . A) + (B . B));
                                0

In mathematica,

A = {Ax, Ay, Az};
B = {Bx, By, Bz};
G = (A + B) . (A - B);
Expand[ G - A . A + B . B ]
                                0

(We use G rather than C because C is a protected symbol in mathematica.)

4.3.2.

Use your symbolic computing system to verify the result of Exercise 4.2.1. Go from the cosine of an angle to the angle itself using the function arccos or ArcCos.

Solution: In maple,

A := Vector([3, -2, 1.]):
B := Vector([1., 2, -3]):
AA := A . A:  BB := B . B:  AdotB := A . B:
Ctheta := AdotB / sqrt(AA * BB):
Theta := arccos(Ctheta);  ThetaDeg := evalf(Theta*180/Pi);

Θ := 1.860548028
ThetaDeg := 106.6015495

We introduced a decimal point in the definitions of A and B to force decimal evaluation. In mathematica,

a = {3, -2, 1.};  b = {1., 2, -3};
aa = a . a;  bb = b . b;  aDotb = a . b;
cTheta = aDotb / Sqrt[aa * bb];
theta = ArcCos[cTheta]
thetaDeg = theta*180/Pi

1.86055
106.602

Again, the decimal points in a and b force decimal evaluation.

4.3.3.

Using your symbolic computing system, define the four vectors
$$\mathbf{A} = \begin{pmatrix} 5 \\ 3 \\ 1 \end{pmatrix} , \quad \mathbf{B} = \begin{pmatrix} 3 \\ -5 \\ 0 \end{pmatrix} , \quad \mathbf{C} = \begin{pmatrix} 1 \\ 0 \\ -5 \end{pmatrix} , \quad \mathbf{D} = \begin{pmatrix} 0 \\ -1 \\ 3 \end{pmatrix} .$$

(If any of these symbols have a ﬁxed meaning in your symbolic system and therefore cannot represent user-deﬁned quantities, change them as needed to avoid errors—e.g., to lowercase letters.) (a) Calculate the dot products for all pairs of the above vectors (including those with both members the same). (b) Based on the results of part (a), what can you say about the directions of B, C, and D? (c) Based on your answer to part (b), ﬁnd a relation connecting the vectors B, C, and D.


Solution: In maple, deﬁne A := Vector([5,3,1]): C := Vector([1,0,-5]):

B := Vector([3,-5,0]): d := Vector([0,-1,3]):

In mathematica, deﬁne a = {5, 3, 1}; b = {3, -5, 0}; c = {1, 0, -5}; d = {0, -1, 3};

(a) These vectors have (in maple) the following dot products:

AA := A.A;  BB := B.B;  CC := C.C;  dd := d.d;
        AA := 35    BB := 34    CC := 26    dd := 10
AB := A.B;  AC := A.C;  Ad := A.d;
        AB := 0     AC := 0     Ad := 0
BC := B.C;  Bd := B.d;  Cd := C.d;
        BC := 3     Bd := 5     Cd := −15

In mathematica, the formulas are the same except that = is used instead of :=, the variables are all lowercase, and the results are not labeled by their names.

(b) We note that B, C, and D are all orthogonal to A, meaning that all lie in a plane perpendicular to A.

(c) Since a plane can contain at most two linearly independent vectors, D must be a linear combination of B and C. To obtain a zero first component of D, we note that it must be of the form k(B − 3C). To obtain the remaining components of D at proper scale, we take k = 1/5.

4.3.4.

Continuing with the vectors deﬁned in Exercise 4.3.3, use your symbolic system to compute (A − 3B) · C and (A − 3C) · B.

Solution: In maple, these computations are

(A - 3*B) . C;      −9
(A - 3*C) . B;      −9

In mathematica,

(a - 3*b) . c      −9
(a - 3*c) . b      −9

The two calculations give the same result because A · B = A · C = 0.
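A similar plain-Python check (again our own sketch) confirms both products:

```python
# Check of Exercise 4.3.4 in plain Python (vectors as in Exercise 4.3.3).

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

A, B, C = [5, 3, 1], [3, -5, 0], [1, 0, -5]
amb = [a - 3 * b for a, b in zip(A, B)]  # A - 3B
amc = [a - 3 * c for a, c in zip(A, C)]  # A - 3C

print(dot(amb, C), dot(amc, B))  # -9 -9
# Equal because (A - 3B).C = A.C - 3 B.C and (A - 3C).B = A.B - 3 B.C,
# with A.B = A.C = 0.
assert dot(amb, C) == dot(amc, B) == -9
```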

CHAPTER 4. VECTORS AND MATRICES

4.4  MATRICES

Exercises

4.4.1. Given the matrices

A = [[3, 4, −1, 3], [1, 2, 0, 4]] ,    B = [[2, 3], [−1, 1], [4, 1]] ,    C = [[2, 3, 1], [−1, −2, 3], [1, 0, −1]] ,

evaluate or identify as meaningless each of the following matrix products: A², AB, BA, B², AC, CA, BC, CB, C².

Solution: The matrix dimensions are inconsistent with the multiplications A², AB, B², AC, CA, and BC. The valid multiplications are

BA = [[9, 14, −2, 18], [−2, −2, 1, 1], [13, 18, −4, 16]] ,
CB = [[5, 10], [12, −2], [−2, 2]] ,
C² = [[2, 0, 10], [3, 1, −10], [1, 3, 2]] .

4.4.2. Given the matrices

A = [[1], [4], [2], [3]] ,    B = [[4, 3, 2, 1]] ,

evaluate or identify as meaningless each of the following expressions: 2AB, 3BA, [A, B], A², B².

Solution: The products A², B², and [A, B] are meaningless; note that although AB and BA are both defined, the difference [A, B] = AB − BA is not, since AB is 4 × 4 while BA is 1 × 1. The valid multiplications are

2AB = [[8, 6, 4, 2], [32, 24, 16, 8], [16, 12, 8, 4], [24, 18, 12, 6]] ,    3BA = 69.

4.4.3.

Evaluate

(2  5) [[1, 2], [3, −1]] [[2], [1]] .

Solution: 33

4.4.4.

Evaluate

(x  y) [[1, 2], [3, −1]] [[x], [y]] .

Solution: x² + 5xy − y²

4.4.5.

A matrix A with elements aij = 0 for j < i is called upper triangular. This nomenclature is appropriate because the elements below the principal diagonal vanish. Show that the product of two upper triangular matrices is also an upper triangular matrix.


Solution: Let A and B be upper triangular; this means that aµj = 0 if µ > j and that bjν = 0 if j > ν. Then C = AB has elements

cµν = Σj aµj bjν .

The j summation in the formula for cµν will have no nonzero terms unless there is at least one value of j such that µ ≤ j ≤ ν. There will be no such j unless µ ≤ ν, so cµν = 0 if µ > ν. This condition implies that C is upper triangular.

4.4.6.

By writing the matrices in terms of their elements, show that matrix operations satisfy (a) The distributive laws A(B + C) = AB + AC and (B + C)A = BA + CA, (b) The associative law A(BC) = (AB)C.

Solution:

(a) [A(B + C)]ik = Σj aij (bjk + cjk) = Σj aij bjk + Σj aij cjk = [AB + AC]ik ,

    [(B + C)A]ik = Σj (bij + cij) ajk = Σj bij ajk + Σj cij ajk = [BA + CA]ik .

(b) [A(BC)]µν = Σj aµj ( Σk bjk ckν ) = Σk ( Σj aµj bjk ) ckν = [(AB)C]µν .

4.4.7.

Show that if and only if A and B commute will it be true that (A + B)(A − B) = A² − B².

Solution: Expand the matrix product: (A + B)(A − B) = A² + BA − AB − B² = A² − B² − [A, B]. The commutator [A, B] = AB − BA is zero if and only if A and B commute.

4.4.8.

Show that the matrices

A = [[0, 1, 0], [0, 0, 0], [0, 0, 0]] ,    B = [[0, 0, 0], [0, 0, 0], [1, 0, 0]] ,    C = [[0, 0, 0], [0, 0, 0], [0, 1, 0]]

satisfy the commutation relations

[B, A] = C ,    [A, C] = 0 ,    [B, C] = 0 .

Solution: Expand the commutators and explicitly evaluate the matrix multiplications. 4.4.9.

The three Pauli spin matrices are

σ1 = [[0, 1], [1, 0]] ,    σ2 = [[0, −i], [i, 0]] ,    σ3 = [[1, 0], [0, −1]] .

Show that

(a) σi² = [[1, 0], [0, 1]] ,    i = 1, 2, 3.

(b) σ1σ2 = iσ3 ,    σ2σ3 = iσ1 ,    σ3σ1 = iσ2 .

(c) σiσj + σjσi = 0 ,    i ≠ j.

Property (c) indicates that different σi anticommute (this means that commutation changes the sign of the matrix product).

Solution: These properties can be proved by explicit evaluations.

These exercises refer to the following matrices:

A = [[1/2, √3/2], [−√3/2, 1/2]] ,    B = [[1/2, −√3/2], [√3/2, 1/2]] ,    C = [[i/2, √3/2], [√3/2, −i/2]] ,

D = [[0, 1], [2, 3]] ,    F = [[3, −4i], [4i, 2]] ,    G = [[0, i], [0, 4]] ,    H = [[3, i], [0, 0]] .

4.4.10.

(a) Compute AB, BA, and their commutator [A, B] ≡ AB − BA.
(b) Repeat for AC, CA, and [A, C].
(c) Repeat for BC, CB, and [B, C].

Solution:

(a) AB = [[1, 0], [0, 1]] ,    BA = [[1, 0], [0, 1]] ,    [A, B] = [[0, 0], [0, 0]] .

(b) AC = [[(3 + i)/4, (1 − i)√3/4], [(1 − i)√3/4, (−3 − i)/4]] ,
    CA = [[(−3 + i)/4, (1 + i)√3/4], [(1 + i)√3/4, (3 − i)/4]] ,
    [A, C] = [[3/2, −i√3/2], [−i√3/2, −3/2]] .

(c) BC = [[(−3 + i)/4, (1 + i)√3/4], [(1 + i)√3/4, (3 − i)/4]] ,
    CB = [[(3 + i)/4, (1 − i)√3/4], [(1 − i)√3/4, (−3 − i)/4]] ,
    [B, C] = [[−3/2, i√3/2], [i√3/2, 3/2]] .

4.4.11.

Which of the matrices A through H are (1) symmetric, (2) orthogonal, (3) unitary, (4) Hermitian?

Solution: C is symmetric. A and B are both orthogonal and unitary. F is Hermitian. 4.4.12.

Given that a is a real column vector and b is a real row vector, write a matrix product that corresponds to the dot product a · b.


Solution: Two possibilities are ba and aT bT . 4.4.13.

Prove that matrices G and H are singular without using the formulas for the determinant or matrix inverse that are introduced in later sections of this book.

Solution: Let c = [[1], [0]] and r = (0  1). Note that Gc = 0. If G−1 existed, we could multiply that equation by it (on the left), reaching c = 0, a contradiction. We must conclude that G−1 does not exist. From the easily verified equation rH = 0, multiplication on the right by H−1 produces a contradiction, showing that H−1 also does not exist.

4.4.14.

By carrying out the indicated matrix operations, verify that (DF)T = FT DT , that (DF)∗ = D∗ F∗ , and that (DF)† = F† D† .

Solution: These results follow directly when the indicated operations are carried out. 4.4.15.

Show that if matrices U and V are unitary, so are the matrix products UV and VU.

Solution: To show that UV is unitary we must verify that UV(UV)† = 1. Writing (UV)† = V†U†, we have UV(UV)† = UVV†U†, which simplifies to a unit matrix because UU† = VV† = 1. The same argument applies to VU.
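The argument can be illustrated numerically. In this sketch U (a real rotation) and V (a diagonal phase matrix) are sample unitary matrices of our own choosing:

```python
# Numerical illustration of the result: the product of two unitary matrices
# is unitary.  U (a real rotation) and V (a diagonal phase matrix) are
# sample unitary matrices of our own choosing.
import cmath
from math import cos, sin

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(X):
    return [[complex(X[j][i]).conjugate() for j in range(2)] for i in range(2)]

def is_unit(X, tol=1e-12):
    return all(abs(X[i][j] - (1 if i == j else 0)) < tol
               for i in range(2) for j in range(2))

U = [[cos(0.3), -sin(0.3)], [sin(0.3), cos(0.3)]]
V = [[cmath.exp(0.7j), 0], [0, cmath.exp(-0.2j)]]

assert is_unit(matmul(U, dagger(U))) and is_unit(matmul(V, dagger(V)))
UV = matmul(U, V)
assert is_unit(matmul(UV, dagger(UV)))  # UV is unitary as well
print("UV (UV)^dagger = 1, as expected")
```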

4.5

SYMBOLIC COMPUTING, MATRICES

Exercises

4.5.1. Using a symbolic computing system, form the two matrices

X = [[0, 1], [−1, 0]]    and    Y = [[1, 2], [3, 4]] .

Then form X−1, Y−1, XT, YT, (XY)−1, (XY)T, and verify that (XY)−1 = Y−1X−1 and that (XY)T = YTXT.

Solution: In maple, the sequence of commands that carry out this exercise is

with(LinearAlgebra):
X := Matrix(2,2,[[0,1],[-1,0]]);
Y := Matrix(2,2,[[1,2],[3,4]]);
Xinv := MatrixInverse(X);
Yinv := MatrixInverse(Y);
Xtr := Transpose(X);
Ytr := Transpose(Y);
XY := X . Y;

XYinv := MatrixInverse(XY);
Yinv . Xinv;
XYtr := Transpose(XY);
Ytr . Xtr;

In mathematica, the sequence of commands for this exercise is

X = {{0, 1}, {-1, 0}}
Y = {{1, 2}, {3, 4}}
Xinv = Inverse[X]
Yinv = Inverse[Y]
Xtr = Transpose[X]
Ytr = Transpose[Y]
XY = X . Y
XYinv = Inverse[XY]
Yinv . Xinv
XYtr = Transpose[XY]
Ytr . Xtr

4.5.2.

The names HermitianTranspose, MatrixInverse, and ConjugateTranspose are inconveniently long. Write procedures for your symbolic computing system that use Adj instead of HermitianTranspose or ConjugateTranspose, with Conj standing for complex conjugation (both maple and mathematica) and Inv for MatrixInverse or Inverse. Present examples to show that these shortened names work properly.

Solution: Here are examples of alternative procedure definitions. In maple, these definitions must have been preceded by with(LinearAlgebra).

maple:

Inv  := proc(M) MatrixInverse(M) end proc;
Adj  := proc(M) HermitianTranspose(M) end proc;
Conj := proc(M) map(conjugate, M) end proc;

mathematica:

Inv[M_]  := Inverse[M]
Adj[M_]  := ConjugateTranspose[M]
Conj[M_] := Conjugate[M]

4.5.3.

Repeat Exercise 4.4.10 using symbolic computation.

Solution: Define A, B, and C using one of

a := Matrix(2,2,[[ 1/2, sqrt(3)/2],[-sqrt(3)/2, 1/2]]):
b := Matrix(2,2,[[ 1/2,-sqrt(3)/2],[ sqrt(3)/2, 1/2]]):
c := Matrix(2,2,[[ I/2, sqrt(3)/2],[ sqrt(3)/2,-I/2]]):

a = {{1/2, Sqrt[3]/2},{-Sqrt[3]/2,1/2}}
b = {{1/2,-Sqrt[3]/2},{ Sqrt[3]/2,1/2}}
c = {{I/2, Sqrt[3]/2},{ Sqrt[3]/2,-I/2}}

Then carry out the following (except for final punctuation, the same in both symbolic systems):

(a) a . b      b . a      a . b - b . a
(b) a . c      c . a      a . c - c . a
(c) b . c      c . b      b . c - c . b

4.5.4.

Verify that matrices G and H of Exercise 4.4.13 are singular using symbolic computation.

Solution: In maple,

> with(LinearAlgebra):
> G := Matrix(2,2,[[0,I],[0,4]]):
> H := Matrix(2,2,[[3,I],[0,0]]):
> Determinant(G), Determinant(H);
0, 0

In mathematica,

G = {{0,I},{0,4}};  H = {{3,I},{0,0}};

Det[G]      0
Det[H]      0

4.5.5. Repeat Exercise 4.4.14 using symbolic computation.

Solution: Illustrate with 2 × 2 matrices. In maple,

d := Matrix(2,2,[[d11,d12],[d21,d22]]):
f := Matrix(2,2,[[f11,f12],[f21,f22]]):
df := d . f:
dfT := Transpose(df):
dfA := HermitianTranspose(df):
dfC := Transpose(dfA):                    (this is the conjugate of df)
Transpose(f) . Transpose(d);              (compare with dfT)
da := HermitianTranspose(d):  fa := HermitianTranspose(f):
fa . da;                                  (compare with dfA)
Transpose(fa) . Transpose(da);            (compare with dfC)

In mathematica,

d = {{d11,d12},{d21,d22}};
f = {{f11,f12},{f21,f22}};
df = d . f
dfT = Transpose[df];
dfA = ConjugateTranspose[df];
dfC = Conjugate[df];
Transpose[f] . Transpose[d]                      Compare these with
Conjugate[d] . Conjugate[f]                      dfT, dfC, and dfA.
ConjugateTranspose[f] . ConjugateTranspose[d]

4.5.6. Given X = [[1, 0, 0], [0, 0, 1], [0, −1, 0]],

use symbolic computation to find Y = X−1, and check that XY = YX = 1. Is X Hermitian, orthogonal, or unitary (note that these designations are not mutually exclusive)?

Solution: In maple,

> with(LinearAlgebra):
> X := Matrix(3,3,[[1,0,0],[0,0,1],[0,-1,0]]):
> Y := MatrixInverse(X):
> X . Y,  Y . X;        (produces unit matrices)

In mathematica,

x = {{1,0,0},{0,0,1},{0,-1,0}};
y = Inverse[x];
x . y        {{1,0,0},{0,1,0},{0,0,1}}
y . x        {{1,0,0},{0,1,0},{0,0,1}}

X is orthogonal and unitary, but not Hermitian. 
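A quick plain-Python confirmation (our own sketch) of these properties of X:

```python
# Plain-Python confirmation (our own sketch) of the properties of X.
X = [[1, 0, 0], [0, 0, 1], [0, -1, 0]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

assert matmul(X, transpose(X)) == I  # orthogonal; X is real, so also unitary
assert transpose(X) != X             # not symmetric, hence not Hermitian
print("X is orthogonal and unitary but not Hermitian")
```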

4.5.7. Define A = [[0, 0, 0], [0, 0, −i], [0, i, 0]] and B = [[1, 0, 0], [0, 0, 0], [0, 0, 0]].

Using symbolic computation,

(a) Verify that A is singular, but that A + B is nonsingular.
(b) Form (A + B)−1 − B. Check that its lower-right 2 × 2 block is the inverse of the lower-right 2 × 2 block of A.

Solution: Define, using one of

a := Matrix(3,3,[[0,0,0],[0,0,-I],[0,I,0]]);
b := Matrix(3,3,[[1,0,0],[0,0,0],[0,0,0]]);

a = {{0,0,0},{0,0,-I},{0,I,0}}
b = {{1,0,0},{0,0,0},{0,0,0}}

(a) Then compute (in maple)

Determinant(a);       0  (a is singular)
Determinant(a+b);     1  (a + b is nonsingular)

In mathematica,

Det[a]       0  (a is singular)
Det[a+b]     1  (a + b is nonsingular)

(b) Execute one of

MatrixInverse(a+b) - b;        [[0, 0, 0], [0, 0, −i], [0, i, 0]]
Inverse[a+b] - b               {{0,0,0},{0,0,-i},{0,i,0}}

By hand computation,

[[0, −i], [i, 0]] [[0, −i], [i, 0]] = 1 ,

confirming that the lower-right 2 × 2 block of A is its own inverse.

4.6  SYSTEMS OF LINEAR EQUATIONS

Exercises

For each of the following equation sets,

(a) Compute the determinant of the coefficients, using Eq. (4.61),
(b) Row-reduce the coefficient matrix to upper triangular form and either obtain the most general solution to the equations or explain why no solution exists,
(c) Confirm that the existence and/or uniqueness of the solutions you found in part (b) correspond to the zero (or nonzero) value you found for the determinant.

4.6.1.

x − y + 2z = 5
2x       + z = 3
x + 3y − z = −6

Solution: The determinant of the coefficients is 6. Row reduction leads to the unique solution x = 1, y = −2, z = 1.

4.6.2.

2x + 4y + 3z = 4
3x + y + 2z = 3
4x − 2y + z = 2

Solution: The determinant of the coefficients is zero. The first step of row reduction yields the three equations

2x + 4y + 3z = 4
   − 5y − (5/2)z = −3
   − 10y − 5z = −6

The second and third of these are proportional, so we have in fact only two independent equations. Solving the first two equations for arbitrary y, we get x = (5y + 1)/5, z = (−10y + 6)/5.
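The one-parameter family just found can be verified exactly; this small Python sketch (ours) substitutes several values of y using rational arithmetic:

```python
# Exact check of the general solution of Exercise 4.6.2: for any y,
# x = (5y + 1)/5 and z = (-10y + 6)/5 satisfy all three equations.
from fractions import Fraction as F

for y in [F(0), F(1), F(-3, 2), F(7, 5)]:
    x = (5 * y + 1) / 5
    z = (-10 * y + 6) / 5
    assert 2 * x + 4 * y + 3 * z == 4
    assert 3 * x + y + 2 * z == 3
    assert 4 * x - 2 * y + z == 2
print("one-parameter family of solutions confirmed")
```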

4.6.3.

2x + 4y + 3z = 2
3x + y + 2z = 1
4x − 2y + z = −1

Solution: The determinant of the coefficients is zero. An attempt at row reduction leads to an inconsistency. There is no solution.

4.6.4.

x − y + 2z = 0
2x       + z = 0
x + 3y − z = 0

Solution: The determinant of the coefficients is 6. The unique solution is x = y = z = 0.

4.6.5.

√6 x + 2√3 y + 3√2 z = 0
√2 x + 2y + √6 z = 0
(1/√2) x + y + √(3/2) z = 0

Solution: The determinant of the coefficients is zero. The three equations are in fact all proportional. Solutions exist with arbitrary x and z, with y = −x/√2 − z√(3/2).

4.6.6.

√6 x + 2√3 y + 3√2 z = 3
√2 x + 2y + √6 z = 2
(1/√2) x + y + √(3/2) z = 1

Solution: The determinant of the coefficients is zero. The three left-hand sides are in fact all proportional, but in a different ratio than the right-hand sides. The first step in row reduction yields the inconsistent equation 0 = 1. There is no solution.
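A numerical check (our own sketch) of the proportionality of the left-hand sides and the failure of the right-hand sides to follow suit:

```python
# Numerical check (our own sketch) that the left-hand sides of the three
# equations are proportional while the right-hand sides are not.
from math import sqrt, isclose

rows = [[sqrt(6), 2 * sqrt(3), 3 * sqrt(2)],
        [sqrt(2), 2, sqrt(6)],
        [1 / sqrt(2), 1, sqrt(1.5)]]
rhs = [3, 2, 1]

# Row 1 is sqrt(12) times row 3, and row 2 is 2 times row 3.
for row, k in zip(rows[:2], [sqrt(12), 2]):
    assert all(isclose(a, k * b) for a, b in zip(row, rows[2]))

# The first right-hand side breaks the pattern: 3 != sqrt(12) * 1.
assert not isclose(rhs[0], sqrt(12) * rhs[2])
print("inconsistent system: no solution")
```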

4.7  DETERMINANTS

Exercises

Using symbolic computation, obtain values for the following determinants.

4.7.1.

| 3   1  −2   2 |
| −1  0   1   2 |
| 2   1  −1   0 |
| 1   3   1   2 |

Solution: Enter one of

M := Matrix(4,4,[[3,1,-2,2],[-1,0,1,2],[2,1,-1,0],[1,3,1,2]]);
M = {{3,1,-2,2},{-1,0,1,2},{2,1,-1,0},{1,3,1,2}}

Then call Determinant(M) or Det[M]. The determinant has value −4.

4.7.2.

| 3   4   0   1  −2   2 |
| −2  0  −2  −1   1   2 |
| 2   1   1  −1   0   1 |
| 1   3   0  −2   1   2 |
| −2  1   1   3   2   1 |
| 0  −4   2  −1  −3  −3 |

Solution: Enter one of

M := Matrix(6,6,[[3,4,0,1,-2,2],[-2,0,-2,-1,1,2],[2,1,1,-1,0,1],
                 [1,3,0,-2,1,2],[-2,1,1,3,2,1],[0,-4,2,-1,-3,-3]]);
M = {{3,4,0,1,-2,2},{-2,0,-2,-1,1,2},{2,1,1,-1,0,1},
     {1,3,0,-2,1,2},{-2,1,1,3,2,1},{0,-4,2,-1,-3,-3}}

Then call Determinant(M) or Det[M]. The determinant has value 642.

4.7.3.

| x−3   3    4 |
| 3    x−2   1 |
| 4     1   x−1 |

Solution: Enter one of

M := Matrix(3,3,[[x-3,3,4],[3,x-2,1],[4,1,x-1]]);
M = {{x-3,3,4},{3,x-2,1},{4,1,x-1}}

Then call Determinant(M) or Det[M]. The determinant has value x³ − 6x² − 15x + 62.

4.7.4.

| x−1   1    0    2 |
| 1    x−1   3    0 |
| 0     3   x−2   4 |
| 2     0    4   x−3 |

Solution: Enter one of

M := Matrix(4,4,[[x-1,1,0,2],[1,x-1,3,0],[0,3,x-2,4],[2,0,4,x-3]]);
M = {{x-1,1,0,2},{1,x-1,3,0},{0,3,x-2,4},{2,0,4,x-3}}

Then call Determinant(M) or Det[M]. The determinant has value x⁴ − 7x³ − 13x² + 68x − 47.

4.8  APPLICATIONS OF DETERMINANTS

Exercises

4.8.1. Using symbolic computation, write a procedure that will evaluate (in decimal form) the Hilbert determinant H of order n, whose elements are hij = (i + j − 1)−1. When n = 3, this determinant has the form

| 1    1/2  1/3 |
| 1/2  1/3  1/4 |
| 1/3  1/4  1/5 |

Get results for n = 1 through n = 8. Note. This determinant becomes notoriously ill-conditioned as n increases, with a resultant value that is far smaller than any of its elements.

Solution: maple code for the Hilbert determinant is

> DHilbert := proc(n) local j,k,H;
>   H := Matrix(n,n,[seq([seq(1/(j+k), j=0 .. n-1)], k=1 .. n)]);
>   evalf(Determinant(H))
> end proc;

mathematica code for the Hilbert determinant is

DHilbert[n_] := Module[{H, j, k},
  H = Table[Table[1/(j+k), {j, 0, n-1}], {k, 1, n}];
  N[Det[H]] ]

These codes give the following results for n values from 1 through 8:

1.
0.0833333
0.000462963
1.65344 × 10−7
3.74930 × 10−12
5.36730 × 10−18
4.83580 × 10−25
2.73705 × 10−33
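For readers without a symbolic system, the same determinants can be generated exactly in Python with rational arithmetic; this sketch (ours, not the book's procedure) uses a simple cofactor expansion, which is adequate for n ≤ 8.

```python
# Exact rational computation of the Hilbert determinants; a cofactor
# expansion is plenty fast for n <= 8.
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def hilbert_det(n):
    H = [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    return det(H)

for n in range(1, 9):
    print(n, float(hilbert_det(n)))
```

The n = 2 and n = 3 values are exactly 1/12 and 1/2160, matching the decimal table above.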

4.8.2. Solve the following equation system (keeping results to at least six decimal places):

1.0x1 + 0.9x2 + 0.8x3 + 0.4x4 + 0.1x5          = 1.0
0.9x1 + 1.0x2 + 0.8x3 + 0.5x4 + 0.2x5 + 0.1x6 = 0.9
0.8x1 + 0.8x2 + 1.0x3 + 0.7x4 + 0.4x5 + 0.2x6 = 0.8
0.4x1 + 0.5x2 + 0.7x3 + 1.0x4 + 0.6x5 + 0.3x6 = 0.7
0.1x1 + 0.2x2 + 0.4x3 + 0.6x4 + 1.0x5 + 0.5x6 = 0.6
        0.1x2 + 0.2x3 + 0.3x4 + 0.5x5 + 1.0x6 = 0.5

Solution: Enter the coefficient matrix M and the column matrix b containing the right-hand sides; then form M−1 b. In maple,

> M := Matrix(6,6,[[1,.9,.8,.4,.1,0],[.9,1,.8,.5,.2,.1],
>                  [.8,.8,1,.7,.4,.2],[.4,.5,.7,1,.6,.3],
>                  [.1,.2,.4,.6,1,.5],[0,.1,.2,.3,.5,1]]):
> b := Matrix(6,1,[[1],[.9],[.8],[.7],[.6],[.5]]):
> MInv := MatrixInverse(M):   MInv . b;

In mathematica,

M = {{1,.9,.8,.4,.1,0},{.9,1,.8,.5,.2,.1},{.8,.8,1,.7,.4,.2},
     {.4,.5,.7,1,.6,.3},{.1,.2,.4,.6,1,.5},{0,.1,.2,.3,.5,1}};
b = {{1},{.9},{.8},{.7},{.6},{.5}};
Minv = Inverse[M];   Minv . b

The above coding produces output that can be written

x1 =  1.882821
x2 = −0.361789
x3 = −0.968894
x4 =  0.442206
x5 =  0.410216
x6 =  0.392188

4.8.3.

Using a symbolic computing system, solve the set of equations

x + 3y + 3z = 3 ,    x − y + z = 1 ,    2x + 2y + 4z = 4 .

What can you conclude from the results?

Solution: In maple, the solve command yields x = 3y, z = 1 − 2y. In mathematica, Solve yields y = x/3, z = 1 − 2x/3. These results indicate that the left-hand sides of the equations exhibit linear dependence but that the right-hand sides have values that do not make the equations inconsistent. There are therefore only two independent equations, so there is a manifold of solutions parameterized (in maple) by the arbitrary value of y, or equivalently (in mathematica) by the arbitrary value of x.
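An exact cross-check of this conclusion (our own Python sketch):

```python
# Exact cross-check of Exercise 4.8.3: the coefficient matrix is singular,
# and the Maple-form solution x = 3y, z = 1 - 2y works for every y.
from fractions import Fraction as F

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[1, 3, 3], [1, -1, 1], [2, 2, 4]]
assert det3(M) == 0  # linearly dependent left-hand sides

for y in [F(0), F(1), F(-2, 3)]:
    x, z = 3 * y, 1 - 2 * y
    assert x + 3 * y + 3 * z == 3
    assert x - y + z == 1
    assert 2 * x + 2 * y + 4 * z == 4
print("manifold of solutions confirmed")
```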

[Figure 4.10: Battery-powered circuit for Exercise 4.8.4, containing batteries V1 and V2 and resistors R1 to R6.]

4.8.4.

For the electric circuit shown in Fig. 4.10, find the currents I1 to I6 that flow through the respective resistors R1 to R6. Take V1 = 2 volt, V2 = 5 volt, R1 = 2 ohm, R2 = 4 ohm, R3 = 6 ohm, R4 = 8 ohm, R5 = 3 ohm, R6 = 5 ohm. For each current indicate its direction of flow as one of "up", "down", "left", or "right". Hint. There are only three independent current loops.

Solution: Let I1 represent the current flow clockwise around the upper right loop, with I2 the clockwise flow around the lower right loop, and I3 the counterclockwise flow around the left loop. Where a circuit element is in more than one loop, the net current is the sum of the individual-loop contributions. This arrangement assures continuity of current flow. The values of the In are determined by the requirement (Kirchhoff's law) that the net change in voltage around any loop must add to zero. Each battery contributes a voltage change given by its value of V, and the current flow through each resistor causes a voltage drop IR. For our three loops,

(I1 + I3)R1 + I1 R3 + (I1 − I2)R4 = V1
(I2 + I3)R2 + (I2 − I1)R4 + I2 R5 = V2
(I1 + I3)R1 + I3 R6 + (I2 + I3)R2 = V1 + V2

Inserting the given values for the Rn and Vn, these equations become

16 I1 − 8 I2 + 2 I3 = 2
−8 I1 + 15 I2 + 4 I3 = 5
2 I1 + 4 I2 + 11 I3 = 7
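The three loop equations above can be cross-checked exactly; this Python sketch (ours) applies Cramer's rule with rational arithmetic.

```python
# Exact solution of the three loop equations by Cramer's rule.
from fractions import Fraction as F

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[16, -8, 2], [-8, 15, 4], [2, 4, 11]]
b = [2, 5, 7]
D = det3(M)

currents = []
for col in range(3):
    Mc = [row[:] for row in M]
    for r in range(3):
        Mc[r][col] = b[r]
    currents.append(F(det3(Mc), D))

print(currents)  # [Fraction(86, 373), Fraction(123, 373), Fraction(177, 373)]
assert currents == [F(86, 373), F(123, 373), F(177, 373)]
```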

In matrix form, these equations are

[[16, −8, 2], [−8, 15, 4], [2, 4, 11]] [[I1], [I2], [I3]] = [[2], [5], [7]] ,

with solution I1 = 86/373, I2 = 123/373, I3 = 177/373. These loop currents correspond to the following net currents through the resistors:

R1: 263/373 amp, up
R2: 300/373 amp, up
R3: 86/373 amp, right
R4: 37/373 amp, right
R5: 123/373 amp, left
R6: 177/373 amp, down

For each of the following equation sets,

(a) Obtain by hand computation its general solution,
(b) Check your answer using symbolic computation.

4.8.5.

3x + y + 2z = 0
4x − 2y + z = 0
2x + 4y + 3z = 0

Solution: The coefficients in this equation set have zero determinant. The equations therefore have a manifold of solutions. Solve the first two equations for x and y in terms of z:

3x + y = −2z
4x − 2y = −z,

obtaining x = y = −z/2.

2x + y + 2z = 0
4x + y + z = 0
2x       − 3z = 0

Solution: The coefficients in this equation set have a determinant of value 4. Since it is not zero, the equations have the unique solution x = y = z = 0.

Determine whether each of the following sets of functions exhibits linear dependence.

4.8.7.

sin x,  sin 2x,  sin 3x.

Hint. Can you find any value of x for which the Wronskian is easily evaluated and has a nonzero value?

Solution: The Wronskian is

| sin x    sin 2x      sin 3x |
| cos x    2 cos 2x    3 cos 3x |
| −sin x   −4 sin 2x   −9 sin 3x |

At x = π/2 this determinant is

| 1    0   −1 |
| 0   −2    0 |
| −1   0    9 |

which has the value −16. Since the determinant is nonzero, the three functions are linearly independent.
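The same conclusion can be reached numerically; this Python sketch (ours) evaluates the Wronskian at x = π/2.

```python
# Numerical evaluation of the Wronskian of sin x, sin 2x, sin 3x at x = pi/2.
from math import sin, cos, pi, isclose

def wronskian(x):
    m = [[sin(x),  sin(2 * x),      sin(3 * x)],
         [cos(x),  2 * cos(2 * x),  3 * cos(3 * x)],
         [-sin(x), -4 * sin(2 * x), -9 * sin(3 * x)]]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

w = wronskian(pi / 2)
print(w)  # -16 up to rounding
assert isclose(w, -16, abs_tol=1e-9)
```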

4.8.8.

1, x, (3x² − 1)/2.

Solution: The Wronskian is

| 1  x  (3x² − 1)/2 |
| 0  1  3x |
| 0  0  3 |

= 3. These functions are linearly independent.

4.8.9.

cosh x,  sinh x,  e^x.

Solution: The Wronskian is

| cosh x  sinh x  e^x |
| sinh x  cosh x  e^x |
| cosh x  sinh x  e^x |

This determinant has two identical rows and is identically zero. The functions are linearly dependent. This result is hardly a surprise since cosh x and sinh x are linear combinations of e^x and e^(−x).

4.8.10.

sin x,  cos x,  e^(ix).

Solution: The Wronskian is

| sin x    cos x    e^(ix) |
| cos x    −sin x   i e^(ix) |
| −sin x   −cos x   −e^(ix) |

This determinant has two rows that are proportional and is therefore zero. The functions are linearly dependent.

4.8.11.

sin x,  cos x,  sin x cos x.

Solution: We can save effort by noticing that sin x cos x = (sin 2x)/2. The Wronskian is

| sin x    cos x    (sin 2x)/2 |
| cos x    −sin x   cos 2x |
| −sin x   −cos x   −2 sin 2x |

It is nonzero at x = π/4, so the functions are linearly independent.

4.8.12.

sin²x,  cos²x,  sin 2x,  cos 2x.

Solution: The Wronskian is

| sin²x               cos²x                sin 2x     cos 2x |
| 2 sin x cos x       −2 sin x cos x       2 cos 2x   −2 sin 2x |
| 2 cos²x − 2 sin²x   2 sin²x − 2 cos²x    −4 sin 2x  −4 cos 2x |
| −8 sin x cos x      8 sin x cos x        −8 cos 2x  8 sin 2x |

The second and fourth rows of this determinant are proportional, so it is zero. The functions are linearly dependent. This should have been expected, since cos 2x = cos²x − sin²x.

Chapter 5

MATRIX TRANSFORMATIONS

5.1  VECTORS IN ROTATED SYSTEMS

Exercises

5.1.1. Consider the 3-D rotation matrix U = [[−0.8, 0, 0.6], [0, 1, 0], [−0.6, 0, −0.8]].

(a) Verify that the vector whose components are the first column of U is orthogonal to the vector described by the third column of U, and verify that each of these columns describes a vector of unit magnitude.
(b) Because all the columns of U are mutually orthogonal, we can conclude that U is an orthogonal matrix. Explain why this is so, and carry out matrix operations confirming that the matrix U is orthogonal.
(c) What is the orientation of êy in the rotated coordinates?
(d) Describe in words the rotation corresponding to U. Be specific about the sense of the rotation (assume a right-handed coordinate system).

Solution:

(a) Orthogonality: (−0.8)(0.6) + 0 + (−0.6)(−0.8) = 0. Normalization: (−0.8)² + (−0.6)² = 1, (0.6)² + (−0.8)² = 1.

(b) A matrix is orthogonal if its transpose is also its inverse. The transpose of U has rows that are the columns of U, and the matrix product UUT has elements that are dot products of a column of U with a row of UT. The orthonormality of the columns of U causes UUT to be a unit matrix. A formal check that U is orthogonal:

UUT = [[−0.8, 0, 0.6], [0, 1, 0], [−0.6, 0, −0.8]] [[−0.8, 0, −0.6], [0, 1, 0], [0.6, 0, −0.8]] = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

(c) Form Uêy:

[[−0.8, 0, 0.6], [0, 1, 0], [−0.6, 0, −0.8]] [[0], [1], [0]] = [[0], [1], [0]].

We find that the rotation does not change êy.

(d) The rotation is about the y axis. In the primed coordinates, êz is in the second quadrant, at an angle cos−1(−0.8) = 143° from the new z direction. Looking from positive y, the coordinates were rotated 143° counterclockwise.

5.1.2.

Find the matrix V that produces the rotation inverse to that caused by the matrix U of Exercise 5.1.1. Verify that U and V are mutually inverse by applying them in succession to the vector which (in row form) is a = (1.2345, −0.7507, 0.6733) .

Solution: The matrix V is U−1 = UT. Checking,

[[−0.8, 0, 0.6], [0, 1, 0], [−0.6, 0, −0.8]] [[1.2345], [−0.7507], [0.6733]] = [[−0.58362], [−0.75070], [−1.27934]] ,

[[−0.8, 0, −0.6], [0, 1, 0], [0.6, 0, −0.8]] [[−0.58362], [−0.75070], [−1.27934]] = [[1.2345], [−0.7507], [0.6733]] .

5.1.3.

Apply the U of Exercise 5.1.1 three times in succession to the vector a given in Exercise 5.1.2. Verify that the result has the same magnitude as a.

Solution:

b = U³a = [[−0.195665], [−0.750700], [1.392494]] ,    |a|² = 2.54087,    |b|² = 2.54087.

5.1.4. In 4-D space consider the matrix

U = [[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, −0.5, −0.5], [0.5, −0.5, 0.5, −0.5], [−0.5, 0.5, 0.5, −0.5]] .

Does this matrix describe a rotation of the 4-D coordinate system? How do you know?

Solution: Yes. We know this because UUT = 1 and det(U) = 1. 5.1.5.

Find the matrix U that corresponds to a rotation of 45◦ about the y-axis of a right-handed 3-D coordinate system. The positive sense of such a rotation tilts the z-axis toward the original position of the x-axis (can you see why?).

Solution: A rotation through angle θ about the y-axis leaves invariant the y component of any vector A and converts the original z component Az to Az(cos θ ẑ + sin θ x̂). Also, the original Ax becomes Ax(cos θ x̂ − sin θ ẑ). Thus,

U = [[1/√2, 0, 1/√2], [0, 1, 0], [−1/√2, 0, 1/√2]] .

The positive sense of a rotation about the y axis is counterclockwise when viewed from positive y.

5.2  VECTORS UNDER COORDINATE REFLECTIONS

Exercises

5.2.1. Does the matrix

U = [[0.408248, 0.816497, 0.408248], [−0.707107, 0, 0.707107], [−0.577350, 0.577350, −0.577350]]

describe a rotation, a rotation and reflection, or neither (to within the precision of its specification)?

Solution: Check to see if U is orthogonal, and if so, find the sign of its determinant. In the present case, we find UUT ≈ 1 and det(U) ≈ −1. These values show U to be a rotation and reflection (if it were only a reflection, it would be diagonal with all diagonal elements ±1).

Find the matrix U that corresponds to the following three successive operations: (1) Reflection through the xy-plane; (2) Rotation about the current x axis by 90° (the positive sense of rotation tilts êy toward êz); (3) Reflection through the current xy-plane. Is this overall operation a rotation, a rotation and reflection, or neither?

Solution: Step (1) changes êz to −êz; its transformation matrix is U1 (given below). Step (2) changes êy to êz and êz to −êy; transformation matrix U2. Step (3) is the same as Step (1), with transformation matrix U3 = U1. Doing the three transformations,

U3U2U1 = [[1, 0, 0], [0, 1, 0], [0, 0, −1]] [[1, 0, 0], [0, 0, −1], [0, 1, 0]] [[1, 0, 0], [0, 1, 0], [0, 0, −1]] = [[1, 0, 0], [0, 0, 1], [0, −1, 0]] .

This is a rotation (the matrix has determinant +1).

5.3  TRANSFORMING MATRIX EQUATIONS

Exercises

5.3.1. Using

H = [[1, 2], [2, 4]] ,    U = [[0.6, −0.8], [0.8, 0.6]] ,    and    x = [[3], [−1]] ,


(a) Verify that U describes a rotation.
(b) Find the vector y = Hx.
(c) Transform the equation of part (b) by appropriate application of U. Call the results x′, y′, and H′.
(d) Verify that y′ = H′x′ and also that |x′| = |x| and |y′| = |y|.
(e) Verify that the angle between x and y is the same as that between x′ and y′.

Note. The results of parts (d) and (e) are consistent with the notion that the original and transformed equations describe the same phenomenon in different coordinate systems.

Solution:

(a) Check that UUT = 1 and that det(U) = +1.

(b) y = Hx = [[1, 2], [2, 4]] [[3], [−1]] = [[1], [2]] .

(c) x′ = Ux = [[2.6], [1.8]] ,    y′ = Uy = [[−1], [2]] ,    H′ = UHUT = [[1, −2], [−2, 4]] .

(d) Matrix multiplications show that y′ = H′x′. Also, xTx = [x′]Tx′ = 10, yTy = [y′]Ty′ = 5.

(e) It suffices to verify that xTy = [x′]Ty′. These both have value +1.

5.3.2. Repeat part (b) of Exercise 5.3.1 for the same H and U but with x = [[2], [−1]]. Can you predict the value of y′ before transforming with U?

Solution:

y = [[1, 2], [2, 4]] [[2], [−1]] = [[0], [0]] .

Clearly, y′ = 0.

5.4  GRAM-SCHMIDT ORTHOGONALIZATION

Exercises

5.4.1. Check that the vectors b̂i of Example 5.4.1 are orthonormal.

Solution: Write these vectors in the forms

b1 = (1/√14) [[2], [1], [3]] ,    b2 = (1/√378) [[16], [1], [−11]] ,    b3 = (1/√27) [[1], [−5], [1]] .

To check normalization, add the squares of the components, e.g., for b1, (2² + 1² + 3²)/14 = 1. To check orthogonality, it suffices to work with the integer parts of the vectors. For example, b1 · b2 = (2)(16) + (1)(1) + (3)(−11) = 0.
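Because the normalizers are square roots of integers, the whole check can be done in integer arithmetic; a small Python sketch (ours):

```python
# Exact orthonormality check: work with the integer parts, so everything
# stays in integer arithmetic.
ints  = [[2, 1, 3], [16, 1, -11], [1, -5, 1]]  # integer parts of b1, b2, b3
norms = [14, 378, 27]                          # squared normalizers

for v, n in zip(ints, norms):
    assert sum(c * c for c in v) == n          # each b_i has unit length

for i, j in [(0, 1), (0, 2), (1, 2)]:
    assert sum(a * b for a, b in zip(ints[i], ints[j])) == 0  # orthogonal
print("b1, b2, b3 form an orthonormal set")
```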

5.4.2. Verify that the transformation matrix found in Example 5.4.2 is orthogonal.

Solution: The columns of the transformation matrix are the same vectors that appear in Exercise 5.4.1 and were there shown to be orthonormal.

5.5  MATRIX EIGENVALUE PROBLEMS

Exercises

First find the eigenvalues and corresponding normalized eigenvectors of the matrices in these exercises by hand computation. Use the Gram-Schmidt process to convert any degenerate sets of eigenvectors to orthonormal form. Then verify all your eigenvalues and eigenvectors (including their normalization and the orthogonality of degenerate eigenvectors) using symbolic computation.

5.5.1.

[[2.23, 0, 0], [0, 1.23, 1], [0, 1, 1.23]] .

Solution: The secular equation is

(2.23 − λ)[(1.23 − λ)² − 1] = 0 ,    λ = 2.23, 2.23, 0.23 .

By inspection, we find one eigenvector for λ = 2.23 is (1, 0, 0)T. The unique normalized eigenvector for λ = 0.23 is (0, −1/√2, 1/√2)T. The third eigenvector (also for λ = 2.23) can be chosen orthogonal to both those already found. In normalized form, it is (0, 1/√2, 1/√2)T.

5.5.2.

[[3.14, 1, 1], [1, 3.14, 1], [1, 1, 3.14]] .

Solution: The secular equation is

(3.14 − λ)³ − 3(3.14 − λ) + 2 = 0 .

If the left-hand side of this equation is plotted against λ, one sees that there are two roots near λ = 2 and one near λ = 5. The exact values are λ = 5.14 and a double root at λ = 2.14. The root at λ = 5.14 corresponds to the normalized eigenvector (1/√3, 1/√3, 1/√3). The two eigenvectors for λ = 2.14 are any pair of vectors orthogonal to each other and to the eigenvector already found. One of the infinite number of possible sets is (0, −1/√2, 1/√2) and (−2/√6, 1/√6, 1/√6).
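These eigenpairs are easy to confirm numerically; a Python sketch (ours):

```python
# Floating-point confirmation of the eigenpairs found for this matrix.
from math import sqrt, isclose

A = [[3.14, 1, 1], [1, 3.14, 1], [1, 1, 3.14]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

cases = [(5.14, [1 / sqrt(3)] * 3),
         (2.14, [0, -1 / sqrt(2), 1 / sqrt(2)]),
         (2.14, [-2 / sqrt(6), 1 / sqrt(6), 1 / sqrt(6)])]

for lam, v in cases:
    assert all(isclose(a, lam * c, abs_tol=1e-12)
               for a, c in zip(matvec(A, v), v))
print("all three eigenpairs verified")
```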

5.6  HERMITIAN EIGENVALUE PROBLEMS

Exercises

5.6.1.

Show that the eigenvectors found in Example 5.5.4 are orthogonal as well as normalized.

Solution: A direct approach is to check that a matrix U with the eigenvectors as either rows or columns is orthogonal, i.e., that UUT = 1. In maple, the eigenvectors are the columns of the following matrix:

> U := Matrix(4,4,[ [0.627963, -0.325058, -0.627963, -0.325058],
>                   [0.673887,  0.673887,  0.214186,  0.214886],
>                   [0.214186, -0.214886,  0.673887, -0.673887],
>                   [0.325058, -0.627963,  0.325058,  0.627963] ]):

We check orthonormality by computing

> with(LinearAlgebra):
> U . Transpose(U);

The result (not shown) is an approximate unit matrix. In mathematica, the eigenvectors, as columns, are

u = { {0.627963, -0.325058, -0.627963, -0.325058},
      {0.673887,  0.673887,  0.214186,  0.214886},
      {0.214186, -0.214886,  0.673887, -0.673887},
      {0.325058, -0.627963,  0.325058,  0.627963} };

We check orthonormality by computing

u . Transpose[u]

The result (not shown) is an approximate unit matrix.

5.7  MATRIX DIAGONALIZATION

Exercises

5.7.1. For the matrix

A = [[1.23, 0.00, 0.71, −0.26], [0.00, 2.34, 1.46, −3.01], [0.71, 1.46, 3.45, 0.00], [−0.26, −3.01, 0.00, 4.56]] ,

working with at least six significant figures of precision,

(a) Find all its eigenvalues, and for each identify the corresponding eigenvector,
(b) From the eigenvectors form a matrix U such that A′ = UAU−1 is diagonal and show that the diagonal elements of A′ are the eigenvalues of A.

76

CHAPTER 5. MATRIX TRANSFORMATIONS Solution:

(a) This is clearly an exercise to be done using symbolic computing. In maple, > with(LinearAlgebra): > A :=

Matrix(4,4, [[1.23,0.00,0.71,-0.26],[0.00,2.34,1.46,-3.01],

>

[0.71,1.46,3.45,0.00],[-0.26,-3.01,0.00,4.56]]):

> E,V := Eigenvectors(A): > E := simplify(E);

 −0.259929  1.18043   E :=   3.65844  6.90105

The columns of V are the eigenvectors, corresponding in order with the eigenvalues in E. > V := simplify(V);  0.250672 −0.942569 −0.210066  0.760348 0.263678 −0.0862002 V :=   −0.347199 0.125246 −0.891575 0.488352 0.162329 −0.391837

 −0.06787181 −0.587294   −0.262413  0.762644

In mathematica, A = {{1.23,0.00,0.71,-0.26},{0.00,2.34,1.46,-3.01}, {0.71,1.46,3.45,0.00},{-0.26,-3.01,0.00,4.56}}; {val,vec} = Eigensystem[A]; val

{6.90105, 3.75844, 1.18043, -0.259929}

The rows of vec are the eigenvectors, corresponding in order with the eigenvalues is val. vec

{{ 0.0678182, 0.587294, 0.262413, -0.762644}, {-0.210066, -0.0862002, -0.891575, -0.391837}, { 0.942569, -0.263678, -0.125246, -0.162329}, { 0.250672, 0.760348, -0.347199, 0.488352}}

(b) The matrix U⁻¹ = Uᵀ must contain (as columns) the eigenvectors of A. Therefore (in maple) Uᵀ is the matrix V; in mathematica, U is the matrix vec.

5.7.2. Given the three matrices

A = [ 1  0   0 ]      B = [ 0.64  0.48  0 ]      C = [  0.96  −0.28  0 ]
    [ 0  1   0 ],         [ 0.48  0.36  0 ],         [ −0.28  −0.96  0 ],
    [ 0  0  −1 ]          [ 0     0     1 ]          [  0      0     0 ]

(a) Find an orthogonal transformation that will simultaneously diagonalize A and B and identify their simultaneous eigenvectors, for each giving the eigenvalues of A and B.

(b) Find an orthogonal transformation that will simultaneously diagonalize A and C and identify their simultaneous eigenvectors, for each giving the eigenvalues of A and C.

(c) How do you know it will not be possible to find an orthogonal transformation that will simultaneously diagonalize A, B, and C?

Solution:

(a) Any orthogonal transformation on the first two rows and columns of B will leave A unaltered, so we diagonalize the upper 2 × 2 block of B. The normalized eigenvectors are the columns of Uᵀ, in the same order as the eigenvalues of B: 1, 0, 1.

Uᵀ = [ 0.8  −0.6  0 ]        U = [  0.8  0.6  0 ]
     [ 0.6   0.8  0 ],           [ −0.6  0.8  0 ].
     [ 0     0    1 ]            [  0    0    1 ]

Applying this transformation to A,

A′ = UAUᵀ = [ 1  0   0 ]
            [ 0  1   0 ],
            [ 0  0  −1 ]

we verify that its eigenvalues remain (in order) 1, 1, and −1. UCUᵀ is not diagonal.

(b) Diagonalizing the upper 2 × 2 block of C (which does not affect A), we find

Vᵀ = [  √2/10  −7√2/10  0 ]        V = [   √2/10  7√2/10  0 ]
     [ 7√2/10    √2/10  0 ],           [ −7√2/10   √2/10  0 ].
     [  0        0      1 ]            [   0       0      1 ]

The columns of Vᵀ are the simultaneous eigenvectors of A and C, corresponding (in order) to the following eigenvalues of C: −1, 1, and 0. We can form A′′ = VAV⁻¹ and confirm that A′′ remains diagonal, with eigenvalues 1, 1, −1. VBVᵀ is not diagonal.

(c) B and C do not commute, and therefore it will not be possible to diagonalize them simultaneously.
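The commutation argument of part (c), and the fact that U diagonalizes A and B but not C, are easy to confirm numerically; the following Python/NumPy sketch is an addition to the text's symbolic treatment:

```python
import numpy as np

A = np.diag([1.0, 1.0, -1.0])
B = np.array([[0.64, 0.48, 0], [0.48, 0.36, 0], [0, 0, 1.0]])
C = np.array([[0.96, -0.28, 0], [-0.28, -0.96, 0], [0, 0, 0.0]])
U = np.array([[0.8, 0.6, 0], [-0.6, 0.8, 0], [0, 0, 1.0]])   # from part (a)

def is_diagonal(M):
    # True when all off-diagonal elements vanish (to rounding error)
    return np.allclose(M, np.diag(np.diag(M)), atol=1e-12)

# A commutes with B, but B and C do not commute.
print(np.allclose(A @ B, B @ A))      # True
print(np.allclose(B @ C, C @ B))      # False

# U diagonalizes both A and B; it cannot also diagonalize C.
print(is_diagonal(U @ A @ U.T), is_diagonal(U @ B @ U.T))   # True True
print(is_diagonal(U @ C @ U.T))                             # False
```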

5.8 MATRIX INVARIANTS

Exercises

5.8.1. Use symbolic computing to find the determinant and trace of

[ 2  7  11 ]
[ 3  8  −1 ].
[ 6  9  12 ]

Solution: In maple,

> with(LinearAlgebra):
> M := Matrix(3,3,[[2,7,11],[3,8,-1],[6,9,12]]):
> Determinant(M), Trace(M);

−315, 22

In mathematica,

M = {{2,7,11},{3,8,-1},{6,9,12}}
Det[M]
Tr[M]

−315
22
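Both invariants can be confirmed with a few lines of ordinary arithmetic, independent of either symbolic system (a Python sketch):

```python
# Determinant of the 3x3 matrix by cofactor expansion along the first row,
# plus the trace (sum of diagonal elements).
M = [[2, 7, 11],
     [3, 8, -1],
     [6, 9, 12]]

det = (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
     - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
     + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))
tr = M[0][0] + M[1][1] + M[2][2]

print(det, tr)   # -315 22
```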

Chapter 6

MULTIDIMENSIONAL PROBLEMS

6.1 PARTIAL DIFFERENTIATION

Exercises

6.1.1. For z = x/(x² + y²), find ∂z/∂x and ∂z/∂y.

Solution:

∂z/∂x = (y² − x²)/(x² + y²)²,   ∂z/∂y = −2xy/(x² + y²)².

6.1.2. For z = uvw ln√(u² + v² + w²), find ∂z/∂u, ∂z/∂v, and ∂z/∂w.

Solution:

∂z/∂u = vw ln√(u² + v² + w²) + u²vw/(u² + v² + w²),
∂z/∂v = uw ln√(u² + v² + w²) + uv²w/(u² + v² + w²),
∂z/∂w = uv ln√(u² + v² + w²) + uvw²/(u² + v² + w²).

6.1.3.

For z = cos(x² + y), verify that

∂²z/∂x∂y = ∂²z/∂y∂x.

Solution:

∂z/∂x = −2x sin(x² + y),   ∂z/∂y = −sin(x² + y),
∂²z/∂y∂x = −2x cos(x² + y),   ∂²z/∂x∂y = −2x cos(x² + y).
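The equality of the mixed partials can also be checked by central differences; the following Python sketch is illustrative only, and the test point and step size are arbitrary choices:

```python
import math

def z(x, y):
    return math.cos(x**2 + y)

def mixed(f, x, y, h=1e-4):
    # Central-difference approximation to d2f/dxdy (symmetric in x and y).
    return (f(x+h, y+h) - f(x+h, y-h) - f(x-h, y+h) + f(x-h, y-h)) / (4*h*h)

x0, y0 = 0.7, 0.3
exact = -2*x0*math.cos(x0**2 + y0)            # the value found above
print(abs(mixed(z, x0, y0) - exact) < 1e-5)   # True
```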

6.1.4. For z = x² − y², with x = r cos θ, y = r sin θ, find the following partial derivatives:

(∂z/∂x)_y, (∂z/∂x)_r, (∂z/∂x)_θ, (∂z/∂y)_x, (∂z/∂y)_r, (∂z/∂y)_θ,
(∂z/∂r)_x, (∂z/∂r)_y, (∂z/∂r)_θ, (∂z/∂θ)_x, (∂z/∂θ)_y, (∂z/∂θ)_r.

Solution: For many of these derivatives, it is useful to note that r² = x² + y² and that y/x = tan θ. From these we get

(∂x/∂y)_θ = x/y,   (∂x/∂y)_r = −y/x,   (∂y/∂x)_θ = y/x,   (∂y/∂x)_r = −x/y,

(∂x/∂r)_y = r/x,   (∂y/∂r)_x = r/y.

From the definitions of x and y in terms of r and θ we also have

(∂x/∂θ)_r = −y,   (∂x/∂r)_θ = cos θ,   (∂y/∂θ)_r = x,   (∂y/∂r)_θ = sin θ.

Now, using the chain rule as needed,

(∂z/∂x)_y = 2x,

(∂z/∂x)_r = (∂z/∂x)_y + (∂z/∂y)_x (∂y/∂x)_r = 2x − 2y(−x/y) = 4x,

(∂z/∂x)_θ = (∂z/∂x)_y + (∂z/∂y)_x (∂y/∂x)_θ = 2x − 2y(y/x) = 2z/x,

(∂z/∂y)_x = −2y,

(∂z/∂y)_r = (∂z/∂x)_y (∂x/∂y)_r + (∂z/∂y)_x = 2x(−y/x) − 2y = −4y,

(∂z/∂y)_θ = (∂z/∂x)_y (∂x/∂y)_θ + (∂z/∂y)_x = 2x(x/y) − 2y = 2z/y,

(∂z/∂r)_x = (∂z/∂y)_x (∂y/∂r)_x = (−2y)(r/y) = −2r,

(∂z/∂r)_y = (∂z/∂x)_y (∂x/∂r)_y = (2x)(r/x) = 2r,

(∂z/∂r)_θ = (∂z/∂x)_y (∂x/∂r)_θ + (∂z/∂y)_x (∂y/∂r)_θ = 2x cos θ − 2y sin θ = 2z/r,

(∂z/∂θ)_x = (∂z/∂y)_x (∂y/∂θ)_x = (−2y)(r²/x) = −2r²y/x,

(∂z/∂θ)_y = (∂z/∂x)_y (∂x/∂θ)_y = (2x)(−r²/y) = −2r²x/y,

(∂z/∂θ)_r = (∂z/∂x)_y (∂x/∂θ)_r + (∂z/∂y)_x (∂y/∂θ)_r = (2x)(−y) − 2y(x) = −4xy.

6.1.5. If z² = e^(p²−q²), with p = e^s and q = e^(2s), find dz/ds.

Solution: Let primes denote derivatives with respect to s. Then

2zz′ = e^(p²−q²) [2pp′ − 2qq′].

But p′ = p and q′ = 2q, so

z′ = (e^(p²−q²)/2z) [2p² − 4q²].

This can be simplified if desired.

6.1.6. If u = e^(p²−q²), with p = r cos θ and q = r sin θ, find (∂u/∂r)_θ and (∂u/∂θ)_r.

Solution: Let p_r stand for (∂p/∂r)_θ, with corresponding definitions for p_θ, q_r, q_θ, u_r, and u_θ. We have p_r = cos θ = p/r, p_θ = −r sin θ = −q, q_r = sin θ = q/r, and q_θ = r cos θ = p. Then

u_r = e^(p²−q²) (2pp_r − 2qq_r) = e^(p²−q²) (2/r)(p² − q²),

u_θ = e^(p²−q²) (2pp_θ − 2qq_θ) = e^(p²−q²) (−4pq).

6.1.7. Given that ye^(−xy) = cos x, find dy/dx.

Solution: Differentiating, writing y′ = dy/dx,

y′e^(−xy) − xyy′e^(−xy) − y²e^(−xy) = −sin x.

Solving for y′,

y′e^(−xy)(1 − xy) = y²e^(−xy) − sin x   −→   y′ = (y² − e^(xy) sin x)/(1 − xy).
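The implicit-derivative formula can be spot-checked numerically: at x = 0 the constraint gives y = 1 and the formula gives y′ = 1. The following Python sketch solves the constraint near x = 0 with an ad hoc Newton iteration (the solver and step size are my own additions, not from the text):

```python
import math

def solve_y(x, y0=1.0):
    # Newton's method for y * exp(-x*y) = cos x, starting near y0.
    y = y0
    for _ in range(50):
        f = y * math.exp(-x*y) - math.cos(x)
        fp = math.exp(-x*y) * (1 - x*y)        # df/dy
        y -= f / fp
    return y

h = 1e-5
dy_dx = (solve_y(h) - solve_y(-h)) / (2*h)     # central difference at x = 0

x0, y0 = 0.0, 1.0
formula = (y0**2 - math.exp(x0*y0) * math.sin(x0)) / (1 - x0*y0)
print(abs(dy_dx - formula) < 1e-6)             # True
```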

6.1.8. Let x²u + u²v = 1 and uv = x + y. Find (∂x/∂u)_v and (∂x/∂u)_y.

Solution: Obtain x in terms of u and v:

x² = (1 − u²v)/u.

Now compute

(∂x/∂u)_v = (−1 − u²v)/(2xu²).

Alternatively, obtain an equation relating x to u and y, and then differentiate, writing x_u to denote (∂x/∂u)_y:

x²u + u²v = x²u + u(uv) = x²u + u(x + y) = 1   −→   2uxx_u + x² + (x + y) + ux_u = 0.

Solving for x_u:

(∂x/∂u)_y = (−x² − x − y)/(u(2x + 1)) = −1/(u²(2x + 1)).

6.1.9.

Evaluate:

(a) d/dx ∫₀^(sin x) e^(−t²) dt,

(b) d/dx ∫₀^(sin x) e^(−xt²) dt.

You may leave unevaluated any integrals that cannot be reduced to elementary functions.

Solution:

(a) e^(−sin²x) cos x.

(b) e^(−x sin²x) cos x − ∫₀^(sin x) t² e^(−xt²) dt.

6.1.10. Let I = ∫₁^∞ e^(−xⁿ t) dt. Calculate dI/dx in the two following ways:

(a) Diﬀerentiate I with respect to x and after doing so evaluate the resulting integral. (b) First perform the integration in I and after doing so diﬀerentiate the result, thereby checking the result of part (a).

Solution:

(a) dI/dx = −nx^(n−1) ∫₁^∞ t e^(−xⁿ t) dt = −nx^(n−1) e^(−xⁿ) [x^(−2n) + x^(−n)] = −ne^(−xⁿ) [x^(−n−1) + x^(−1)].

(b) I = ∫₁^∞ e^(−xⁿ t) dt = e^(−xⁿ)/xⁿ, so

dI/dx = −nx^(n−1) e^(−xⁿ)/xⁿ − n e^(−xⁿ)/x^(n+1) = −ne^(−xⁿ) [x^(−1) + x^(−n−1)].

The results of (a) and (b) are in agreement.
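Since I(x) evaluates in closed form, the derivative formula is easy to verify by a central difference; in this Python sketch, n = 3 and x = 1.2 are arbitrary test values of my own choosing:

```python
import math

n = 3

def I(x):
    # I(x) = integral_1^inf exp(-x^n * t) dt = exp(-x^n)/x^n  (for x > 0)
    return math.exp(-x**n) / x**n

def dI_formula(x):
    # the closed form found in parts (a) and (b)
    return -n * math.exp(-x**n) * (x**-1 + x**(-n-1))

x0, h = 1.2, 1e-5
numeric = (I(x0+h) - I(x0-h)) / (2*h)
print(abs(numeric - dI_formula(x0)) < 1e-8)   # True
```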

6.2

Exercises

6.2.1. (a) Verify that f(x, y) = x² + xy − y² has a critical point at (x, y) = (0, 0). Calculate its Hessian at (0, 0) and from the eigenvalues of the Hessian determine the nature of the critical point. (b) Repeat the process of part (a) for f(x, y) = x² − xy + y².

Solution: (a) The first derivatives of f are f_x = 2x + y and f_y = x − 2y. Both f_x and f_y are zero at (x, y) = (0, 0), so (0, 0) is a critical point of f. Taking the second derivatives (which are all constants), the Hessian is found to be

H = [ 2   1 ]
    [ 1  −2 ].

The eigenvalues of H are ±√5. The eigenvalues are of different signs, indicating that (x, y) = (0, 0) is neither a maximum nor a minimum, but is a saddle point.

(b) The first derivatives of this f are also zero at (x, y) = (0, 0). The Hessian is now

H = [  2  −1 ]
    [ −1   2 ],

with eigenvalues +1 and +3. These values show that (0, 0) is a minimum of f.

6.2.2. For f(x, y) = x³ + y³ − 6xy, find all the critical points and classify each as a maximum, minimum, or saddle point.

Solution: It is assumed that x and y (being coordinates) are restricted to real values. The critical points are where f_x = f_y = 0, corresponding to the simultaneous equations

x² − 2y = 0   and   y² − 2x = 0.

The real solutions of these equations are (x, y) = (0, 0) and (x, y) = (2, 2). At (0, 0) the Hessian is

H = [  0  −6 ]
    [ −6   0 ],

with eigenvalues ±6; the opposite signs show that (0, 0) is a saddle point. At (2, 2) the Hessian is

H = [ 12  −6 ]
    [ −6  12 ],

with eigenvalues +6 and +18, indicating that (2, 2) is a minimum.

6.3 CURVILINEAR COORDINATE SYSTEMS

Exercises

6.3.1. Express each of these Cartesian coordinate points (x, y, z) in cylindrical and spherical coordinates: (1, 0, 0), (1, 1, 1), (−1, 1, 0), (0, −1, −1), (−1, −1, 1).

Solution: In cylindrical coordinates (ρ, φ, z),

(1, 0, 0) → (1, 0, 0),  (1, 1, 1) → (√2, π/4, 1),  (−1, 1, 0) → (√2, 3π/4, 0),
(0, −1, −1) → (1, 3π/2, −1),  (−1, −1, 1) → (√2, 5π/4, 1).

In spherical coordinates (r, θ, φ),

(1, 0, 0) → (1, π/2, 0),  (1, 1, 1) → (√3, tan⁻¹√2, π/4),
(−1, 1, 0) → (√2, π/2, 3π/4),  (0, −1, −1) → (√2, 3π/4, 3π/2),
(−1, −1, 1) → (√3, tan⁻¹√2, 5π/4).
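One of the spherical conversions can be verified with a quick numeric round-trip (a Python sketch using the standard conversion formulas):

```python
import math

x, y, z = -1.0, -1.0, 1.0
r = math.sqrt(x*x + y*y + z*z)           # sqrt(3)
theta = math.acos(z / r)                 # arctan(sqrt(2))
phi = math.atan2(y, x) % (2*math.pi)     # 5*pi/4

print(abs(r - math.sqrt(3)) < 1e-12)                 # True
print(abs(theta - math.atan(math.sqrt(2))) < 1e-12)  # True
print(abs(phi - 5*math.pi/4) < 1e-12)                # True
```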

6.3.2. Convert each of these points expressed in the spherical coordinates (r, θ, φ) into Cartesian coordinates: (1, π/2, 7π/4), (2, π/3, π), (3, 2π/3, 3π/2), (1, 1, 1), (1, 2, 3).

Solution:

(1, π/2, 7π/4) → (1/√2, −1/√2, 0),  (2, π/3, π) → (−√3, 0, 1),
(3, 2π/3, 3π/2) → (0, −3√3/2, −3/2),
(1, 1, 1) → (sin(1) cos(1), sin(1) sin(1), cos(1)),
(1, 2, 3) → (sin(2) cos(3), sin(2) sin(3), cos(2)).

6.3.3. Convert each of these points expressed in the cylindrical coordinates (ρ, φ, z) into Cartesian coordinates: (1, π/2, 0), (1, 0, π/2), (2, 3π/2, 3), (π, 1, −2), (2, π/3, −1).

Solution: (1, π/2, 0) → (0, 1, 0),  (1, 0, π/2) → (1, 0, π/2),  (2, 3π/2, 3) → (0, −2, 3),
(π, 1, −2) → (π cos(1), π sin(1), −2),  (2, π/3, −1) → (1, √3, −1).

6.3.4.

In Cartesian coordinates two 3-D vectors are a = (1, 2, 3) and b = (−1, 1, 2), and assume them to be associated with the point (x, y, z) = (1, 1, 0). Convert a and b to cylindrical and to spherical coordinates. Then form the scalar product a · b using each of the three coordinate representations.

Solution: In cylindrical coordinates, at the location of the vectors, the Cartesian representations of the unit vectors are ρ̂ = (1/√2, 1/√2, 0), φ̂ = (−1/√2, 1/√2, 0), ẑ = (0, 0, 1). The components of a are therefore a_ρ = ρ̂ · a = 3/√2, a_φ = φ̂ · a = 1/√2, a_z = ẑ · a = 3. Similar calculations yield the cylindrical-coordinate representation of b. Summarizing,

a = (3/√2) ρ̂ + (1/√2) φ̂ + 3 ẑ,   b = √2 φ̂ + 2 ẑ.

In spherical coordinates, the unit vectors have Cartesian representations r̂ = (1/√2, 1/√2, 0), θ̂ = (0, 0, −1), and φ̂ = (−1/√2, 1/√2, 0). The components of a are therefore a_r = r̂ · a = 3/√2, a_θ = θ̂ · a = −3, a_φ = φ̂ · a = 1/√2. After similar calculations for b, we have

a = (3/√2) r̂ − 3 θ̂ + (1/√2) φ̂,   b = −2 θ̂ + √2 φ̂.

Scalar products:

Cartesian:   1(−1) + 2(1) + 3(2) = 7,
Cylindrical: 0 + (1/√2)(√2) + 3(2) = 7,
Spherical:   0 + (−3)(−2) + (1/√2)(√2) = 7.

6.3.5.

Repeat the computations of Exercise 6.3.4 assuming the two vectors to be associated with the Cartesian coordinate point (1, 1, 1).

Solution: At the location of the vectors, the cylindrical unit vectors are the same as in Exercise 6.3.4, so the answers for cylindrical coordinates are the same as found in that Exercise. In spherical coordinates, the unit vectors are r̂ = (1/√3, 1/√3, 1/√3), θ̂ = (1/√6, 1/√6, −2/√6), φ̂ = (−1/√2, 1/√2, 0). The spherical components of a and b are therefore

a_r = 2√3,  a_θ = −3/√6,  a_φ = 1/√2,   b_r = 2/√3,  b_θ = −4/√6,  b_φ = √2.

We find (in spherical coordinates) the scalar product

a · b = (2√3)(2/√3) + (−3/√6)(−4/√6) + (1/√2)(√2) = 7.

6.3.6. In spherical polar coordinates (r, θ, φ) two points P and Q are respectively at (1, π/4, π/3) and (3, 5π/8, 7π/6). Find the distance between P and Q.

Solution: The computation is simplest if we convert P and Q to Cartesian coordinates.

P = ( sin(π/4) cos(π/3), sin(π/4) sin(π/3), cos(π/4) ) = (0.353553, 0.612372, 0.707107),
Q = ( 3 sin(5π/8) cos(7π/6), 3 sin(5π/8) sin(7π/6), 3 cos(5π/8) ) = (−2.400309, −1.385819, −1.148050).

Now form |P − Q| = 3.875324.

6.4 MULTIPLE INTEGRALS

Exercises

6.4.1. Evaluate by changing the order of integration. Check your work using symbolic computation.

∫₀^∞ dx ∫ₓ^∞ dy ((y − x)/y) e^(−y).

Solution: Changing the order of integration,

∫₀^∞ dy ∫₀^y dx [e^(−y) − (x/y) e^(−y)] = ∫₀^∞ dy [y e^(−y) − (y/2) e^(−y)] = (1/2) ∫₀^∞ dy y e^(−y) = 1/2.

To check using maple, execute

> int(int((y-x)/y*exp(-y),y=x .. infinity), x=0 .. infinity);

mathematica will not evaluate this integral in its original form, but it can check the result after the order of the integrations has been interchanged.

6.4.2. Evaluate

∫₀^∞ dx ∫ₓ^∞ dy y² e^(−y²) sin xy.

Check this result using symbolic computation.

Solution: Reverse the order of integration, reaching

∫₀^∞ dy y² e^(−y²) ∫₀^y sin xy dx = ∫₀^∞ dy y² e^(−y²) (1 − cos y²)/y = ∫₀^∞ y e^(−y²) [1 − cos y²] dy.

Change variable to t = y², so y dy = dt/2:

(1/2) [ ∫₀^∞ e^(−t) dt − ∫₀^∞ e^(−t) cos t dt ].

The first of the above integrals evaluates to unity. An easy way to evaluate the second integral is to identify it as the real part of

∫₀^∞ e^((i−1)t) dt = 1/(1 − i) = (1 + i)/2.

Our overall result is (1/2)[1 − 1/2] = 1/4.

To check using maple, execute

> int(int(y^2*exp(-y^2)*sin(x*y),y=x .. infinity),
>     x=0 .. infinity):

Using mathematica,

Integrate[Integrate[y^2*E^(-y^2)*Sin[x*y], {y, x, Infinity}], {x, 0, Infinity}];

Both symbolic systems give the correct result, 1/4.

6.4.3.

Evaluate

∫_S ( c + √(x² + y²) ) dA,

where S is a disk of radius c centered at the origin.

Solution: In polar coordinates, dA = r dr dθ and √(x² + y²) = r, so this integral becomes

∫₀^(2π) dθ ∫₀^c r dr (c + r) = 2π ∫₀^c (cr + r²) dr = 2π [cr²/2 + r³/3]₀^c = 5πc³/3.

6.4.4.

Find the center of mass of a uniform solid hemisphere H of radius a.

Solution: Put the flat side of the hemisphere H on the xy-plane with the sphere center at the origin and the curved surface at positive z. By symmetry the center of mass will be on the z-axis and given by the expression

⟨z⟩ = (2πa³/3)⁻¹ ∫_H z d³r.

Perform the integration by summing over disks at each z, of thickness dz and volume π(a² − z²) dz. Each disk can be treated as a point mass at its center, so the center of mass will be on the z axis, given by

⟨z⟩ = (3/2a³) ∫₀^a z(a² − z²) dz = (3/2a³) [a⁴/2 − a⁴/4] = 3a/8.

6.4.5. Evaluate

∫∫∫ x² y² z² dx dy dz

for a cylinder of height h and radius a with its center at the origin and its axis in the x-direction.

Solution: Use cylindrical coordinates with the cylindrical z coordinate in the x-direction of the original Cartesian system. Then x²y²z² becomes ρ⁴z² cos²φ sin²φ, and the volume element is ρ dρ dz dφ. The integral is

∫_(−h/2)^(h/2) z² dz ∫₀^(2π) cos²φ sin²φ dφ ∫₀^a ρ⁵ dρ = (h³/12)(π/4)(a⁶/6) = πh³a⁶/288.

6.4.6. Write the Jacobian matrices J = ∂(x, y)/∂(s, t) and K = ∂(s, t)/∂(x, y) in forms such that the matrix product JK = 1, showing that K = J⁻¹.

Solution: Write J and K as

J = [ ∂x/∂s  ∂x/∂t ]        K = [ ∂s/∂x  ∂s/∂y ]
    [ ∂y/∂s  ∂y/∂t ],           [ ∂t/∂x  ∂t/∂y ].

The matrix product JK has elements that are chain-rule expressions:

JK = [ (∂x/∂x)_y  (∂x/∂y)_x ]   = [ 1  0 ]
     [ (∂y/∂x)_y  (∂y/∂y)_x ]     [ 0  1 ].

6.4.7. Introduce a change of variables that makes this double integral elementary, and then evaluate it. Use symbolic computation to check your work.

∫₀^∞ dy ∫_y^∞ dx (y/x) e^(−(x+y)).

Solution: Change variables to s = x + y, t = y/x; the range of integration is then 0 ≤ s < ∞, 0 ≤ t ≤ 1. The double integral becomes

I = ∫₀^1 t dt ∫₀^∞ e^(−s) (∂(x, y)/∂(s, t)) ds.

It is easiest to compute the Jacobian by evaluating

∂(s, t)/∂(x, y) = | ∂s/∂x  ∂s/∂y |  = (x + y)/x² = (1 + t)²/s
                  | ∂t/∂x  ∂t/∂y |

and then take its reciprocal. Therefore,

I = ∫₀^1 (t dt/(1 + t)²) ∫₀^∞ s e^(−s) ds = ∫₀^1 t dt/(1 + t)² = ln 2 − 1/2.

It is also possible to obtain this result by reversing the order of integration, performing the y integral, writing the resulting x integrand in the form f(x)e^(−x), then expanding f(x) in a power series and integrating in x term-by-term.
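The final one-dimensional integral is simple enough to verify by midpoint quadrature (a Python sketch; the number of panels is an arbitrary choice):

```python
import math

# integral_0^1 t/(1+t)^2 dt by the midpoint rule
N = 200000
s = sum((k + 0.5)/N / (1 + (k + 0.5)/N)**2 for k in range(N)) / N

print(abs(s - (math.log(2) - 0.5)) < 1e-9)   # True
```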

6.5 LINE AND SURFACE INTEGRALS

Exercises

6.5.1. Evaluate the line integral ∫ (2xy − y²) dx, where the integration is on the curve y = x^(3/2) from x = 0 to x = 4.

Solution: Write entirely in terms of x:

∫₀^4 (2x x^(3/2) − x³) dx = [ 2x^(7/2)/(7/2) − x⁴/4 ]₀^4 = 64/7.

6.5.2. Evaluate the line integral ∫ B · dr, where B = x ê_x − xy ê_y, for a straight-line path from (x, y) = (0, 0) to (x, y) = (2, 1).

Solution: The integral can be written

∫ B · dr = ∫ (x dx − xy dy).

Setting x = 2y in the second term and identifying the integration limits,

∫ B · dr = ∫₀^2 x dx − ∫₀^1 2y² dy = 4/3.

6.5.3.

A wire of uniform density lies along the portion of the parabola y = x2 /2 that extends from (x, y) = (0, 0) to (x, y) = (2, 2). Find the position of its centroid. Hint: Use symbolic computation to evaluate the necessary integrals.

Solution: The integrals that are needed are

L = ∫_(x=0)^2 ds,   ⟨x⟩ = (1/L) ∫_(x=0)^2 x ds,   ⟨y⟩ = (1/L) ∫_(x=0)^2 (x²/2) ds.

Here ds = √(1 + y′²) dx = √(1 + x²) dx.

In maple, these integrals are

> L := int(sqrt(1+x^2), x = 0 .. 2):
> X := (1/L)*int(x*sqrt(1+x^2), x = 0 .. 2):
> Y := (1/L)*int((x^2/2)*sqrt(1+x^2), x = 0 .. 2):
> evalf(X), evalf(Y);

1.14725, 0.819960

In mathematica,

l = Integrate[Sqrt[1+x^2], {x,0,2} ];
xx = (1/l)*Integrate[x*Sqrt[1+x^2], {x,0,2} ];
yy = (1/l)*Integrate[x^2/2*Sqrt[1+x^2], {x,0,2} ];
{ N[xx], N[yy] }

1.14725, 0.81996

6.5.4.

For the curved surface of a cylinder of radius r₀ with axis on the z axis and extending from z = −h to z = +h, evaluate

∫ (x² + y² − z²) dA.

Solution: Using cylindrical coordinates with dA = r₀ dφ dz and x² + y² − z² = ρ² − z², which is r₀² − z² on the curved surface,

I = r₀ ∫₀^(2π) dφ ∫_(−h)^h dz (r₀² − z²) = 2πr₀ (2hr₀² − 2h³/3).

6.5.5. For the curved cylindrical surface of Exercise 6.5.4, evaluate

(a) ∫ (x² − y²) z² dA,

(b) ∫ (x² − xy) z² dA.

Solution:

(a) Zero by symmetry: ∫ x²z² dA = ∫ y²z² dA.

(b) The xy term vanishes because it is an odd function of x (and also of y). Thus, we compute

I = ∫ x²z² dA = r₀³ ∫₀^(2π) cos²φ dφ ∫_(−h)^h z² dz = πr₀³ (2h³/3).

6.5.6. For the curved surface of the hemisphere of radius r₀ and z ≥ 0, evaluate

∫ (x² + y²) z² dA.

Solution: Use spherical coordinates, for which dA = r₀² sin θ dφ dθ. Write (x² + y²)z² = (r₀² − z²)z² = r₀⁴(cos²θ − cos⁴θ), so the surface integral becomes, setting cos θ = t,

r₀⁶ ∫₀^(2π) dφ ∫₀^(π/2) dθ sin θ (cos²θ − cos⁴θ) = 2πr₀⁶ ∫₀^1 (t² − t⁴) dt = 4πr₀⁶/15.

6.5.7. What does Green's theorem tell us (for an arbitrary area A) if we choose P(x, y) = x and Q(x, y) = 0? What is the general result if we apply the theorem to P(x, y) = 0, Q(x, y) = y?

Solution: If we set P(x, y) = x and Q(x, y) = 0, the integral over A becomes

∫_A (1 − 0) dA = A (the area enclosed).

If we take P(x, y) = 0 and Q(x, y) = y, the integral over A is −A. We therefore have the following line integrals for the area within a closed curve:

∮_∂A y dx = −∮_∂A x dy = A.

6.5.8. Verify a particular case of Green's theorem by evaluation of the integrals on both sides of Eq. (6.46) when we choose ∂A to be the unit circle, P(x, y) = 0, and Q(x, y) = x³ − y³.

Solution: This case of Green's theorem states

∮ (x² − y²) dx = ∫_A 2y dA = 0.

The integral over A vanishes due to symmetry. Write the line integral in terms of the angle φ:

∫₀^(2π) (cos²φ − sin²φ)(−sin φ dφ) = ∫₀^(2π) (2cos²φ − 1) d cos φ = 0.

This zero result confirms Green's theorem.

6.6 REARRANGEMENT OF DOUBLE SERIES

Exercises

6.6.1. Verify that the sums claimed to be equivalent in Example 6.6.1,

∑_(p,q=0)^∞ x^p y^q / (p! q!)   and   ∑_(n=0)^∞ ∑_(p=0)^n x^p y^(n−p) / (p! (n − p)!),

actually give identical results for x = 0.1, y = −0.3.

Solution: In maple,

> x := 0.1:  y := -0.3:
> evalf(sum(sum(x^p*y^q/p!/q!, p=0 .. infinity), q=0 .. infinity));

0.818731

> evalf(sum(sum(x^p*y^(n-p)/p!/(n-p)!,p=0 .. n), n=0 .. infinity));

0.818731
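Outside either symbolic system, both sums can also be checked in a few lines of Python: each is a rearrangement of e^x · e^y, so the target value is e^(−0.2) = 0.818731 (this numerical sketch is my own addition):

```python
import math

x, y = 0.1, -0.3
N = 40   # enough terms for double precision

s1 = sum(x**p / math.factorial(p) * y**q / math.factorial(q)
         for p in range(N) for q in range(N))
s2 = sum(x**p * y**(n-p) / (math.factorial(p) * math.factorial(n-p))
         for n in range(N) for p in range(n+1))

print(round(s1, 6), round(s2, 6))          # 0.818731 0.818731
print(abs(s1 - math.exp(x + y)) < 1e-12)   # True
```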

In mathematica,

x = 0.1; y = -0.3;
N[Sum[Sum[x^p*y^q/p!/q!, {p,0,Infinity} ], {q,0,Infinity} ] ]

0.818731

N[Sum[Sum[x^p*y^(n-p)/p!/(n-p)!, {p,0,n} ], {n,0,Infinity} ] ]

0.818731

6.6.2.

Verify the equivalence of the following two double summations by comparing the coefficients of x^p t^q for all p and q when both are no larger than 6.

∑_(n=0)^∞ ∑_(k=0)^n (−1)^k (2n)! (2x)^(n−k) t^(n+k) / (2^(2n) k! n! (n − k)!),

∑_(n=0)^∞ ∑_(k=0)^([n/2]) (−1)^k (2n − 2k)! (2x)^(n−2k) t^n / (2^(2n−2k) k! (n − k)! (n − 2k)!).

The notation [n/2] denotes the largest integer less than or equal to n/2.

Solution: Expand, using symbolic computing, with finite limits that include all terms with both variables at powers no larger than 6. Then subtract one expansion from the other and check that the difference contains no terms with both x and t to powers ≤ 6. In maple, code that accomplishes this can be (not showing the output)

> S1 := expand(add(add((-1)^k*(2*m)!/2^(2*m)/k!/m!/(m-k)!
>        *(2*x)^(m-k)*t^(m+k), k = 0 .. m), m = 0 .. 6));
> S2 := 0: for n from 0 to 6 do kk := floor(n/2);
>   S2 := S2 + add((-1)^k*(2*n-2*k)!/2^(2*n-2*k)/k!/(n-k)!
>        /(n-2*k)!*(2*x)^(n-2*k)*t^n, k = 0 .. kk) end do: S2;
> S1 - S2;

In mathematica, suitable code can be (suppressing the output)

S1 = Sum[Sum[(-1)^k*(2*m)!/2^(2*m)/k!/m!/(m-k)!*(2*x)^(m-k)
       *t^(m+k), {k, 0, m}], {m, 0, 6}]
S2 = Sum[Sum[(-1)^k*(2*n-2*k)!/2^(2*n-2*k)/k!/(n-k)!
       /(n-2*k)!*(2*x)^(n-2*k)*t^n, {k, 0, n/2}], {n, 0, 6}]
S1 - S2

6.7 DIRAC DELTA FUNCTION

Exercises

6.7.1. Show that δ(ax) = δ(x)/|a|. Make a detailed argument justifying the absolute value signs that enclose a. Your argument should not depend upon the specific representation describing δ(x).

Solution: Let c be a finite positive number, and change the x integral below by writing it in terms of y = ax:

∫_(−c)^c f(x) δ(ax) dx = ∫_(−ac)^(ac) f(y/a) δ(y) dy/a.

If a < 0 the integration is in the direction opposite to that defining a delta function; we can reverse the limits if we replace a by |a|.

6.7.2. Apply an integration by parts to establish Eq. (6.59).

Solution: Letting c be a finite positive number, consider

∫_(−c)^c f(x) δ′(x) dx = [ f(x) δ(x) ]_(−c)^c − ∫_(−c)^c f′(x) δ(x) dx = 0 − f′(0).

6.7.3. By considering the expansion of f(x) about a point x_i such that f(x_i) = 0, justify Eq. (6.57).

Solution: The generalized function δ(f(x)) will only be nonzero at the zeros of f; if there is a zero at x_i, the leading contribution to f in its neighborhood will be f′(x_i)(x − x_i), and the behavior of δ(f(x)) will be that of δ(x − x_i)/|f′(x_i)|, as indicated by Eq. (6.56). Similar contributions arise from all the zeros of f(x).

6.7.4. Justify Eq. (6.58).

Solution: Near x = x₁, δ[(x − x₁)(x − x₂)] approaches δ[a(x − x₁)] with a = x₁ − x₂, and therefore approaches δ(x − x₁)/|x₁ − x₂|. Near x = x₂, we have δ(x − x₂)/|x₂ − x₁|. These two contributions correspond to Eq. (6.58).

6.7.5. Show that the Heaviside and Dirac delta functions are related by u′(x − x₀) = δ(x − x₀) by verifying the identity

∫_a^b u′(x − x₀) f(x) dx = ∫_a^b δ(x − x₀) f(x) dx

for a general "well-behaved" f(x) and arbitrary values of a and b.

Solution: Since u(x) is constant everywhere except at x = 0, we have u′(x) = 0 for all nonzero x. It is also the case that δ(x) is zero for all nonzero x, so what we need to prove reduces to

lim_(ε→0) [ ∫_(x₀−ε)^(x₀+ε) u′(x − x₀) f(x) dx − f(x₀) ] = 0.

Integrating by parts, we reach

lim_(ε→0) [ u(ε) f(x₀ + ε) − u(−ε) f(x₀ − ε) − ∫_(x₀−ε)^(x₀+ε) u(x − x₀) f′(x) dx − f(x₀) ].

The first term within the limit approaches f(x₀) and the second is zero, as is the third (the integral), which has a finite integrand and an infinitesimal interval of integration. The fourth term, −f(x₀), cancels the first, showing the overall limit to be zero.

Chapter 7

VECTOR ANALYSIS

7.1 VECTOR ALGEBRA

Exercises

7.1.1. A particle is moved, subject to a force F = 2ê_x + ê_y − ê_z, on a path from A = (1, 1, 0) to B = (2, 0, 1). Compute the work required for this process: (a) If the path is a straight line from A to B, and (b) If the path consists of the two straight-line segments A–O and O–B, where O = (0, 0, 0).

Solution: The force is independent of position, so the work may be computed as F · d.

(a) d = B − A = (1, −1, 1), so F · d = (2, 1, −1) · (1, −1, 1) = 2 − 1 − 1 = 0.

(b) There are two displacements: d₁ = (−1, −1, 0) and d₂ = (2, 0, 1). Thus,

F · d₁ + F · d₂ = (2, 1, −1) · (−1, −1, 0) + (2, 1, −1) · (2, 0, 1) = −3 + 3 = 0.

7.1.2.

Find the angle between the normals to the two planes 2x + 5y − z = 10 and x − y + z = 6.

Solution: The two normals are in the directions (2, 5, −1) and (1, −1, 1); unit vectors in these directions are n̂₁ = (2, 5, −1)/√30 and n̂₂ = (1, −1, 1)/√3. Writing n̂₁ · n̂₂ = cos θ, we find

cos θ = (2, 5, −1) · (1, −1, 1)/(√30 √3) = −4/(3√10) = −0.4216;   θ = 2.006 radians = 114.9°.
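This computation generalizes to any pair of planes; the short Python sketch below reproduces the angle (the function name is illustrative only, not from the text):

```python
import math

def angle_between_planes(n1, n2):
    # angle between the plane normals n1 and n2 (3-tuples)
    dot = sum(a*b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a*a for a in n1)) * math.sqrt(sum(b*b for b in n2))
    return math.acos(dot / norm)

theta = angle_between_planes((2, 5, -1), (1, -1, 1))
print(round(theta, 3), round(math.degrees(theta), 1))   # 2.006 114.9
```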

7.1.3. Find the distance between the point (1, 2, −1) and the plane x − y + z = 5.

Solution: Project the vector P from the given point to an arbitrary point Q on the plane onto the normal n̂ to the plane. Its length is the required distance. Here P = (1, 2, −1), take Q = (5, 0, 0), and n̂ = (1, −1, 1)/√3.

|(Q − P) · n̂| = |(4, −2, 1) · (1, −1, 1)|/√3 = 7/√3 = 4.041.

7.1.4.

Compute the distance of closest approach of the circle x² + y² = 1 (in the xy-plane) to the plane 2x + 2y + z = 5, and identify the points of closest approach: (x₀, y₀) on the circle and (x₁, y₁, z₁) in the plane.

Solution: Using the method of Exercise 7.1.3, the distance between a point on the circle Q = (x, y, 0) and the plane, on which a point P is (0, 0, 5), is

(P − Q) · n̂ = (−x, −y, 5) · (2, 2, 1)/3 = (5 − 2x − 2y)/3.

We no longer need the absolute value signs because x and y are no larger than 1. We need the minimum of this distance subject to the condition that x² + y² = 1. One could solve this constrained minimum problem by using the method of Lagrange multipliers or by substituting y = √(1 − x²) and then minimizing with respect to x. Note that the solution we seek will have positive values of x and y. In the present problem the solution will be that which maximizes x + y, so (by symmetry) we see that it will be x₀ = y₀ = 1/√2, and the distance of closest approach will be (5 − 2/√2 − 2/√2)/3 = 0.724.

To find the closest approach point on the plane, compute

(x₁, y₁, z₁) = (1/√2, 1/√2, 0) + 0.724 (2, 2, 1)/3 = (1.190, 1.190, 0.241).

Note that the projected vector from the circle to the plane was found to be in the (positive) direction of n̂; that is why the second term in the above equation has a positive sign. We can check that (x₁, y₁, z₁) is on the plane: 2(1.190) + 2(1.190) + 1(0.241) = 5.001.

7.1.5.

Find the equation of a plane through A = (0, 1, 2), B = (1, −1, 1), and C = (2, 0, 0). Is this plane unique? Hint. Start by ﬁnding a direction perpendicular to A − B and A − C.

Solution: The direction perpendicular to both A − B and A − C is that of

(A − B) × (A − C) = (−1, 2, 1) × (−2, 1, 2) = (3, 0, 3).

This is the direction normal to the plane containing A, B, and C, so that plane has equation 3x + 3z = k, where k must be chosen such that any one of the points is in the defined plane. Taking A, we require 3(0) + 3(2) = k, so k = 6, equivalent to x + z = 2. This plane is unique; the solution would only fail to be unique if the cross product were zero, indicating that all three points are collinear.

7.1.6.

Find the distance from the point (1, −2, 1) to the line passing through the points (0, 0, 0) and (2, 1, 2).

Solution: ˆt, with a ˆ = (2, 1, 2)/3. Letting The line has the parametric equation A = a P = (1, −2, 1), the perpendicular distance from the line to P is √ (2, 1, 2) × (1, −2, 1) |(5, 0, −5)| 50 = = . |a × P| = 3 3 3 7.1.7.

Find a vector that lies in the intersection of the two planes x + y − z = 3 and 2x − y + 3z = 4.

Solution: Such a vector must be perpendicular to the normals to both planes, i.e., perpendicular to both n₁ = (1, 1, −1) and n₂ = (2, −1, 3). It must be in (or opposite to) the direction of

n₁ × n₂ = (1, 1, −1) × (2, −1, 3) = (2, −5, −3).

7.1.8.

A particle of mass m undergoes rotation at 8 revolutions per second around a circle of radius 1 meter in the xy-plane and centered at the origin, with the travel in the clockwise direction as viewed from positive z. Find the vector describing the angular momentum of the particle about the origin.

Solution: The angular momentum L is r × p, where p is the linear momentum, equal to mv. The velocity has magnitude 8(2π) = 16π, so p has magnitude 16πm. The vectors r and p are always perpendicular to each other, and (in a right-handed coordinate system) when r is in the +x direction p is directed toward −y. Thus L has the constant value −16πm ê_z.

7.1.9.

Compute the torque about the origin if a force F = 2ê_x + 3ê_y is applied at the point r = ê_x − ê_z.

Solution: τ = r × F = (1, 0, −1) × (2, 3, 0) = (3, −2, 3).

7.1.10.

A lever of length 3 m pivots about a fulcrum 1 m from its end A (and therefore 2 m from its end B). The lever can rotate in the xy-plane. If a force ê_x + ê_y is applied at A, what force at B must be applied to maintain the lever at equilibrium irrespective of the lever's orientation?

Solution: Whatever may be the value of r_A (the position of A relative to the origin), r_B = −2r_A. Equilibrium is achieved if the net torque about the origin vanishes. This condition is

r_A × F_A + r_B × F_B = 0   −→   r_A × (F_A − 2F_B) = 0.

This equation is satisfied for all r_A only if F_B = F_A/2.

7.1.11. Show that a necessary and sufficient condition that three nonvanishing vectors A, B, and C be coplanar is that A · (B × C) = 0.

Solution: A · (B × C) is the volume of the parallelepiped defined by A, B, and C. Its volume is zero if and only if these three vectors are coplanar.

7.1.12.

Given that A = 2ê_x + ê_y − 3ê_z, B = ê_x − 2ê_y + 5ê_z, and C = −3ê_x + 2ê_y − ê_z, evaluate (both by hand and using your symbolic computing system)

(a) (A · C) B,   (b) (A × B) · C,   (c) (A × B) × C.

Solution: In maple,

> with(VectorCalculus):
> A:=Vector([2,1,-3]): B:=Vector([1,-2,5]): C:=Vector([-3,2,-1]):

(a) > (A . C) * B;

−e_x + 2e_y − 5e_z

(b) > (A &x B) . C;

−18

(c) > (A &x B) &x C;

23e_x + 14e_y − 41e_z

In mathematica (where the text uses G in place of the protected symbol C),

A = {2, 1, -3}; B = {1, -2, 5}; G = {-3, 2, -1};

(a) (A . G) * B

{−1, 2, −5}

(b) Cross[A, B] . G

−18

(c) Cross[Cross[A, B], G]

{23, 14, −41}

Letting A, B, and C be the displacements forming a triangle (in directions such that A + B + C = 0), use the properties of the vector cross product to derive the law of sines, i.e.,

sin α / A = sin β / B = sin γ / C,

where α, β, γ are respectively the angles of the triangle opposite the sides A, B, C.

Solution: The angles α, β, γ are opposite the respective sides of lengths A, B, and C. Take the cross product

C × (A + B + C) = C × A + C × B = 0.

The triangle can connect as either a clockwise or counterclockwise circuit (see the two diagrams).

[Two diagrams: the same triangle traversed counterclockwise and clockwise, with sides A, B, C and the angles α, β, γ opposite those sides.]

In the left-hand diagram, C × A = +AC sin β and C × B = −BC sin α; in the right-hand diagram the signs in both these equations are reversed. In either case,

AC sin β = BC sin α,   or   sin β / B = sin α / A.

Note. The angle in the cross-product formula is the exterior angle, but it and the interior angle of the triangle have the same value of the sine. By symmetry, the above equality extends also to sin γ/C.

7.1.14.

The vector triple product A × (B × C) is found by symbolic computation as

  [−A_y B_y C_x − A_z B_z C_x + A_y B_x C_y + A_z B_x C_z] ê_x
+ [ A_x B_y C_x − A_x B_x C_y − A_z B_z C_y + A_z B_y C_z] ê_y
+ [ A_x B_z C_x + A_y B_z C_y − A_x B_x C_z − A_y B_y C_z] ê_z.

Verify that this expression is equivalent to Eq. (7.13).

Solution: Expand Eq. (7.13) completely.

7.1.15. Use your symbolic computing system to prove the Lagrange identity,

(A × B) · (U × V) = (A · U)(B · V) − (A · V)(B · U).

Solution: In maple,

> with(VectorCalculus):
> A := Vector([Ax,Ay,Az]): B := Vector([Bx,By,Bz]):
> U := Vector([Ux,Uy,Uz]): V := Vector([Vx,Vy,Vz]):
> simplify( (A &x B).(U &x V)-( (A.U)*(B.V)-(A.V)*(B.U) ) );

0

In mathematica,

A = {Ax,Ay,Az}; B = {Bx,By,Bz}; U = {Ux,Uy,Uz}; V = {Vx,Vy,Vz};
Cross[A,B] . Cross[U,V] - ( (A.U)*(B.V)-(A.V)*(B.U) );
Simplify[%]

0

7.1.16.

Use your symbolic computing system to prove the Jacobi identity, A × (B × G) + B × (G × A) + G × (A × B) = 0 .

Solution: In maple,
> with(VectorCalculus):
> A := Vector([Ax,Ay,Az]): B := Vector([Bx,By,Bz]):
> G := Vector([Gx,Gy,Gz]):
> simplify(A &x (B &x G) + B &x (G &x A) + G &x (A &x B));   0

In mathematica,

A = {Ax,Ay,Az}; B = {Bx,By,Bz}; G = {Gx,Gy,Gz};
Cross[A, Cross[B,G]] + Cross[B, Cross[G,A]] + Cross[G, Cross[A,B]]   {0, 0, 0}
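The Jacobi identity can also be spot-checked numerically, independently of either symbolic system; the short Python sketch below (the `cross` helper is an ad hoc addition, not part of the text) verifies it for the integer vectors used in Exercise 7.1.12.

```python
# Numerical spot-check of the Jacobi identity for three sample vectors.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

A = (2.0, 1.0, -3.0)
B = (1.0, -2.0, 5.0)
G = (-3.0, 2.0, -1.0)

jacobi = tuple(cross(A, cross(B, G))[i]
               + cross(B, cross(G, A))[i]
               + cross(G, cross(A, B))[i] for i in range(3))
print(jacobi)  # -> (0.0, 0.0, 0.0); all arithmetic here is exact in floats
```

Because the identity holds term by term, the cancellation is exact even in floating point for integer-valued components.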

7.1.17.

A force F = êx + 2êy − 3êz acts at the point (1, 4, −2). Find the torque of F (a) about the origin; (b) about the z-axis; (c) about the axis (through the origin) p = 2êx − 3êy + êz.

Solution: (a) Calculate τ = r × F, with r = (1, 4, −2) and F = (1, 2, −3):

τ = (1, 4, −2) × (1, 2, −3) = (−8, 1, −2) .

(b) This is the projection of the above τ on the z-axis: −2êz. Note that it does not depend upon the z coordinate of r.

(c) This is the projection of τ on p. It is

(τ · p̂)p̂ = [(−8, 1, −2) · (2, −3, 1)/14] (2, −3, 1) = −(3/2)(2, −3, 1) .

7.1.18.

Let a rigid body undergo rotation described by the angular velocity ω (ω is in the direction of the rotational axis, with a magnitude equal to that of the angular velocity, in radians per unit time). A point P of the rigid body is reached by a displacement r from an arbitrary point O on the rotational axis.

(a) Show that the linear velocity of P is v = ω × r, and that this result is independent of the location of O on the rotational axis.

(b) The angular momentum about a point O of a mass m at a point P moving at velocity v is L = r × (mv), where r is the displacement from O to P. Assuming P to be undergoing the rotational motion described in part (a), find L in terms of ω and r.

(c) Assume the rigid body of part (a) is moving at angular velocity 8 radians per second about an axis through the point (2, −1, 2), with the axial direction êx − êy, and that the point P is (1, −5, 2) (with all coordinates given in meters). Find the angular momentum about the rotational axis (in MKS units) of a 2.75 kg mass at P.

Solution: (a) A shift of O along the rotation axis corresponds to the replacement of r by r + tω̂. But

ω × (r + tω̂) = ω × r ,

so v is not changed.

(b) L = r × m(ω × r) = m [r × (ω × r)] .

(c) Here ω = 8ω̂ with ω̂ = (1, −1, 0)/√2. Also, r = (1, −5, 2) − (2, −1, 2) = (−1, −4, 0) and m = 2.75. The angular momentum about the rotational axis is

(ω̂ · L)ω̂ = m [ω̂ · (r × (ω × r))] ω̂ .

Inserting values for all the defined quantities, we obtain (in MKS units) 275 ω̂.
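The value 275 ω̂ can be confirmed numerically; the Python sketch below (the `cross` and `dot` helpers are ad hoc) evaluates m ω̂ · [r × (ω × r)] directly.

```python
import math

# Numerical check of Exercise 7.1.18(c).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

m = 2.75
omega_hat = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
omega = tuple(8*c for c in omega_hat)
r = (1-2, -5-(-1), 2-2)          # displacement from axis point (2, -1, 2) to P

L = tuple(m*c for c in cross(r, cross(omega, r)))  # L = m r x (omega x r)
L_axis = dot(omega_hat, L)       # component of L along the rotation axis
print(round(L_axis, 10))         # -> 275.0
```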

7.2 VECTOR DIFFERENTIAL OPERATORS

Exercises

7.2.1.

Show that ∇(uv) = v∇u + u∇v.

Solution: The x component of ∇(uv) is

∂(uv)/∂x = (∂u/∂x) v + u (∂v/∂x) .

This is the x component of v∇u + u∇v. Similar relationships apply for the y and z components.
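The product rule for the gradient can also be spot-checked by central finite differences; the Python sketch below uses sample fields u and v chosen only for illustration.

```python
# Finite-difference check of grad(u v) = v grad(u) + u grad(v) at one point.
h = 1e-6

def u(x, y, z): return x*y + z**2
def v(x, y, z): return x - 2*y*z

def grad(f, p):
    x, y, z = p
    return [(f(x+h, y, z) - f(x-h, y, z))/(2*h),
            (f(x, y+h, z) - f(x, y-h, z))/(2*h),
            (f(x, y, z+h) - f(x, y, z-h))/(2*h)]

p = (1.0, 2.0, 3.0)
lhs = grad(lambda x, y, z: u(x, y, z)*v(x, y, z), p)
rhs = [v(*p)*gu + u(*p)*gv for gu, gv in zip(grad(u, p), grad(v, p))]
ok = all(abs(a - b) < 1e-4 for a, b in zip(lhs, rhs))
print(ok)  # -> True
```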

7.2.2.

Evaluate ∇(x²yz³) at (2, 5, −1).

Solution: Apply the differentiation operator. The result is

2xyz³ êx + x²z³ êy + 3x²yz² êz .

Evaluate the above for (2, 5, −1):  −20 êx − 4 êy + 60 êz .

7.2.3.

Assuming A and B to be constant vectors, show that ∇[A · (B × r)] = A × B.

Solution: Write out

A · (B × r) = Ax(By z − Bz y) + Ay(Bz x − Bx z) + Az(Bx y − By x) .

Differentiate with respect to x. We get Ay Bz − Az By, which we identify as the x component of A × B. The y and z components are obtained in a similar fashion.

7.2.4.

Given the electrostatic potential

ψ = p · r/(4πε₀r³) ,

with p a constant, find E = −∇ψ.

Solution: Write p · r = px x + py y + pz z and r = (x² + y² + z²)^{1/2} and then take the gradient in Cartesian coordinates. Expressing the result in terms of p · êr and r where possible, we can bring the result to

E = −∇ψ = −[p − 3(p · êr)êr]/(4πε₀r³) .

7.2.5. Find the derivative d(x²y + yz)/ds at (1, −1, 3) in the direction of êx + 2êy − êz.

Solution: From the direction of ds, we find

dx/ds = 1/√6 ,  dy/ds = 2/√6 ,  dz/ds = −1/√6 .

Then

d(x²y + yz)/ds = 2xy (dx/ds) + (x² + z)(dy/ds) + y (dz/ds) = [2xy + 2(x² + z) − y]/√6 .

Evaluating at (1, −1, 3),

d(x²y + yz)/ds = 7/√6 .

7.2.6. Find the derivative dr³/ds at (1, 1, 1) in each of the six directions

s₁ = êx + êy + êz ,  s₂ = −êx − êy − êz ,  s₃ = êx − êy ,
s₄ = êx − êz ,  s₅ = êy − êz ,  s₆ = êx − 2êy + êz .

Compare these directional derivatives with ∇r³ at (1, 1, 1) and give a qualitative explanation of the directional derivatives.

Solution: Obtain a unit vector in the direction of each sᵢ and read from it the derivatives dx/dsᵢ, dy/dsᵢ, and dz/dsᵢ. Then construct the directional derivative using the chain rule. For example:

From ŝ₁ = (êx + êy + êz)/√3 we find dx/ds₁ = dy/ds₁ = dz/ds₁ = 1/√3.
From ŝ₂ = −ŝ₁ we get dx/ds₂ = dy/ds₂ = dz/ds₂ = −1/√3.
From ŝ₃ = (êx − êy)/√2 we have dx/ds₃ = 1/√2, dy/ds₃ = −1/√2, dz/ds₃ = 0.

We also need

∂r/∂x = x/r ,  ∂r/∂y = y/r ,  ∂r/∂z = z/r .

Then, evaluating for r = (1, 1, 1),

dr³/dsᵢ = 3r² [(∂r/∂x)(dx/dsᵢ) + (∂r/∂y)(dy/dsᵢ) + (∂r/∂z)(dz/dsᵢ)] = 3√3 [dx/dsᵢ + dy/dsᵢ + dz/dsᵢ] ,

which we now further evaluate for each sᵢ. The results are

dr³/ds₁ = 9 ,  dr³/ds₂ = −9 ,  dr³/ds₃ = dr³/ds₄ = dr³/ds₅ = dr³/ds₆ = 0 .

We note that ∇r³ = 3r² r̂; its magnitude at (1, 1, 1) is 9. This is the directional derivative in the (1, 1, 1) direction and minus the directional derivative in the (−1, −1, −1) direction. The vectors s₃, s₄, s₅, and s₆ are in directions perpendicular to the gradient.

7.2.7.

Prove that

d/dt (A · B) = (dA/dt) · B + (dB/dt) · A .

Solution: Expand A·B and apply d/dt. Collect terms to the form given by the right-hand side of the equation. 7.2.8.

Evaluate ∇ · (z êx + y êy + x êz).

Solution: We get

∂z/∂x + ∂y/∂y + ∂x/∂z = 0 + 1 + 0 = 1 .

7.2.9. Evaluate ∇ · (e^{−r²} r̂).

Solution: This is a case of Example 7.2.7 with f(r) = e^{−r²}/r. From that example,

∇ · [f(r)r] = 3f(r) + rf′(r) = 3e^{−r²}/r + r(−2e^{−r²} − e^{−r²}/r²) = 2e^{−r²}(r^{−1} − r) .
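A quick finite-difference check of this divergence at a sample point, in Python (not part of the text's symbolic workflow):

```python
import math

# Finite-difference check of div(e^{-r^2} r-hat) = 2 e^{-r^2}(1/r - r).
h = 1e-5

def F(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    s = math.exp(-r*r)/r          # scalar multiplying the unit vector r-hat
    return (s*x, s*y, s*z)

def div(p):
    x, y, z = p
    return ((F(x+h, y, z)[0] - F(x-h, y, z)[0])
            + (F(x, y+h, z)[1] - F(x, y-h, z)[1])
            + (F(x, y, z+h)[2] - F(x, y, z-h)[2]))/(2*h)

x, y, z = 0.3, -0.4, 1.2
r = math.sqrt(x*x + y*y + z*z)
exact = 2*math.exp(-r*r)*(1/r - r)
ok = abs(div((x, y, z)) - exact) < 1e-6
print(ok)  # -> True
```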

7.2.10.

Show that ∇ · [f (r)A] = f (r)∇ · A + [∇f (r)] · A.

Solution: Look at

∂/∂x [f(r)Ax] = f(r) ∂Ax/∂x + Ax ∂f(r)/∂x .

The first term plus the corresponding terms from the y and z derivatives combine to produce f(r)∇ · A. The second term plus its y and z partners yields the result [∇f(r)] · A.

7.2.11.

Verify the results of Example 7.2.12, namely that for F = y êx + x êy and G = y êx − x êy, we have ∇ × F = 0 and ∇ × G = −2 êz.

Solution: All the individual derivatives entering the x and y components of ∇ × F and ∇ × G vanish. The z components of these curls involve the nonvanishing derivatives

∂Fy/∂x = ∂Fx/∂y = 1 ,  ∂Gy/∂x = −1 ,  ∂Gx/∂y = 1 .

These derivatives combine to give the stated results.

7.2.12.

Evaluate ∇ × (z êx + y êy + x êz).

Solution: Writing the expression to be evaluated as ∇ × F, we find

∂Fz/∂y = ∂Fy/∂z = ∂Fy/∂x = ∂Fx/∂y = 0 ,  ∂Fx/∂z = ∂Fz/∂x = 1 .

These combine to yield ∇ × F = 0. 7.2.13.

Show that for m a constant vector, and for B such that ∇ × B = ∇ · B = 0, ∇ × (B × m) = ∇(m · B) .

Solution: Write out the x component of each side of the equation.

Left:  my ∂Bx/∂y − mx ∂By/∂y − mx ∂Bz/∂z + mz ∂Bx/∂z ,

Right:  mx ∂Bx/∂x + my ∂By/∂x + mz ∂Bz/∂x .

Now apply the conditions on B to manipulate the right-hand side expansion: Use ∇ · B = 0 to substitute

∂Bx/∂x = −∂By/∂y − ∂Bz/∂z ,

and use ∇ × B = 0 to make the substitutions

∂By/∂x = ∂Bx/∂y ,  ∂Bz/∂x = ∂Bx/∂z .

With these changes the x component of the right-hand side expansion becomes identical to the x component of the left-hand side expansion. A similar procedure applies to the y and z components.

7.2.14.

Show that A × (∇ × A) = ½ ∇(A²) − (A · ∇)A.

Solution: Write out the x component of each side of the equation.

Left:  Ay [∂Ay/∂x − ∂Ax/∂y] − Az [∂Ax/∂z − ∂Az/∂x] ,

Right:  Ax ∂Ax/∂x + Ay ∂Ay/∂x + Az ∂Az/∂x − [Ax ∂Ax/∂x + Ay ∂Ax/∂y + Az ∂Ax/∂z] .

The x components of the two sides are manifestly identical. Similar results can be obtained for the y and z components.

7.2.15. Use your symbolic computation system to evaluate the following expressions, and then check your work with hand computation. Here f = xyz and B = xy êx − yz êy + xz êz.

(a) ∇f ,  (b) (∇f) · (∇f) ,  (c) (∇f) · B .

Solution: In maple,
> with(VectorCalculus): > SetCoordinates(cartesian[x,y,z]):
> B := VectorField([x*y,-y*z,x*z]): > f := x*y*z:
> fg := Gradient(f);   fg := (yz)ex + (zx)ey + (xy)ez

> fg . fg;

y 2 z 2 + z 2 x2 + y 2 x2

> fg . B;

y 2 zx − z 2 xy + yx2 z

In maple,
> with(VectorCalculus): > SetCoordinates(cartesian[x,y,z]):
> B := VectorField([x*y,-y*z,x*z]):
> Divergence(B);

y−z+x

> cB := Curl(B);

cB := (y)ex − zey − xez

> cB . cB;

x2 + y 2 + z 2

In maple,
> with(VectorCalculus): > SetCoordinates(cartesian[x,y,z]):
> A := VectorField([Ax(x,y,z), Ay(x,y,z), Az(x,y,z)]):
> P := phi(x,y,z):
> LHS := Divergence(P * A):
> RHS := P * Divergence(A) + A . Gradient(P):
> simplify(LHS - RHS);

0

In maple,
> LHS := Divergence(A &x B):
> RHS := B . Curl(A) - A . Curl(B):
> simplify(LHS - RHS);   0

or, in mathematica,

B = {Bx[x, y, z], By[x, y, z], Bz[x, y, z]};
LHS = Div[Cross[A, B]]; RHS = B . Curl[A] - A . Curl[B];
Simplify[LHS - RHS]   0

7.3.3.

Verify that the symbolic computation command Laplacian (operating on a scalar f ) produces the same result as the more explicit expressions Del.Del(f) or Div[Grad[f]].

Solution: After the commands of the two previous exercises, continue with one of
> LHS := Laplacian(P):
> RHS := Del . Del(P):
> LHS - RHS;   0

or

LHS = Laplacian[P]; RHS = Div[Grad[P]]; LHS - RHS   0

7.3.4.

Use symbolic computation to conﬁrm that all curls are solenoidal and that all gradients are irrotational (Vector Identities #7 and #8).

Solution: After the commands of the three previous exercises, continue with one of
> Del . (Del &x A);   0
> Del &x Gradient(P);   0 ex

or

Div[Curl[A]]   0
Curl[Grad[P]]   {0, 0, 0}
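Identity #7 can also be checked numerically without symbolic algebra; the Python sketch below applies nested central differences to a sample field A (chosen only for illustration) and confirms that ∇ · (∇ × A) vanishes to rounding accuracy.

```python
# Finite-difference confirmation that div(curl A) = 0 for a sample field.
h = 1e-4

def A(x, y, z):
    return (x*y*z, x + y*y, z*x - y)

def curl(x, y, z):
    dAz_dy = (A(x, y+h, z)[2] - A(x, y-h, z)[2])/(2*h)
    dAy_dz = (A(x, y, z+h)[1] - A(x, y, z-h)[1])/(2*h)
    dAx_dz = (A(x, y, z+h)[0] - A(x, y, z-h)[0])/(2*h)
    dAz_dx = (A(x+h, y, z)[2] - A(x-h, y, z)[2])/(2*h)
    dAy_dx = (A(x+h, y, z)[1] - A(x-h, y, z)[1])/(2*h)
    dAx_dy = (A(x, y+h, z)[0] - A(x, y-h, z)[0])/(2*h)
    return (dAz_dy - dAy_dz, dAx_dz - dAz_dx, dAy_dx - dAx_dy)

x, y, z = 0.7, -1.1, 0.4
divcurl = ((curl(x+h, y, z)[0] - curl(x-h, y, z)[0])
           + (curl(x, y+h, z)[1] - curl(x, y-h, z)[1])
           + (curl(x, y, z+h)[2] - curl(x, y, z-h)[2]))/(2*h)
ok = abs(divcurl) < 1e-6
print(ok)  # -> True
```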

7.3.5.

Conﬁrm Vector Identity #9.

Solution: After the commands of the four previous exercises, continue with one of > Q := psi(x,y,z): 0

0

7.3.6.

Evaluate (for r ≠ 0) ∇²(x eʳ/r), where r² = x² + y² + z². Check your work using symbolic computation.

Solution: After the commands of the five previous exercises, continue with one of
> r := sqrt(x^2+y^2+z^2):
> simplify(Laplacian(x * exp(r)/r));   (see below)

or

r = Sqrt[x^2 + y^2 + z^2];
Simplify[Laplacian[x * E^r/r]]   (see below)

After substituting r for √(x² + y² + z²), both symbolic systems give

x eʳ (r² + 2r − 2)/r³ .

7.3.7.

Use symbolic computation to conﬁrm Vector Identity #10.

Solution: After the commands of the six previous exercises, continue with one of
> LHS := Del &x (Del &x A):
> RHS := Gradient(Del . A) - Laplacian(A):
> LHS - RHS;   0 ex

or

LHS = Curl[Curl[A]]; RHS = Grad[Div[A]] - Laplacian[A]; Simplify[LHS - RHS]   {0, 0, 0}

7.4 INTEGRAL THEOREMS

Exercises

Use the divergence theorem or Stokes' theorem wherever they can make these exercises easier.

7.4.1.

Given V = (x² + y²) êx + 3y êy − 2xz êz, for the surface of a unit cube in the first octant (0 ≤ x, y, z ≤ 1), compute ∮ V · dσ.

Solution: Use the divergence theorem.

∇ · V = 2x + 3 − 2x = 3 .

Therefore, ∮ V · dσ = 3 times the volume of the unit cube, i.e., 3.

7.4.2.

For the vector in the xy-plane A = (2x² − y²) êx + xy êy,

(a) Compute ∇ × A,

(b) For a rectangle defined by 0 ≤ x ≤ a, 0 ≤ y ≤ b, compute ∫ (∇ × A) · dσ.

(c) Compute ∮ A · dr for the perimeter of the rectangle of part (b) and thereby confirm Stokes' theorem for these integrals.

Solution:

(a) ∇ × A = 3y êz.

(b) For the given rectangle, with dσ taken in the +z direction, the integral is ∫ 3y dA. Calculating it,

∫ 3y dA = ∫₀ᵃ dx ∫₀ᵇ 3y dy = 3ab²/2 .

(c) With the direction chosen for dσ, the line integral traverses the perimeter of the rectangle in the counterclockwise direction. Its four line segments make the following respective contributions:

(on y = 0)  ∫₀ᵃ 2x² dx ;  (on x = a)  ∫₀ᵇ ay dy ;
(on y = b)  ∫ₐ⁰ (2x² − b²) dx ;  (on x = 0)  ∫ᵇ⁰ 0 dy .

These integrals evaluate to (2/3)a³ + (1/2)ab² − ((2/3)a³ − ab²) + 0 = (3/2)ab², as required by Stokes' theorem.
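The agreement asserted by Stokes' theorem can be confirmed numerically for particular a and b; the Python sketch below evaluates the four segment integrals by the midpoint rule (the values of a, b, and the grid size are illustrative choices only).

```python
# Numerical check of Exercise 7.4.2: the counterclockwise line integral of
# A = (2x^2 - y^2, xy) around the rectangle equals 3 a b^2 / 2.
a, b, n = 2.0, 3.0, 20000

def Ax(x, y): return 2*x*x - y*y
def Ay(x, y): return x*y

def line_int(f, t0, t1):          # midpoint rule along one straight segment
    h = (t1 - t0)/n
    return h*sum(f(t0 + (k + 0.5)*h) for k in range(n))

total = (line_int(lambda x: Ax(x, 0.0), 0.0, a)      # bottom, left to right
         + line_int(lambda y: Ay(a, y), 0.0, b)      # right side, upward
         + line_int(lambda x: Ax(x, b), a, 0.0)      # top, right to left
         + line_int(lambda y: Ay(0.0, y), b, 0.0))   # left side, downward
print(abs(total - 1.5*a*b*b) < 1e-3)  # -> True
```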

7.4.3.

Evaluate ∫S (∇ × V) · dσ, where V = 2xy êx + (x² + 2xy²) êy − x²z² êz, and S is the portion of the surface defined by z = 4 − x² − y² for which z ≥ 0. Identify the direction of dσ for which your answer is correct.

Solution: The integral will have the same value if S is reduced to the disk in the plane z = 0 bounded by the circle x² + y² = 4. Let's take the direction of positive dσ to be toward positive z. We then need to calculate

I = ∫ (∇ × V)z dA ,

where the integral is over the area within x² + y² = 4. Since

(∇ × V)z = ∂Vy/∂x − ∂Vx/∂y = 2y² ,

we have

I = ∫ 2y² dA = 8π .
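The value 8π can be cross-checked by direct quadrature in polar coordinates; a minimal Python sketch (grid sizes are arbitrary choices):

```python
import math

# Numerical check of Exercise 7.4.3: integral of 2 y^2 over the disk
# x^2 + y^2 <= 4, done in polar coordinates with the midpoint rule.
nr, nt = 400, 400
dr, dt = 2.0/nr, 2*math.pi/nt
total = 0.0
for i in range(nr):
    r = (i + 0.5)*dr
    for j in range(nt):
        th = (j + 0.5)*dt
        y = r*math.sin(th)
        total += 2*y*y * r*dr*dt    # dA = r dr dtheta
print(abs(total - 8*math.pi) < 1e-3)  # -> True
```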

7.4.4.

Use Stokes' theorem to show that

∮ u∇v · dr = ∫S (∇u × ∇v) · dσ .

How is the region S defined?

Solution: Use Vector Identity #4, with φ = u and A = ∇v. Then

∇ × (u∇v) = u∇ × (∇v) − ∇v × ∇u .

The first term on the right-hand side vanishes; the second can be written in the form +∇u × ∇v. Using Stokes' theorem, the line integral in the statement of the exercise can be written

∮ u∇v · dr = ∫S (∇ × (u∇v)) · dσ = ∫S (∇u × ∇v) · dσ .

Then S is any area bounded by the closed curve defining the line integral, and dσ is the normal to S in the direction defined by the right-hand rule.

7.4.5. Using the result of Exercise 7.4.4, show that ∮ u∇v · dr = −∮ v∇u · dr.

Solution: Rewrite the relation of Exercise 7.4.4 with u and v interchanged and add these two equations together. The result rearranges to the formula desired here. 7.4.6.

If ∇²ψ = 0 at all points of a volume V, show that

∫_∂V ∇ψ · dσ = 0 ,

where ∂V is the closed surface bounding V .

Solution: Note that ∇²ψ = ∇ · (∇ψ) and apply the divergence theorem to the relation

0 = ∫_V ∇²ψ d³r = ∫_∂V ∇ψ · dσ .


7.4.7.

Evaluate ∫ r² ∇² e^{−r²} d³r, where the integration is over the full 3-D space.

Hint. After using an appropriate integral theorem, work in Cartesian coordinates.

Solution: Because e^{−r²} vanishes strongly at large r, we can use Green's theorem, as given in Eq. (7.54), to obtain

I = ∫ r² ∇² e^{−r²} d³r = ∫ e^{−r²} ∇² r² d³r .

But ∇²r² = 6 and

∫ e^{−r²} d³r = ∫_{−∞}^{∞} e^{−x²} dx ∫_{−∞}^{∞} e^{−y²} dy ∫_{−∞}^{∞} e^{−z²} dz = (π^{1/2})³ .

Therefore, I = 6π^{3/2} .
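The result 6π^{3/2} can be cross-checked by reducing the Green's-theorem form to a one-dimensional radial quadrature; a minimal Python sketch (the cutoff R and grid size are arbitrary choices):

```python
import math

# Check of Exercise 7.4.7: after Green's theorem the integral equals
# 6 * integral of e^{-r^2} over all space, computed radially as
# 4*pi * integral_0^R r^2 e^{-r^2} dr (R = 10 is effectively infinity).
n, R = 20000, 10.0
h = R/n
radial = 0.0
for k in range(n):
    r = (k + 0.5)*h
    radial += r*r*math.exp(-r*r)*h
I = 6*4*math.pi*radial
print(abs(I - 6*math.pi**1.5) < 1e-4)  # -> True
```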

7.5 POTENTIAL THEORY

Exercises

7.5.1.

For each of the following force fields, determine whether it is conservative, and if it is, find the scalar potential φ(x, y, z) such that F = −∇φ.

(a) F = x² êx + y² êy + z² êz

(b) F = y êx + z êy + y êz

(c) F = y²z sinh(2xz) êx + 2y cosh²(xz) êy + xy² sinh(2xz) êz

Solution: Forces (a) and (c) have ∇ × F = 0 and therefore can be described by a scalar potential φ. Compute φ by a path that goes in straight lines from (0, 0, 0) to (x, 0, 0), then to (x, y, 0), and finally to (x, y, z). Write

−φ = ∫₀ˣ Fx(x, 0, 0) dx + ∫₀ʸ Fy(x, y, 0) dy + ∫₀ᶻ Fz(x, y, z) dz .

(a) We have

−φ(x, y, z) = ∫₀ˣ x² dx + ∫₀ʸ y² dy + ∫₀ᶻ z² dz = (x³ + y³ + z³)/3 .

(c) We have

−φ(x, y, z) = ∫₀ˣ 0 dx + ∫₀ʸ 2y cosh²(0) dy + ∫₀ᶻ xy² sinh(2xz) dz

= y² + (y²/2)(cosh(2xz) − 1) = y² cosh²(xz) .

Both these potentials can be shifted by an arbitrary constant amount. They can be checked by comparing −∇φ with F.
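The potential found in part (c) can be checked numerically as suggested; the Python sketch below compares −∇φ, computed by central differences, with F at one sample point.

```python
import math

# Check that phi = -y^2 cosh^2(xz) reproduces force (c) via F = -grad(phi).
h = 1e-6

def phi(x, y, z):
    return -y*y*math.cosh(x*z)**2

def F_exact(x, y, z):
    return (y*y*z*math.sinh(2*x*z),
            2*y*math.cosh(x*z)**2,
            x*y*y*math.sinh(2*x*z))

p = (0.4, 1.3, -0.7)
x, y, z = p
F_num = (-(phi(x+h, y, z) - phi(x-h, y, z))/(2*h),
         -(phi(x, y+h, z) - phi(x, y-h, z))/(2*h),
         -(phi(x, y, z+h) - phi(x, y, z-h))/(2*h))
ok = all(abs(a - b) < 1e-6 for a, b in zip(F_num, F_exact(*p)))
print(ok)  # -> True
```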

7.5.2. For each of the following force fields, determine whether it is solenoidal, and if it is, find a vector potential A(x, y, z) such that F = ∇ × A.

(a) F = y êx + z êy + x êz

(b) F = y êx − x êy + z êz

(c) F = x²y êx − xy² êy + xy êz

Solution: Forces (a) and (c) have ∇ · F = 0 and can be described by a vector potential A. Use Eq. (7.66) to obtain A, for simplicity setting x₀ = y₀ = 0.

(a) A = êy ∫₀ˣ x dx + êz [∫₀ʸ y dy − ∫₀ˣ z dx] = (x²/2) êy + (y²/2 − xz) êz .

Other choices are possible.

(c) A = êy ∫₀ˣ xy dx + êz [∫₀ʸ 0 dy − ∫₀ˣ (−xy²) dx] = (x²y/2) êy + (x²y²/2) êz .

Other choices are possible. The above answers can be checked by computing ∇ × A.

7.5.3.

A sphere of radius a contains charge of uniform density ρ0 (MKS units). Find the magnitude of the electric ﬁeld: (a) At all points r < a within the sphere. (b) At all points r > a outside the sphere.

Solution: (a) To compute the electric ﬁeld at a point inside the sphere at radius r < a, ﬁrst construct a sphere of radius r and note that, as discussed at Eq. (7.70), the charge within the sphere will produce at the sphere a spherically symmetric outward electric ﬁeld that can be calculated assuming all the charge to be at the center of the sphere. The charge within a sphere of radius r is Q = 4πρ0 r3 /3, and it produces a ﬁeld of magnitude Q/4πε0 r2 . As also discussed at the same place, the portion of the charge that is outside radius r produces no electric ﬁeld within the sphere, so the total electric ﬁeld at r (with r < a) is E=

E = Q/(4πε₀r²) r̂ = (ρ₀r/3ε₀) r̂ .

(b) If r > a, all the charge can be regarded as at the origin, so Q = 4πρ₀a³/3, and the electric field is

E = (ρ₀a³/3ε₀r²) r̂ .

7.5.4.

Use your symbolic computing system to compute ∇²(1/r) and ∇²(e^{−r}/r), where r = √(x² + y² + z²). For r ≠ 0, compare the computer output with hand computations.

Solution: In maple,
> with(VectorCalculus): > SetCoordinates(cartesian[x,y,z]):
> r := sqrt(x^2+y^2+z^2):
> simplify(Laplacian(1/r));   0
> simplify(Laplacian(exp(-r)/r));   e^{−√(x²+y²+z²)}/√(x²+y²+z²)

In maple, for the curvilinear-coordinate exercises,
> SetCoordinates(cylindrical[rho,phi,z]):
> unitphi := VectorField([0,1,0]):   (this is êφ)
> Del &x unitphi;   (1/ρ) ez
> SetCoordinates(spherical[r,theta,phi]):
> unittheta := VectorField([0,1,0]):   (this is êθ)
> Del . unittheta;   cos(θ)/(r sin(θ))

In maple, check the first four exercises with code such as

> AA := Array(1..3,1..3,1..3);
> for a from 1 to 3 do for b from 1 to 3 do for c from 1 to 3 do
>    ...   (with sums over i = 1 .. 3, j = 1 .. 3, k = 1 .. 3)
> end do end do end do:

The output will be written as elements of a table named AA. The nonzero elements of the table can be accessed by the command

> ArrayElems(AA);

In mathematica,

U = {{0.8,0.6,0},{-0.6,0.8,0},{0,0,1}};
Do[Do[Do[aa=Sum[Sum[Sum[U[[a,i]]*U[[b,j]]*U[[c,k]]*Signature[{i,j,k}],
   {i,1,3}], {j,1,3}], {k,1,3}];
   Print[a," ",b," ",c," ",aa], {a,1,3}], {b,1,3}], {c,1,3}]

The mathematica output will be printed, element by element. These procedures can be repeated for V.

Chapter 9

GAMMA FUNCTION

9.1 DEFINITION AND PROPERTIES

Exercises

First use formal methods to reduce these exercises to expressions involving gamma functions, and then evaluate these expressions using symbolic computing where it is helpful. Compare your numerical answers for integrals with the results of numerical quadratures produced by symbolic computation.

9.1.1.

Simplify:  (a) Γ(8)/Γ(6) ,  (b) Γ(2/5)/Γ(12/5) .

Solution:

(a) Γ(8) = 7Γ(7) = 7 · 6 Γ(6), so Γ(8)/Γ(6) = 42.

(b) Γ(12/5) = (7/5)Γ(7/5) = (7/5) · (2/5) Γ(2/5), so Γ(2/5)/Γ(12/5) = 25/14.

9.1.2. Evaluate:

(a) ∫₀^∞ x^{3/4} e^{−x} dx ,  (b) ∫₀^∞ x^{1/2} e^{−2x} dx ,  (c) ∫₀^∞ x³ e^{−x²} dx ,

(d) ∫₀¹ (ln 1/x)^{1/3} dx ,  (e) ∫₀^∞ e^{−x⁴} dx .

Solution:

(a) This integral is Γ(7/4) = 0.919063 .

(b) Make the substitution y = 2x. Integral becomes 2^{−3/2} Γ(3/2) = 0.313329 .

(c) Make the substitution y = x². Integral becomes Γ(2)/2 = 1/2 .

(d) Make the substitution e^{−y} = x. Integral becomes Γ(4/3) = 0.892980 .

(e) Make the substitution y = x⁴. Integral becomes Γ(1/4)/4 = 0.906402 .

9.1.3.

Verify the formulas of Eqs. (9.14)—(9.17).

Solution:

Eq. (9.14): Note that for integer n, Γ(n + 1) = 1 · 2 · · · n, so 2ⁿ Γ(n + 1) = 2 · 4 · · · (2n) = (2n)!!

Eq. (9.15): Use the previous result, and write

(2n − 1)!! = (2n)!/(2n)!! = Γ(2n + 1)/(2ⁿ Γ(n + 1)) .

Eq. (9.16): Γ(n + ½) = (n − ½)(n − 3/2) · · · (½) Γ(½) = [(2n − 1)/2][(2n − 3)/2] · · · [1/2] √π = (2n − 1)!! √π/2ⁿ .

Eq. (9.17): C(n + ½, j) = (n + ½)(n − ½) · · · (n + 3/2 − j)/j! . The numerator is the ratio Γ(n + 3/2)/Γ(n + 3/2 − j).

9.1.4.

Show that Γ(s) (for s > 0) may be written as

Γ(s) = 2 ∫₀^∞ e^{−t²} t^{2s−1} dt ,

and as

Γ(s) = ∫₀¹ [ln(1/t)]^{s−1} dt .

Solution: For the first of these formulas, substitute x = t², x^{s−1} = t^{2s−2}, and dx = 2t dt in the Euler formula:

Γ(s) = ∫₀^∞ x^{s−1} e^{−x} dx = ∫₀^∞ t^{2s−2} e^{−t²} (2t) dt .

This easily rearranges to the required result. For the second formula, set t = e^{−y}, ln(1/t) = y, and dt = −e^{−y} dy. After allowing for the fact that t = 0 corresponds to y = ∞ and t = 1 corresponds to y = 0, the Euler formula for Γ(s) is recovered.

9.1.5.

Show that, for n > −1,

∫₀¹ x^n ln x dx = −1/(n + 1)² .

Solution: Make the substitution e−y = x. Then ln x = −y, dx = −e−y dy, and the integral becomes −Γ(2)/(n + 1)2 . 9.1.6.

Show that, for integer n, Γ(½ − n) Γ(½ + n) = (−1)ⁿ π .

Solution: Note that

Γ(n + ½) = (n − ½)(n − 3/2) · · · (½) Γ(½) ,
Γ(½) = (−½)(−3/2) · · · (−n + ½) Γ(−n + ½) .

Divide the two members of the first equation by the corresponding members of the second equation, observing that the right-hand sides of the equations have n factors that differ only in sign. We get

Γ(n + ½)/Γ(½) = (−1)ⁿ Γ(½)/Γ(−n + ½) .

Setting Γ(½) = √π, the above reduces to the required result.

9.1.7.

Show that

dΓ(s)/ds = ∫₀^∞ x^{s−1} e^{−x} ln x dx .

Solution: Write Γ(s) as the integral in Eq. (9.4), and move the derivative within the integral, noting that

d(x^s)/ds = x^s ln x .

9.2 DIGAMMA AND POLYGAMMA FUNCTIONS

Exercises

First use formal methods to reduce these exercises to expressions involving gamma-related functions, and then evaluate these expressions using symbolic computing where it is helpful. Compare your numerical answers for integrals with the results of numerical quadratures produced by symbolic computation.

9.2.1.

Using di- and polygamma functions, sum the series

(a) Σ_{n=1}^∞ 1/(n(n + 1)) ,  (b) Σ_{n=2}^∞ 1/(n² − 1) .

Solution: (a) Recognize that

ψ(2) = −γ + Σ_{m=1}^∞ 1/(m(m + 1)) = −γ + 1 .

The sum therefore has value 1.

(b) Recognize that

ψ(3) = −γ + Σ_{m=1}^∞ 2/(m(m + 2)) = −γ + 2 Σ_{m=2}^∞ 1/((m − 1)(m + 1)) = −γ + 3/2 ,

so the sum has value 3/4.

9.2.2. An expansion of ln Γ(s + 1) that apparently differs from Eq. (9.24) is

ln Γ(s + 1) = −ln(1 + s) + s(1 − γ) + Σ_{n=2}^∞ (−1)ⁿ [ζ(n) − 1] sⁿ/n .

(a) Show that this expansion agrees with Eq. (9.24) for |s| < 1.

(b) What is the range of convergence of this new expansion?

Solution: (a) Add the expansion of Eq. (9.24) to that of ln(1 + s). We get

ln Γ(s + 1) + ln(1 + s) = −γs + Σ_{n=2}^∞ (−1)ⁿ ζ(n) sⁿ/n − Σ_{n=1}^∞ (−1)ⁿ sⁿ/n .

Combine the two sums, noting that the second sum has a term +s (for n = 1) that does not have a corresponding term in the first sum. We get the required result.

(b) To use the ratio test to establish the convergence range, note that for large n, ζ(n) − 1 ∼ 2^{−n}. The ratio test indicates convergence if |s|/2 < 1, i.e., for −2 < s < 2. This result is easily understood if we note that ln Γ(s + 1) + ln(1 + s) = ln Γ(s + 2), which is singular at s = −2.

9.2.3.

Show that

Σ_{n=1}^∞ 1/((n + a)(n + b)) = [ψ(1 + b) − ψ(1 + a)]/(b − a) ,

where a ≠ b, and neither a nor b is a negative integer. It is of some interest to compare this summation with the corresponding integral,

∫₁^∞ dx/((x + a)(x + b)) = [ln(1 + b) − ln(1 + a)]/(b − a) .

Solution: Write

1/((x + a)(x + b)) = [1/(x + a) − 1/(x + b)]/(b − a) .

Integrate and take a limit as follows:

lim_{R→∞} [∫₁^R dx/(x + a) − ∫₁^R dx/(x + b)] = lim_{R→∞} [ln(R + a) − ln(1 + a) − ln(R + b) + ln(1 + b)] .

The terms containing R cancel in the limit, leaving the required result.

9.2.4.

Derive the difference relation for the polygamma function

ψ^(m)(s + 2) = ψ^(m)(s + 1) + (−1)^m m!/(s + 1)^(m+1) ,  m = 0, 1, 2, . . . .

Solution: Form the difference

ψ(s + 2) − ψ(s + 1) = 1/(s + 1) ,

and differentiate this equation m times.

9.2.5. Verify

(a) ∫₀^∞ e^{−r} ln r dr = −γ ,

(b) ∫₀^∞ r e^{−r} ln r dr = 1 − γ ,

(c) ∫₀^∞ rⁿ e^{−r} ln r dr = (n − 1)! + n ∫₀^∞ r^{n−1} e^{−r} ln r dr ,  n = 1, 2, 3, ...

Hint. These may be verified by integration by parts, or by differentiating the Euler integral formula for Γ(n + 1) with respect to n.

Solution: Define

Iₙ = ∫₀^∞ tⁿ ln t e^{−t} dt .

Differentiating the Euler integral formula for Γ(s) and setting s to an integer n, we have

dΓ(s)/ds |_{s=n} = I_{n−1} ,  so that  ψ(n) = I_{n−1}/(n − 1)! .

(a) Setting n = 1 and using ψ(1) = −γ, we find I₀ = −γ, as required.

(b) Setting n = 2 and using ψ(2) = −γ + 1, we find I₁ = 1 − γ.

(c) For larger n, note that ψ(n + 1) = ψ(n) + 1/n, so

Iₙ/n! = I_{n−1}/(n − 1)! + 1/n .

Multiplying all terms of this equation by n!, we recover the required formula.

9.2.6.

Find the value of s for which Γ(s) is a minimum. Hint. You can do this graphically, or by the method illustrated in Example 9.2.1.

Solution: The required value of s will be such that ψ(s) = 0. To find this s value graphically, execute (in maple)

> plot(Psi(s), s = s1 .. s2)

or (in mathematica)

Plot[PolyGamma[s], {s, s1, s2}]

with different choices of s1 and s2 until the zero has been located to sufficient precision. A typical graph for this process is shown here:

Alternatively, one can use the root-finding capability of the symbolic language to locate the root, by executing one of the following command sequences. In maple,

> solve(Psi(s)=0, s):
> allvalues(%);   RootOf(Ψ(_Z), 1.461632145)

In mathematica,

FindRoot[PolyGamma[s]==0, {s,1}]   s → 1.46163

9.3 STIRLING'S FORMULA

Exercises

9.3.1.

Use Stirling’s formula to estimate 52!, the number of possible rearrangements of cards in a standard deck of playing cards. What can you say about the error if you use Eq. (9.28) for your estimate?

Solution: The number of rearrangements, N, is Γ(53) = 52!. From Stirling's formula, this is

√(2π) 52^{52.5} e^{−52} [1 + 1/(12 · 52) + · · ·] ≈ 8.0529 × 10^{67} [1 + 1/624 + · · ·] .

A direct call to evalf(52!) or N[52!] yields 8.0658 · · · × 10^{67}.

Show that

lim_{x→∞} x^{b−a} Γ(x + a + 1)/Γ(x + b + 1) = 1 .

Solution: Take logarithms of all terms; verify that at large x the logs add to zero. Initially, using Stirling's formula, we have

(b − a) ln x + (x + a + ½) ln(x + a) − (x + a) − (x + b + ½) ln(x + b) + (x + b) .

Write ln(x + a) as ln x + ln(1 + a/x) and ln(x + b) as ln x + ln(1 + b/x); then expand the quantities of form ln(1 + t) as t + O(t²). All terms not involving negative powers of x then cancel.

9.3.3.

Show that

lim_{n→∞} [(2n − 1)!!/(2n)!!] n^{1/2} = π^{−1/2} .

Solution: Write the double factorials as gamma functions and then take their logarithms using Stirling's formula. We have

(2n − 1)!! = 2ⁿ Γ(n + ½)/√π ;  ln Γ(n + ½) ≈ ½ ln 2π + n ln(n + ½) − (n + ½) ,

(2n)!! = 2ⁿ Γ(n + 1) ;  ln Γ(n + 1) ≈ ½ ln 2π + (n + ½) ln n − n .

Next write ln(n + ½) = ln n + (1/2n) + · · · and combine the logarithms, also taking ln n^{1/2} = ½ ln n. These logs all add to zero, the two factors 2ⁿ cancel, and all that is left is 1/√π.

9.3.4.

9.3.4. Find

lim_{n→∞} Γ(n + 3/2)/(√n Γ(n + 1)) .

Solution: Use Stirling's formula. Initially, we have

(n + 1) ln(n + ½) − (n + ½) − ½ ln n − (n + ½) ln n + n .

Next write (n + 1) ln(n + ½) = (n + 1)[ln n + ln(1 + 1/2n)] and expand the logarithm. Everything cancels in the limit of large n, so the logarithm of the limit is zero; the limit is 1.

9.4 BETA FUNCTION

Exercises

9.4.1.

Verify the following beta function identities:

(a) B(a, b) = B(a + 1, b) + B(a, b + 1),

(b) B(a, b) = [(a + b)/b] B(a, b + 1),

(c) B(a, b) = [(b − 1)/a] B(a + 1, b − 1),

(d) B(a, b) B(a + b, c) = B(b, c) B(a, b + c).

Solution:

(a) B(a + 1, b) + B(a, b + 1) = Γ(a + 1)Γ(b)/Γ(a + b + 1) + Γ(a)Γ(b + 1)/Γ(a + b + 1)
= {[aΓ(a)]Γ(b) + Γ(a)[bΓ(b)]}/[(a + b)Γ(a + b)] .

This simplifies to the required result.

(b) B(a, b + 1) = Γ(a)Γ(b + 1)/Γ(a + b + 1) = Γ(a)[bΓ(b)]/[(a + b)Γ(a + b)] = [b/(a + b)] B(a, b) .

(c) B(a + 1, b − 1) = Γ(a + 1)Γ(b − 1)/Γ(a + b) = [aΓ(a)][(b − 1)^{−1}Γ(b)]/Γ(a + b) = [a/(b − 1)] B(a, b) .

(d) B(a, b)B(a + b, c) = [Γ(a)Γ(b)/Γ(a + b)] [Γ(a + b)Γ(c)/Γ(a + b + c)] ,

B(b, c)B(a, b + c) = [Γ(b)Γ(c)/Γ(b + c)] [Γ(a)Γ(b + c)/Γ(a + b + c)] .

These simplify to the same result.
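All four identities can be spot-checked numerically with the standard library's gamma function; the sample values of a, b, c below are arbitrary.

```python
import math

# Spot-check of beta-function identities (a)-(d) at sample arguments.
def B(p, q):
    return math.gamma(p)*math.gamma(q)/math.gamma(p + q)

a, b, c = 1.7, 2.4, 0.9
ok = (abs(B(a, b) - (B(a+1, b) + B(a, b+1))) < 1e-12 and
      abs(B(a, b) - (a + b)/b * B(a, b+1)) < 1e-12 and
      abs(B(a, b) - (b - 1)/a * B(a+1, b-1)) < 1e-12 and
      abs(B(a, b)*B(a+b, c) - B(b, c)*B(a, b+c)) < 1e-12)
print(ok)  # -> True
```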


9.4.2.

Show that

∫₀¹ (1 − x⁴)^{−1/2} dx = 4[Γ(5/4)]²/(2π)^{1/2} = 1.311028777 .

Solution: Set x⁴ = t and 4x³ dx = dt, equivalent to dx = (1/4)t^{−3/4} dt. The integral then takes the form

I = (1/4) ∫₀¹ t^{−3/4} (1 − t)^{−1/2} dt .

This is a case of Eq. (9.37) with p = −3/4 and q = −1/2, so

I = B(1/4, 1/2)/4 = Γ(1/4)Γ(1/2)/(4Γ(3/4)) .

To reach the form given as the answer, apply the reflection formula to replace Γ(3/4) by π√2/Γ(1/4) and set Γ(1/2) = √π, leading to

I = [Γ(1/4)]² Γ(1/2)/(4π√2) = (Γ(1/4)/4)² · 4/(2π)^{1/2} = 4[Γ(5/4)]²/(2π)^{1/2} .

Evaluate using symbolic computing; use one of

evalf(4*GAMMA(5/4)^2/sqrt(2*Pi));   1.31102877

N[4*Gamma[5/4]^2/Sqrt[2*Pi]]   1.31103

9.4.3.

Evaluate the following integrals:

(a) ∫₀¹ dx/√(1 − x²) ,  (b) ∫₀^∞ y² dy/(1 + y)⁶ ,

(c) ∫₀^{π/2} dθ/√(sin θ) ,  (d) ∫₀^{π/2} (cos x)^{5/2} dx .

Solution:

(a) Using Eq. (9.38), this integral is (1/2)B(1/2, 1/2) = Γ(1/2)Γ(1/2)/(2Γ(1)) = π/2 .

(b) Using Eq. (9.39), this integral is B(3, 3) = Γ(3)²/Γ(6) = 2!2!/5! = 1/30 .

(c) Using Eq. (9.35), this integral is (1/2)B(1/4, 1/2) = √π Γ(1/4)/(2Γ(3/4)) = 2.62206 .

(d) Using Eq. (9.35), this integral is (1/2)B(7/4, 1/2) = √π Γ(7/4)/(2Γ(9/4)) = 0.718884 .
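Parts (b) and (d) can be cross-checked by direct numerical quadrature, as the chapter preamble suggests; in this Python sketch part (b) is first mapped to a finite range by the substitution u = y/(1 + y), under which the integrand becomes u²(1 − u)².

```python
import math

# Midpoint-rule cross-check of parts (b) and (d) against the beta values.
n = 20000

def midpoint(f, a, b):
    h = (b - a)/n
    return h*sum(f(a + (k + 0.5)*h) for k in range(n))

ib = midpoint(lambda u: u*u*(1 - u)**2, 0.0, 1.0)          # part (b), transformed
idd = midpoint(lambda x: math.cos(x)**2.5, 0.0, math.pi/2)  # part (d)
print(abs(ib - 1/30) < 1e-8, abs(idd - 0.718884) < 1e-4)  # -> True True
```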

9.4.4.

Evaluate ∫₀⁵ x^{−1/3} (5 − x)^{10/3} dx .

Solution: Substitute x = 5y, reaching

5⁴ ∫₀¹ y^{−1/3} (1 − y)^{10/3} dy .

This can be recognized as 625 B(2/3, 13/3) = 326.5585 .

9.4.5. (a) Show that

∫_{−1}^{1} (1 − x²)^{1/2} x^{2n} dx = π/2 for n = 0, and = (2n − 1)!!π/(2n + 2)!! for n = 1, 2, 3, . . . .

(b) Show that

∫_{−1}^{1} (1 − x²)^{−1/2} x^{2n} dx = π for n = 0, and = (2n − 1)!!π/(2n)!! for n = 1, 2, 3, . . . .

Solution: These integrals are of the type given in Eq. (9.38), but with range (−1, 1) instead of (0, 1). Since the integrands are even functions of x, a formula equivalent to Eq. (9.38) is

B(p + ½, q + 1) = ∫_{−1}^{1} x^{2p} (1 − x²)^q dx .

Using the above and noting that Γ(n + ½) = (2n − 1)!!√π/2ⁿ,

(a) ∫_{−1}^{1} (1 − x²)^{1/2} x^{2n} dx = B(n + ½, 3/2) = Γ(n + ½)(√π/2)/(n + 1)! = (2n − 1)!!π/(2^{n+1}(n + 1)!) = (2n − 1)!!π/(2n + 2)!! .

(b) ∫_{−1}^{1} (1 − x²)^{−1/2} x^{2n} dx = B(n + ½, ½) = Γ(n + ½)√π/n! = (2n − 1)!!π/(2ⁿ n!) = (2n − 1)!!π/(2n)!! .

The special cases shown for n = 0 are included in the general case because (−1)!! is defined to be unity.

Show that, for integer p and q,

(a) ∫₀¹ x^{2p+1} (1 − x²)^{−1/2} dx = (2p)!!/(2p + 1)!! ,

(b) ∫₀¹ x^{2p} (1 − x²)^q dx = (2p − 1)!!(2q)!!/(2p + 2q + 1)!! .

Solution: These integrals are cases of Eq. (9.38).

(a) This integral is

(1/2)B(p + 1, 1/2) = Γ(p + 1)Γ(1/2)/(2Γ(p + 3/2)) = p!√π/(2√π (2p + 1)!! 2^{−p−1}) ,

which simplifies to the required result.

(b) This integral is

(1/2)B(p + 1/2, q + 1) = Γ(p + 1/2)Γ(q + 1)/(2Γ(p + q + 3/2)) = q!√π (2p − 1)!! 2^{−p}/(2√π (2p + 2q + 1)!! 2^{−p−q−1}) ,

which simplifies to the required result.

9.4.7. Show that

∫₀^∞ (sinh^α x / cosh^β x) dx = (1/2) B((α + 1)/2, (β − α)/2) ,  −1 < α < β .

Hint. Let sinh² x = u.

Solution: Make the substitutions sinh x = u^{1/2}, cosh x = √(u + 1), and then set 2 sinh x cosh x dx = du. This integral is a case of Eq. (9.39):

∫₀^∞ (sinh^α x / cosh^β x) dx = ∫₀^∞ (sinh^{α−1} x / cosh^{β+1} x) cosh x sinh x dx = (1/2) ∫₀^∞ u^{(α−1)/2} du/(1 + u)^{(β+1)/2} .

It evaluates to (1/2) B((α + 1)/2, (β − α)/2).

9.4.8.

From

π/2

0 π/2

lim ∫ n→∞

sin2n θ dθ =1,

sin2n+1 θ dθ

0

derive the Wallis formula for π: 2·2 4·4 6·6 π = · · · ··· . 2 1·3 3·5 5·7

Solution: The formulas needed here, both cases of Eq. (9.35), are ∫ π/2 1 (2n − 1)!! π sin2n θ dθ = B(n + 12 , 21 ) = , 2 2(2n)!! 0 ∫

π/2

sin2n+1 θ dθ = 0

(2n)!! 1 B(n + 1, 12 ) = . 2 (2n + 1)!!

Dividing the ﬁrst of these by the second, and taking the limit, we get lim

n→∞

[1 · 3 · · · (2n − 1)π][1 · 3 · · · (2n + 1)] = 1, 2[2 · 4 · · · 2n]2

which rearranges into the Wallis formula. [
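As an illustration (a quick numerical sketch in Python, outside the text's symbolic tools), the partial Wallis products do converge to $\pi/2$, though slowly:

```python
import math

def wallis_partial(n):
    """Product over k = 1..n of (2k)(2k) / ((2k-1)(2k+1))."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (2.0 * k) ** 2 / ((2.0 * k - 1.0) * (2.0 * k + 1.0))
    return prod

for n in (10, 100, 10000):
    print(n, wallis_partial(n))   # approaches pi/2; error shrinks roughly like 1/n

assert abs(wallis_partial(10000) - math.pi / 2) < 1e-4
```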

9.4.9. Show that
$$\lim_{n\to\infty}\left[\, n^x\, B(x, n)\,\right] = \Gamma(x)\,.$$

Solution: Writing the beta function in terms of gamma functions, the limit to be proved becomes
$$\lim_{n\to\infty} L(n) = \lim_{n\to\infty} \frac{n^x\,\Gamma(n)}{\Gamma(n + x)} = 1\,.$$
Take the logarithm of $L(n)$ and show it approaches zero in the limit of large $n$. Using Stirling's formula and dropping all terms that vanish for large $n$,
$$\ln L(n) = x \ln n + (n - \tfrac12)\ln(n - 1) - (n - 1)
- (n + x - \tfrac12)\ln(n + x - 1) + (n + x - 1)\,.$$
Write
$$\ln(n + x - 1) = \ln(n - 1) + \ln\!\left(1 + \frac{x}{n - 1}\right)
= \ln(n - 1) + \frac{x}{n - 1} + \cdots$$
and cancel where possible. We then reach (again dropping terms that individually vanish at large $n$)
$$\ln L(n) = x\left[\,\ln n + 1 - \ln(n - 1) - \frac{n - \tfrac12}{n - 1}\,\right].$$
These terms combine to yield zero in the limit of large $n$.
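The limit can also be watched numerically. The sketch below (Python, not part of the original solution) evaluates $L(n) = n^x\,\Gamma(n)/\Gamma(n+x)$ through log-gammas so that large $n$ does not overflow; $x = 0.7$ is an arbitrary test value:

```python
import math

def L(n, x):
    # n^x * Gamma(n) / Gamma(n + x), computed with log-gammas to avoid overflow
    return math.exp(x * math.log(n) + math.lgamma(n) - math.lgamma(n + x))

x = 0.7
for n in (10, 100, 1000, 100000):
    print(n, L(n, x))   # approaches 1 as n grows

assert abs(L(100000, x) - 1.0) < 1e-4
```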

9.4.10. Show, by means of the beta function, that
$$\int_t^z \frac{dx}{(z - x)^{1-\alpha}\,(x - t)^{\alpha}} = \frac{\pi}{\sin \pi\alpha}\,,
\qquad 0 < \alpha < 1\,.$$

Solution: Change the variable to $u = (x - t)/(z - t)$. Then $x - t = u(z - t)$ and $z - x = (z - t)(1 - u)$. Also, $dx = (z - t)\,du$ and the integration range in $u$ is from 0 to 1. With these changes, the integral becomes
$$\int_0^1 \frac{du}{u^{\alpha}\,(1 - u)^{1-\alpha}} = B(\alpha, 1 - \alpha) = \Gamma(\alpha)\,\Gamma(1 - \alpha)\,.$$
This product can now be reduced to the stated answer using the reflection formula, Eq. (9.11).

9.5 ERROR FUNCTION

Exercises

9.5.1. Using symbolic computation, compute $\operatorname{erf}(1)$ and $\pi^{-1/2}\gamma(1/2, 1)$ and show that they are equal.

Solution: In both symbolic systems, we need to compute $\gamma(s, x)$ as $\Gamma(s) - \Gamma(s, x)$. Thus, write one of

> erf(1.), evalf(Pi^(-1/2)*(GAMMA(1/2)-GAMMA(1/2,1)));
0.8427007929, 0.8427007931

{ Erf[1.], Pi^(-1/2)*(Gamma[1/2] - Gamma[1/2, 1.]) }
{0.842701, 0.842701}
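The same comparison can be made without a symbolic system. Substituting $t = u^2$ removes the integrable singularity in $\gamma(\tfrac12, 1) = \int_0^1 t^{-1/2}e^{-t}\,dt$, after which plain Simpson quadrature suffices; Python's `math.erf` supplies the reference value (a standalone numerical sketch, not part of the text's workflow):

```python
import math

def lower_gamma_half_one(steps=10000):
    # gamma(1/2, 1) = ∫_0^1 t^(-1/2) e^(-t) dt; with t = u^2 this becomes
    # 2 ∫_0^1 e^(-u^2) du, which is smooth. Composite Simpson's rule (steps even):
    h = 1.0 / steps
    total = math.exp(0.0) + math.exp(-1.0)
    for i in range(1, steps):
        u = i * h
        total += (4 if i % 2 else 2) * math.exp(-u * u)
    return 2.0 * (h / 3.0) * total

val = lower_gamma_half_one() / math.sqrt(math.pi)
assert abs(val - math.erf(1.0)) < 1e-9
print(val)   # approximately 0.84270079
```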

9.5.2. Integrals of the form $G_p(\alpha) = \displaystyle\int_0^\infty t^p\, e^{-\alpha t^2}\, dt$ arise in many different contexts.

(a) Verify (and then check by symbolic computation) the formulas
$$G_0(\alpha) = \frac12 \sqrt{\frac{\pi}{\alpha}}\,, \qquad G_1(\alpha) = \frac{1}{2\alpha}\,.$$

(b) By differentiating the formulas of part (a) with respect to $\alpha$, obtain the more general formulas
$$G_{2p}(\alpha) = \frac{(2p-1)!!\,\sqrt{\pi}}{2^{p+1}\,\alpha^{p+1/2}}\,, \qquad
G_{2p+1}(\alpha) = \frac{p!}{2\,\alpha^{p+1}}\,.$$

(c) Using your symbolic computing system, find (for general $p$) a formula for $G_p(\alpha)$. Show that the general formula reduces to the results of part (b) when $p$ is a nonnegative integer.

Comment. Does your symbolic system take note of the fact that $G_p(\alpha)$ is only defined for specific ranges of $p$ and $\alpha$?

Solution: (a) For $G_0$, change the variable to $u = \alpha t^2$ and identify the $u$ integral as proportional to $\Gamma(\tfrac12)$. For $G_1$, that substitution produces an integral proportional to $\Gamma(1)$. Using maple, best results are obtained if $\alpha$ is first assumed to be positive. Thus,

> assume(a>0):
> G0 := int(exp(-a*t^2), t=0 .. infinity):
> G1 := int(t*exp(-a*t^2), t=0 .. infinity):

In mathematica, no prior assumption is necessary but the output indicates the existence of a condition:

G0 = Integrate[E^(-a*t^2), {t, 0, Infinity}]

$$\text{ConditionalExpression}\!\left[\frac{\sqrt{\pi}}{2\sqrt{a}},\ \operatorname{Re}[a] > 0\right]$$

G1 = Integrate[t*E^(-a*t^2), {t, 0, Infinity}]

$$\text{ConditionalExpression}\!\left[\frac{1}{2a},\ \operatorname{Re}[a] > 0\right]$$

(b) Each application of $-(d/d\alpha)$ raises the index $p$ by 2, so $G_{2p}$ is obtained by differentiating $G_0$, and $G_{2p+1}$ results from the differentiation of $G_1$.

(c) In maple,

> assume(a > 0); assume(p > -1);
> G2p := int(t^p*exp(-a*t^2), t = 0 .. infinity);

$$\frac{\Gamma\!\left(\tfrac12 p + \tfrac12\right)}{2\, a^{\frac12 p}\sqrt{a}}$$

In mathematica,

G2p = Integrate[ t^p*E^(-a*t^2), {t, 0, Infinity} ]

$$\text{ConditionalExpression}\!\left[\tfrac12\, a^{-\frac12 - \frac{p}{2}}\,\operatorname{Gamma}\!\left[\tfrac{1+p}{2}\right],\ \operatorname{Re}[p] > -1\ \&\&\ \operatorname{Re}[a] > 0\right]$$

These formulas apply for $p$ either even or odd, and can be reconciled with the results of part (b) by writing the gamma function in terms of factorials or double factorials.

9.5.3. As a function of $\alpha$, find the values of $x$ for which $e^{-\alpha x^2}$ has half of its maximum value (the maximum is reached at $x = 0$).

Solution: At $x = 0$ the value of $e^{-\alpha x^2}$ is 1, so we need the values of $x$ for which $e^{-\alpha x^2} = 1/2$. Taking logarithms of both sides of this equation,
$$-\alpha x^2 = \ln(1/2)\,, \qquad \text{or} \qquad x = \pm\sqrt{\frac{\ln 2}{\alpha}}\,.$$

9.6 EXPONENTIAL INTEGRAL

Exercises

9.6.1. Show that $E_1(z)$ may be written as
$$E_1(z) = e^{-z} \int_0^\infty \frac{e^{-zt}}{1 + t}\, dt\,.$$

Solution: In Eq. (9.51) replace $t$ by $t + 1$, and therefore change the lower integration limit to zero. A factor $e^{-z}$ can then be moved outside the integral, giving the result shown.

9.6.2. A generalized exponential integral $E_n(x)$ was defined in Eq. (9.53). Show that $E_n(x)$ satisfies the recurrence relation
$$E_{n+1}(x) = \frac{1}{n}\, e^{-x} - \frac{x}{n}\, E_n(x)\,, \qquad n = 1, 2, 3, \ldots\,.$$

Solution: Integrate by parts the integral representing $E_{n+1}(x)$, differentiating $e^{-xt}$ and integrating $1/t^{n+1}$:
$$E_{n+1}(x) = \int_1^\infty \frac{e^{-xt}}{t^{n+1}}\, dt
= \left.\frac{e^{-xt}}{-n\,t^n}\right|_1^\infty - \frac{x}{n}\int_1^\infty \frac{e^{-xt}}{t^n}\, dt\,.$$
The integrated term simplifies to $e^{-x}/n$, and the final integral is $E_n(x)$, so the proposed equation is confirmed.

9.6.3. With $E_n(x)$ as defined in Eq. (9.53), show that for $n > 1$, $E_n(0) = 1/(n - 1)$.

Solution: Setting $x = 0$, the expression for $E_n(0)$ is $\displaystyle\int_1^\infty t^{-n}\, dt$, which is an elementary integral giving the desired result.

9.6.4. Verify by symbolic computing that the exponential integral has the expansion
$$\int_x^\infty \frac{e^{-t}}{t}\, dt = -\gamma - \ln x - \sum_{n=1}^\infty \frac{(-1)^n\, x^n}{n \cdot n!}\,,$$
where $\gamma$ is the Euler-Mascheroni constant (gamma in maple, EulerGamma in mathematica). Proceed by comparing the above formula with Ei(1,x) or ExpIntegralE[1,x] for $x$ = 0.1, 0.01, and 0.001.

Solution: In maple, make a procedure:

> FF := proc(x);
>    - gamma - ln(x) - sum((-1)^n*x^n/n/n!, n=1 .. infinity)
>    end proc:

Check that this works:

> Ei(1,x)-FF(x);
0

In mathematica,

FF = -EulerGamma-Log[x]-Sum[(-1)^n*x^n/n/n!, {n,1,Infinity}];

This command identifies the result as Gamma[0,x]. A check with Simplify produces nothing useful; use FullSimplify:

FullSimplify[ExpIntegralE[1,x] - FF]
0

To evaluate the test values, use one of

> evalf(FF(0.1)), evalf(FF(0.01)), evalf(FF(0.001));
1.822923958, 4.037929577, 6.331539364

{FF/.x -> 0.1, FF/.x -> 0.01, FF/.x -> 0.001}
{1.82292, 4.03793, 6.33154}

These results agree with those obtained from Ei(1,x) or ExpIntegralE[1,x] for the three x values.
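The three test values can also be reproduced directly from the series in plain Python (a numerical sketch independent of Maple/Mathematica; the value of $\gamma$ is hard-coded since the standard library does not provide it):

```python
import math

EULER_GAMMA = 0.5772156649015329

def FF(x, terms=60):
    # -gamma - ln x - sum_{n>=1} (-1)^n x^n / (n * n!)
    s = 0.0
    term_fact = 1.0
    for n in range(1, terms + 1):
        term_fact *= x / n          # running value of x^n / n!
        s += (-1) ** n * term_fact / n
    return -EULER_GAMMA - math.log(x) - s

for x, expected in [(0.1, 1.822923958), (0.01, 4.037929577), (0.001, 6.331539364)]:
    assert abs(FF(x) - expected) < 1e-6
print("series reproduces the tabulated values")
```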

Chapter 10

ORDINARY DIFFERENTIAL EQUATIONS

10.1 INTRODUCTION

Exercises

10.1.1. The ODE $y'' - y = 0$ has the general solution $y(t) = Ae^t + Be^{-t}$. Find a solution such that $y(0) = 0$ and $y(\ln 2) = 3/2$.

Solution: At $t = 0$, we have $A + B = 0$; at $t = \ln 2$, we have $2A + \tfrac12 B = 3/2$. The values of $A$ and $B$ that solve these equations are $A = 1$, $B = -1$, so $y(t) = e^t - e^{-t}$.

10.1.2. An object falling under the influence of gravity (and no air resistance or other frictional force) moves subject to the ODE $y''(t) = -g$, where $g$ is the (constant) acceleration of gravity, and $y(t)$ is the vertical position of the object at time $t$. The general solution to this ODE is $y = C_1 + C_2 t - gt^2/2$. Find solutions giving $y(t)$ if

(a) At $t = 0$ the object is at rest at $y = 0$,

(b) At $t = t_0$ the object is at $y = y_0$ moving upward at velocity $y' = v_0$.

Solution: (a) $C_1 = C_2 = 0$, so $y = -gt^2/2$.

(b) A good way to solve this problem is to replace $t$ by $t - t_0$ in the solution, so it reads $y(t) = C_1 + C_2(t - t_0) - g(t - t_0)^2/2$; this amounts to setting the zero of $t$ to $t_0$. In this form, $C_1 = y_0$ and $C_2 = v_0$, so $y = y_0 + v_0(t - t_0) - g(t - t_0)^2/2$. If desired we can expand this expression and collect powers of $t$:
$$y(t) = \left(y_0 - v_0 t_0 - g t_0^2/2\right) + \left(v_0 + g t_0\right) t - g t^2/2\,.$$

10.1.3. The ODE $y'' + y = 0$ has general solution $y(t) = A\sin t + B\cos t$. Find a formula for $y(t)$ if $y(0) = 0$ and $y'(0) = v_0$.

Solution: Since $y(0) = B$ and $y'(0) = A$, we have $A = v_0$, $B = 0$, and therefore $y = v_0 \sin t$.

10.1.4. A water droplet (assumed spherical) evaporates at a rate proportional to its surface area. Show that the rate at which its radius shrinks is constant.

Solution:
$$\frac{d(\text{volume})}{dt} = \frac{d(4\pi r^3/3)}{dt} = 4\pi r^2\,\frac{dr}{dt}
= -k(\text{area}) = -k(4\pi r^2)\,.$$
This equation shows $dr/dt$ to be a constant.

10.1.5. The general solution of the ODE $x^2 y''(x) + x y'(x) - y(x) = x^2$ is
$$y(x) = \frac{x^2}{3} + C_1 x + \frac{C_2}{x}\,.$$
Find a solution such that $y(0) = y(1) = 0$.

Solution: $C_1 = -1/3$, $C_2 = 0$, so $y = \dfrac{x^2}{3} - \dfrac{x}{3}$.

10.1.6. Check the "solutions" found for the ODE of Example 10.1.3 and verify that valid solutions exist only if $C_2 = C_1^2$ and $C_3 = 0$.

Solution: Substitute into the ODE. For $y = C_1 x + C_2$, the substitution into $y'^2 + x y' - y = 0$ yields
$$C_1^2 + x C_1 - \left[\,C_1 x + C_2\,\right] = 0\,,$$
which is only satisfied if $C_1^2 - C_2 = 0$. For $y = -x^2/4 + C_3$, substitution yields
$$\frac{x^2}{4} - \frac{x^2}{2} - \left[-\frac{x^2}{4} + C_3\right] = 0\,,$$
which is only satisfied if $C_3 = 0$.

10.2 SYMBOLIC COMPUTING

Exercises

10.2.1. Consider the ODE
$$x^2 y''(x) + 2x y'(x) - 6y(x) = x^2\,.$$
Using symbolic computing for all steps of this exercise,

(a) Find the general solution to the above ODE.

(b) Find a solution such that $y(1) = 0$ and $y'(1) = 1/5$.

(c) Define a function $y(x)$ corresponding to the solution found in part (b).

(d) Differentiate $y(t)$ from part (c) and form the expression $t^2 y'' + 2t y' - 6y$. Show that this expression is equal to $t^2$.

Solution: (a) Execute one of the following pairs of commands:

> ODE := x^2*diff(y(x),x,x)+2*x*diff(y(x),x)-6*y(x) = x^2:
> dsolve(ODE);

$$y(x) = x^2\, C2 + \frac{C1}{x^3} + \frac{1}{25}\, x^2\, (5\ln(x) - 1)$$

ode = x^2*y''[x]+2*x*y'[x]-6*y[x] == x^2;
DSolve[ode,y[x],x]

$$\left\{\left\{ y[x] \to x^2\, C[1] + \frac{C[2]}{x^3}
+ \frac{1}{25}\left(-x^2 + 5 x^2 \operatorname{Log}[x]\right) \right\}\right\}$$

(b) After defining ODE or ode, execute one of

> dsolve({ODE,y(1)=0,D(y)(1)=1/5});

$$y(x) = \frac{1}{25}\, x^2 + \frac{1}{25}\, x^2\, (5 \ln(x) - 1)$$

DSolve[{ode,y[1]==0,y'[1]==1/5},y[x],x]

$$\left\{\left\{ y[x] \to \frac15\, x^2\, \operatorname{Log}[x] \right\}\right\}$$

(c) It is necessary to extract the functional form of the solution from the symbolic output and use the result as the body of a function. In maple,

> AA := dsolve({ODE,y(1)=0,D(y)(1)=1/5});    (as in part (b))
> funct := op(2,AA);

$$funct := \frac{1}{25}\, x^2 + \frac{1}{25}\, x^2\, (5 \ln(x) - 1)$$

> yy := proc(u); eval(funct, x=u) end proc:

In mathematica,

aa = DSolve[{ode,y[1]==0,y'[1]==1/5},y[x],x];    (as in part (b))
funct = aa[[1]][[1]][[2]]

$$\frac15\, x^2\, \operatorname{Log}[x]$$

yy[x_] := Evaluate[%]

(d) Using the function yy of part (c), execute one of the following:

> simplify(t^2*diff(yy(t),t,t)+2*t*diff(yy(t),t)-6*yy(t));

$$t^2$$

Simplify[t^2*yy''[t] + 2*t*yy'[t] - 6*yy[t]]

$$t^2$$

The next two exercises show that the commands for symbolic solution of ODEs are able to generate solutions that involve special functions. Solve each ODE symbolically, and then show (by hand computation) that the solutions are correct. The special functions involved are defined and discussed in Sections 9.5 and 9.6 of this text.

10.2.2. $y'' + 2x y' = x\,.$

Solution: Execute one of the following pairs of commands:

> ODE := diff(y(x),x,x)+2*x*diff(y(x),x) = x:
> dsolve(ODE);

$$y(x) = \frac12\, C1\, \sqrt{\pi}\, \operatorname{erf}(x) + \frac12\, x + C2$$

ode = y''[x]+2*x*y'[x] == x;
DSolve[ode,y[x],x]

$$\left\{\left\{ y[x] \to \frac{x}{2} + C[2] + \frac12\sqrt{\pi}\, C[1]\,\operatorname{Erf}[x] \right\}\right\}$$

10.2.3. $x y'' + (x + 1)\, y' = x\,.$

Solution: Execute one of the following pairs of commands:

> ODE := x*diff(y(x),x,x)+(x+1)*diff(y(x),x) = x:
> dsolve(ODE);

$$y(x) = -\ln(-x) - C1\, \operatorname{Ei}(1, x) + x + C2$$

ode = x*y''[x]+(x+1)*y'[x] == x;
DSolve[ode,y[x],x]

$$\left\{\left\{ y[x] \to x + C[2] + C[1]\,\operatorname{ExpIntegralEi}[-x] - \operatorname{Log}[x] \right\}\right\}$$
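For Exercise 10.2.2 the hand check is quick, since $d\,\operatorname{erf}(x)/dx = (2/\sqrt{\pi})\,e^{-x^2}$. The Python sketch below (a cross-check, not part of the symbolic workflow; the constants are chosen as $C1 = 1$, $C2 = 0$) confirms that the residual of the ODE vanishes:

```python
import math

# y = (sqrt(pi)/2) erf(x) + x/2, so
# y' = e^(-x^2) + 1/2   and   y'' = -2x e^(-x^2)
def residual(x):
    yp = math.exp(-x * x) + 0.5
    ypp = -2.0 * x * math.exp(-x * x)
    return ypp + 2.0 * x * yp - x   # y'' + 2x y' - x, should vanish identically

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(residual(x)) < 1e-12
print("ODE satisfied")
```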

10.3 FIRST-ORDER EQUATIONS

Exercises

10.3.1. Show that an ODE that is separable is also exact when written in separated form. Hint. You need to identify each term as a perfect differential.

Solution: A separable ODE can be written in the form $p(x)\,dx = q(y)\,dy$. Setting $P(x) = \int p(x)\, dx$ and $Q(y) = \int q(y)\, dy$ (indefinite integrals), the ODE is equivalent to $dQ = dP$.

10.3.2. Verify that the general solution of the ODE studied in Example 10.3.4 is correctly given by Eq. (10.34).

Solution: Compute $y' = -2Cx\,e^{-x^2}$ and replace $Ce^{-x^2}$ by $y - \tfrac34$.

10.3.3. Confirm the details of the analysis presented in Example 10.3.5 as follows:

(a) Verify that the substitution $y = vx$ leads to the ODE
$$(v^2 - v)\, dx + (1 + v)\, x\, dv = 0\,.$$

(b) Verify that the general solution to this ODE can be brought to the form given in Eq. (10.37).

Solution: (a) Collect terms in the expression given in the text.

(b) One way to check the solution starts by writing the ODE as
$$\frac{dx}{dy} = \frac{x + y}{2y} = \frac12 + \frac{x}{2y}\,.$$
Then, from Eq. (10.37), make the substitutions
$$\frac{dx}{dy} = 1 + \frac{C}{2y^{1/2}} \qquad \text{and} \qquad \frac{x}{y} = 1 + \frac{C}{y^{1/2}}\,.$$
The result is an identity.

Solve the following ODEs in $y(x)$ subject to the indicated condition (or if no condition is indicated, obtain the general solution), both by hand computation and by symbolic computing. If the hand and symbolic solutions are different in form, do whatever is necessary to make them equivalent. In some cases this may involve solving an algebraic equation of the form $y = f(x)$ to bring the solution to the form $x = F(y)$. If you are asked to obtain a solution subject to a condition, plot your solution for a range of $x$ and $y$ that includes the point for which $y(x)$ was specified.

10.3.4. $x y' - y = 0\,,$ with $y(1) = 4$.

Solution: This ODE is separable:
$$\frac{dy}{y} = \frac{dx}{x}\,, \quad \text{with solution} \quad \ln y = \ln x + C\,, \quad \text{or} \quad y = C'x\,.$$
The particular solution needed here is $y = 4x$. This result is easily confirmed by the output of one of these symbolic commands:

> dsolve({x*diff(y(x),x)-y(x)=0,y(1)=4});
DSolve[{x*y'[x] == y[x], y[1] == 4}, y[x], x]

Here is a graph of the solution:

10.3.5. $y' - y = e^x\,.$

Solution: Make exact with integrating factor $e^{-x}$. The ODE becomes
$$\left(e^{-x} y\right)' = 1\,, \quad \text{with solution} \quad e^{-x} y = x + C\,, \quad \text{or} \quad y = (x + C)\,e^x\,.$$
This result is easily confirmed by the output of one of these symbolic commands:

> dsolve(diff(y(x),x)-y(x)=exp(x));
DSolve[y'[x] - y[x] == E^x, y[x], x]

10.3.6. $(1 + y)\, y' = y\,,$ with $y(1) = 1$.

Solution: This ODE is separable:
$$\left(\frac1y + 1\right) dy = dx\,, \quad \text{with solution} \quad \ln y + y = x + C\,.$$
The particular solution with $y(1) = 1$ is $\ln y + y = x$. An attempt to use symbolic methods for this problem with $x$ the independent variable leads to significant complication because the solution is a troublesome function of $x$. It is better to ask the symbolic systems to find $x$ as a function of $y$. Do that using one of the following commands:

dsolve({diff(x(y),y) = (y+1)/y, x(1) = 1});
DSolve[{x'[y] == (1 + y)/y, x[1] == 1}, x[y], y]

Here is a graph of the solution:

10.3.7. $2x y' + y = 2x^{3/2}\,.$

Solution: Make exact with integrating factor $x^{1/2}$. The ODE becomes
$$\left(x^{1/2} y\right)' = x\,, \quad \text{with solution} \quad x^{1/2} y = x^2/2 + C\,, \quad \text{or} \quad y = \frac{x^{3/2}}{2} + C x^{-1/2}\,.$$
This result is easily confirmed by the output of one of these symbolic commands:

> dsolve(2*x*diff(y(x),x)+y(x)=2*x^(3/2));
DSolve[2*x*y'[x] + y[x] == 2*x^(3/2), y[x], x]

10.3.8. $x y' = y^2\,,$ with $y(1) = 2$.

Solution: This ODE is separable:
$$\frac{dy}{y^2} = \frac{dx}{x}\,, \quad \text{with solution} \quad -\frac1y = \ln x + C\,, \quad \text{or} \quad y = \frac{1}{C' - \ln x}\,.$$
The solution needed here is $y(x) = \dfrac{2}{1 - 2\ln x}$. This result is easily confirmed by the output of one of these symbolic commands:

> dsolve({x*diff(y(x),x)=y(x)^2,y(1)=2});
DSolve[{x*y'[x] == y[x]^2, y[1] == 2}, y[x], x]

Here is a graph of the solution:
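Because it is easy to drop a sign when inverting $-1/y = \ln x + C$, the particular solution $y = 2/(1 - 2\ln x)$ is worth double-checking numerically (a quick Python sketch, outside the text's workflow):

```python
import math

def y(x):
    # candidate particular solution y = 2 / (1 - 2 ln x), valid for x < e^(1/2)
    return 2.0 / (1.0 - 2.0 * math.log(x))

def yprime(x):
    # analytic derivative of the candidate
    return 4.0 / (x * (1.0 - 2.0 * math.log(x)) ** 2)

assert abs(y(1.0) - 2.0) < 1e-14        # initial condition y(1) = 2
for x in (0.5, 0.9, 1.3):
    assert abs(x * yprime(x) - y(x) ** 2) < 1e-10   # x y' = y^2
print("x y' = y^2 satisfied")
```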

10.3.9. $(x + y^2)\, y' - y = 0\,.$

Solution: This ODE becomes linear if $y$ is made the independent variable. The ODE then reads $x' - x/y = y$. Make exact with integrating factor $y^{-1}$. The ODE becomes
$$\left(y^{-1} x\right)' = 1\,, \quad \text{with solution} \quad \frac{x}{y} = y + C\,, \quad \text{or} \quad x = y^2 + Cy\,.$$
The following symbolic code gives $y$ as a function of $x$; the results are consistent with what we have just found.

> dsolve((x+y(x)^2)*diff(y(x),x)=y(x));
DSolve[(x + y[x]^2)*y'[x] - y[x] == 0, y[x], x]

10.3.10. $y' + 2x y^2 = 0\,,$ with $y(2) = 1$.

Solution: This ODE is separable:
$$\frac{dy}{y^2} = -2x\, dx\,, \quad \text{with solution} \quad -\frac1y = -x^2 + C\,, \quad \text{or} \quad y = \frac{1}{x^2 + C'}\,.$$
The particular solution needed here is $y = \dfrac{1}{x^2 - 3}$. This result is easily confirmed by the output of one of these symbolic commands:

> dsolve({diff(y(x),x)+2*x*y(x)^2=0,y(2)=1});
DSolve[{y'[x] + 2*x*y[x]^2 == 0, y[2] == 1}, y[x], x]

Here is a graph of the solution:

10.3.11. $x \ln x\; y' - 2y = \ln x\,.$

Solution: Make exact with integrating factor $1/\ln^2 x$. The ODE becomes
$$\left(\frac{y}{\ln^2 x}\right)' = \frac{1}{x \ln^2 x}\,, \quad \text{with solution} \quad
\frac{y}{\ln^2 x} = -\frac{1}{\ln x} + C\,, \quad \text{or} \quad y = -\ln x + C' \ln^2 x\,.$$
This result can be confirmed by the output of this maple symbolic command:

> dsolve(x*ln(x)*diff(y(x),x)-2*y(x)=ln(x));

The mathematica output of this symbolic command,

DSolve[x*Log[x]*y'[x] - 2*y[x] == Log[x], y[x], x]

gives the correct answer only if all the definite integrals are changed to indefinite integrals and then evaluated.

10.3.12. $x (1 + y)\, y' = -y\,,$ with $y(1/2) = 2$.

Solution: This ODE is separable:
$$\frac{(1 + y)\, dy}{y} = -\frac{dx}{x}\,, \quad \text{with solution} \quad
\ln y + y = -\ln x + C\,, \quad \text{or} \quad x = \frac{C' e^{-y}}{y}\,.$$
The particular solution needed here is $x = \dfrac{e^{2-y}}{y}$.

Symbolic code to obtain $x$ as a function of $y$ can be one of the following:

> dsolve({diff(x(y),y) = -x(y)*(1+1/y),x(2)=1/2});
DSolve[{x'[y] == -x[y]*(1 + 1/y), x[2] == 1/2}, x[y], y]

Here is a graph of the solution:

10.3.13. $(x^2 y + x^3)\, y' + y^3 + x y^2 = 0\,.$

Solution: This ODE is homogeneous in $x$ and $y$; substitute $y = vx$, obtaining thereby $x\, dv + (v^2 + v)\, dx = 0$. The new ODE is separable:
$$\frac{dv}{v^2 + v} = -\frac{dx}{x}\,, \quad \text{with solution} \quad
\ln v - \ln(v + 1) = -\ln x + C\,, \quad \text{or} \quad v = \frac{1}{C' x - 1}\,.$$
This corresponds to $y = vx = \dfrac{x}{C' x - 1}$. This result is easily confirmed by the output of one of these symbolic commands:

> dsolve((x^2*y(x)+x^3)*diff(y(x),x)+y(x)^3+x*y(x)^2=0);
DSolve[(x^2*y[x] + x^3)*y'[x] + y[x]^3 + x*y[x]^2 == 0, y[x], x]

10.3.14. $y' \cos x + y = \cos^2 x\,.$

Solution: Make exact with integrating factor $(1 + \sin x)/\cos x$. The ODE becomes
$$\left[\frac{(1 + \sin x)\, y}{\cos x}\right]' = 1 + \sin x\,, \quad \text{with solution} \quad
\frac{(1 + \sin x)\, y}{\cos x} = x - \cos x + C\,,$$
or
$$y = \frac{\cos x\, (x - \cos x + C)}{1 + \sin x}\,.$$
This result can be confirmed by the output of one of these symbolic commands:

> dsolve(diff(y(x),x)*cos(x)+y(x)=cos(x)^2);
DSolve[y'[x]*Cos[x] + y[x] == Cos[x]^2, y[x], x]

However, to verify the symbolic solutions, we need to note that $\sec x + \tan x = (1 + \sin x)/\cos x$, or that
$$e^{-2 \tanh^{-1}(\tan x/2)} = \frac{\cos x}{1 + \sin x}\,.$$
To prove this last result, use the formulas
$$\tanh^{-1}\varphi = \frac12\left[\ln(1 + \varphi) - \ln(1 - \varphi)\right]
\qquad \text{and} \qquad \tan^2(x/2) = \frac{1 - \cos x}{1 + \cos x}\,,$$
thereby reaching
$$e^{-2\tanh^{-1}(\tan x/2)} = \frac{\bigl(1 - \tan(x/2)\bigr)^2}{1 - \tan^2(x/2)}
= \frac{\displaystyle 1 + \frac{1 - \cos x}{1 + \cos x} - 2\sqrt{\frac{1 - \cos x}{1 + \cos x}}}
{\displaystyle 1 - \frac{1 - \cos x}{1 + \cos x}}
= \frac{1 - \sin x}{\cos x}\,,$$
which is equal to the claimed result (set the two quantities equal and note that the result is an identity).

10.3.15. $y^2\, y' = x\,,$ with $y(2) = 2$.

Solution: This ODE is separable:
$$y^2\, dy = x\, dx\,, \quad \text{with solution} \quad \frac{y^3}{3} = \frac{x^2}{2} + C\,,
\quad \text{or} \quad y = \left(\frac{3x^2}{2} + C'\right)^{1/3}\,.$$
Here we need the particular solution $y = \left(\dfrac{3x^2}{2} + 2\right)^{1/3}$. This result can be confirmed by the output of one of these symbolic commands:

> dsolve({y(x)^2*diff(y(x),x)=x,y(2)=2});
DSolve[{y[x]^2*y'[x] == x, y[2] == 2}, y[x], x]

Here is a graph of the solution:

10.3.16. $y' + x^2 y^2 - y = 0\,.$

Solution: This is a Bernoulli equation; convert to a linear equation as shown in Eq. (10.41). The new variable $u$ is $y^{-1}$; the $u$ ODE is $u' + u = x^2$, which can be made exact using the integrating factor $e^x$:
$$\left(e^x u\right)' = x^2 e^x\,, \quad \text{with solution} \quad e^x u = e^x (x^2 - 2x + 2) + C\,,
\quad \text{so} \quad y = \frac{1}{x^2 - 2x + 2 + C e^{-x}}\,.$$
This result is easily confirmed by the output of one of these symbolic commands:

> dsolve(diff(y(x),x)+x^2*y(x)^2-y(x)=0);
DSolve[y'[x] + x^2*y[x]^2 - y[x] == 0, y[x], x]

Figure 10.2: Electronic circuit for Exercise 10.3.19.

10.3.17. $2x y' + y = -3x\,.$

Solution: Make exact with integrating factor $x^{1/2}$. The ODE becomes
$$\left(x^{1/2} y\right)' = -\frac{3 x^{1/2}}{2}\,, \quad \text{with solution} \quad
x^{1/2} y = -x^{3/2} + C\,, \quad \text{or} \quad y = -x + C x^{-1/2}\,.$$
This result is easily confirmed by the output of one of these symbolic commands:

> dsolve(2*x*diff(y(x),x)+y(x)=-3*x);
DSolve[2*x*y'[x] + y[x] == -3*x, y[x], x]

10.3.18. $\tan^{-1} x\; y' - \dfrac{y}{1 + x^2} = 0\,.$

Solution: This ODE is separable:
$$\frac{dy}{y} = \left(\frac{1}{1 + x^2}\right) \frac{dx}{\tan^{-1} x}\,, \quad \text{with solution} \quad
\ln y = \ln \tan^{-1} x + C\,, \quad \text{or} \quad y = C' \tan^{-1} x\,.$$
This result is easily confirmed by the output of one of these symbolic commands:

> dsolve(arctan(x)*diff(y(x),x) = y(x)/(1+x^2));
DSolve[ArcTan[x]*y'[x] - y[x]/(1 + x^2) == 0, y[x], x]

10.3.19. An electric circuit containing a capacitor $C$, a resistor $R$, and a time-dependent applied voltage $V(t) = V_0 \sin \omega t$ is connected as shown in Fig. 10.2. At time $t = 0$ the switch is closed; prior to that time there is no charge $q$ on the capacitor. The behavior of the circuit is described by the ODE
$$R\,\frac{dq}{dt} + \frac{q}{C} = V(t)\,.$$

(a) Find the charge $q$ on the capacitor as a function of $t$.

(b) The exponentially decaying portion of $q(t)$ is referred to as its transient behavior. Identify (in terms of $R$ and $C$) the time required for the transient to decay to $1/e$ times its maximum value.

Solution: (a) Divide the equation by $R$ and apply the integrating factor $e^{t/RC}$. The equation can then be written
$$\left(q\, e^{t/RC}\right)' = \frac{V_0}{R}\, e^{t/RC} \sin(\omega t)\,.$$
Integrate this equation over the range $(0, t)$ (this yields a solution with $q = 0$ at $t = 0$). We get
$$q\, e^{t/RC} = \frac{V_0\,\omega R C^2}{1 + \omega^2 R^2 C^2}
- \frac{V_0 C\,\left(\omega RC \cos \omega t - \sin \omega t\right)}{1 + \omega^2 R^2 C^2}\, e^{t/RC}\,,$$
or
$$q(t) = \frac{V_0\,\omega R C^2\, e^{-t/RC}}{1 + \omega^2 R^2 C^2}
- \frac{V_0 C\,\left(\omega RC \cos \omega t - \sin \omega t\right)}{1 + \omega^2 R^2 C^2}\,.$$
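Before moving on to part (b), the part (a) result can be verified numerically. The Python sketch below (not part of the original solution; the component values are representative ones, matching those used in Exercise 10.4.8) checks both the initial condition $q(0) = 0$ and the ODE itself:

```python
import math

R, C, V0, w = 1000.0, 10e-6, 100.0, 120 * math.pi
D = 1.0 + (w * R * C) ** 2

def q(t):
    steady = V0 * C / D * (math.sin(w * t) - w * R * C * math.cos(w * t))
    transient = V0 * w * R * C ** 2 / D * math.exp(-t / (R * C))
    return steady + transient

def qprime(t):
    # analytic derivative of q(t)
    steady = V0 * C / D * (w * math.cos(w * t) + w ** 2 * R * C * math.sin(w * t))
    transient = -V0 * w * C / D * math.exp(-t / (R * C))
    return steady + transient

assert abs(q(0.0)) < 1e-12                     # no initial charge
for t in (0.001, 0.01, 0.05):
    lhs = R * qprime(t) + q(t) / C             # R q' + q/C
    assert abs(lhs - V0 * math.sin(w * t)) < 1e-9
print("ODE and initial condition satisfied")
```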

(b) The exponentially decaying term will have reached $1/e$ times its original value when $t/RC = 1$, i.e., when $t = RC$.

10.3.20. The forward acceleration of a rocket is produced by the rearward discharge at high velocity of the mass of its fuel combustion products. When a mass $-dm$ is discharged at a rearward velocity of magnitude $u$ (relative to the rocket), the velocity of the rocket changes from $v$ to $v + dv$ and its mass $m$ changes by an amount $dm$ (obviously $dm$ will be negative). Conservation of momentum (in a space-fixed coordinate system) requires that the momentum discharged, $-(v - u)\, dm$, plus the change in the momentum of the remaining mass of the rocket, $d(mv) = m\, dv + v\, dm$, add to zero. Therefore,
$$-(v - u)\, dm + m\, dv + v\, dm = 0\,, \qquad \text{or} \qquad m\, dv = -u\, dm\,.$$
Calculate the final velocity of a rocket that is initially stationary, assuming that 90% of its original mass is expelled, and that all the discharged mass is expelled at the same velocity, $u = 300$ m/s. Neglect the effects of gravity and air resistance.

Solution: We need to solve $m\, dv = -u\, dm$ subject to $m = m_0$ when $v = 0$. The general solution to this ODE is $v = -u \ln m + C$. The particular solution we require has $C = u \ln m_0$, so $v = u \ln(m_0/m)$. Inserting $m_0/m = 10$ and $u = 300$ m/s, the velocity when 10% of the mass remains is 691 m/s.
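The final arithmetic, $v = u \ln(m_0/m)$, takes one line to confirm (a trivial Python check):

```python
import math

u = 300.0           # exhaust speed, m/s
mass_ratio = 10.0   # m0/m after 90% of the original mass is expelled
v_final = u * math.log(mass_ratio)
print(round(v_final))   # 691 (m/s)
```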

10.4 ODEs WITH CONSTANT COEFFICIENTS

Exercises

10.4.1. (a) Starting from Eq. (10.71), show that if a mechanical or electrical system is adjusted by changing $\omega$ (at fixed $\alpha$), the oscillatory displacement (or charge) $y_p$ has maximum amplitude when $\omega = \alpha$; this value of $\omega$ is usually called the resonance angular frequency. (The resonance frequency is $\omega/2\pi$.) The changing of $\omega$ is the process involved when a radio is tuned to the frequency of an incoming signal.

(b) Find the value of $\alpha$ that maximizes the displacement (or charge) for a fixed value of $\omega$; it is NOT $\omega$! This datum is significant if we need to minimize the maximum magnitude of oscillation of a mechanical structure, such as a bridge, and could be regarded as the resonant angular frequency of the structure.

(c) Show that if $\alpha$ is varied, as in part (b), the value of $\alpha$ that maximizes the oscillation amplitude of $dy_p/dt$ (which for an electric circuit is the current) IS at $\alpha = \omega$. This observation rationalizes the identification of $\omega$ as the resonant angular frequency of an electric circuit, even if it is tuned by changing $\alpha$.

Solution: (a) The amplitude of $y_p$ is proportional to
$$y_0 = \frac{1}{\sqrt{(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2}}\,.$$
To maximize $y_0$, compute
$$\left(\frac{\partial y_0}{\partial \omega}\right)_{\!\alpha}
= -\frac{4\omega\,(\omega^2 - \alpha^2)}{2\left[(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2\right]^{3/2}}\,,$$
which is zero when $\omega^2 - \alpha^2 = 0$, i.e., when $\omega = \alpha$.

(b) Here we require the derivative of $y_0$ with respect to $\alpha$:
$$\left(\frac{\partial y_0}{\partial \alpha}\right)_{\!\omega}
= -\frac{4\alpha\,(\alpha^2 - \omega^2) + 8b^2\alpha}{2\left[(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2\right]^{3/2}}\,,$$
which is zero when $4\alpha\,(\alpha^2 - \omega^2 + 2b^2) = 0$, i.e., when $\alpha^2 = \omega^2 - 2b^2$, equivalent to $\alpha = \sqrt{\omega^2 - 2b^2}$.

(c) The steady-state current $I_p$ is
$$\frac{dy_p}{dt} = \frac{F\alpha \cos(\alpha t - \delta)}{\sqrt{(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2}}\,.$$
Its amplitude is proportional to $\alpha y_0$. To find the maximum of $\alpha y_0$ as $\alpha$ is varied, compute its derivative:
$$\frac{\partial(\alpha y_0)}{\partial \alpha}
= \frac{1}{\sqrt{(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2}}
- \frac{\alpha\left[4\alpha\,(\alpha^2 - \omega^2) + 8b^2\alpha\right]}{2\left[(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2\right]^{3/2}}$$
$$= \frac{(\omega^2 - \alpha^2)^2 + 2\alpha^2(\omega^2 - \alpha^2)}{\left[(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2\right]^{3/2}}
= \frac{(\omega^2 - \alpha^2)(\omega^2 + \alpha^2)}{\left[(\omega^2 - \alpha^2)^2 + 4b^2\alpha^2\right]^{3/2}}\,.$$
The derivative is zero when $\alpha = \omega$.

10.4.2. Radioactive nuclei decay with a probability $k$ that is independent of time, and (in the absence of any process that replenishes their number), the number of nuclei present at the time $t$, $N(t)$, is therefore described by the ODE
$$\frac{dN}{dt} = -kN\,.$$

(a) Solve this ODE subject to the initial condition $N(0) = N_0$.

(b) Show that the time $\tau$ required for half the $N_0$ nuclei to decay (called the half-life of the isotope) is independent of $N_0$ and find the relation between $\tau$ and the decay constant $k$.

Solution: (a) Separate the variables and integrate:
$$\frac{dN}{N} = -k\, dt \quad\longrightarrow\quad \ln N = -kt + C \quad\longrightarrow\quad N = C' e^{-kt}\,.$$
At $t = 0$, $N = N_0$ if $C' = N_0$.

(b) Set $N = N_0/2$ and solve for $t$:
$$\frac12 = e^{-k\tau} \quad\longrightarrow\quad -\ln 2 = -k\tau\,, \quad \text{or} \quad \tau = \frac{\ln 2}{k}\,.$$
This result is independent of $N_0$.

10.4.3. Solution: (a) Using the result of Exercise 10.4.2, write $k_0 = (\ln 2)/\tau_{\mathrm{Ra}}$ and $k = (\ln 2)/\tau_{\mathrm{Rn}}$. Then the ODEs for the two-step decay are
$$\frac{dN_{\mathrm{Ra}}}{dt} = -k_0 N_{\mathrm{Ra}}\,, \qquad
\frac{dN_{\mathrm{Rn}}}{dt} = k_0 N_{\mathrm{Ra}} - k N_{\mathrm{Rn}}\,.$$

(b) $N_{\mathrm{Ra}} = N_0\, e^{-k_0 t}$ (as in Exercise 10.4.2). Then substitute:
$$\frac{dN_{\mathrm{Rn}}}{dt} = -k N_{\mathrm{Rn}} + k_0 N_0\, e^{-k_0 t}\,.$$
The general solution to this ODE is
$$N_{\mathrm{Rn}} = C e^{-kt} + \frac{k_0 N_0}{k - k_0}\, e^{-k_0 t}\,,$$
and the solution with $N_{\mathrm{Rn}}(0) = 0$ is
$$N_{\mathrm{Rn}} = \frac{k_0 N_0}{k - k_0}\left(e^{-k_0 t} - e^{-k t}\right).$$

(c) The maximum in $N_{\mathrm{Rn}}$ occurs when $dN_{\mathrm{Rn}}/dt = 0$, which corresponds to
$$t_{\max} = \frac{\ln k - \ln k_0}{k - k_0}\,.$$
Working in years, $k_0 = \ln 2/1600 = 0.00043$ and $k = \ln 2/(38/365) = 6.66$, so
$$t_{\max} = \frac{\ln 6.66 - \ln 0.00043}{6.66 - 0.00043} = 1.45 \text{ year}\,.$$
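The part (c) numbers can be reproduced directly, and the claim that $t_{\max}$ is a maximum can be checked by evaluating $N_{\mathrm{Rn}}$ on either side of it (a Python sketch using the half-life values exactly as given in this solution):

```python
import math

# half-lives as used in this solution: 1600 yr for Ra, 38/365 yr for Rn
k0 = math.log(2) / 1600.0
k = math.log(2) / (38.0 / 365.0)

def N_Rn(t, N0=1.0):
    return k0 * N0 / (k - k0) * (math.exp(-k0 * t) - math.exp(-k * t))

tmax = (math.log(k) - math.log(k0)) / (k - k0)
print(round(tmax, 2))   # about 1.45 (years)

# t_max is indeed where N_Rn peaks
assert N_Rn(tmax) > N_Rn(tmax - 0.05)
assert N_Rn(tmax) > N_Rn(tmax + 0.05)
```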

10.4.4. Find the general solution of
$$y''(x) + y'(x) - 2y(x) = x\,.$$

Solution: First examine the related homogeneous ODE, which has solutions of the form $e^{mx}$. The possible values of $m$ are the solutions of $m^2 + m - 2 = 0$, namely $m = 1$ and $m = -2$, corresponding to $e^x$ and $e^{-2x}$. We now need a particular solution to the full inhomogeneous ODE. We try a solution consisting of the inhomogeneous term and its derivative(s), i.e., $y = ax + b$, finding $y = -(2x + 1)/4$. Thus the general solution is
$$y(x) = C_1 e^x + C_2 e^{-2x} - \frac{x}{2} - \frac14\,.$$

Figure 10.4: Some electric circuits.

10.4.5. Find the general solution of
$$2y''(x) - 9y'(x) + 4y(x) = e^{2x}\,.$$

Solution: First examine the related homogeneous ODE, which has solutions of the form $e^{mx}$. The possible values of $m$ are the solutions of $2m^2 - 9m + 4 = 0$, namely $m = 4$ and $m = 1/2$, corresponding to $e^{4x}$ and $e^{x/2}$. We now need a particular solution to the full inhomogeneous ODE. We try a solution consisting of the inhomogeneous term and its derivative(s); in this case $y = A e^{2x}$, finding $y = -e^{2x}/6$. Thus the general solution is
$$y(x) = C_1 e^{4x} + C_2 e^{x/2} - \frac{e^{2x}}{6}\,.$$

10.4.6. Find the general solution of
$$2y''(x) - 9y'(x) + 4y(x) = e^{4x} + 2x^2\,.$$

Solution: First examine the related homogeneous ODE, which is the same as that of the ODE in Exercise 10.4.5, with solutions $e^{4x}$ and $e^{x/2}$. We next need a particular solution for each of the two inhomogeneous terms $e^{4x}$ and $2x^2$; we can add these together and append to their sum the general solution of the homogeneous equation. For the inhomogeneous term $e^{4x}$ we try a solution of the form $(ax + b)e^{4x}$ (this form is needed because $e^{4x}$ is a solution of the homogeneous equation). We obtain $y = (7x - 2)e^{4x}/49$. For the inhomogeneous term $2x^2$ we try $ax^2 + bx + c$, finding $y = x^2/2 + 9x/4 + 73/16$. Combining these, and dropping $-2e^{4x}/49$ because it can be absorbed into the $C_1$ term of the overall solution, we get
$$y(x) = C_1 e^{4x} + C_2 e^{x/2} + \frac{x\, e^{4x}}{7} + \frac{x^2}{2} + \frac{9x}{4} + \frac{73}{16}\,.$$
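With two particular solutions being combined, a direct substitution check is worthwhile. The Python sketch below (a cross-check, not part of the original solution) plugs the particular part $x e^{4x}/7 + x^2/2 + 9x/4 + 73/16$ and its analytic derivatives into the ODE:

```python
import math

def y(x):
    return x * math.exp(4 * x) / 7 + x * x / 2 + 9 * x / 4 + 73 / 16

def yp(x):
    # first derivative: (1 + 4x) e^(4x)/7 + x + 9/4
    return (1 + 4 * x) * math.exp(4 * x) / 7 + x + 9 / 4

def ypp(x):
    # second derivative: (8 + 16x) e^(4x)/7 + 1
    return (8 + 16 * x) * math.exp(4 * x) / 7 + 1.0

for x in (-1.0, 0.0, 0.5, 1.5):
    lhs = 2 * ypp(x) - 9 * yp(x) + 4 * y(x)
    rhs = math.exp(4 * x) + 2 * x * x
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
print("particular solution verified")
```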

Exercises 10.4.7–10.4.9 deal with electric circuits of the types shown in Fig. 10.4. The switch is assumed to be open prior to time $t = 0$, when it is closed. For numerical computations, note that the ODEs for these circuits correspond to the use of the MKS units of $L$, $R$, $C$, $V$, $I$, $q$, and $t$, which are respectively henry, ohm, farad, volt, ampere, coulomb, and second.

10.4.7. This exercise deals with the LR circuit with $L = 10$ henry, $R = 1000$ ohm, and $V_0 = 100$ volt. The ODE for this circuit is
$$L I'(t) + R I(t) = V_{\mathrm{app}}(t) = V_0 \sin \alpha t\,.$$
Make computations for the two angular frequencies $\alpha_1 = 120\pi$ (60 hertz) and $\alpha_2 = 2000\pi$ (1 kilohertz).

(a) Solve the ODE subject to the initial condition $I(0) = 0$. Identify the transient and steady-state currents.

(b) Plot the transient current for each $\alpha$ value. Suggested range of $t$ (both frequencies), 0 to 0.1 sec. Explain how the condition $I(0) = 0$ causes there to be a nonzero transient, and discuss the similarities and differences in the transient terms for the two $\alpha$ values. Is the transient term oscillatory? Explain.

(c) Plot the total current $I(t)$ for each $\alpha$ value. Suggested ranges: for $\alpha_1$, 0 to 0.1 sec and 10 to 10.1 sec; for $\alpha_2$, 0 to 0.01 sec and 10 to 10.01 sec. Explain the difference between the earlier and later-time plots.

(d) Compare the amplitudes of the steady-state oscillations at the two $\alpha$ values. Give a qualitative explanation of the differences.

(e) Compare the phases of $V_{\mathrm{app}}(t)$ and $I(t)$ in the large-$t$ limit (obtain this from the steady-state term). Explain the phase shift in terms of the properties of the circuit elements.

Solution: (a) The homogeneous equation has solution $e^{mt}$, with $m$ the solution of $Lm + R = 0$, yielding $e^{-Rt/L}$. To solve the inhomogeneous equation, try the particular solution $I_p(t) = A \sin \alpha t + B \cos \alpha t$. Substituting this form into the ODE, we find
$$A = \frac{V_0 R}{R^2 + \alpha^2 L^2}\,, \qquad B = -\frac{V_0\, \alpha L}{R^2 + \alpha^2 L^2}\,.$$
We add to $I_p(t)$ the amount of the solution to the homogeneous equation that is needed to make $I(0) = 0$. We get
$$I(t) = V_0 \left[\frac{R \sin \alpha t - \alpha L \cos \alpha t}{R^2 + \alpha^2 L^2}
+ \frac{\alpha L\, e^{-Rt/L}}{R^2 + \alpha^2 L^2}\right].$$
The term with the negative exponential is the transient; the remainder is the steady-state current.

(b) The plots shown below show the transient current to be exponentially decaying. It has similar behavior for both frequencies because its form is determined by the parameters in the homogeneous equation. However, the driving (inhomogeneous) term does determine the amplitude of the transient. In this circuit the transient cannot be oscillatory for any choice of the parameters, because the circuit contains only one energy-storage element (the inductor) and the homogeneous equation is therefore of first order.

At left: Transient current for $\alpha = 120\pi$; at right, for $\alpha = 2000\pi$.

160

CHAPTER 10. ORDINARY DIFFERENTIAL EQUATIONS (c) The plots of the total current are similar at earlier and later times except that at the earlier t the current includes a non-negligible transient component. These plots are for α = 120π, at left starting at t = 0; at right, after reaching steady state.

Here are similar plots for α = 2000π.

(d) At the larger α value the inductance L becomes more important in limiting the current flow, because its effect is proportional to the rate at which the current changes. As a result, the steady-state current is smaller for the larger α.

(e) If αL were zero, I and Vapp would be in phase (both varying as sin αt). To see what happens at nonzero αL, write

sin δ = αL/√(R² + α²L²),   cos δ = R/√(R² + α²L²),   or   δ = tan⁻¹(αL/R).

We see that the steady-state current can be brought to the form

I∞(t) = V0 sin(αt − δ)/√(R² + α²L²).

The phase shift is controlled by the ratio αL/R.

10.4.8.

10.4. ODES WITH CONSTANT COEFFICIENTS

This exercise deals with the RC circuit in Fig. 10.4, with R = 1,000 ohm, C = 10 × 10⁻⁶ farad (10 microfarad, written 10 µF), and V0 = 100 volt. The ODE for this circuit is

Rq′(t) + (1/C) q(t) = Vapp(t) = V0 sin αt,

where q(t) is the charge on the plates of the capacitor at time t. Make computations for the two angular frequencies α1 = 120π (60 hertz) and α2 = 2000π (1 kHz). (a) Solve the ODE subject to the initial condition q(0) = 0. Identify the transient and steady-state charges. (b) Differentiate q(t) to obtain the current I(t). Identify its transient and steady-state components. Explain why I(0) is zero. (c) Plot the transient charge and current for each α value. Suggested range of t (both frequencies), 0 to 0.1 sec. Explain how, even though q(0) = 0, there are nonzero transients, and discuss the similarities and differences in the transient terms for the two α values. Is the transient term oscillatory? Explain. (d) Plot the total charge q(t) and current I(t) for each α value. Suggested ranges: for α1, 0 to 0.1 sec and 10 to 10.1 sec; for α2, 0 to 0.01 sec and 10 to 10.01 sec. Explain the difference between the earlier and later-time plots. (e) Compare the amplitudes of the steady-state oscillations at the two α values. Give a qualitative explanation of the differences. (f) Compare the phases of Vapp(t) and I(t) in the large-t limit (obtain this from the steady-state term). Explain the phase shift in terms of the properties of the circuit elements.

Solution: (a) The homogeneous equation Rq′ + q/C = 0 has solution e^{mt}, with m determined by Rm + 1/C = 0, yielding e^{−t/RC}. To solve the inhomogeneous equation, try the particular solution qp(t) = A sin αt + B cos αt. Substituting this form into the ODE, we find

A = V0 C/(1 + α²R²C²),   B = −V0 αRC²/(1 + α²R²C²).

We add to qp(t) the amount of the solution to the homogeneous equation that is needed to make q(0) = 0. The result is

q(t) = V0 C [ (sin αt − αRC cos αt)/(1 + α²R²C²) + αRC e^{−t/RC}/(1 + α²R²C²) ].

The negative exponential term is the transient; the remainder is the steady-state charge oscillation.

(b) Differentiating q(t),

I(t) = V0 αC [ (cos αt + αRC sin αt)/(1 + α²R²C²) − e^{−t/RC}/(1 + α²R²C²) ].

Here also the negative exponential term is the transient and the remainder is the steady-state current. The current is zero at t = 0 because at that time there is no force (either from Vapp or q) to cause it to flow.

(c) Here are plots of the transients; they must occur because the steady-state situation corresponds to nonzero current at zero q. The transients are not oscillatory; they decay exponentially, and have shapes determined by R and C but not by α; the value of α, however, does influence the scale of the transients. At α = 120π, the transient charge on the capacitor is shown at left and the transient current at right.
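The charge and current formulas above can be verified symbolically; this is an editorial cross-check (not from the text) using SymPy:

```python
# Hedged sketch: confirm that the stated q(t) satisfies
# R*q'(t) + q(t)/C = V0*sin(alpha*t), with q(0) = 0 and I(0) = q'(0) = 0.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
R, C, V0, alpha = sp.symbols('R C V0 alpha', positive=True)
D = 1 + alpha**2 * R**2 * C**2   # common denominator 1 + alpha^2 R^2 C^2

q = V0*C*((sp.sin(alpha*t) - alpha*R*C*sp.cos(alpha*t)) / D
          + alpha*R*C*sp.exp(-t/(R*C)) / D)
I = sp.diff(q, t)   # the current is the time derivative of the charge

residual = sp.simplify(R*sp.diff(q, t) + q/C - V0*sp.sin(alpha*t))
q0 = sp.simplify(q.subs(t, 0))
I0 = sp.simplify(I.subs(t, 0))
print(residual, q0, I0)
```

All three printed quantities are zero, confirming both initial conditions discussed in parts (a) and (b).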


Here are the transient charge and current at α = 2000π.

(d) The plots of q and I show the eﬀect of the transient at the earlier times. Here are the plots of the charge q. The top row is for α = 120π; the bottom row is for α = 2000π. At left, for times starting at t = 0; at right, after reaching steady state.

Here are the plots of the current I. The top row is for α = 120π; the bottom row is for α = 2000π. At left, for times starting at t = 0; at right, after reaching steady state.


(e) The current becomes independent of α when αRC ≫ 1, and this condition is already reasonably well satisfied at α = 120π. There is therefore little difference in the current amplitudes at the two frequencies examined here. (f) The steady-state current can be characterized as having phase sin(αt − δ), with δ = tan⁻¹(αRC) (for an explanation see Exercise 10.4.7). At both frequencies δ is close to 90°.

10.4.9.

This exercise deals with the LRC circuit in Fig. 10.4, with L = 60 henry, R = 2,000 ohm, C = 10⁻⁵ farad (10 µF), and V0 = 1000 volt. The ODE for this circuit is

Lq″(t) + Rq′(t) + (1/C) q(t) = Vapp(t) = V0 sin αt,

where q(t) is the charge on the plates of the capacitor at time t. Make computations for the two angular frequencies α1 = 120π (60 hertz) and α2 = 600π (300 hertz). (a) Solve the ODE subject to the initial conditions q(0) = 0, q′(0) = I(0) = 0. Identify the transient and steady-state charges. (b) Differentiate q(t) to obtain the current I(t). Identify its transient and steady-state components. (c) Plot the transient current for each α value. Suggested range of t (both frequencies), 0 to 0.5 sec. Explain how, despite the initial conditions, there are nonzero transients, and discuss the similarities and differences in the transient terms for the two α values. Is the transient term oscillatory? Explain. (d) Plot the total current I(t) for each α value. Suggested ranges: for α1, 0 to 0.2 sec and 10 to 10.2 sec; for α2, 0 to 0.06 sec and 10 to 10.06 sec. Explain the difference between the earlier and later-time plots. (e) Compare the amplitudes of the steady-state oscillations at the two α values. Give a qualitative explanation of the differences. (f) Compare the phases of Vapp(t) and I(t) in the large-t limit (obtain this from the steady-state term). Explain the phase shift in terms of the properties of the circuit elements.

Solution: (a) It is convenient to write our formulas using the notations ω0 = 1/√(LC) and k = R/2L, where ω0 is the resonance frequency of the circuit if R is set to zero. The homogeneous ODE (with V0 set to zero) has solutions q(t) = e^{mt}, with

m = −k ± √(k² − ω0²).

These functions describe the transient portion of the overall solution to this exercise. With the present parameter values, the quantity within the square root sign is negative, indicating that the transient can be described as a damped oscillation. This behavior is possible because the capacitor can be charged and discharged; its initial condition (no charge, no current) is inconsistent with the steady-state solution. Anticipating that the square root term is imaginary, we write the transient part of the solution as

q0(t) = e^{−kt} [A cos ω1t + B sin ω1t],

where we define ω1² = ω0² − k². We next need a particular solution to the complete ODE; we can find one of the assumed form q∞(t) = a cos αt + b sin αt. Substituting this form into the ODE, we find a solution when

a = −(V0/L) · 2αk/[4α²k² + (ω0² − α²)²],   b = (V0/L) · (ω0² − α²)/[4α²k² + (ω0² − α²)²].

Writing now

q(0) = A + a = 0,   q′(0) = −kA + ω1B + αb = 0,

we can solve for A and B:

A = −a,   B = −(ka + αb)/ω1.

The overall solution that satisfies the initial conditions is q(t) = q0(t) + q∞(t), where q0(t) and q∞(t) are respectively the transient and steady-state terms.

(b) Differentiating the formula for q(t), we have

q′(t) = −k e^{−kt}[A cos ω1t + B sin ω1t] − ω1 e^{−kt}[A sin ω1t − B cos ω1t] − α[a sin αt − b cos αt].

(c) The transient currents for both α values are plotted below. At left, for α = 120π; at right, for α = 600π.


There are nonzero transient terms because the initial conditions are not consistent with the steady-state solution. The form of the transients is determined by the circuit parameters L, R, and C, but their magnitude and phase depend upon Vapp. (d) Here are plots of I(t) = q′(t) for α = 120π. At left, starting at t = 0; at right, after reaching steady state.

These plots of I(t) are for α = 600π. At left, starting at t = 0; at right, after reaching steady state.

The plots at the earlier times exhibit the effects of the transients.

(e) The current decreases rapidly as α increases when α is significantly larger than ω0 (which in the current problem is approximately 40 rad/s, i.e. about 6.5 hertz).

(f) The phase of the steady-state I(t) can be compared with that of Vapp by writing a cos αt + b sin αt as a constant times sin(αt − δ). From the formulas for a and b, we can (as in Exercise 10.4.7) find δ = tan⁻¹[2αk/(ω0² − α²)].

10.4.10.

A parachutist falls subject to the gravitational force and to the air resistance provided by the parachute, assumed in this exercise to be proportional to the speed of descent. If the distance the parachutist falls is denoted by y, the ODE governing her motion is my ′′ = mg − ky ′ , where m is the mass of the parachutist, g is the acceleration of gravity, and k is the coeﬃcient of the air-resistance force. (a) Assuming the parachutist’s descent starts at t = 0, and that y(0) = y ′ (0) = 0, obtain formulas for y(t) and y ′ (t) (the velocity of descent). (b) Show that the velocity asymptotically approaches a constant value, called the terminal velocity, and identify its value in two diﬀerent ways: (i) From the solution of the ODE, and (ii) From the fact that the velocity will be constant when the net force on the parachutist is zero.

Solution: (a) The homogeneous equation my″ + ky′ = 0 has a general solution which can be found systematically by assuming solutions of the form e^{pt} and solving for p. The p values, obtained from mp² + kp = 0, are p = 0 and p = −k/m. A particular solution to the complete ODE is y = mgt/k. Thus, the general solution to our ODE is

y(t) = C1 + C2 e^{−kt/m} + mgt/k,

with

y′(t) = −(kC2/m) e^{−kt/m} + mg/k.

The solution with y(0) = y′(0) = 0 is

y(t) = (m²g/k²)(e^{−kt/m} − 1) + mgt/k,   y′(t) = (mg/k)(1 − e^{−kt/m}).
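These closed forms can be double-checked with SymPy (an editorial sketch, not part of the printed solution):

```python
# Hedged check that y(t) and y'(t) satisfy m*y'' = m*g - k*y' with
# y(0) = y'(0) = 0, and that y' approaches the terminal velocity mg/k.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
m, g, k = sp.symbols('m g k', positive=True)

y = m**2*g/k**2 * (sp.exp(-k*t/m) - 1) + m*g*t/k
v = sp.diff(y, t)

residual = sp.simplify(m*sp.diff(y, t, 2) - (m*g - k*v))
terminal = sp.limit(v, t, sp.oo)   # should be mg/k
print(residual, terminal, y.subs(t, 0))
```

The residual is zero and the limiting velocity reproduces mg/k, agreeing with both arguments given in part (b).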

(b) From the solution to the ODE, we see that the exponential term in y ′ becomes negligible for large t, leaving mg/k as the terminal velocity. Alternatively, zero net force on the parachutist corresponds to mg − ky ′ = 0, which we can solve to ﬁnd y ′ = mg/k. 10.4.11.

A chain of length L hangs from a frictionless pulley (of negligible diameter relative to L). Nothing is attached to the ends of the chain, and at t = 0 it is stationary, but with its two ends diﬀering in vertical position by an amount 2h0 . (a) Construct the ODE for the motion of the chain, i.e., for h(t), recognizing that the net force on the chain arises from the diﬀerent amounts of chain on the two sides of the pulley, but the force must result in motion of the entire chain. (b) Obtain a solution to the ODE subject to the given initial conditions, with your solution valid for h ≤ L/2. (c) Calculate the kinetic energy of the chain (assuming its total mass to be M ) at the time that one end of the chain reaches the pulley. Write your answer in a form that does not involve any transcendental functions.


Solution: (a) Newton's law for motion of the chain (assumed for now to have unit mass per unit length) is

L h″ = 2hg,

where h is the displacement of the lower end of the chain from the point at which both ends are at the same height. The initial conditions are h(0) = h0 and h′(0) = 0.

(b) Our ODE has solutions of the form h(t) = e^{pt}, where, by substituting into the ODE, we find p = ±√(2g/L). Taking a general solution of the form h = a e^{pt} + b e^{−pt}, we find from the initial conditions

h(0) = a + b = h0   and   h′(0) = p(a − b) = 0.

Noting that a = b, we see that it is convenient to write the solution as h(t) = A cosh pt, or more specifically

h(t) = h0 cosh pt   and   h′(t) = p h0 sinh pt.

(c) We can obtain the kinetic energy of the chain in two ways. First, noting that h(t) is independent of M, the total mass of the chain, we can form M h′²/2 when h = L/2; the result (for this h) is

KE = M(h′)²/2 = (Mp²h0²/2)[cosh² pt − 1] = (Mp²h0²/2) sinh² pt
   = (Mg h0²/L)[(L/2h0)² − 1] = (Mg/L)[L²/4 − h0²].

Alternatively, we can calculate the kinetic energy from the difference in potential energy between the initial and final positions of the chain. Taking the pulley as the zero point for potential energy, the chain initially has its center of mass a distance (L² + 4h0²)/4L below the pulley and finally a distance L/2 below it; the difference of these quantities is

L/2 − (L/4 + h0²/L) = L/4 − h0²/L.

Multiplying by Mg, we confirm our earlier result.
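Both the ODE and the energy bookkeeping above can be confirmed with SymPy (my own cross-check, not from the text):

```python
# Hedged check: h(t) = h0*cosh(p*t) with p = sqrt(2g/L) satisfies L*h'' = 2*g*h,
# and the kinetic energy at h = L/2 equals (M*g/L)*(L^2/4 - h0^2).
import sympy as sp

t = sp.symbols('t', nonnegative=True)
L, g, h0, M = sp.symbols('L g h0 M', positive=True)
p = sp.sqrt(2*g/L)

h = h0*sp.cosh(p*t)
residual = sp.simplify(L*sp.diff(h, t, 2) - 2*g*h)

# At h = L/2 we have cosh(pt) = L/(2*h0), so sinh^2(pt) = (L/(2*h0))^2 - 1.
ke = sp.simplify(M*p**2*h0**2/2 * ((L/(2*h0))**2 - 1))
target = sp.simplify(M*g/L*(L**2/4 - h0**2))
print(residual, sp.simplify(ke - target))
```

Both printed quantities vanish, matching the potential-energy argument at the end of part (c).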

10.5 MORE GENERAL SECOND-ORDER EQUATIONS

Exercises

10.5.1. For the ODE y″ + yy′ = 0, find a general solution containing two independent constants and, in addition, a singular solution that contains only one constant. Does your symbolic computing system find the singular solution? Hint. Consider what happens to your solution procedure if one of the constants of integration is zero.

Solution: Following the procedure at Eq. (10.74), set y′ = u and y″ = u(du/dy), leading to

u(du/dy) + yu = 0,   satisfied if u = 0 or du/dy = −y.

From u = 0 (i.e., y′ = 0), we get y = C; from the other equation we have

u = y′ = −y²/2 + C′²/2,

equivalent to

2 dy/(C′² − y²) = dx,

which (assuming C′ to be nonzero) integrates to

x = (1/C′) ln[(C′ + y)/(C′ − y)] + C″.

Solving for y, this is equivalent to

y = C′ tanh[(C′x + C″)/2].

However, if C′ = 0, the integration of y′ = −y²/2 leads to y = 2/(x + C‴). Because this is a nonlinear equation, all the solutions we have found should be checked against the original ODE. They pass that test; noting that the tanh solution reduces to y = C′ in the limit C″ → ∞, a complete solution is included when we write

y = C′ tanh[(C′x + C″)/2]   or   y = 2/(x + C).

The singular solution is not found by the current versions of either maple or mathematica.

Find general solutions for the following ODEs.

10.5.2.

2yy″ − y′² = 0.

Solution: Set y′ = u, y″ = u(du/dy), so the ODE becomes 2yu(du/dy) − u² = 0. One solution is u = 0, i.e., y = C; other solutions can be obtained from 2y(du/dy) − u = 0. Solving that equation:

2y du/dy = u   →   du/u = dy/2y   →   ln u = (ln y)/2 + C,  or  u = Cy^{1/2}.

Setting u = dy/dx and integrating again,

dy/y^{1/2} = C dx   →   2y^{1/2} = Cx + C′.

Absorbing the "2" into the constants, we have y = (Cx + C′)².
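A quick SymPy confirmation (editorial addition, not in the text) that the final form really satisfies the nonlinear ODE:

```python
# Hedged check that y = (C*x + Cp)^2 satisfies 2*y*y'' - (y')^2 = 0.
import sympy as sp

x, C, Cp = sp.symbols('x C Cp')

y = (C*x + Cp)**2
residual = sp.simplify(2*y*sp.diff(y, x, 2) - sp.diff(y, x)**2)
print(residual)
```

The residual is identically zero for arbitrary constants C and Cp, and the constant solution y = C is the C → 0 special case.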

10.5.3. xy″ − y′ − y′² = 0. In addition, this ODE has the singular solution y = C.

Solution: Set y′ = u, y″ = u′. Then xu′ − u − u² = 0. This ODE is separable:

x du = (u + u²) dx,   or   du/[u(u + 1)] = dx/x.

Decompose 1/[u(u + 1)] into partial fractions and integrate:

ln u − ln(u + 1) = ln x + ln C   →   u/(u + 1) = Cx.

If C ≠ 0 one can solve for u as follows and integrate again:

u = y′ = Cx/(1 − Cx)   →   y = ∫ Cx/(1 − Cx) dx,

so

y = −ln(1 − Cx)/C − x + C′.

However, if C = 0 we get the simple result u = 0, corresponding to y = C ′ . 10.5.4.

xy″ + y′ = 0.

Solution: Set y′ = u and y″ = u′, so the ODE becomes xu′ + u = 0. This equation is separable:

du/u = −dx/x   →   u = C/x.

Integrating again,

dy = (C/x) dx   →   y = C ln x + C′.
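The two-constant solution can be verified directly; this SymPy snippet is my own cross-check (not from the text):

```python
# Hedged check that y = C*ln(x) + Cp satisfies x*y'' + y' = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
C, Cp = sp.symbols('C Cp')

y = C*sp.log(x) + Cp
residual = sp.simplify(x*sp.diff(y, x, 2) + sp.diff(y, x))
print(residual)
```

The residual is zero for all C and Cp, confirming the general solution.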

10.5.5. x²y″ − xy′ − 3y = x⁴.

Solution: This is an inhomogeneous Euler equation. The corresponding homogeneous equation has solutions y = x^p, with p(p − 1) − p − 3 = 0, whose roots are p = 3 and p = −1, so the homogeneous equation has general solution y0(x) = C1x³ + C2x⁻¹. The inhomogeneous equation has a particular solution Ax⁴; substitution into the ODE shows that A = 1/5. The general solution is then y0(x) + x⁴/5.

10.5.6.

x²y″ − 3xy′ + 4y = x.

Solution: This is an inhomogeneous Euler equation. The corresponding homogeneous equation has solutions y = xp , with p(p − 1) − 3p + 4 = 0, which has a double root at p = 2. The homogeneous equation therefore has general solution y0 (x) = C1 x2 + C2 x2 ln x .

A particular solution to the inhomogeneous equation is y = x, so this problem has general solution

y(x) = x + C1x² + C2x² ln x.

10.5.7.

x²y″ − 3xy′ + 4y = x². Hint. Try solutions of the form x^p ln^q x.

Solution: This is an inhomogeneous Euler equation with the same corresponding homogeneous equation as Exercise 10.5.6, with the general solution given as y0 (x) of that Exercise. Because the inhomogeneous term has the same form as one of the terms of y0 , a particular solution of the inhomogeneous equation must have the form Ax2 ln2 x. Substituting this form, we ﬁnd A = 1/2, so the ODE has general solution y0 (x) + (x2 ln2 x)/2. 10.5.8.

The ODE satisfied by a quartic oscillator of mass m is of the form

m d²x/dt² = −kx³.

Find a first integral of this ODE that can be interpreted as the law of conservation of energy for the oscillator.

Solution: This problem is of the type illustrated in Eq. (10.87). Multiplying both sides of the ODE by dx/dt and integrating with respect to t, we get, giving dx/dt the name v,

mv²/2 = −kx⁴/4 + C,   equivalent to   mv²/2 + kx⁴/4 = C.

The two left-hand-side terms can be interpreted respectively as the kinetic and potential energy of the oscillator, while C is its total energy.

10.5.9.

Consider the surface of the Earth to be a sphere of radius R. An object of mass m, located a distance r > R from the Earth's center, will experience a gravitational force due to the Earth in amount

F = mgR²/r².

Neglecting air resistance and other extraterrestrial masses, find the minimum upward velocity (escape velocity) an object must have when launched from the Earth's surface in order for it to escape from the Earth's gravitational field.

Solution: Assuming the motion of the object to be vertically upward, it will move subject to the equation

m d²r/dt² = −F = −mgR²/r².

Obtaining a first integral as illustrated in Eq. (10.87), and writing v for dr/dt, we have

mv²(r)/2 = −∫ (mgR²/r²) dr = mgR²/r + C.

We now choose C to have the value corresponding to v = 0 when r = ∞, namely C = 0. Then, setting r = R, we find

mv²(R)/2 = mgR,   corresponding to   v(R) = √(2gR).

10.5.10.

A particle of mass m, initially at rest at x = 0, is subject, starting at time t = 0, to a force

F = (a − x)[1 + 2 ln((a − x)/a)].

By a method that does not require a complete solution of the equation of motion mx″ = F, show that the particle will move only from x = 0 to x = a.

Solution: Obtain a first integral as illustrated in Eq. (10.87), writing the integral of F in a form that ensures that x′ = 0 at x = 0:

m(x′)²/2 = ∫₀ˣ F dx = ∫₀ˣ (a − x)[1 + 2 ln((a − x)/a)] dx.

Evaluating the definite integral, we have

m(x′)²/2 = −(a − x)² ln((a − x)/a).

The formula for F shows that F > 0 at x = 0, so the particle will initially move to positive values of x. Since the formula for (x′)² remains nonzero for all x between x = 0 and x = a, x′ must stay positive over the interior of that interval. But both (x′)² and F become zero at x = a, so the particle will come to a permanent stop at x = a.
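The evaluated first integral can be spot-checked numerically; the sample values a = 2, x = 1/2 below are my own choices, not from the text:

```python
# Hedged numeric check: for a = 2, x = 1/2, the integral of F from 0 to x
# should equal -(a - x)^2 * ln((a - x)/a).
import sympy as sp

s = sp.symbols('s')
a, xv = 2, sp.Rational(1, 2)

F = (a - s)*(1 + 2*sp.log((a - s)/a))
lhs = sp.integrate(F, (s, 0, xv))
rhs = -(a - xv)**2 * sp.log((a - xv)/sp.Integer(a))
error = abs(float(lhs) - float(rhs))
print(error)
```

The discrepancy is zero to floating-point precision, supporting the closed form −(a − x)² ln((a − x)/a).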

10.6 GENERAL PROCESSES FOR LINEAR EQUATIONS

Exercises

10.6.1. Verify in detail the steps taken to find y2(x) in Example 10.6.1.

Solution: The steps are straightforward. 10.6.2.

Verify in detail the steps taken in Example 10.6.2 leading to Eqs. (10.100), (10.101), and (10.102).

Solution: The steps are straightforward. To check u1 in Eq. (10.101), diﬀerentiate it. 10.6.3.

The function y1(x) = x^i is a solution of the ODE

x²y″ + xy′ + y = 0.

From the fact that all the coefficients of the ODE are real, we can conclude that another solution of the ODE is y2(x) = x^{−i}. Confirm that y2(x) is the result of applying Eq. (10.89) for this ODE.

Solution: Here p(u) = 1/u. (We must make the coefficient of y″ unity.) Then

∫ᵗ p(u) du = ∫ᵗ du/u = ln t;   exp(−∫ᵗ p(u) du) = 1/t.

Inserting y1(x) = x^i, we get

y2(x) = x^i ∫ˣ (1/t)/t^{2i} dt = x^i · x^{−2i}/(−2i),

which is proportional to x^{−i}.

10.6.4.

Consider the inhomogeneous ODE x²y″ + xy′ + y = x². The associated homogeneous ODE was treated in Exercise 10.6.3. (a) Use the method of variation of parameters to find the general solution of the inhomogeneous ODE. (b) Confirm your result using symbolic computing. Your computer will probably produce a result that looks different than your hand computations. Resolve any apparent discrepancy.

Solution: (a) From the solutions of the related homogeneous equation we have y1 = x^i, y2 = x^{−i}, y1′ = i x^{i−1}, y2′ = −i x^{−i−1}, and f(x) = 1 (we must make the coefficient of y″ unity). Then Eqs. (10.97) assume the form

u1′ x^i + u2′ x^{−i} = 0,   u1′ (i x^{i−1}) + u2′ (−i x^{−i−1}) = 1.

These equations are easily reduced to

u1′ x^i + u2′ x^{−i} = 0,   u1′ x^i − u2′ x^{−i} = x/i,

with solution u1′ = x^{1−i}/2i, u2′ = −x^{1+i}/2i. These expressions are now integrated and the result inserted into Eq. (10.91):

u1 = x^{2−i}/[2i(2 − i)],   u2 = −x^{2+i}/[2i(2 + i)],

y = y1u1 + y2u2 = x²/[2i(2 − i)] − x²/[2i(2 + i)] = x²/5.

Combining this result with the general solution of the homogeneous equation, we finally reach

y(x) = C1 x^i + C2 x^{−i} + x²/5.

(b) This is what is produced by the symbolic computing systems. In maple,

> ODE := x^2*diff(y(x),x,x)+x*diff(y(x),x)+y(x) = x^2:
> dsolve(ODE);

y(x) = _C2 sin(ln(x)) + _C1 cos(ln(x)) + x²/5

In mathematica,

Simplify[DSolve[ODE, y[x], x]]

y[x] → x²/5 + C[1] Cos[Log[x]] + C[2] Sin[Log[x]]

These formulas can be made to agree with the analysis of part (a) by noting that

2 cos(ln x) = e^{i ln x} + e^{−i ln x} = x^i + x^{−i},
2i sin(ln x) = e^{i ln x} − e^{−i ln x} = x^i − x^{−i}.

These are linear combinations of the general solutions of the homogeneous equation.
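SymPy offers a third, independent confirmation of the same result; this sketch is my own addition, not code from the text:

```python
# Hedged check that y = C1*x^i + C2*x^(-i) + x^2/5 satisfies
# x^2*y'' + x*y' + y = x^2 for arbitrary C1, C2.
import sympy as sp

x = sp.symbols('x', positive=True)
C1, C2 = sp.symbols('C1 C2')

y = C1*x**sp.I + C2*x**(-sp.I) + x**2/5
residual = sp.simplify(x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) + y - x**2)
print(residual)
```

The residual vanishes identically, so the hand result and the two symbolic-system answers are all consistent.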

10.7 GREEN'S FUNCTIONS

Exercises

10.7.1. Verify that the integrals for y(t) given in Eqs. (10.108) and (10.109) give the results there shown.

Solution: Write

sin[ω(t − t′)] sin αt′ = (1/2)( cos[ωt − (ω + α)t′] − cos[ωt − (ω − α)t′] ).

The right-hand side of this equation is easily integrated from 0 to t:

∫₀ᵗ sin[ω(t − t′)] sin αt′ dt′ = (1/2)[ sin[ωt − (ω + α)t′]/(−(ω + α)) − sin[ωt − (ω − α)t′]/(−(ω − α)) ]₀ᵗ.

This simplifies to ω y(t) as given in Eq. (10.108). A similar process can be used to verify Eq. (10.109).

10.7.2.

Perform the integrations appearing in Eq. (10.113) and thereby conﬁrm the formula there given for y(x).

Solution: Evaluating Eq. (10.113),

y(x) = ((x² − 1)/x) [x′³/6]₀ˣ + x [x′³/6 − x′/2]ₓ¹,

which simplifies to x(x − 1)/3.

10.7.3.

(a) Find the Green’s function G(t, t′ ) for the ODE y ′′ (t) + 2y ′ (t) + y(t) = f (t) , subject to the initial conditions G(t, t′ ) = ∂G(t, t′ )/∂t = 0 for t < t′ . (b) Use the Green’s function to ﬁnd a solution to the ODE with the above initial conditions given that f (t) = t2 for t ≥ 0, f (t) = 0 otherwise. (c) Check your solution, including its initial conditions.

Solution: This Green's function will generate a solution to the ODE that is zero until the time at which the right-hand side first becomes nonzero. The homogeneous equation has solutions e^{−t} and t e^{−t}, so for t > t′ we have

G(t, t′) = a(t′) e^{−t} + b(t′) t e^{−t},
∂G(t, t′)/∂t = −a(t′) e^{−t} + b(t′)(1 − t) e^{−t}.

To make G(t′, t′) = 0, it is necessary that a(t′) = −t′ b(t′). To make

lim_{(t−t′)→0+} ∂G(t, t′)/∂t = 1,   we require b(t′) = e^{t′}.

Thus the Green's function is

G(t, t′) = 0 for t < t′;   G(t, t′) = (t − t′) e^{t′−t} for t > t′.

The solution we seek will be given by the integral

y(t) = ∫₀ᵗ G(t, t′) f(t′) dt′ = ∫₀ᵗ (t − t′) e^{t′−t} t′² dt′.

This integral is elementary; evaluating it leads to

y(t) = t² − 4t + 6 − 2(t + 3)e^{−t},   with derivative   y′(t) = 2t − 4 + (2t + 4)e^{−t}.
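Before the hand verification below, the Green's-function result can also be checked mechanically with SymPy (an editorial sketch, not from the text):

```python
# Hedged check that y(t) = t^2 - 4t + 6 - 2(t+3)e^{-t} satisfies
# y'' + 2y' + y = t^2 with y(0) = y'(0) = 0.
import sympy as sp

t = sp.symbols('t', nonnegative=True)

y = t**2 - 4*t + 6 - 2*(t + 3)*sp.exp(-t)
residual = sp.simplify(sp.diff(y, t, 2) + 2*sp.diff(y, t) + y - t**2)
y0 = y.subs(t, 0)
v0 = sp.diff(y, t).subs(t, 0)
print(residual, y0, v0)
```

All three printed quantities are zero, in agreement with the checks that follow.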

These forms show that y(0) = y ′ (0) = 0, conﬁrming that our solution satisﬁes the initial conditions. If we diﬀerentiate again to obtain y ′′ (t), we can also verify that y(t) is a solution of the ODE. 10.7.4.

For the ODE x2 y ′′ − xy ′ − 3y = f (x), (a) Find its Green’s function for the boundary conditions y(0) = y(1) = 0. (b) Use the Green’s function to ﬁnd the solution to this ODE and its boundary conditions for f (x) = x2 . (c) Check your solution, including veriﬁcation of the boundary conditions.

Solution: (a) The homogeneous equation has solutions x³ and 1/x; a solution that is zero at x = 0 is y1 = x³; one that is zero at x = 1 is y2 = 1/x − x³. Thus the Green's function (unscaled) is

G(x, t) = C x³ (t⁻¹ − t³),  t > x,   and   G(x, t) = C t³ (x⁻¹ − x³),  t < x,

with

∂G/∂x = 3C x² (t⁻¹ − t³),  t > x,   ∂G/∂x = C t³ (−1/x² − 3x²),  t < x.

Setting (∂G/∂x)_{x=t+} − (∂G/∂x)_{x=t−} = 1, we find C = −1/4t.

(b) We can now find y(x). The function multiplying the Green's function is the inhomogeneous term when the coefficient of y″ is unity; in the present case it is f(x)/x², which is unity. Therefore,

y(x) = −[(x⁻¹ − x³)/4] ∫₀ˣ t² dt − (x³/4) ∫ₓ¹ (t⁻² − t²) dt = (x³ − x²)/3.

(c) With this form for y(x), we obviously have y(0) = y(1) = 0, so the boundary conditions are satisﬁed. To check that the ODE is satisﬁed, we can drop the x3 term (it satisﬁes the homogeneous equation), and the substitution into the ODE of y = −x2 /3 yields −2x2 /3+2x2 /3−3(−x2 /3) = x2 , as required. 10.7.5.

(a) Construct a Green's function for the ODE

x²y″ + xy′ + y = f(x),

subject to the boundary conditions y(1) = y(e^{π/2}) = 0.

(b) For f(x) = x², use your Green's function to solve the ODE. It may be helpful to note that this ODE was the topic of Exercises 10.6.3 and 10.6.4.

(c) Verify that your answer to part (b) is a solution to the ODE that also satisfies the boundary conditions identified in part (a).

Solution: (a) The homogeneous equation has solutions that can be written y1 = sin(ln x) and y2 = cos(ln x). Note that y1(1) = y2(e^{π/2}) = 0, so these solutions are suitable for forming the Green's function. We have

G(x, t) = C sin(ln x) cos(ln t),  t > x,   and   G(x, t) = C cos(ln x) sin(ln t),  t < x,

with

∂G/∂x = (C/x) cos(ln x) cos(ln t),  t > x,   ∂G/∂x = −(C/x) sin(ln x) sin(ln t),  t < x.

Setting (∂G/∂x)_{x=t+} − (∂G/∂x)_{x=t−} = 1, and noting that this equation reduces to

−(C/x) cos(ln t − ln x) = 1,

we find C = −t.

(b) We can now find y(x). The function multiplying the Green's function is the inhomogeneous term when the coefficient of y″ is unity; in the present case it is f(x)/x², which is unity. Therefore,

y(x) = −cos(ln x) ∫₁ˣ t sin(ln t) dt − sin(ln x) ∫ₓ^{e^{π/2}} t cos(ln t) dt.

Making the substitution u = ln t, the integrals are seen to be elementary, and we get (after trigonometric simplification)

y(x) = x²/5 − cos(ln x)/5 − e^π sin(ln x)/5.

(c) Setting x = 1, we note that the first two terms of y are equal and opposite and that the third term is zero, confirming y(1) = 0. At x = e^{π/2}, the first and third terms are equal and opposite while the second term is zero, confirming also that y(e^{π/2}) vanishes. To check that the ODE is satisfied, we may discard the trigonometric terms of y because they are solutions of the homogeneous equation. We therefore only need to check that x²/5 is a solution of the inhomogeneous equation. The demonstration is elementary.
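The full result, boundary conditions included, can also be verified with SymPy (my own sketch, not part of the printed solution):

```python
# Hedged check that y(x) = x^2/5 - cos(ln x)/5 - e^pi sin(ln x)/5 satisfies
# x^2 y'' + x y' + y = x^2 with y(1) = y(e^{pi/2}) = 0.
import sympy as sp

x = sp.symbols('x', positive=True)

y = x**2/5 - sp.cos(sp.log(x))/5 - sp.exp(sp.pi)*sp.sin(sp.log(x))/5
residual = sp.simplify(x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) + y - x**2)
b1 = sp.simplify(y.subs(x, 1))
b2 = sp.simplify(y.subs(x, sp.exp(sp.pi/2)))
print(residual, b1, b2)
```

The residual and both boundary values vanish, consistent with the term-by-term argument in part (c).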

Chapter 11

GENERAL VECTOR SPACES

11.1 VECTORS IN FUNCTION SPACES

Exercises

11.1.1.

(a) Using your symbolic computing system, define the four expressions

Z0 = 1,   Z1 = 2x − 1,   Z2 = 6x² − 6x + 1,   Z3 = 20x³ − 30x² + 12x − 1,

calling them (in either symbolic language) z[0], z[1], z[2], and z[3].

(b) Write a symbolic procedure ScalProd(U,V) or ScalProd[U,V] that will produce the value of the scalar product ⟨U|V⟩, assuming U and V to be real. Define the scalar product to be

⟨U|V⟩ = ∫₀¹ U(x)V(x) dx.

(c) Using the scalar product and symbolic procedure defined in part (b), verify that the z[i] are orthogonal but not all normalized.

(d) Redefine the z[i] in a way that preserves their orthogonality but makes them normalized.

(e) Write a symbolic procedure xpand3 that depends upon your work in parts (b) through (d) and finds the coefficients a0 through a3 in the expansion of a function f(x) according to

f(x) ≈ Σ_{i=0}^{3} ai Zi(x).

Check your work by verifying that f=x^3 yields an expansion that can be verified by inserting the explicit forms for the Zi. Hint. A procedure returns a single quantity as output, so a set of coefficients can only be returned if packaged into a suitable compound quantity such as a list. Note also that a compound quantity cannot be an output if its number of elements has not been defined. maple users: do not employ the sum command. Write the four-term sum explicitly.

(f) Form the four-term expansions of the three functions x, x², and x³. Combine the coefficients of these expansions to obtain an expansion of f(x) = 2x − 3x² + x³, and check your result in two ways:

(i) By using scalar products to obtain the expansion of f(x).

(ii) By inserting the explicit form of the Zi and simplifying the result.

Solution: (a) Define the Zi in one of the following ways:

> z[0]:=1: z[1]:=2*x-1: z[2]:=6*x^2-6*x+1:
>   z[3]:=20*x^3-30*x^2+12*x-1:

z[0]=1; z[1]=2*x-1; z[2]=6*x^2-6*x+1; z[3]=20*x^3-30*x^2+12*x-1;

(b) Assuming that u and v are real expressions containing x (and that x is undefined), define ScalProd in one of the following ways:

> ScalProd := proc(u,v); int(u*v, x=0 .. 1); end proc:

ScalProd[u_,v_] := Integrate[u*v, {x,0,1}];

(c) In maple, execute the following code:

> for i from 0 to 3 do for j from 0 to i do
>   print(i,j,ScalProd(z[i],z[j])); end do; end do;

In mathematica, execute:

Do[Do[Print[i," , ",j,"  ScalProd = ",ScalProd[z[i],z[j]]],
  {j, 0, i}], {i, 0, 3}]

Either of these commands produces output for all i ≥ j through 3; all the scalar products are zero except

(0, 0), 1;   (1, 1), 1/3;   (2, 2), 1/5;   (3, 3), 1/7.

(d) Redefine zi as √(2i + 1) zi. Execute one of

> for i from 0 to 3 do z[i] := z[i]*sqrt(2*i+1) end do:

Do[z[i] = z[i]*Sqrt[2*i+1], {i,0,3}];

(e) The relevant formula is f(x) ≈ Σ_{i=0}^{3} ai zi, with ai = ⟨zi|f⟩. In maple,

> xpand3 := proc(f) local a,i;
>   for i from 0 to 3 do a[i] := ScalProd(z[i],f) end do;
>   [a[0],a[1],a[2],a[3]]; end proc;

Test by executing

> f := x^3:
> aa := xpand3(f):
> aa[1]*z[0]+aa[2]*z[1]+aa[3]*z[2]+aa[4]*z[3];

x³

The code is a bit clumsy because we need to transfer the output of xpand3 as a single object (a list), and list elements are numbered starting from 1.
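For readers using neither maple nor mathematica, the same machinery can be sketched in Python with SymPy; this analogue of ScalProd and xpand3 is my own addition, not code from the text:

```python
# Hedged Python/SymPy analogue of parts (a)-(e).
import sympy as sp

x = sp.symbols('x')
Z = [sp.Integer(1), 2*x - 1, 6*x**2 - 6*x + 1, 20*x**3 - 30*x**2 + 12*x - 1]

def scalprod(u, v):
    # real scalar product <u|v> on [0, 1]
    return sp.integrate(u*v, (x, 0, 1))

# part (d): normalized versions, so that <z_i|z_i> = 1
z = [sp.sqrt(2*i + 1)*Z[i] for i in range(4)]

def xpand3(f):
    # part (e): expansion coefficients a_i = <z_i|f>
    return [scalprod(zi, f) for zi in z]

a = xpand3(x**3)
recon = sp.expand(sum(ai*zi for ai, zi in zip(a, z)))
print(a, recon)
```

Since the Z_i are orthogonal polynomials spanning all cubics, the reconstruction reproduces x³ exactly, matching the maple test above.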

In mathematica,

xpand3[f_] := Module[{a,i},
  Do[a[i] = ScalProd[z[i],f], {i,0,3}];
  {a[0],a[1],a[2],a[3]} ]

Test by executing

f = x^3;
aa = xpand3[f];
aa[[1]]*z[0]+aa[[2]]*z[1]+aa[[3]]*z[2]+aa[[4]]*z[3];
Simplify[%]

x³

The code is a bit clumsy because we need to transfer the output of xpand3 as a single object (a list), and list elements are numbered starting from 1.

(f) In maple, the expansion and its checks can be carried out with

> a1 := xpand3(x):  a2 := xpand3(x^2):  a3 := xpand3(x^3):
> i := 'i':      (to make sure i is undefined)
> for i from 1 to 4 do aa[i]:=2*a1[i]-3*a2[i]+a3[i]; end do;

aa1 = 1/4,   aa2 = −√3/60,   aa3 = −√5/20,   aa4 = √7/140.

> xpand3(2*x-3*x^2+x^3);      (agrees with the above aai)
> aa[1]*z[0]+aa[2]*z[1]+aa[3]*z[2]+aa[4]*z[3];

2x − 3x² + x³   (agrees)

In mathematica,

a1 = xpand3[x];  a2 = xpand3[x^2];  a3 = xpand3[x^3];
Do[aa[[i]] = 2*a1[[i]]-3*a2[[i]]+a3[[i]]; Print[aa[[i]]], {i,1,4}];

1/4,   −1/(20√3),   −1/(4√5),   1/(20√7)   (agrees with the above aai)

xpand3[2*x-3*x^2+x^3]

aa[[1]]*z[0]+aa[[2]]*z[1]+aa[[3]]*z[2]+aa[[4]]*z[3];
Simplify[%]

x(2 − 3x + x²)   (agrees)

11.1.2.

Repeat Exercise 11.1.1 in its entirety for the four functions

Z0 = 1,   Z1 = 2x,   Z2 = 4x² − 2,   Z3 = 8x³ − 12x,

with a scalar product of the form

⟨U|V⟩ = ∫_{−∞}^{∞} U(x)V(x) e^{−x²} dx.

Note. This exercise is far less work if done entirely by symbolic computing.

Solution: For discussion of the various steps and the code for the procedure xpand3, refer to the solution of Exercise 11.1.1. The normalized functions are

z0 = 1/π^{1/4},   z1 = x√2/π^{1/4},   z2 = (4x² − 2)√2/(4π^{1/4}),   z3 = (8x³ − 12x)√3/(12π^{1/4}).

The expansions of x, x², x³, and 2x − 3x² + x³ are

x = (π^{1/4}/√2) z1,   x² = (π^{1/4}/2) z0 + (π^{1/4}/√2) z2,   x³ = (3√2 π^{1/4}/4) z1 + (√3 π^{1/4}/2) z3,

2x − 3x² + x³ = −(3π^{1/4}/2) z0 + (7√2 π^{1/4}/4) z1 − (3π^{1/4}/√2) z2 + (√3 π^{1/4}/2) z3.

Without annotation, maple code for this problem is

> z[0]:=1; z[1]:=2*x; z[2]:=4*x^2-2; z[3]:=8*x^3-12*x;
> ScalProd := proc(u,v);
>   int(u*v*exp(-x^2), x=-infinity .. infinity);
> end proc:
> for i from 0 to 3 do for j from 0 to i do
>   print(i,j,ScalProd(z[i],z[j])); od; od;
> z[0] := z[0]/sqrt(sqrt(Pi)): z[1] := z[1]/sqrt(2*sqrt(Pi)):
> z[2] := z[2]/sqrt(8*sqrt(Pi)); z[3] := z[3]/sqrt(48*sqrt(Pi));
> a1 := xpand3(x); a2 := xpand3(x^2); a3 := xpand3(x^3);
> for i from 1 to 4 do aa[i]:=2*a1[i]-3*a2[i]+a3[i]; end do;
> xpand3(2*x-3*x^2+x^3);
> aa[1]*z[0]+aa[2]*z[1]+aa[3]*z[2]+aa[4]*z[3];

Unannotated mathematica code for this problem is

z[0] = 1; z[1] = 2*x; z[2] = 4*x^2 - 2; z[3] = 8*x^3 - 12*x;
ScalProd[u_,v_]:=Integrate[u*v*E^(-x^2),{x,-Infinity,Infinity}]
Do[Do[Print[i," ",j,"  ",ScalProd[z[i],z[j]]],{j,0,i}],{i,0,3}]

z[0] = z[0]/Sqrt[Sqrt[Pi]]; z[1] = z[1]/Sqrt[2*Sqrt[Pi]]; z[2] = z[2]/Sqrt[8*Sqrt[Pi]]; z[3] = z[3]/Sqrt[48*Sqrt[Pi]]; a1 = xpand3[x]; a2 = xpand3[x^2]; a3 = xpand3[x^3]; Do[aa[[i]] = 2*a1[[i]] - 3*a2[[i]] + a3[[i]], {i, 1, 4}]; aa xpand3[2*x - 3*x^2 + x^3] aa[[1]]*z[0] + aa[[2]]*z[1] + aa[[3]]*z[2] + aa[[4]]*z[3]; Simplify[%] 11.1.3.
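The orthonormality that this code establishes can also be spot-checked outside either CAS. The following Python sketch is an independent numerical check (the integration range [−8, 8] and grid size are arbitrary numerical choices, not part of the exercise); it approximates the Gram matrix of the normalized z_i of Exercise 11.1.2 and the coefficient of z1 in the expansion of f(x) = x:

```python
import math

# Normalized Hermite-type basis of Exercise 11.1.2
def z(i, x):
    h = [1.0, 2*x, 4*x*x - 2, 8*x**3 - 12*x][i]
    norm = [math.sqrt(math.sqrt(math.pi)), math.sqrt(2*math.sqrt(math.pi)),
            math.sqrt(8*math.sqrt(math.pi)), math.sqrt(48*math.sqrt(math.pi))][i]
    return h / norm

def scalprod(u, v, a=-8.0, b=8.0, n=4000):
    # Simpson's rule for <u|v> = integral of u(x) v(x) exp(-x^2) dx
    h = (b - a) / n
    s = 0.0
    for k in range(n + 1):
        x = a + k*h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        s += w * u(x) * v(x) * math.exp(-x*x)
    return s * h / 3

gram = [[scalprod(lambda x, i=i: z(i, x), lambda x, j=j: z(j, x))
         for j in range(4)] for i in range(4)]
# coefficient of z1 in the expansion of f(x) = x (should be pi^(1/4)/sqrt(2))
a1 = scalprod(lambda x: z(1, x), lambda x: x)
```

The Gram matrix comes out as the identity and a1 matches the symbolic value π^{1/4}/√2.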

11.1.3.

Write symbolic code verifying that (for nonnegative integers n and m)

∫_0^π sin nx sin mx dx = (π/2) δmn .


Solution: maple code produces the following:

> int(sin(n*x)*sin(m*x), x=0 .. Pi);

    (n cos(πn) sin(πm) − m sin(πn) cos(πm))/(−n^2 + m^2)

> int(sin(n*x)*sin(n*x), x=0 .. Pi);

    −(1/2) (cos(πn) sin(πn) − πn)/n

The first command is indeterminate for n = m, but for n ≠ m (both integers) it evaluates to zero. The second command deals with the case n = m, and for nonzero integer n it reduces to π/2.

mathematica code for this problem is

Integrate[Sin[n*x]*Sin[m*x], {x, 0, Pi}]

    (n Cos[nπ] Sin[mπ] − m Cos[mπ] Sin[nπ])/(m^2 − n^2)

Integrate[Sin[n*x]*Sin[n*x], {x, 0, Pi}]

    π/2 − Sin[2nπ]/(4n)

The first command is indeterminate for n = m, but for n ≠ m (both integers) it evaluates to zero. The second command deals with the case n = m, and for nonzero integer n it reduces to π/2.
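The same orthogonality statement can be confirmed with plain numerical quadrature; a Python sketch (trapezoidal rule, grid size an arbitrary choice):

```python
import math

def overlap(n, m, pts=20001):
    # Trapezoidal estimate of the integral of sin(nx) sin(mx) over (0, pi)
    h = math.pi / (pts - 1)
    s = 0.0
    for k in range(pts):
        x = k * h
        w = 0.5 if k in (0, pts - 1) else 1.0
        s += w * math.sin(n*x) * math.sin(m*x)
    return s * h

# (pi/2) on the diagonal, zero off the diagonal
results = {(n, m): overlap(n, m) for n in range(1, 5) for m in range(1, 5)}
```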

11.1.4.

Evaluate, using symbolic computing (for nonnegative integers n and m)

∫_0^π cos nx cos mx dx ,   ∫_0^π sin nx cos mx dx ,   ∫_0^{2π} sin nx cos mx dx .

Based on this exercise and Exercise 11.1.3, explain the conditions under which the set of functions sin nx and cos nx can form the basis for an expansion in orthonormal functions.

Solution: Execute symbolic commands to confirm the relations

∫_0^π cos nx cos mx dx = π  (n = m = 0),   (π/2) δnm  (otherwise);

∫_0^π sin nx cos mx dx = 0  (n = m),   n[−1 + (−1)^{n+m}]/(m^2 − n^2)  (n ≠ m);

∫_0^{2π} sin nx cos mx dx = 0 .

In maple, the output of > int(cos(n*x)*cos(m*x),x=0 .. Pi); > int(cos(n*x)*cos(n*x),x=0 .. Pi);

confirms the listed result for all cases except n = m = 0. In that trivial case the integral evaluates to π. The commands

> int(sin(n*x)*cos(m*x), x=0 .. Pi);
> int(sin(n*x)*cos(n*x), x=0 .. Pi);
> int(sin(n*x)*cos(m*x), x=0 .. 2*Pi);
> int(sin(n*x)*cos(n*x), x=0 .. 2*Pi);

yield output that can be reduced to the listed expressions. In mathematica,

Integrate[Cos[n*x]*Cos[m*x], {x, 0, Pi}]
Integrate[Cos[n*x]*Cos[n*x], {x, 0, Pi}]

confirms the listed result for all cases except n = m = 0. In that trivial case the integral evaluates to π. The commands

Integrate[Sin[n*x]*Cos[m*x], {x, 0, Pi}]
Integrate[Sin[n*x]*Cos[n*x], {x, 0, Pi}]
Integrate[Sin[n*x]*Cos[m*x], {x, 0, 2*Pi}]
Integrate[Sin[n*x]*Cos[n*x], {x, 0, 2*Pi}]

yield output that can be reduced to the listed expressions.
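The listed relations can also be sampled numerically; a Python sketch (trapezoidal quadrature; the particular (n, m) pairs tested are arbitrary choices):

```python
import math

def integral(f, a, b, n=20000):
    # composite trapezoid rule
    h = (b - a)/n
    s = 0.5*(f(a) + f(b))
    for k in range(1, n):
        s += f(a + k*h)
    return s*h

# cos*cos over (0, pi): pi for n=m=0, (pi/2)*delta otherwise
cc = {(n, m): integral(lambda x, n=n, m=m: math.cos(n*x)*math.cos(m*x),
                       0.0, math.pi) for n in range(3) for m in range(3)}
# sin*cos over (0, 2*pi): always zero
sc2pi = {(n, m): integral(lambda x, n=n, m=m: math.sin(n*x)*math.cos(m*x),
                          0.0, 2*math.pi) for n in range(1, 4) for m in range(3)}
# sin*cos over (0, pi) for n=2, m=1: formula gives n[-1+(-1)^(n+m)]/(m^2-n^2) = 4/3
sc21 = integral(lambda x: math.sin(2*x)*math.cos(x), 0.0, math.pi)
```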

11.1.5.

You are given two sets of orthonormal functions, {φi(x)} and {χi(x)}. If the expansions of a function f(x) in the two sets are

f(x) = Σ_i ai φi(x) ,   f(x) = Σ_i bi χi(x) ,

(a) Find a formula relating the bi to the ai.

(b) Possibly using a resolution of the identity, show that your formula for the bi is consistent with a direct expansion of f(x) in the χi set.

Solution: (a) Expand the φi in terms of the χi:

φi(x) = Σ_j cji χj(x) ,   cji = ⟨χj|φi⟩ .

Insert this expansion into the left-hand formula for f(x):

f(x) = Σ_{ij} ai cji χj(x) .

Comparing with the right-hand formula and arranging into the standard form for a matrix multiplication:

bj = Σ_i cji ai .

(b) We have, inserting into the left-hand equation ai = ⟨φi|f⟩ and the resolution of the identity Σ_j |χj⟩⟨χj|,

f = Σ_i ai φi = Σ_i |φi⟩⟨φi|f⟩ = Σ_{ij} |χj⟩⟨χj|φi⟩⟨φi|f⟩ = Σ_{ij} cji ai χj .

The final expression corresponds to our new formula for bj.
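The basis-change formula bj = Σ_i cji ai holds in any inner-product space; a minimal finite-dimensional Python illustration (the rotated basis and the test vector are hypothetical example data, not from the exercise):

```python
import math

# Two orthonormal bases of R^2: the standard basis (phi) and a rotated basis (chi)
t = 0.7  # arbitrary rotation angle
phi = [(1.0, 0.0), (0.0, 1.0)]
chi = [(math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t))]

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

f = (2.0, -3.0)                      # the vector being expanded
a = [dot(p, f) for p in phi]         # a_i = <phi_i|f>
b_direct = [dot(c, f) for c in chi]  # b_j = <chi_j|f>, the direct expansion
c = [[dot(chi[j], phi[i]) for i in range(2)] for j in range(2)]  # c_ji
b_formula = [sum(c[j][i]*a[i] for i in range(2)) for j in range(2)]
```

Both routes give the same expansion coefficients in the chi basis.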

11.1.6.

Write symbolic code that will, all as a single procedure,

(a) Obtain the coefficients that describe the expansion of an arbitrary explicit expression f(x) on the interval 0 ≤ x ≤ 1 in terms of the four functions Zi of Exercise 11.1.1, using the scalar product defined in that exercise.

(b) Combine the coefficients and the Zi to obtain an expression that represents the expansion of f(x).

(c) Plot f(x) and its expansion on the same graph, for the interval 0 ≤ x ≤ 1. The maple command for getting these two plots on the same graph is plot([f1,f2], x=0 .. 1); in mathematica, use Plot[{f1,f2},{x,0,1}].

In addition, provide an alternative version of your procedure that plots the single function f1 − f2 so that if the error is small you will still be able to see its behavior. Use your procedures to examine the four-term expansions of e^x and x/(x + 1), and comment on the qualitative behavior of the error in the expansion. In any case where the difference between the function and its expansion is not apparent when both are plotted, examine this difference directly before making your comments.

Solution: First set up the normalized z[i] and the procedure ScalProd as in the solution to Exercise 11.1.1 and then load the plotting procedure plot3. In maple,

> z[0]:=1: z[1]:=2*x-1: z[2]:=6*x^2-6*x+1:
> z[3]:=20*x^3-30*x^2+12*x-1:
> for i from 0 to 3 do z[i]:=z[i]*sqrt(2*i+1) end do:
> ScalProd := proc(u,v); int(u*v, x=0 .. 1); end proc:
> plot3 := proc(f) local a,i,fexp;
>    for i from 0 to 3 do a[i]:=ScalProd(z[i],f); end do;
>    fexp := a[0]*z[0]+a[1]*z[1]+a[2]*z[2]+a[3]*z[3];
>    plot([fexp,f], x = 0 .. 1);
> end proc;

maple will plot f in green and its expansion fexp in red. To use this procedure, enter a command such as

> plot3(exp(x));

To plot the difference between f(x) and its expansion, change the plot command in the above coding to

> plot(fexp-f, x = 0 .. 1);

In mathematica, the corresponding code is

z[0]=1; z[1]=2*x-1; z[2]=6*x^2-6*x+1; z[3]=20*x^3-30*x^2+12*x-1;
Do[z[i] = z[i]*Sqrt[2*i+1], {i, 0, 3}]
ScalProd[u_, v_] := Integrate[u*v, {x, 0, 1}]
plot3[f_] := Module[{a,i,fexp},
   Do[a[i]=ScalProd[z[i],f], {i,0,3}];
   fexp = a[0]*z[0]+a[1]*z[1]+a[2]*z[2]+a[3]*z[3];
   Plot[{fexp,f},{x,0,1}] ]

mathematica will plot f in purple and its expansion fexp in blue. To use this procedure, enter a command such as

plot3[x/(x+1)]

To plot the difference between f(x) and its expansion, change the Plot command to

Plot[fexp-f,{x,0,1}]

The plots produced for e^x (in either symbolic language) are (Left, fexp and f; right, fexp-f):

The plots for x/(x + 1) are (Left, fexp and f; right, fexp-f):
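The size of the error that these plots display can be reproduced numerically; the following Python sketch projects e^x onto the four normalized shifted-Legendre functions used above (trapezoidal quadrature; the grid sizes are arbitrary numerical choices):

```python
import math

# Normalized shifted Legendre basis on [0, 1] (Exercise 11.1.6)
def z(i, x):
    p = [1.0, 2*x - 1, 6*x*x - 6*x + 1, 20*x**3 - 30*x*x + 12*x - 1][i]
    return p * math.sqrt(2*i + 1)

def scalprod(u, v, n=2000):
    # trapezoidal estimate of the unweighted scalar product on (0, 1)
    h = 1.0 / n
    s = 0.0
    for k in range(n + 1):
        x = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * u(x) * v(x)
    return s * h

f = math.exp
a = [scalprod(lambda x, i=i: z(i, x), f) for i in range(4)]

def fexp(x):
    return sum(a[i]*z(i, x) for i in range(4))

# maximum deviation of the four-term expansion from e^x on a sample grid
max_err = max(abs(fexp(k/200) - f(k/200)) for k in range(201))
```

The maximum error of the four-term expansion of e^x is well below 0.01, consistent with the difference plots being needed to see its behavior.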

11.2  GRAM-SCHMIDT ORTHOGONALIZATION

Exercises

11.2.1.

Carry out the symbolic computations of Example 11.2.1 and thereby verify that the procedure of that Example produces the Legendre polynomials at their conventional scaling (see Table 11.1). Check your result against the Legendre polynomials as given in Table 14.1.

Solution: Execute the symbolic code.

11.2.2.

Carry out the symbolic computations of Example 11.2.2 and thereby verify that the procedure of that Example produces the Hermite polynomials at their conventional scaling (see Table 11.1). Check your result against the Hermite polynomials as given in Table 14.4.

Solution: Execute the symbolic code.

11.2.3.

Carry out symbolic computations in which the function set un = xn (starting with n = 0) is orthogonalized using the scalar product deﬁned for the Chebyshev I polynomials and brought to their conventional scaling (see Table 11.1). Check your result against those obtained by symbolic computing.

Solution: Enter the un, define the scalar product (with the Chebyshev I weight (1 − s^2)^{−1/2}), define the first orthonormal function Q[0], and run the recursive procedure to get additional orthonormal Q[n]. In maple,

> for n from 0 to nmax do u[n]:=s^n end do:
> ScalProd := proc(u,v); int(u*v/sqrt(1-s^2), s=-1 .. 1)
> end proc;
> Q[0] := u[0]/sqrt(ScalProd(u[0],u[0]));
> for n from 1 to nmax do
>    QQ := u[n]-add(ScalProd(u[n],Q[j])*Q[j], j=0 .. n-1);
>    Q[n] := QQ/sqrt(ScalProd(QQ,QQ))
> end do;

In mathematica,

Do[ u[n] = s^n, {n, 0, nmax} ]
ScalProd[u_,v_] := Integrate[u*v/Sqrt[1-s^2], {s, -1, 1}]
Q[0] = u[0]/Sqrt[ScalProd[u[0],u[0]] ]
Do[ QQ = u[n]-Sum[ ScalProd[ u[n], Q[j] ]*Q[j], {j,0,n-1}];
    Q[n] = QQ/Sqrt[ ScalProd[QQ,QQ] ], {n, 1, nmax} ]

We complete this Exercise by rescaling the Q[n], which is accomplished by multiplying Q0 by √π and Qn (n ≠ 0) by √(π/2). Do this (and simplify and sort the result) by executing one of the following code sequences:

> T[0] := sqrt(Pi)*Q[0];
> for n from 1 to nmax do
>    T[n] := sort(sqrt(Pi/2) * Q[n]) end do;

T[0] = Sqrt[Pi] * Q[0]
Do[ T[n] = Expand[Simplify[Sqrt[Pi/2]*Q[n]]], {n,1,nmax}]
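The same Gram-Schmidt recursion can be mirrored numerically in Python. Substituting s = cos θ turns the singular Chebyshev I scalar product into an ordinary integral over (0, π); this sketch (grid size an arbitrary numerical choice) rebuilds Q0 through Q3 and rescales them to the conventional Chebyshev polynomials:

```python
import math

def scalprod(u, v, n=2000):
    # <u|v> with weight 1/sqrt(1-s^2), computed via s = cos(theta)
    h = math.pi / n
    s = 0.0
    for k in range(n + 1):
        th = k * h
        w = 0.5 if k in (0, n) else 1.0
        x = math.cos(th)
        s += w * u(x) * v(x)
    return s * h

basis = [lambda s, p=p: s**p for p in range(4)]   # u_n = s^n
Q = []
for u in basis:
    # project out the previously found orthonormal functions
    def qq(s, u=u, proj=[(scalprod(u, q), q) for q in Q]):
        return u(s) - sum(c*q(s) for c, q in proj)
    norm = math.sqrt(scalprod(qq, qq))
    Q.append(lambda s, qq=qq, norm=norm: qq(s)/norm)

# rescale to the conventional Chebyshev polynomials T_n
T = [lambda s, q=Q[0]: math.sqrt(math.pi)*q(s)] + \
    [lambda s, q=q: math.sqrt(math.pi/2)*q(s) for q in Q[1:]]
```

Evaluating the rescaled functions reproduces T2(s) = 2s^2 − 1 and T3(s) = 4s^3 − 3s at sample points.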

11.3  OPERATORS

Exercises

11.3.1.

Show that linear operators A and B have the following properties in common with matrices. Assuming the existence of the quantities involved, (a) (AB)−1 = B −1 A−1 ,

(b) (AB)† = B † A† .

(c) If A and B are Hermitian, identify the conditions under which AB is also Hermitian. (d) If U and V are both unitary, show that U V is also unitary.

Solution: (a) A⁻¹ is an operator such that A⁻¹Af = f for all members f of the Hilbert space. This means that A⁻¹A can be removed from any sequence of operators. To verify that (AB)⁻¹ = B⁻¹A⁻¹, rewrite the formula (AB)⁻¹AB = 1 as B⁻¹A⁻¹AB = B⁻¹B = 1.

(b) Note that ⟨f|ABg⟩ = ⟨A†f|Bg⟩ = ⟨B†A†f|g⟩, which shows that B†A† is the adjoint of AB.

(c) Using the result of part (b) and the fact that A† = A and B† = B, we establish that (AB)† = BA. Thus AB is not Hermitian unless BA = AB, i.e., AB is Hermitian only if A and B commute.

(d) UV is unitary if (UV)† = (UV)⁻¹ = V⁻¹U⁻¹. This equation is satisfied because (UV)† = V†U†, with V† = V⁻¹ and U† = U⁻¹.

11.3.2.

The time-independent Schrödinger equation of an electron in a one-dimensional quantum mechanics problem has (in hartree atomic units) the form Hψ = Eψ, where

H = −(1/2) d²/dx² + V(x) ,

where V is a real multiplicative operator (i.e., it is a real function of x that contains no derivatives or integrals). Show that the operator H is Hermitian if the Hilbert space is for a finite or infinite interval in x, with its members required to vanish at the ends of the interval, and with a scalar product of definition

⟨χ|φ⟩ = ∫ χ* φ dx .

The integral in the scalar product is over the interval relevant to the Hilbert space.

Solution: First note that V is Hermitian, because it is a real multiplicative operator and

⟨χ|V φ⟩ = ∫ χ* V φ dx = ∫ (V χ)* φ dx = ⟨V χ|φ⟩ .

Next note that, because χ and φ both vanish at the ends of the interval on which the scalar product is defined, we can integrate by parts twice:

∫ χ* (d²φ/dx²) dx = − ∫ (dχ*/dx)(dφ/dx) dx = + ∫ (d²χ*/dx²) φ dx .

This equation shows that d²/dx² is also Hermitian for the Hilbert space presently under discussion.

11.3.3.

Explain why A† as given by Eq. (11.32) is the adjoint of A for the vector space deﬁned in the ﬁrst paragraph of Example 11.3.2.

Solution: If we insert A† from Eq. (11.32) into the left member of a scalar product, we get (evaluating the delta functions, assuming them to contribute fully even though they are at the endpoints of the integration interval)

⟨A†f|g⟩ = ∫_{−1}^{1} ( −i [δ(x − 1) − δ(x + 1)] + i d/dx )* f*(x) g(x) dx

        = i [ f*(1)g(1) − f*(−1)g(−1) ] + ⟨ i df/dx | g ⟩ ,

which was shown in the Example to be equal to ⟨f|Ag⟩.

11.4  EIGENVALUE EQUATIONS

Exercises

11.4.1.

Plot the eigenfunctions y1 , y2 , and y3 of Example 11.4.1 at the scale given by Eq. (11.36). From the results, indicate the number of nodes (including those at x = 0 and x = L) exhibited by the yn of general (positive integer) n, and write an equation for the locations of all nodes of yn .

Solution: Generate the plots (for L = 1) with one of > plot(sin(n*Pi*x), x = 0 .. 1); Plot[Sin[n*Pi*x],{x,0,1}]

The eigenfunction yn has n + 1 nodes, at x = jL/n, with j = 0, 1, · · · , n.

11.5  HERMITIAN EIGENVALUE PROBLEMS

Exercises

11.5.1.

Given the operator and scalar product

H = d²/dx² − 2x d/dx ,   ⟨φ|ψ⟩ = ∫_{−∞}^{∞} φ*(x) ψ(x) e^{−x²} dx ,

for a Hilbert space containing all φ(x) such that ⟨φ|φ⟩ is defined,

(a) Show that the operator H is Hermitian.

(b) Show that the following are eigenfunctions of H, and for each determine its eigenvalue:

φ0 = 1 ,   φ1 = 2x ,   φ2 = 4x² − 2 .

(c) Evaluate the integrals ⟨φi|φj⟩ for all pairs of the above φ (including both i = j and i ≠ j). Confirm that in each case, Eq. (11.40) is satisfied.

Solution: (a) Integrate by parts as needed to have all the derivatives apply to φ*:

⟨φ|Hψ⟩ = ∫_{−∞}^{∞} φ*(x) e^{−x²} [ d²ψ/dx² − 2x dψ/dx ] dx

       = ∫_{−∞}^{∞} [ d²/dx² ( e^{−x²} φ* ) ] ψ dx + ∫_{−∞}^{∞} [ d/dx ( 2x e^{−x²} φ* ) ] ψ dx .

Expanding the first integral on the right-hand side,

∫_{−∞}^{∞} [ d²/dx² ( e^{−x²} φ* ) ] ψ dx = ∫_{−∞}^{∞} [ d/dx ( −2x e^{−x²} φ* + e^{−x²} dφ*/dx ) ] ψ dx ,

we note that the first of these expansion terms cancels the second integral of the previous equation, leaving

⟨φ|Hψ⟩ = ∫_{−∞}^{∞} e^{−x²} [ d²φ*/dx² − 2x dφ*/dx ] ψ dx = ⟨Hφ|ψ⟩ .

This result confirms that H is Hermitian.

(b) Apply H to each φi. We get Hφ0 = 0, Hφ1 = −4x, and Hφ2 = −16x² + 8. These results correspond to

Hφ0 = 0 φ0 ,   Hφ1 = −2φ1 ,   Hφ2 = −4φ2 .

The eigenvalues are 0, −2, and −4.

(c) Since all the eigenvalues are distinct, the φi should be orthogonal. By symmetry we confirm that ⟨φ0|φ1⟩ and ⟨φ1|φ2⟩ vanish. The other integrals, which can be checked with symbolic computing, have the values

⟨φ0|φ2⟩ = 0 ,   ⟨φ0|φ0⟩ = √π ,   ⟨φ1|φ1⟩ = 2√π ,   ⟨φ2|φ2⟩ = 8√π .
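The eigenvalue results of part (b) amount to elementary polynomial algebra, which can be checked mechanically; a Python sketch representing each φ as a coefficient list [c0, c1, ...]:

```python
# Verify H phi_i = lambda_i phi_i for H = d^2/dx^2 - 2x d/dx,
# with polynomials stored as coefficient lists [c0, c1, ...].
def deriv(p):
    return [k*c for k, c in enumerate(p)][1:] or [0.0]

def apply_H(p):
    d1, d2 = deriv(p), deriv(deriv(p))
    # -2x * p'(x): shift the first-derivative coefficients up one degree
    term = [0.0] + [-2.0*c for c in d1]
    n = max(len(d2), len(term))
    return [(d2[k] if k < len(d2) else 0.0) + (term[k] if k < len(term) else 0.0)
            for k in range(n)]

phi = {0: [1.0], 1: [0.0, 2.0], 2: [-2.0, 0.0, 4.0]}
lam = {0: 0.0, 1: -2.0, 2: -4.0}
checks = {i: apply_H(phi[i]) for i in phi}
```

Each checks[i] equals lam[i] times phi[i], confirming the eigenvalues 0, −2, −4.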

11.6  STURM-LIOUVILLE THEORY

Exercises

11.6.1.

The operator H of Exercise 11.5.1,

H = d²/dx² − 2x d/dx ,

was found to be self-adjoint when the scalar product contained the weight w(x) = e^{−x²}. Using the method of this section, derive this formula for w(x) from the form of the operator.

Solution: The formula for w(x) is given by Eq. (11.56). In the present problem, p(x) = 1 and q(x) = −2x. We then get

w(x) = exp( ∫ (−2x) dx ) = e^{−x²} .

11.6.2.

(a) Find the scalar product weight and a choice of endpoints that make the following differential operator Hermitian:

(1 − x²) d²/dx² − 2x(m + 1) d/dx .

(b) If in addition it is known that this operator has eigenfunctions φn that are polynomials of successive degrees, find φn for n = 0, 1, 2, and 3.

Solution: (a) Use Eq. (11.56) with p(x) = 1 − x² and q(x) = −2(m + 1)x. Then

w(x) = (1/(1 − x²)) exp( ∫ [−2(m + 1)x/(1 − x²)] dx ) = (1/(1 − x²)) e^{(m+1) ln(1 − x²)} = (1 − x²)^m .

The Sturm-Liouville conditions, Eq. (11.50), are satisfied if we take the endpoints x = ±1, where p(x) = 0.

(b) The operator, which we designate H, is even in x, so its eigenfunctions can be either even or odd. This means that the first few φn (which are also arbitrary in scale) can be chosen to have the forms φ0 = 1, φ1 = x, φ2 = x² + a, and φ3 = x³ + bx. Evaluating,

Hφ0 = 0 ,   Hφ1 = −2(m + 1)x ,   Hφ2 = −(4m + 6)x² + 2 ,   Hφ3 = −(6m + 12)x³ + [6 − 2b(m + 1)]x .

The first three of these results correspond to the following eigenfunctions and eigenvalues:

φ0 = 1, λ0 = 0 ;   φ1 = x, λ1 = −2(m + 1) ;   φ2 = x² − 1/(2m + 3), λ2 = −2(2m + 3) .

From the result for Hφ3, we find λ3 = −6(m + 2), and must solve the following equation for b:

−6(m + 2)b = 6 − 2b(m + 1) ,   yielding b = −3/(2m + 5) .

Thus,

φ3 = x³ − 3x/(2m + 5) ,   λ3 = −6(m + 2) .
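The eigencondition for φ3 is easy to spot-check numerically: applying the operator to the trial cubic x³ + bx and requiring proportionality forces b = −3/(2m + 5) (for m = 0 this reproduces the Legendre polynomial P3 up to scale). A Python sketch using exact derivatives of the cubic (the sample m values and grid are arbitrary choices):

```python
# Check H phi_3 = -6(m+2) phi_3 for H = (1-x^2) d^2/dx^2 - 2(m+1) x d/dx,
# with phi_3 = x^3 - 3x/(2m+5).
def residual(m, x):
    b = -3.0 / (2*m + 5)            # coefficient fixed by the eigencondition
    phi = x**3 + b*x
    d1 = 3*x**2 + b                 # phi'
    d2 = 6*x                        # phi''
    H = (1 - x*x)*d2 - 2*(m + 1)*x*d1
    return H - (-6*(m + 2))*phi     # should vanish identically

worst = max(abs(residual(m, x/10))
            for m in (0, 1, 2, 5) for x in range(-10, 11))
```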

Chapter 12

FOURIER SERIES

12.1  PERIODIC FUNCTIONS

Exercises

12.1.1.

Use appropriate trigonometric identities to show that cos ωt + cos(ωt + θ) = 2 cos(θ/2) cos(ωt + θ/2) .

Solution: Expand cos(ωt + θ) and cos(ωt + θ/2). The equation to be proved then takes the form

cos ωt + cos ωt cos θ − sin ωt sin θ = 2 cos(θ/2) cos ωt cos(θ/2) − 2 cos(θ/2) sin ωt sin(θ/2) .

In the first term of the right-hand side, replace 2 cos²(θ/2) by 1 + cos θ; in the second right-hand term, replace 2 cos(θ/2) sin(θ/2) by sin θ. The two sides of the equation are then identical in form.

12.1.2.

Plot sin(1.5x) over a range of x suﬃcient to identify the smallest interval on which this function is periodic.

Solution: Execute one of > plot(sin(1.5*x),x=0 .. 4*Pi/3); Plot[Sin[1.5*x],{x,0,4*Pi/3}] The function sin(1.5x) is periodic with period 4π/3. Here is its graph.

12.1.3.

(a) Given f (x) = sin(πx/2) deﬁned on the range (0, 2), assume that its extension to x values beyond that range is periodic (with wavelength 2). Sketch or plot f (x) and its periodic extension for the range (−3, 4). (b) For the same f (x), assume it to be deﬁned on the range (−1, 1) and sketch or plot it and its periodic extension for the range (−3, 4). (c) If your answers for parts (a) and (b) diﬀer, explain brieﬂy why.

Solution: (a) One can plot f(x) for (0, 2) and sketch its repetition by hand to span the range (−3, 4). To do this with symbolic computing requires code such as, in maple,

> Psin := proc(xx) local x;
>    x := xx;
>    while (x > 2) do x := x-2 end do;
>    while (x < 0) do x := x+2 end do;
>    sin(Pi*x/2);
> end proc:
> plot('Psin(x)', x=-3 .. 4);

The plot command is confused by the call to Psin if its evaluation is not delayed by adding the single quotes. In mathematica, the corresponding code is

PSin[xx_] := Module[{x}, x = xx;
   While[x > 2, x = x-2];
   While[x < 0, x = x+2];
   Sin[Pi*x/2] ]
Plot[PSin[x], {x, -3, 4}]

(b) To make plots for the periodic extension of f(x) from the range (−1, 1), in the above coding change x > 2 to x > 1 and x < 0 to x < -1. The plots for parts (a) and (b) are shown here:

(c) The plots differ because f(x) is not periodic in an interval of length 2.

12.1.4.

A wave distribution is described by f (x, t) = 4 cos(2x + 3t − 0.4). As perceived by an observer at x = 0.5, (a) Determine the velocity with which the distribution passes the observer, and state the direction in which it is moving. (b) Find the wavelength, frequency, angular frequency, and period of the distribution. (c) Find the times at which the observer will see (i) a (positive) maximum amplitude, and (ii) a node.

Solution: Start by finding the period, which is the time required (at constant x) for 2x + 3t − 0.4 to increase by 2π, i.e., τ = 2π/3. The frequency is then ν = 1/τ = 3/2π and the angular frequency is ω = 2πν = 3. The wavelength is the change in x (at constant t) required for 2x + 3t − 0.4 to increase by 2π, i.e., λ = π. The velocity is v = λν = 3/2. Thus,

(a) The velocity is 3/2; a point of constant f moves to smaller x as t increases; the distribution is moving in the −x direction.

(b) λ = π, ν = 3/2π, ω = 3, τ = 2π/3.

(c) (i) f(x,t) is a maximum when 2x + 3t − 0.4 = 2nπ, with n an integer. For x = 0.5, this condition is 3t + 0.6 = 2nπ, or t = −0.2 + 2nπ/3. (ii) Nodes occur when 2x + 3t − 0.4 = (n + 1/2)π, with n an integer. For x = 0.5, this condition is 3t + 0.6 = (n + 1/2)π, or t = −0.2 + (n + 1/2)π/3.
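The arithmetic in this solution is easy to check directly; a short Python sketch (variable names are illustrative only):

```python
import math

# Wave f(x,t) = 4 cos(2x + 3t - 0.4): wavenumber k = 2, angular frequency omega = 3
k, omega = 2.0, 3.0
tau = 2*math.pi/omega            # period
nu = 1/tau                       # frequency
lam = 2*math.pi/k                # wavelength
v = lam*nu                       # speed of a point of constant phase
t_max = -0.2 + 2*math.pi/3       # first positive maximum time at x = 0.5 (n = 1)
phase_at_max = 2*0.5 + omega*t_max - 0.4   # should be 2*pi
```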

12.1.5.

A sinusoidal wave distribution is moving in the direction of the vector êx + êy at velocity 250 m/s and with angular frequency 1250 s⁻¹. The maximum amplitude of the wave is 3.0 mm. Determine the wavelength and period of the oscillation, and write an equation for its amplitude as a function of position and time.

Solution: From ω = 1250 s⁻¹, ν = 1250/2π s⁻¹, and τ = 2π/1250 s. From λν = v, we find λ = 250/(1250/2π) = 2π/5 m. The amplitude of the wave distribution must be a function of x + y, and a unit change in x + y corresponds to a displacement of the distribution by a distance 1/√2. The wave distribution will therefore have an amplitude given by

ψ(x, y, t) = A sin[ (2π/λ)((x + y)/√2) − ωt + δ ] .

Here δ is an arbitrary phase. The term ωt enters with a minus sign because the wave is to move in the positive direction of êx + êy. Inserting the data for the current problem,

ψ(x, y, t) = 0.003 sin[ (5/√2)(x + y) − 1250 t + δ ] m .

12.1.6.

The fundamental frequency of a string of length 1 m clamped at both ends is 50 s−1 . (This is the standing-wave oscillation of longest wavelength.) On a long string maintained under similar conditions (string density and tension), how long would it take a traveling wave to move 100 m?

Solution: The wavelength of the fundamental frequency is two times the length of the string: λ = 2 m. The velocity is then v = λν = 2 × 50 = 100 m/s. Travel of 100 m would take 1 s.

12.1.7.

A piston is caused to move back and forth horizontally by being attached via a long rod to a point on the circumference of a rotating wheel; see Fig. 12.4. Find (in seconds) the period of oscillation of the piston when the wheel is rotating at 3000 rpm (revolutions per minute).


Figure 12.4: Piston for Exercise 12.1.7.

Solution: The period of the rotating wheel is the same as that of the oscillation of the piston. The wheel has frequency 3000/60 = 50 s⁻¹ and therefore period 1/50 s.

12.1.8.

If two waves of the same wavelength and of unit intensity (in arbitrary units) and traveling in the same direction are superposed, determine the intensity of the combined wave distribution: (a) If the two waves are in phase, (b) If they diﬀer in phase by 30◦ , (c) If they diﬀer in phase by 90◦ , (d) If they diﬀer in phase by 180◦ .

Solution: The intensity is proportional to the square of the amplitude.

(a) The net amplitude is twice that of either wave, so the intensity is increased by a factor 2², to 4 units.

(b) Using Eq. (12.7), the combined wave will (noting that 30° = π/6) have amplitude 2 cos(π/12), and therefore intensity 4 cos²(π/12) = 2 + √3.

(c) Again using Eq. (12.7) and noting that 90° = π/2, the combined wave has amplitude 2 cos(π/4) and intensity 4 cos²(π/4) = 2.

(d) The two waves are (at all times) of the same magnitude but opposite in sign; the resultant intensity is zero.

12.1.9.

A wave of frequency 1 megahertz (10⁶ s⁻¹) is multiplied by a wave of frequency 100 s⁻¹. It is sometimes said that the low-frequency signal is used to modulate (more precisely, "amplitude-modulate") the high-frequency signal. What frequencies will be present in the combined frequency distribution?

Solution: We need to identify the modulated wave as a sum of contributions. For simplicity, assume that the phases of the two signals correspond to the product cos ωt cos αt. We then have

cos ωt cos αt = (1/2)[ cos(ω + α)t + cos(ω − α)t ] .

In the current problem, ω = 2π × 10⁶ and α = 2π × 100, so ω ± α = 2π(10⁶ ± 100). These factors correspond to frequencies 10⁶ ± 100.

12.2  FOURIER EXPANSIONS

Exercises

12.2.1.

Derive the orthogonality/normalization integrals given in Eqs. (12.13)–(12.16).

Solution: For Eq. (12.13), the recommended substitution yields

(1/2) ∫_0^{2π} [cos(n − m)x ± cos(n + m)x] dx   with n ≠ m.

Since n ± m is in all these cases a nonzero integer, the integral vanishes. For Eq. (12.14), the recommended substitution leads to

(1/2) ∫_0^{2π} [sin(n − m)x + sin(n + m)x] dx ,

which vanishes because n ± m is in all these cases an integer. For Eq. (12.15), we have

(1/2) ∫_0^{2π} [1 − cos 2nx] dx .

Because n ≠ 0 the integral of cos(2nx) vanishes; the remaining constant term leads to the claimed result, π. For Eq. (12.16), we have

(1/2) ∫_0^{2π} [1 + cos 2nx] dx .

If n ≠ 0, the integral of cos(2nx) vanishes and we get the result π. But if n = 0, each term of the integral contributes π, so we get the result 2π.

Verify that the Fourier coeﬃcients for the sawtooth wave of Example 12.2.1 have the values given in that Example.

Solution: The coefficient a0 is proportional to the average of the sawtooth over (0, 2π), so must be zero. The an for n ≠ 0 are given by

an = (1/π) [ ∫_0^π x cos nx dx + ∫_π^{2π} (x − 2π) cos nx dx ]
   = (1/π) ∫_0^{2π} x cos nx dx − 2 ∫_π^{2π} cos nx dx .

The first of these integrals can be integrated by parts; the second is easily evaluated directly. We get

an = [ x sin nx/(nπ) ]_0^{2π} − ∫_0^{2π} sin nx/(nπ) dx − 2 [ sin nx/n ]_π^{2π} = 0 .

The coefficients bn (n ≠ 0) are obtained from

bn = (1/π) ∫_0^{2π} x sin nx dx − 2 ∫_π^{2π} sin nx dx
   = − [ x cos nx/(πn) ]_0^{2π} + ∫_0^{2π} cos nx/(πn) dx + 2 [ cos nx/n ]_π^{2π}
   = − 2/n + 0 + 2 [ 1/n − (−1)^n/n ]
   = (−1)^{n−1} 2/n .
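These coefficient values can be verified by direct numerical integration of the sawtooth; a Python sketch (midpoint rule, with the grid aligned so the jump at x = π falls on a subinterval boundary; grid size an arbitrary choice):

```python
import math

# Sawtooth of Example 12.2.1: f(x) = x on (0, pi), x - 2*pi on (pi, 2*pi)
def f(x):
    return x if x <= math.pi else x - 2*math.pi

def coeff(trig, n, pts=20000):
    # (1/pi) * integral of f(x) trig(n x) over (0, 2*pi), midpoint rule
    h = 2*math.pi/pts
    s = 0.0
    for k in range(pts):
        x = (k + 0.5)*h
        s += f(x)*trig(n*x)
    return s*h/math.pi

a = [coeff(math.cos, n) for n in range(1, 6)]   # expect all zero
b = [coeff(math.sin, n) for n in range(1, 6)]   # expect (-1)^(n-1) * 2/n
```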

12.2.3.

Verify the orthogonality and determine the normalization of the functions appearing in the expansion of Eq. (12.21).

Solution: The orthonormalization integrals are all of the form

(1/L) ∫_{−L}^{L} u(nπx/L) v(mπx/L) dx ,

where u and v are either sin or cos. Change the integration variable to y = πx/L, so the arguments of u and v respectively become ny and my. The integration limits become ±π, and dx = (L/π) dy. With these changes, we recover the orthogonality and normalization integrals in the forms presented in Eqs. (12.13) through (12.16), which were verified in Exercise 12.2.1.

Verify that the Fourier coeﬃcients appearing in Example 12.2.3 have the values shown.

Solution: For the cosine series,

a_{2n+1} = (2/π) ∫_0^{π/2} cos(2n + 1)x dx = (2/π) [ sin(2n + 1)x/(2n + 1) ]_0^{π/2} = (2/π) sin(n + 1/2)π/(2n + 1) = 2(−1)^n/((2n + 1)π) .

For n ≠ 0,

a_{2n} = (2/π) ∫_0^{π/2} cos 2nx dx = (2/π) [ sin 2nx/(2n) ]_0^{π/2} = 0 .

However,

a_0 = (2/π) ∫_0^{π/2} dx = 1 .

For the sine series, examine

a_k = (2/π) ∫_0^{π/2} sin kx dx = (2/(kπ)) [1 − cos kπ/2] .

If k = 4n + 1, the factor in brackets is 1 − cos(2nπ + π/2) = 1;
if k = 4n + 2, that factor is 1 − cos(2nπ + π) = 2;
if k = 4n + 3, that factor is 1 − cos(2nπ + 3π/2) = 1;
if k = 4n, that factor is 1 − cos 2nπ = 0.

When inserted into the formula for a_k, we get

a_{4n+1} = 2/((4n + 1)π) ,   a_{4n+2} = 4/((4n + 2)π) ,   a_{4n+3} = 2/((4n + 3)π) ,   a_{4n} = 0 .

Verify that the Fourier coeﬃcients appearing in Example 12.2.5 have the values shown.

Solution: Evaluating

bn = (2/L) ∫_0^L sin(nπx/L) dx = (2/(nπ)) (1 − cos nπ) ,

we note that 1 − cos nπ = 0 if n is even, but 1 − cos nπ = 2 if n is odd. Therefore we have bn = 0 for even n and bn = 4/(nπ) for odd n.

12.3  SYMBOLIC COMPUTING

Exercises

12.3.1.

Enter into your computer the code from Example 12.3.1 and reproduce the plot in Fig. 12.8 (a) With nmax = 10, and (b) For the additional values nmax = 2, 6, and 20.

Solution: Enter the code in the text with appropriate values of nmax. Here are the plots for nmax = 2, 6, and 20.

12.3.2.

(a) Derive the following formula for the Fourier cosine expansion of x on the interval (0, π), and write symbolic code to obtain the coefficients and evaluate the expansion through an arbitrary number of terms.

x = π/2 − (4/π) Σ_{n=0}^{nmax} cos[(2n + 1)x]/(2n + 1)² .

(b) Plot x and the above expansion on the range (−2π, 2π) for nmax = 0, 1, and 4. (c) To obtain a clearer measure of the errors in your expansions, plot the error in the expansion (your expansion minus x) for the range (0, π) for each of the nmax values.

Solution: (a) For f(x) = x, we have

a0 = (2/π) ∫_0^π x dx = (2/π)(π²/2) = π .

For nonzero n, integrating by parts,

an = (2/π) ∫_0^π x cos nx dx = −(2/π) ∫_0^π (sin nx/n) dx = (2/π) (cos nπ − 1)/n² .

But cos nπ = (−1)^n, so an = 0 for even n, while an = −4/(πn²) for odd n. Noting that a sum over odd n can be written as a sum over 2n + 1 for n = 0, 1, 2, · · ·, we write the Fourier cosine series for this problem as

a0/2 + Σ_{n=0}^{∞} a_{2n+1} cos[(2n + 1)x] = π/2 − (4/π) Σ_{n=0}^{∞} cos[(2n + 1)x]/(2n + 1)² .

(b) Make a plot for the series through nmax, in maple, > gg := proc(x) local n; >

Pi/2 - 4/Pi*add(cos((2*n+1)*x)/(2*n+1)^2, n=0 .. nmax)

> end proc; > nmax := 0; plot([x,gg(x)], x = 0 .. Pi); In mathematica, gg[x_] := Module[{n}, Pi/2-4/Pi*Sum[Cos[(2*n+1)*x]/(2*n+1)^2, {n,0,nmax}]] nmax = 0; Plot[{x,gg[x]},{x, 0, Pi}] Here are the plots:

(c) To get the error, change the plot command to one of

> plot(gg(x)-x, x = 0 .. Pi);

Plot[gg[x]-x, {x, 0, Pi}]

Here are the error plots:
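The decay of the error with nmax shown in those plots can be quantified numerically; a Python sketch of the partial sums (sampling grid an arbitrary choice):

```python
import math

# Partial sums of the cosine series x ~ pi/2 - (4/pi) sum cos((2n+1)x)/(2n+1)^2
def g(x, nmax):
    return math.pi/2 - (4/math.pi)*sum(
        math.cos((2*n+1)*x)/(2*n+1)**2 for n in range(nmax+1))

def max_err(nmax, pts=400):
    # maximum deviation from x on a grid over (0, pi)
    return max(abs(g(k*math.pi/pts, nmax) - k*math.pi/pts)
               for k in range(pts+1))

errs = {n: max_err(n) for n in (0, 1, 4)}
```

The maximum error (largest at the endpoints, where the slope of the periodic extension is discontinuous) decreases steadily as nmax grows.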

12.3.3.

(a) Derive formulas for the Fourier series expansion of the half-wave rectifier

V(x) = 0  (−π < x < 0),   V(x) = V0 sin x  (0 < x < π),

and write symbolic code to generate the expansion.

(b) Plot the expansion of V(x) (truncated for three different values of nmax) for the range (−π, π).

Solution: (a) The coefficients are

a0 = (1/π) ∫_0^π V0 sin x dx = 2V0/π .

an = (1/π) ∫_0^π V0 sin x cos nx dx = (V0/2π) ∫_0^π [sin(n + 1)x − sin(n − 1)x] dx

   = (V0/2π) [ −(cos(n + 1)π − 1)/(n + 1) + (cos(n − 1)π − 1)/(n − 1) ]

   = (V0/2π) [(−1)^{n+1} − 1] [ −1/(n + 1) + 1/(n − 1) ] = (V0/π) [(−1)^{n+1} − 1]/(n² − 1)    (n ≠ 0, 1).

The above expression shows that an for odd n vanishes, and an for even n, which we write a_{2n}, has the value

a_{2n} = −2V0/((4n² − 1)π) .

Looking now at the bn, we note that all except b1 vanish because the functions sin nx are orthogonal on the interval (0, π), so

b1 = (V0/π) ∫_0^π sin² x dx = V0/2 ,

bn = (V0/π) ∫_0^π sin x sin nx dx = 0    (n ≠ 1).

Looking now at the bn , we note that all except b1 vanish because the functions sin nx are orthogonal on the interval (0, π), so ∫ V0 π 2 V0 b1 = sin x dx = , π 0 2 ∫ V0 π bn = sin x sin nx dx = 0 (n ̸= 1) . π 0 Symbolic code for the Fourier series is, in maple, > nmax : =20: V0 := 1.:

Set expansion length and V0

> a0:=2*V0/Pi: b1:=V0/2: > for n from 1 to nmax do >

a[n]:= -2*V0/Pi/(4*n^2-1) end do:

> gg := proc(x) local n; > >

a0/2 + b1*sin(x) + add( a[n]*cos(2*n*x),n=1 .. nmax) end proc:

In mathematica,

nmax = 20;  V0 = 1.;                (* set expansion length and V0 *)
a0 = 2*V0/Pi;  b1 = V0/2;
a = Table[0, {n,1,nmax}];
Do[ a[[n]] = -2*V0/Pi/(4*n^2-1), {n, 1, nmax}]
gg[x_] := Module[{n},
   a0/2 + b1*Sin[x] + Sum[a[[n]]*Cos[2*n*x], {n,1,nmax}]]

(b) The truncated series gg can be plotted by executing one of

> plot(gg(x), x = -Pi .. Pi);

Plot[gg[x], {x, -Pi, Pi}]

Plots for nmax = 2, 6, and 10 are shown here.
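The derived coefficient formulas can be cross-checked by numerical integration of the rectified wave itself; a Python sketch for V0 = 1 (midpoint rule, grid size an arbitrary choice):

```python
import math

# Half-wave rectifier with V0 = 1: Fourier coefficients over (-pi, pi)
def V(x):
    return math.sin(x) if 0 < x < math.pi else 0.0

def coeff(trig, n, pts=20000):
    # (1/pi) * integral of V(x) trig(n x) over (-pi, pi)
    h = 2*math.pi/pts
    return sum(V(-math.pi + (k+0.5)*h)*trig(n*(-math.pi + (k+0.5)*h))
               for k in range(pts))*h/math.pi

a0 = coeff(math.cos, 0)   # expect 2/pi
b1 = coeff(math.sin, 1)   # expect 1/2
a2 = coeff(math.cos, 2)   # expect -2/(3*pi)
a4 = coeff(math.cos, 4)   # expect -2/(15*pi)
```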

12.3.4.

(a) Derive formulas for the Fourier series expansion of the full-wave rectiﬁer, V (x) = V0 | sin x|,

0 < x < 2π,

and write symbolic code to generate the expansion. (b) Plot V (x) and its expansion (truncated for three diﬀerent values of nmax ) for the range (0, 2π).

Solution: (a) The computations of Exercise 12.3.3 are useful here. Appealing to symmetry, we conclude that the an needed here have twice the respective values of those in Exercise 12.3.3. However (also by symmetry) the values of all the bn , including b1 , are zero. Symbolic code for the computation of V (x) can be the same as in the earlier exercise if our only change is to replace the code for gg with, for maple, > gg := proc(x) local n; >

a0 + 2*add( a[n]*cos(2*n*x), n=1 .. nmax)

> end proc; or for mathematica gg[x_] := Module[{n}, a0 + 2*Sum[a[[n]]*Cos[2*n*x], {n, 1, nmax}]]

(b) Plots can be generated by one of

> plot([gg(x), V0*abs(sin(x))], x = 0 .. 2*Pi);

Plot[{gg[x], V0*Abs[Sin[x]]}, {x, 0, 2*Pi}]

Here are plots for nmax = 2, 8, and 20.

12.3.5.

Write symbolic code for the coefficients in the Fourier sine series of

x(2 − x) √(1 + x²) e^{−x²}

for the range 0 < x < 2. Do not try to evaluate the necessary integrals analytically; arrange for their numerical evaluation. Then determine the smallest number of terms that must be kept in the sine series to cause the error in the expansion to be no larger than 0.01 at any x value in the interval.

Solution: In maple,

> nmax := 4:  L := 2:
> for n from 1 to nmax do
>    b[n] := (2/L)*int(x*(2-x)*sqrt(1+x^2)*exp(-x^2)*sin(n*Pi*x/L),
>                      x = 0 .. 2) end do:
> gg := proc(x) local n;
>    add(b[n]*sin(n*Pi*x/L), n = 1 .. nmax) end proc;

In mathematica,

nmax = 4;  L = 2;
b = Table[0, {n,1,nmax}];
Do[ b[[n]] = (2/L)*Integrate[x*(2-x)*Sqrt[1+x^2]*E^(-x^2)*Sin[n*Pi*x/L],
     {x, 0, 2}], {n,1,nmax}]
gg[x_] := Module[{n}, Sum[b[[n]]*Sin[n*Pi*x/L], {n, 1, nmax}]]

Now we can plot the expansion gg and its error. In maple,

> plot(gg(x), x = 0 .. 2);
> plot(gg(x)-x*(2-x)*sqrt(1+x^2)*exp(-x^2), x = 0 .. 2);

In mathematica,

Plot[gg[x], {x, 0, 2}]
Plot[gg[x]-x*(2-x)*Sqrt[1+x^2]*E^(-x^2), {x,0,2}]

The maximum error is already less than ±0.01 when nmax = 4. Here are the plots for that nmax value.

12.4

PROPERTIES OF EXPANSIONS

Exercises 12.4.1.

(a) Find the Fourier sine series for x on the range (0, π). (b) Use Parseval’s theorem and the result of part (a) to show that ζ(2) = π 2 /6.

Solution: (a) For the Fourier sine series of x,

b_n = (2/π) ∫₀^π x sin nx dx = −(−1)ⁿ (2/n),

and

x = Σ_{n=1}^∞ (−1)^{n−1} (2/n) sin nx.

(b) Use Eq. (12.33); note that the passage to Eq. (12.34) is for an interval of length 2π and is not appropriate here. If we define the scalar product as an unweighted integral on x = (0, π), the formula needed here is

∫₀^π x² dx = Σ_{n=1}^∞ b_n² ∫₀^π sin² nx dx = (π/2) Σ_{n=1}^∞ 4/n².

Evaluating the x integral to obtain the result π³/3, we have

π³/3 = 2π Σ_{n=1}^∞ 1/n² = 2π ζ(2).

Solving for ζ(2), we get ζ(2) = π²/6.
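The bookkeeping above is easy to confirm numerically. Here is a short Python sketch (ours, standard library only — not part of the text) that sums the b_n² side of Parseval's relation and the reciprocal squares directly:

```python
# Numerical confirmation that (pi/2) * sum b_n^2 -> pi^3/3 and sum 1/n^2 -> pi^2/6,
# with b_n = -(-1)^n * 2/n so that b_n^2 = 4/n^2.
import math

N = 100000
bsq_sum = sum((2.0/n)**2 for n in range(1, N + 1))   # sum of b_n^2
lhs = (math.pi/2)*bsq_sum                            # Parseval left-hand side
zeta2 = sum(1.0/n**2 for n in range(1, N + 1))       # partial sum for zeta(2)
print(lhs, math.pi**3/3, zeta2, math.pi**2/6)
```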

12.4.2. Starting from the Fourier expansion of the square wave,

f(x) = { 1, 0 < x < π;  −1, −π < x < 0 }
     = (4/π) [ sin x/1 + sin 3x/3 + sin 5x/5 + ··· ],

use Parseval's theorem to show that

1 + 1/3² + 1/5² + 1/7² + ··· = π²/8.

Solution: Integrating [f(x)]² and its Fourier expansion from −π to π, we have

∫_{−π}^{π} [f(x)]² dx = 2π = (16/π²) Σ_{n=0}^∞ (1/(2n+1)²) ∫_{−π}^{π} sin²(2n+1)x dx = (16/π) Σ_{n=0}^∞ 1/(2n+1)².

This equation rearranges into the required result.

12.4.3.

(a) Show that

cos ax = (2a sin aπ/π) [ 1/(2a²) − cos x/(a² − 1²) + cos 2x/(a² − 2²) − cos 3x/(a² − 3²) + ··· ].

(b) Integrate term-by-term the expansion of part (a) to obtain a series for sin ax.

(c) Differentiate term-by-term the expansion of part (a) to obtain another series for sin ax.

(d) Show that the series of parts (b) and (c) are consistent with each other.

Hint. Use the Fourier sine series for x, developed in Exercise 12.4.1.

Solution: (a) Because cos ax is an even function of x, its Fourier series will contain only cosine terms. Evaluating these coefficients,

a₀ = (1/π) ∫_{−π}^{π} cos ax dx = 2 sin aπ/(aπ),

a_n = (1/π) ∫_{−π}^{π} cos ax cos nx dx = (1/π) ∫_{−π}^{π} [cos(n + a)x + cos(n − a)x]/2 dx

    = (1/π) [ sin(n + a)x/(2(n + a)) + sin(n − a)x/(2(n − a)) ] evaluated from x = −π to x = π   (n ≠ 0).

Using the fact that sin(n ± a)π = ±(−1)ⁿ sin aπ, the expression for a_n reduces to

a_n = (sin aπ/π) · 2a(−1)ⁿ/(a² − n²).

When these coefficients are used in the Fourier series (with constant term a₀/2), the expansion of part (a) is recovered.

(b) Termwise integration of the series of part (a) on the interval (0, x) and subsequent multiplication by a² leads directly to

a sin ax = (2a sin aπ/π) [ x/2 − a² sin x/(a² − 1²) + a² sin 2x/(2(a² − 2²)) − a² sin 3x/(3(a² − 3²)) + ··· ].

(c) Termwise differentiation of the series of part (a) and subsequent multiplication by −1 yields

a sin ax = −(2a sin aπ/π) [ sin x/(a² − 1²) − 2 sin 2x/(a² − 2²) + 3 sin 3x/(a² − 3²) − ··· ].

(d) To verify that these series are mutually consistent, introduce a Fourier sine series for x/2 in the expansion of part (b) and collect similar terms. The series we need is given in the answer to Exercise 12.4.1; in the form to be used here it is

x/2 = −Σ_{n=1}^∞ (−1)ⁿ sin nx/n.

We now add the sin nx term of this expansion to the corresponding explicit term within the brackets of part (b). For general n, we get

−(−1)ⁿ sin nx/n + (−1)ⁿ a² sin nx/(n(a² − n²)) = (−1)ⁿ sin nx [ −1/n + a²/(n(a² − n²)) ]

    = (−1)ⁿ sin nx [ (−a² + n² + a²)/(n(a² − n²)) ] = (−1)ⁿ sin nx [ n/(a² − n²) ].

This is exactly the same as the sin nx term of the expansion of part (c).
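The part (a) expansion can be spot-checked by summing it numerically at a non-integer a. A small Python sketch (ours; the sample point and tolerance are arbitrary choices):

```python
# Partial sums of (2a sin(a*pi)/pi) * [1/(2a^2) + sum_n (-1)^n cos(nx)/(a^2 - n^2)]
# should approach cos(ax) for non-integer a.
import math

def cos_series(a, x, nmax):
    s = 1/(2*a*a) + sum((-1)**n*math.cos(n*x)/(a*a - n*n)
                        for n in range(1, nmax + 1))
    return (2*a*math.sin(a*math.pi)/math.pi)*s

a, x = 0.5, 1.0
approx = cos_series(a, x, 5000)
print(approx, math.cos(a*x))
```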

12.5

APPLICATIONS

Exercises 12.5.1.

A square-wave signal, of period τ and amplitude

f(t) = { 0, 0 < t < τ/2;  1, τ/2 < t < τ },

is attenuated (in amplitude) by a factor that depends upon frequency ν: e^{−kν}. (a) Generate the Fourier series for the unattenuated signal and (by examining a plot of truncated expansions of f(t) for a time period of length τ) select a truncation of the series that gives a good representation of the square wave. (b) Set the period to τ = 0.001 s, take k = 0.0002 s, and plot the attenuated waveform. Then change k to larger and smaller values and observe the result.

Solution: (a) To compute the Fourier series for this exercise, we require

a₀ = (2/τ) ∫_{τ/2}^{τ} dt = 1,

a_n = (2/τ) ∫_{τ/2}^{τ} cos(2πnt/τ) dt = 0   (n ≠ 0),

b_n = (2/τ) ∫_{τ/2}^{τ} sin(2πnt/τ) dt = ((−1)ⁿ − 1)/(nπ).

The Fourier series is then

f(t) = 1/2 − Σ_{n=0}^∞ (2/((2n + 1)π)) sin(2π(2n + 1)t/τ).

It is convenient for this Exercise to define a function F(nmax, τ, k, t) which produces this Fourier series of f(t) for the indicated values of τ, truncated after nmax terms, and including an attenuation factor e^{−kν}, where ν = (2n + 1)/τ. In maple, this procedure may be written

> FF := proc(nmax, tau, k, t) local n;
>    0.5 - add( 2/(2*n+1)/Pi
>         * sin((2*n+1)*(2*Pi)*t/tau) * exp(-(2*n+1)*k/tau),
>         n = 0 .. nmax)
> end proc:

In mathematica,

FF[nmax_, tau_, k_, t_] := Module[ {n},
   0.5 - Sum[2/(2*n+1)/Pi * Sin[(2*n+1)*(2*Pi)*t/tau]
        * E^(-(2*n+1)*k/tau), {n,0,nmax}] ];

Plotting the unattenuated square wave (setting k = 0), using one of

> plot(FF(nmax,0.001,0,t), t = 0 .. 0.001, -0.01 .. 1.01);
Plot[FF[nmax, 0.001, 0, t], {t, 0, 0.001}, PlotRange -> {-0.01,1.01}]

we get a well-defined square wave at nmax = 80. Note that these plot commands include coding so that the vertical range of the plot will be independent of the choice of k.

(b) Now set k to 0.0002 and other values; larger k causes more attenuation of the higher-frequency square-wave components and a less "square" signal, in the large-k limit reaching a constant signal of amplitude 0.5. Smaller k causes the waveform to approach the square-wave limit. Plotted here are the graphs for k = 0.002, 0.0002, and 0.00002.
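A quick way to validate the series (independently of the plots) is to evaluate the unattenuated truncation at the midpoints of the two half-periods, where it should approach 0 and 1. A Python sketch of the same FF function (ours; stdlib only):

```python
# The series f(t) = 1/2 - sum 2 sin(2*pi*(2n+1)*t/tau)/((2n+1)*pi), damped by
# exp(-(2n+1)*k/tau) when k > 0, should give ~0 at t = tau/4 and ~1 at t = 3*tau/4.
import math

def FF(nmax, tau, k, t):
    return 0.5 - sum(2/((2*n + 1)*math.pi)
                     * math.sin((2*n + 1)*2*math.pi*t/tau)
                     * math.exp(-(2*n + 1)*k/tau)
                     for n in range(nmax + 1))

tau = 0.001
lo = FF(2000, tau, 0.0, tau/4)     # midpoint of the "0" half-period
hi = FF(2000, tau, 0.0, 3*tau/4)   # midpoint of the "1" half-period
print(lo, hi)
```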

12.5.2.

Analyze (as in Example 12.5.2) the energy distribution of the harmonics of a rectified sawtooth wave defined by

f(x) = { 0, −π < x < 0;  x, 0 < x < π }.

It suffices to examine the first eight harmonics.


Solution: The Fourier coefficients for this problem are

a₀ = (1/π) ∫₀^π x dx = π/2,

a_n = (1/π) ∫₀^π x cos nx dx = (−1 + (−1)ⁿ)/(n²π)   (n ≠ 0),

b_n = (1/π) ∫₀^π x sin nx dx = (−1)^{n+1}/n.

The coefficients and the Fourier series can be coded in maple:

> makeAB := proc(nmax) local n; global a, b, a0;
>    a0 := Pi/2;
>    for n from 1 to nmax do
>       a[n] := (-1+(-1)^n)/n^2/Pi;
>       b[n] := (-1)^(n+1)/n; end do; end proc;
> makeF := proc(nmax,x) local n; global a, b, a0;
>    a0/2 + add(a[n]*cos(n*x)+b[n]*sin(n*x), n=1 .. nmax) end proc:

In mathematica,

a = Table[0,{n,1,nmax}]; b = Table[0,{n,1,nmax}];
makeAB[nmax_] := Module[{n}, a0 = Pi/2;
   Do[ a[[n]] = (-1+(-1)^n)/n^2/Pi;
       b[[n]] = (-1)^(n+1)/n, {n, 1, nmax}] ]
makeF[nmax_,x_] := Module[{n},
   a0/2 + Sum[a[[n]]*Cos[n*x] + b[[n]]*Sin[n*x], {n, 1, nmax}] ]

We can check the coding of the Fourier series by executing one of the following:

> nmax := 20: makeAB(nmax); plot(makeF(nmax,x), x=-Pi .. Pi);
nmax = 20; makeAB[nmax]; Plot[makeF[nmax,x], {x,-Pi,Pi}]

A plot for nmax = 60 is shown here.

The intensities of the harmonics are given by I₀ = (a₀/2)² and, for n ≠ 0, I_n = (a_n² + b_n²)/2. The total energy of the sawtooth wave, in the same units, is

(1/2π) ∫₀^π x² dx = π²/6.

The fractional distribution of the energy is then f_n = 6I_n/π². We have (using the a_n and b_n from the computer output): The direct-current energy fraction is f₀ = 0.375; the others are

f1 = 0.427,

f2 = 0.076,

f3 = 0.035,

f4 = 0.019,

f5 = 0.012,

f6 = 0.008,

f7 = 0.006,

f8 = 0.005 .
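These fractions can be regenerated directly from the closed-form coefficients. A Python sketch (ours, stdlib only):

```python
# Recompute f_n = 6*I_n/pi^2 for the rectified sawtooth, with a0 = pi/2,
# a_n = (-1+(-1)^n)/(n^2*pi), b_n = (-1)^(n+1)/n, I0 = (a0/2)^2, I_n = (a_n^2+b_n^2)/2.
import math

pi = math.pi

def I(n):
    if n == 0:
        return (pi/2/2)**2
    a = (-1 + (-1)**n)/(n*n*pi)
    b = (-1)**(n + 1)/n
    return (a*a + b*b)/2

frac = [6*I(n)/pi**2 for n in range(9)]
total = sum(frac)
print([round(x, 3) for x in frac], round(total, 3))
```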

Harmonics zero through 8 account for 96.4% of the incoming energy.

12.5.3.

(a) Obtain the Fourier series expansion of δ(x − x₀), on the range −π < x < π, with x₀ an arbitrary point in the interior of the range.

(b) Show that when this expansion is used in

∫_{−π}^{π} f(x) δ(x − x₀) dx,

you get the expected result, a Fourier series for f(x₀).

Solution: (a) From the coefficient formulas,

a_n = (1/π) ∫_{−π}^{π} δ(x − x₀) cos nx dx = cos nx₀/π,

b_n = (1/π) ∫_{−π}^{π} δ(x − x₀) sin nx dx = sin nx₀/π.

(b) When the expansion of δ(x − x₀) is inserted in the integral, we have

∫_{−π}^{π} f(x) [ 1/(2π) + Σ_{n=1}^∞ (cos nx₀/π) cos nx + Σ_{n=1}^∞ (sin nx₀/π) sin nx ] dx

  = (1/2π) ∫_{−π}^{π} f(x) dx + Σ_{n=1}^∞ cos nx₀ · (1/π) ∫_{−π}^{π} f(x) cos nx dx
    + Σ_{n=1}^∞ sin nx₀ · (1/π) ∫_{−π}^{π} f(x) sin nx dx

  = a₀/2 + Σ_{n=1}^∞ [ a_n cos nx₀ + b_n sin nx₀ ],

where the a_n and b_n in this last line are the coefficients for the Fourier expansion of f(x).
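For a finite trigonometric polynomial f, the truncated δ-expansion reproduces f(x₀) exactly by orthogonality, which makes a clean numerical check. A Python sketch (ours; the test function and truncation order are arbitrary):

```python
# Integrate a trig polynomial f against the truncated expansion of delta(x - x0);
# periodic trapezoidal quadrature is exact here since the integrand is a trig polynomial.
import math

x0, N = 0.7, 5
def f(x):
    return 0.5 + math.cos(x) - 0.25*math.sin(3*x)

def delta_N(y):                         # truncated Fourier expansion of delta
    return 1/(2*math.pi) + sum(math.cos(n*y) for n in range(1, N + 1))/math.pi

M = 2000
h = 2*math.pi/M
integral = h*sum(f(-math.pi + k*h)*delta_N(-math.pi + k*h - x0) for k in range(M))
print(integral, f(x0))
```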

12.5.4.

A musical instrument creates a sound nominally at middle C (frequency ν0 = 262 s−1 ). The actual amplitude proﬁle of the sound wave is, in arbitrary units (written here for one period)  1 1   1.00, −

> makeB := proc(nmax) local n; global b;
>    for n from 1 to nmax do
>       b[n] := 2*(cos(n*Pi)-1.75*cos(n*Pi/2)+0.75)/(n*Pi) end do;
> end proc;
> makeF := proc(nu, nmax, t) local n; global b;
>    add(b[n]*sin(n*2*Pi*nu*t), n = 1 .. nmax)
> end proc:

mathematica code, for truncation after nmax terms:

nmax = 20;  nu = 262;
b = Table[0,{n,1,nmax}];
makeB[nmax_] := Module[{n},
   Do[ b[[n]] = 2*(Cos[n*Pi]-1.75*Cos[n*Pi/2]+0.75)/(n*Pi), {n, 1, nmax} ] ]
makeF[nu_, nmax_, t_] := Module[{n}, Sum[b[[n]]*Sin[n*2*Pi*nu*t], {n,1,nmax}] ]

We can now plot the waveform using one of the following command sequences:

> makeB(nmax): plot(makeF(nu,nmax,t), t = -1/2/nu .. 1/2/nu);
makeB[nmax]; Plot[makeF[nu,nmax,t], {t,-1/2/nu,1/2/nu}]

Plots for nmax = 60 and 600 are shown here.

(c) After makeB has been run for any nmax ≥ 14, the needed bn will be available; the intensities scale as |bn |2 . To scale them so that the fundamental has unit intensity, we need In = |bn /b1 |2 . These quantities, through n = 14, are: I1 = 1, I2 = 49, I3 = 0.111, I4 = 0, I5 = 0.040, I6 = 5.444, I7 = 0.020, I8 = 0, I9 = 0.012, I10 = 1.960, I11 = 0.008, I12 = 0, I13 = 0.006, I14 = 1.000. The dominant harmonic is the second, corresponding to I2 .
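The scaled intensities follow directly from the b_n formula used in the symbolic code; a Python sketch (ours, stdlib only) reproduces the quoted values:

```python
# I_n = |b_n/b_1|^2 with b_n = 2*(cos(n*pi) - 1.75*cos(n*pi/2) + 0.75)/(n*pi).
import math

def b(n):
    return 2*(math.cos(n*math.pi) - 1.75*math.cos(n*math.pi/2) + 0.75)/(n*math.pi)

I = {n: (b(n)/b(1))**2 for n in range(1, 15)}
print({n: round(I[n], 3) for n in I})
```

In particular I[2] comes out to 49, confirming that the second harmonic dominates.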

Chapter 13

INTEGRAL TRANSFORMS 13.1

INTRODUCTION

13.2

FOURIER TRANSFORM

Exercises 13.2.1.

Show in detail how the formulas of Eq. (13.6) follow from Eq. (13.5), including speciﬁcally √ the factors 1/ 2π that premultiply the integrals.

Solution: Identifying nπ/L = ω and replacing the sum by an integral, we have

f(t) = (1/π) ∫_{−∞}^{∞} L c_n e^{iωt} dω,    L c_n = (1/2) ∫_{−L}^{L} f(t) e^{−iωt} dt.

Replacing L c_n by √(π/2) g(ω), we recover the desired formula for g(ω), while that for L c_n becomes the integral formula for f(t).

13.2.2.

Show that the value of δn (0), obtained as the limit x → 0 of Eq. (13.8), is n/π.

Solution: Introduce the power-series expansion of sin nx. At sufficiently small x the series is dominated by its leading term, nx. We then have δ_n(x) → nx/(πx) = n/π.

13.2.3.

Explain in detail how to obtain the Fourier sine and cosine transforms from the corresponding formulas for the (exponential) Fourier transform.

Solution: If f (t) is assumed to be even, and designated fc (t), its Fourier transform can be designated gc (ω) and written as follows: ∫ ∞ ∫ ∞ 1 1 gc (ω) = √ fc (t)(cos ωt + i sin ωt) dt = √ fc (t) cos ωt dt 2π −∞ 2π −∞ √ ∫ ∞ 2 fc (t) cos ωt dt . = π 0 209


CHAPTER 13. INTEGRAL TRANSFORMS From the form of gc we see that it is an even function of ω. We may therefore take its inverse transform by the formula ∫ ∞ ∫ ∞ 1 1 gc (ω)(cos ωt − i sin ωt) dω = √ gc (ω) cos ωt dω fc (t) = √ 2π −∞ 2π −∞ √ ∫ ∞ 2 = gc (ω) cos ωt dω . π 0 These equations show that fc and gc are a transform pair, with the transforms deﬁned as in Eqs. (13.13) and (13.14). If now f (t) is assumed to be odd, and designated fs (t), we can form its Fourier transform as follows: √ ∫ ∞ ∫ ∞ 1 2 T √ fs (t)(cos ωt + i sin ωt) dt = i fs (t) sin ωt dt . [fs (t)] (ω) = π 2π −∞ 0 We write the above formula as [fs (t)]T (ω) = igs (ω), with √ ∫ ∞ 2 gs (ω) = fs (t) sin ωt dt . π 0 The form of gs shows that it is an odd function of ω, so we may take the inverse transform of igs , which yields fs (t), as follows: ∫ ∞ ∫ 1 i(−i) ∞ fs (t) = √ igs (ω)(cos ωt − i sin ωt) dω = √ gs (ω) sin ωt dt 2π −∞ 2π −∞ √ ∫ ∞ 2 gs (ω) sin ωt dt . = π 0 These equations show that fs and gs are also a transform pair, with the transforms deﬁned as in Eqs. (13.15) and (13.16).

13.2.4.

Using Eq. (13.18), show that

(2/π) ∫₀^∞ (sin ω cos ω/ω) dω = 1/2.

Solution: Write sin ω cos ω = (sin 2ω)/2, and change the integration variable to u = 2ω:

(2/π) ∫₀^∞ (sin ω cos ω/ω) dω = (1/π) ∫₀^∞ (sin 2ω/ω) dω = (1/π) ∫₀^∞ (sin u/u) du = 1/2.

The last step involves substitution of the result from Eq. (13.18).

13.2.5.

Evaluate the Fourier transform of a pulse of unit height (like that of Example 13.2.1 but extending from t = 0 to t = 2) (a) By explicit computation of the transform integral, and (b)

By applying the shift formula, Eq. (13.22), to the transform given in the Example.

Verify that both methods give the same result.


Solution: (a) Letting f₁(t) denote the pulse of this Exercise, we write

g₁(ω) = (1/√2π) ∫₀² e^{iωt} dt = (1/√2π) [e^{iωt}/(iω)]₀² = (1/√2π) (e^{2iω} − 1)/(iω)
      = e^{iω} (e^{iω} − e^{−iω})/(√2π iω) = e^{iω} √(2/π) (sin ω/ω).

(b) Noting that

g₀(ω) = √(2/π) (sin ω/ω)

is the Fourier transform of a pulse f₀(t) from t = −1 to t = 1, the pulse f₁(t) can be identified as f₀(t − 1). Then the shift formula tells us that

[f₁(t)]^T = [f₀(t − 1)]^T = e^{iω} g₀(ω).

This agrees with the result found in part (a).

13.2.6.

Establish Eq. (13.20), the formula for the Fourier transform of a Gaussian, by carrying out the following process: 1. Rewrite Eq. (13.19), changing the integration variable to u = at2 and the range to (0, ∞). √ 2. Expand cos(ω u/a) in power series and integrate term-by-term, identifying the integrals as gamma functions of half-integer arguments, 3. Apply the Legendre duplication formula, Eq. (9.12), to simplify the combinations of gamma functions, 4. Identify the resulting summation as g(ω).

Solution: The first two steps of the above prescription lead to

g(ω) = (2/√2π) ∫₀^∞ e^{−u} cos(ω√(u/a)) du/(2√(au))
     = (1/√(2πa)) Σ_{n=0}^∞ (1/(2n)!) (−ω²/a)ⁿ ∫₀^∞ e^{−u} u^{n−1/2} du
     = (1/√(2πa)) Σ_{n=0}^∞ (Γ(n + 1/2)/(2n)!) (−ω²/a)ⁿ.

The duplication formula, Eq. (9.12), can be written

Γ(n + 1/2)/(2n)! = 2^{−2n} √π/n!.

Replacing the left-hand member of this equation by its right member in the formula for g(ω), we reach

g(ω) = (1/√(2a)) Σ_{n=0}^∞ (1/n!) (−ω²/4a)ⁿ = (1/√(2a)) e^{−ω²/4a}.
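The closed form is easy to confirm by direct quadrature of the defining integral. A Python sketch (ours; the cutoff and step are arbitrary accuracy choices):

```python
# For f(t) = exp(-a t^2), check g(w) = (1/sqrt(2 pi)) Integral f(t) cos(w t) dt
# against exp(-w^2/(4a)) / sqrt(2a).
import math

def simpson(g, lo, hi, n=20000):
    h = (hi - lo)/n
    s = g(lo) + g(hi) + sum((4 if k % 2 else 2)*g(lo + k*h) for k in range(1, n))
    return s*h/3

def g_numeric(a, w, cutoff=12.0):
    L = cutoff/math.sqrt(a)              # integrand is negligible beyond here
    return simpson(lambda t: math.exp(-a*t*t)*math.cos(w*t), -L, L)/math.sqrt(2*math.pi)

a, w = 1.0, 1.3
print(g_numeric(a, w), math.exp(-w*w/(4*a))/math.sqrt(2*a))
```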


13.3

FOURIER TRANSFORM: SYMBOLIC COMPUTATION

Exercises 13.3.1.

Using symbolic computation if helpful, find the Fourier transforms of the following:

(a) e^{−|t|},   (b) t e^{−t²},   (c) e^{−|t|} sin t,   (d) u(t) e^{−t},   (e) e^{−t²} cos t,   (f) e^{−t²} sin t.

Here u(t) is the Heaviside (unit step) function.

Solution: To use symbolic computing, enter one of

> fourier(expr, t, w);
FourierTransform[expr, t, w]

Note that maple defines the Fourier transform to be √2π times the definition in this book, and with the sign of i reversed. However, mathematica and this book use the same definition.

(a) Remove the absolute value sign by writing the transform in a way that causes the integration range to be (0, ∞):

g(ω) = (1/√2π) ∫₀^∞ e^{−t} [e^{iωt} + e^{−iωt}] dt = (1/√2π) ∫₀^∞ [e^{(iω−1)t} + e^{(−iω−1)t}] dt
     = (1/√2π) [1/(1 − iω) + 1/(1 + iω)] = √(2/π) · 1/(1 + ω²).

(b) Use symbolic computing. In maple,

> fourier(t*exp(-t^2),t,w);
                −(1/2) I w e^{−w²/4} √π

In mathematica,

FourierTransform[t * E^(-t^2), t, w]
                i e^{−w²/4} w/(2√2)

The maple result needs adjustment to conform to the definition in the book (divide by √2π and change the sign of i). Both systems then yield

g(ω) = iω e^{−ω²/4}/(2√2).

(c) Use symbolic computing. In maple,

> fourier(exp(-abs(t))*sin(t),t,w);
                4Iw/((2 + w² + 2w)(−2 − w² + 2w))

In mathematica,

FourierTransform[E^(-Abs[t])*Sin[t], t, w]
                2i√(2/π) w/(4 + w⁴)

The maple result needs adjustment to conform to the definition in the book (divide by √2π and change the sign of i). Both systems then yield

g(ω) = (1/√2π) · 4iω/(ω⁴ + 4).

(d) The effect of the factor u(t) is to limit the integration range to (0, ∞). Therefore,

g(ω) = (1/√2π) ∫₀^∞ e^{(iω−1)t} dt = (1/√2π) · 1/(1 − iω).

(e) Write cos t = (e^{it} + e^{−it})/2; the Fourier transform of e^{−t²} cos t then takes the form

g(ω) = (1/2√2π) ∫_{−∞}^{∞} [e^{−t²} e^{i(ω+1)t} + e^{−t²} e^{i(ω−1)t}] dt.

These expressions correspond to the transform of e^{−t²} evaluated at the respective points ω + 1 and ω − 1. That transform was given in Eq. (13.20), so we have (setting a = 1 in that equation)

g(ω) = (1/2√2) [e^{−(ω+1)²/4} + e^{−(ω−1)²/4}] = (1/2√2) e^{−(ω²+1)/4} [e^{−ω/2} + e^{ω/2}]
     = (1/√2) e^{−(ω²+1)/4} cosh(ω/2).

(f) Proceed as in part (e), but with sin t = (e^{it} − e^{−it})/2i. We get

g(ω) = (1/2i√2) e^{−(ω²+1)/4} [e^{−ω/2} − e^{ω/2}] = (i/√2) e^{−(ω²+1)/4} sinh(ω/2).

13.3.2. Repeat Exercise 13.3.1, taking Fourier cosine transforms.

Solution: Note that both symbolic systems use the same cosine-transform definition as this book.

(a) Using symbolic computing, execute one of

> fouriercos(exp(-t), t, w);
FourierCosTransform[E^(-t), t, w]

We need not provide an explicit representation for the absolute value because the integral defining the cosine transform is restricted to positive t. Both systems give a result equivalent to

g_c(ω) = √(2/π) · 1/(ω² + 1).

(b) Using symbolic computing, execute one of

> fouriercos(t * exp(-t^2), t, w);
FourierCosTransform[t * E^(-t^2), t, w]

Both systems give equivalent results, but maple's answer contains a summation that can actually be evaluated. This problem illustrates that there are real differences in behavior between the (regular) Fourier transform and the Fourier cosine transform.

(c) Using symbolic computing, execute one of

> fouriercos(exp(-t)*sin(t), t, w);
FourierCosTransform[E^(-t) * Sin[t], t, w]

We need not provide an explicit representation for the absolute value because the integral defining the cosine transform is restricted to positive t. Both systems give a result equivalent to

g_c(ω) = (1/√2π) (4 − 2ω²)/(ω⁴ + 4).

(d) Because the integral defining the transform is restricted to positive t, this transform is the same as that for part (a).

(e) Using symbolic computing, execute one of

> fouriercos(exp(-t^2) * cos(t), t, w);
FourierCosTransform[E^(-t^2) * Cos[t], t, w]

Both systems give a result equivalent to g_c(ω) = (√2/2) e^{−(ω²+1)/4} cosh(ω/2).

(f) Using symbolic computing, execute one of

> fouriercos(exp(-t^2) * sin(t), t, w);
FourierCosTransform[E^(-t^2) * Sin[t], t, w]

Both systems give equivalent results, but it may be difficult to confirm the equivalence. The main message of this exercise is that the Fourier and Fourier cosine transforms do not always give results of comparable complexity.

13.3.3.

Repeat Exercise 13.3.1, taking Fourier sine transforms.

Solution: Note that both symbolic systems use the same sine-transform definition as this book.

(a) Using symbolic computing, execute one of

> fouriersin(exp(-t), t, w);
FourierSinTransform[E^(-t), t, w]

We need not provide an explicit representation for the absolute value because the integral defining the sine transform is restricted to positive t. Both systems give a result equivalent to

g_s(ω) = √(2/π) ω/(ω² + 1).

(b) Using symbolic computing, execute one of

> fouriersin(t*exp(-t^2), t, w);
FourierSinTransform[t*E^(-t^2), t, w]

Both systems give a result equivalent to g_s(ω) = (√2/4) ω e^{−ω²/4}.

(c) Using symbolic computing, execute one of

> fouriersin(exp(-t)*sin(t), t, w);
FourierSinTransform[E^(-t)*Sin[t], t, w]

We need not provide an explicit representation for the absolute value because the integral defining the sine transform is restricted to positive t. Both systems give a result equivalent to

g_s(ω) = 2√(2/π) ω/(4 + ω⁴).

(d) Because the integral defining the transform is restricted to positive t, this transform is the same as that for part (a).

(e) Using symbolic computing, execute one of

> fouriersin(exp(-t^2)*cos(t), t, w);
FourierSinTransform[E^(-t^2)*Cos[t], t, w]

Both systems give equivalent results, but it may be difficult to confirm the equivalence. The main message of this exercise is that the Fourier and Fourier sine transforms do not always give results of comparable complexity.

(f) Using symbolic computing, execute one of

> fouriersin(exp(-t^2)*sin(t), t, w);
FourierSinTransform[E^(-t^2)*Sin[t], t, w]

Both systems give a result equivalent to g_s(ω) = (√2/2) e^{−(ω²+1)/4} sinh(ω/2).

13.3.4.

Find the inverse Fourier transforms of

(a) 1/(1 − iω),   (b) 1/(1 + ω²),   (c) ω² e^{−ω²},   (d) u(ω) e^{−ω} cos ω.

Here u(t) is the Heaviside (unit step) function.

Solution: The maple inverse Fourier transform must be multiplied by √2π and its sign of i reversed to correspond to the definition in this book. mathematica and this book use the same definition.

(a) Use one of

> invfourier(1/(1-I*w), w, t);
InverseFourierTransform[1/(1-I*w), w, t]

Both systems give a result equivalent (with the book's definition) to f(t) = √2π e^{−t} u(t), where u(t) is the Heaviside (unit step) function.

(b) Use one of

> invfourier(1/(1+w^2), w, t);
InverseFourierTransform[1/(1+w^2), w, t]

Both systems give a result equivalent (with the book's definition) to

f(t) = √(π/2) e^{−|t|}.

(c) Use one of

> invfourier(w^2 * exp(-w^2), w, t);
InverseFourierTransform[w^2 * E^(-w^2), w, t]

Both systems give a result equivalent (with the book's definition) to

f(t) = (2 − t²) e^{−t²/4}/(4√2).

(d) Use one of

> invfourier(Heaviside(w) * exp(-w) * cos(w), w, t);
InverseFourierTransform[HeavisideTheta[w] * E^(-w) * Cos[w], w, t]

Both systems give a result equivalent (with the book's definition) to

f(t) = −(1/√2π) (1 + it)/(t² − 2it − 2).

13.3.5.

Find the Fourier sine and Fourier cosine transforms of erfc(t) and verify that application of the respective inverse transforms recovers erfc(t).

Solution: In maple,

> simplify(fouriersin(erfc(t),t,w));
                √2 (1 − e^{−w²/4})/(√π w)

The sine transform is self-inverse, so we invert this transform with

> simplify(fouriersin(%,w,t));
                1 − erf(t)

This answer is correct because erfc(x) is defined as 1 − erf(x). The simplify command was needed to get the results into convenient forms.

> fouriercos(erfc(t),t,w);
                −i√2 e^{−w²/4} erf(iw/2)/(√π w)

The cosine transform is also self-inverse; to invert this transform:

> fouriercos(%,w,t);
                1 − erf(t)

In mathematica,

FourierSinTransform[Erfc[t], t, w]
                (2 − 2e^{−w²/4})/(√2π w)

The sine transform is self-inverse, so we invert this transform with

FourierSinTransform[%, w, t]
                −Erf[t] + Sign[t]

This answer is correct because the transform and its inverse are only relevant for positive t and ω.

FourierCosTransform[Erfc[t], t, w]
                2√2 DawsonF[w/2]/(π w)

The DawsonF function is an error function of imaginary argument; this and the maple result are equivalent. The cosine transform is self-inverse, so we invert this transform with

FourierCosTransform[%, w, t]
                Erfc[t]

13.4

FOURIER TRANSFORM: SOLVING ODEs

Exercises 13.4.1.

Consider the following ODE in the independent variable t y ′′ + 2y ′ + y = f (t) ,

with

f (t) = u(t + 1) − u(t − 1) .

Note that f (t) is the pulse whose Fourier transform was obtained in Example 13.2.1. (a) Obtain a particular solution to this inhomogeneous ODE by the Fourier-transform method described in the present section of this text. (b) Characterize and check your solution (i) By plotting it on the range −2 < t < 10 and (ii) By writing symbolic code to compute the left-hand side of the ODE and plotting that result for −2 < t < 2.

Solution: (a) Taking the Fourier transform of the ODE and using Example 13.2.1 to obtain the transform of f(t),

−ω²Y − 2iωY + Y = √(2/π) (sin ω/ω)  →  Y(1 − 2iω − ω²) = √(2/π) (sin ω/ω).

Solving for Y and taking the inverse Fourier transform (can use symbolic computing):

Y(ω) = √(2/π) sin ω/(ω(1 − iω)²),

y(t) = (t e^{1−t} − 1) u(t − 1) + [1 − (t + 2) e^{−(t+1)}] u(t + 1).

(b) Write the solution in a convenient form, as one of:

> YY := (t*exp(1-t)-1)*Heaviside(t-1)
>       +(1-(t+2)*exp(-t-1))*Heaviside(t+1):

YY = (t*E^(1-t)-1)*HeavisideTheta[t-1]
     + (1-(t+2)*E^(-t-1))*HeavisideTheta[t+1];

We can plot the solution by using one of

> plot(YY, t = -2 .. 10);
Plot[YY, {t,-2,10}]

The left-hand side of the ODE can then be formed and plotted:

> ZZ := diff(YY,t,t)+2*diff(YY,t)+YY:
> plot(ZZ, t = -2 .. 2);

ZZ = D[YY,t,t] + 2*D[YY,t] + YY;
Plot[ZZ, {t,-2,2}]

Both plots are shown here. Left: y(t); Right: left-hand side of ODE.
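Instead of plotting, the residual can be checked pointwise by finite differences. A Python sketch (ours; the step size and sample points are arbitrary, chosen away from the kinks at t = ±1):

```python
# Verify y'' + 2y' + y = u(t+1) - u(t-1) for the particular solution, using
# central differences for the derivatives.
import math

def u(t):                 # unit step
    return 1.0 if t > 0 else 0.0

def y(t):
    return ((t*math.exp(1 - t) - 1)*u(t - 1)
            + (1 - (t + 2)*math.exp(-(t + 1)))*u(t + 1))

def residual(t, h=1e-3):
    d1 = (y(t + h) - y(t - h))/(2*h)
    d2 = (y(t + h) - 2*y(t) + y(t - h))/(h*h)
    return d2 + 2*d1 + y(t)

for t in (-1.5, 0.0, 0.5, 3.0):
    print(t, residual(t), u(t + 1) - u(t - 1))
```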

13.4.2.

Use Fourier-transform methods to obtain a particular solution to the inhomogeneous ODE y ′′ (x) − 2y ′ (x) + y(x) = e−|x| .

Plot and check your solution as in Exercise 13.4.1.

Solution: From Exercise 13.3.1 we have [e^{−|x|}]^T = √(2/π) · 1/(1 + ω²). Therefore, the Fourier transform of our ODE is

−ω²Y + 2iωY + Y = √(2/π) · 1/(1 + ω²).

We now solve for Y (the Fourier transform of the ODE solution) and, using symbolic computing, we invert the transform to reach

y(x) = (1/4) [e^{−x} u(x) + e^{x}(1 − 2x + 2x²) u(−x)].

We plot y(x) for the range (−10, 10). We now form the left-hand side of the ODE as in Exercise 13.4.1 and plot it for the range (−2, 2). Both plots are shown below (with y(x) on the left).
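The same finite-difference residual test applies here. A Python sketch (ours; sample points kept away from the kink at x = 0):

```python
# Verify y'' - 2y' + y = exp(-|x|) for
# y(x) = (1/4)[exp(-x) u(x) + exp(x)(1 - 2x + 2x^2) u(-x)].
import math

def u(x):
    return 1.0 if x > 0 else 0.0

def y(x):
    return 0.25*(math.exp(-x)*u(x) + math.exp(x)*(1 - 2*x + 2*x*x)*u(-x))

def residual(x, h=1e-3):
    d1 = (y(x + h) - y(x - h))/(2*h)
    d2 = (y(x + h) - 2*y(x) + y(x - h))/(h*h)
    return d2 - 2*d1 + y(x)

for x in (-2.0, -0.5, 0.5, 2.0):
    print(x, residual(x), math.exp(-abs(x)))
```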


13.4.3.

(a) Use the unit step (Heaviside) function to describe a triangular pulse

f(x) = { 1 − x, 0 < x < 1;  1 + x, −1 < x < 0;  0, |x| > 1 },

and use your symbolic system to find its Fourier transform.

(b) Use Fourier-transform methods to obtain a particular solution to

y′′(x) − 2y′(x) + y(x) = f(x).

Plot and check your solution as in Exercise 13.4.1.

Solution: (a) The triangular pulse can be described as

f(x) = (x + 1)u(x + 1) − 2x u(x) + (x − 1)u(x − 1).

Its Fourier transform (by symbolic computing) is F(ω) = 2√(2/π) sin²(ω/2)/ω².

(b) We take the Fourier transform of our ODE:

−ω²Y + 2iωY + Y = 2√(2/π) sin²(ω/2)/ω².

We now solve for Y (the Fourier transform of the ODE solution) and, using symbolic computing, we invert the transform to reach

y(x) = [2x + 4 + (2x − 4)e^x]u(−x) − [x + 3 − (1 − x)e^{x+1}]u(−x − 1) − [x + 1 + (3 − x)e^{x−1}]u(−x + 1).

We now plot y(x) (left panel) and y′′ − 2y′ + y (right panel).


13.5

FOURIER CONVOLUTION THEOREM

Exercises 13.5.1.

Given f(t) = (t² + 1)⁻¹ and g(t) = t/(t² + 1), evaluate the following convolutions:

(a) f∗f,   (b) g∗g,   (c) f∗g.

Solution: Use your symbolic computing system to evaluate the following convolution integrals:

(a) f∗f = (1/√2π) ∫_{−∞}^{∞} [1/(u² + 1)] [1/((t − u)² + 1)] du = √2π/(t² + 4),

(b) g∗g = (1/√2π) ∫_{−∞}^{∞} [u/(u² + 1)] [(t − u)/((t − u)² + 1)] du = −√2π/(t² + 4),

(c) f∗g = (1/√2π) ∫_{−∞}^{∞} [1/(u² + 1)] [(t − u)/((t − u)² + 1)] du = √(π/2) t/(t² + 4).
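Part (a) can be confirmed by brute-force quadrature of the convolution integral. A Python sketch (ours; the truncation window and step are accuracy assumptions, justified by the u⁻⁴ decay of the integrand):

```python
# Numerical check: (1/sqrt(2 pi)) Integral du / ((u^2+1)((t-u)^2+1)) = sqrt(2 pi)/(t^2+4).
import math

def conv_ff(t, L=400.0, n=160000):       # composite Simpson on [-L, L], n even
    h = 2*L/n
    s = 0.0
    for k in range(n + 1):
        u_ = -L + k*h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        s += w/((u_*u_ + 1)*((t - u_)**2 + 1))
    return s*h/3/math.sqrt(2*math.pi)

t = 1.0
print(conv_ff(t), math.sqrt(2*math.pi)/(t*t + 4))
```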

You are given the following Fourier transforms:

f(t) = 1/(t² + 1),    f^T(s) = F(s) = √(π/2) e^{−|s|},

g(t) = t/(t² + 1),    g^T(s) = G(s) = i√(π/2) sign(s) e^{−|s|},

where sign(u) = 1 if u > 0 and −1 if u < 0. Form the convolution f∗g and show that its transform is the product F(s)G(s), as required by the convolution theorem. Note. The answers of Exercise 13.5.1 may be helpful.

Solution: From Exercise 13.5.1, we have f∗g = √(π/2) t/(t² + 4). Taking the Fourier transform using symbolic computation, we find

[f∗g]^T = (iπ/2) [e^{−2s} u(s) − e^{2s} u(−s)] = (iπ/2) e^{−2|s|} [u(s) − u(−s)].

We also have F(s)G(s) = (iπ/2) e^{−2|s|} sign(s). These results are in agreement because sign(s) = u(s) − u(−s).

13.5.3.

Using symbolic computation, evaluate the integral

I = ∫_{−∞}^{∞} [(x cos x − sin x)/x²]² dx

in the following two ways: (a) By a direct call to a symbolic integration procedure, and (b) By using the Parseval relation.


Solution: (a) Direct integration; use one of the following:

> int(((x*cos(x)-sin(x))/x^2)^2, x=-infinity .. infinity);
                π/3
Integrate[((x*Cos[x]-Sin[x])/x^2)^2, {x,-Infinity,Infinity}]
                π/3

(b) The first step toward use of the Parseval relation is to take the Fourier transform of the quantity in square brackets. Use one of:

> F := fourier((x*cos(x)-sin(x))/x^2,x,s);
                F := iπ s (Heaviside(s + 1) − Heaviside(s − 1))

Changing the sign of i and multiplying by 1/√2π, this becomes

F(s) = −i√(π/2) s [u(s + 1) − u(s − 1)].

FourierTransform[(x*Cos[x]-Sin[x])/x^2,x,s]
                (1/2)√(π/2) i s (Sign[−1 + s] − Sign[1 + s])

These step or sign functions cause the product F F* to be nonzero only on the range −1 ≤ s ≤ 1, with F F* having in that interval the value πs²/2. The Parseval integral is therefore

(π/2) ∫_{−1}^{1} s² ds = π/3.

13.6 LAPLACE TRANSFORM

Exercises 13.6.1.

Take Laplace transforms by hand computation, and check your work by symbolic computation. (a) sin 4(t − 3), (b) (2t − 1)², (c) t^{3/2}.

Solution: (a) Use the fact that f(t) = sin 4(t − 3) is the imaginary part of e^{4i(t−3)}. Then

F(s) = Im ∫₀^∞ e^{−ts+4i(t−3)} dt = Im (e^{−12i}/(s − 4i)) = (4 cos 12 − s sin 12)/(s² + 16).

This result can be confirmed by executing one of

> laplace(sin(4*(t-3)), t, s);
LaplaceTransform[Sin[4*(t-3)], t, s]

(b) Make a change of variables from t to u = ts in the integral that defines the Laplace transform. Then, for f(t) = (2t − 1)²,

F(s) = ∫₀^∞ e^{−u} (2u/s − 1)² (du/s) = ∫₀^∞ e^{−u} [4u²/s³ − 4u/s² + 1/s] du
     = 4·2!/s³ − 4·1!/s² + 1·0!/s = (8 − 4s + s²)/s³.

This result can be confirmed by executing one of

> laplace((2*t-1)^2, t, s);
LaplaceTransform[(2*t-1)^2, t, s]

(c) Make a change of variables from t to u = ts in the integral that defines the Laplace transform. Then, for f(t) = t^{3/2},

F(s) = ∫₀^∞ e^{−u} (u/s)^{3/2} (du/s) = Γ(5/2)/s^{5/2} = 3√π/(4s^{5/2}).

This result can be confirmed by executing one of

> laplace(t^(3/2), t, s);
LaplaceTransform[t^(3/2), t, s]

13.6.2.

Take inverse Laplace transforms, using symbolic computation if helpful.

(a) s²/(s² + 4)²,   (b) e^{−6s}/(s − 1),   (c) 1/(s(s + 1)(s + 2)).

Solution: (a) Using one of

> invlaplace(s^2/(s^2+4)^2, s, t);
InverseLaplaceTransform[s^2/(s^2+4)^2, s, t]

Either command yields a result equivalent to f(t) = (1/4) sin 2t + (1/2) t cos 2t.

(b) Using one of

> invlaplace(exp(-6*s)/(s-1), s, t);
InverseLaplaceTransform[E^(-6*s)/(s-1), s, t]

Either command produces a result equivalent to f(t) = u(t − 6) e^{t−6}.

(c) Using one of

> invlaplace(1/s/(s+1)/(s+2), s, t);
InverseLaplaceTransform[1/s/(s+1)/(s+2), s, t]

Either command produces a result equivalent to f(t) = 1/2 − e^{−t} + (1/2) e^{−2t}.

13.6.3.

By hand computation, take the Laplace transforms of the six pulses shown in Fig. 13.2. Check your work using symbolic computation.

Figure 13.2: Pulse wave forms for Exercise 13.6.3 (six unit-height pulses, panels (a)–(f); the curved pulses are 4x(1 − x) and sin πx).

Hint. For symbolic computation describe the pulses using the unit step function (Heaviside(x) or HeavisideTheta[x]).

Solution: The pulses cause the integration limits of the Laplace transform definition to be limited. This is taken into account when we write the integrals.

(a) Here f(t) = 1 on the interval (0, 1) and is zero elsewhere. Therefore,

F(s) = ∫₀¹ e^{−ts} dt = (1 − e^{−s})/s.

To confirm with symbolic computation, define f(t) as 1 − u(t − 1), and therefore execute one of

> laplace(1-Heaviside(t-1),t,s);
LaplaceTransform[1 - HeavisideTheta[t - 1], t, s]

Either command produces a result equivalent to the above F(s).

(b) The integral for F(s) is

F(s) = ∫₀¹ t e^{−ts} dt = (1 − (s + 1)e^{−s})/s².

The above result is most directly obtained by integrating by parts. To conﬁrm with symbolic computation, deﬁne f (t) as t[1 − u(t − 1)], and therefore execute one of > laplace(t*(1-Heaviside(t-1)),t,s); LaplaceTransform[t*(1 - HeavisideTheta[t - 1]), t, s] Either command produces a result equivalent to the above F (s).

(c) The integral for F(s) is

F(s) = ∫₀¹ 4t(1 − t) e^{−ts} dt = (4/s³)(s − 2 + (s + 2)e^{−s}).

The above evaluation requires two integrations by parts. To confirm, define f(t) as 4t(1 − t)[1 − u(t − 1)], and therefore execute one of

laplace(4*t*(1-t)*(1-Heaviside(t-1)),t,s);
LaplaceTransform[4*t*(1-t)*(1-HeavisideTheta[t-1]), t, s]

Either command produces a result equivalent to the above F(s).

(d) The integral for F(s) is

F(s) = ∫₀² (1 − t) e^{−ts} dt = (s − 1 + (s + 1)e^{−2s})/s².

Again use an integration by parts. To confirm with symbolic computation, define f(t) as (1 − t)[1 − u(t − 2)], and therefore execute one of

laplace((1-t)*(1-Heaviside(t-2)),t,s);
LaplaceTransform[(1-t)*(1-HeavisideTheta[t-2]), t, s]

Either command produces a result equivalent to the above F(s).

(e) The integral for F(s) is

F(s) = ∫₀¹ t e^{−ts} dt + ∫₁² (t − 2) e^{−ts} dt = (1 − 2s e^{−s} − e^{−2s})/s².

Again use an integration by parts. To conﬁrm, deﬁne f (t) as t − 2u(t − 1) − (t − 2)u(t − 2), and therefore execute one of laplace(t-2*Heaviside(t-1)-(t-2)*Heaviside(t-2),t,s); LaplaceTransform[t-2*HeavisideTheta[t-1] -(t-2)*HeavisideTheta[t-2],t,s] Either command produces a result equivalent to the above F (s).

(f) The integral for F(s) is
F(s) = ∫₀¹ sin πt e^{−ts} dt = π(1 + e^{−s})/(s² + π²).

The integral can be evaluated using techniques similar to those described in connection with Eq. (3.34). To conﬁrm with symbolic computation, deﬁne f (t) as sin πt[1 − u(t − 1)], and therefore execute one of laplace(sin(Pi*t)*(1-Heaviside(t-1)),t,s); LaplaceTransform[Sin[Pi*t]*(1-HeavisideTheta[t-1]),t,s] Either command produces a result equivalent to the above F (s).
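All six closed forms above can also be spot-checked numerically, independently of Maple or Mathematica. The sketch below uses only the Python standard library; the helper names `simpson` and `laplace_pulse` are ours, not part of any library, and the value s = 1.3 is an arbitrary test point.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i*h) * (4 if i % 2 else 2)
    return total * h / 3

def laplace_pulse(f, s, a, b):
    # Laplace transform of a signal that vanishes outside [a, b]
    return simpson(lambda t: f(t) * math.exp(-s*t), a, b)

s = 1.3  # arbitrary positive test value
checks = [
    (laplace_pulse(lambda t: 1.0, s, 0, 1),
     (1 - math.exp(-s)) / s),                                                # (a)
    (laplace_pulse(lambda t: t, s, 0, 1),
     (1 - (s + 1)*math.exp(-s)) / s**2),                                     # (b)
    (laplace_pulse(lambda t: 4*t*(1 - t), s, 0, 1),
     4*(s - 2 + (s + 2)*math.exp(-s)) / s**3),                               # (c)
    (laplace_pulse(lambda t: 1 - t, s, 0, 2),
     (s - 1 + (s + 1)*math.exp(-2*s)) / s**2),                               # (d)
    (laplace_pulse(lambda t: t, s, 0, 1) + laplace_pulse(lambda t: t - 2, s, 1, 2),
     (1 - 2*s*math.exp(-s) - math.exp(-2*s)) / s**2),                        # (e)
    (laplace_pulse(lambda t: math.sin(math.pi*t), s, 0, 1),
     math.pi*(1 + math.exp(-s)) / (s**2 + math.pi**2)),                      # (f)
]
for numeric, closed in checks:
    assert abs(numeric - closed) < 1e-8
```

For (e) the integral is split at t = 1 so that Simpson's rule is applied only to smooth pieces.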

13.6.4. Check the Laplace convolution theorem by comparing L[(f ∗ g)] and FG, with f = t² − 4 and g = sin 2t.

Solution: Using symbolic computing, find F(s) = (2 − 4s²)/s³ and G(s) = 2/(s² + 4). Now compute
f ∗ g = ∫₀ᵗ [(t − u)² − 4] sin 2u du = (9/4)(cos 2t − 1) + t²/2.
We can take the Laplace transform of f ∗ g by noting that its individual terms correspond to entries 1, 2, and 5 in Table 13.1. The result is
(9/4) s/(s² + 4) − (9/4)(1/s) + 1/s³ = [9s⁴/4 − 9s²(s² + 4)/4 + (s² + 4)]/[s³(s² + 4)] = (−8s² + 4)/[s³(s² + 4)],
in agreement with direct computation of the product F(s)G(s).

13.6.5. Find the inverse Laplace transform of H(s) = e^{−as}/s by writing it as FG, where F = e^{−as} and G = 1/s, and then applying the convolution theorem.

Solution: The function f(t) with transform F(s) is δ(t − a); this can be verified by inserting this f(t) in the transform definition. The function whose transform is 1/s can be identified from Table 13.1 as g(t) = 1. The function whose transform is FG is f ∗ g, which we can compute directly as
f ∗ g = ∫₀ᵗ δ(u − a) du = 1 for t > a, and 0 otherwise.

13.6.6. Apply the Laplace convolution theorem to e^{−as} Y(s) and thereby obtain a confirmation of Entry 15 in Table 13.1.
Note. The vanishing of the result for t < a can be made explicit by writing u(t − a)y(t − a), where u is the unit step function.

Solution: Taking F(s) = e^{−as}, which has inverse transform f(t) = δ(t − a), we note that F(s)Y(s) is the transform of f ∗ y. Computing this convolution,
f ∗ y = ∫₀ᵗ δ(u − a) y(t − u) du = y(t − a) for t > a, and 0 otherwise.
This result can be written f ∗ y = u(t − a) y(t − a).
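The convolution evaluated in Exercise 13.6.4 lends itself to a direct numerical spot check. A plain-Python sketch (the helper names `convolve_at` and `closed_form` are ours) compares Simpson-rule values of ∫₀ᵗ [(t − u)² − 4] sin 2u du with the closed form found there:

```python
import math

def convolve_at(t, n=4000):
    # Simpson's-rule value of Integral_0^t [(t-u)^2 - 4] sin(2u) du
    h = t / n
    g = lambda u: ((t - u)**2 - 4) * math.sin(2*u)
    total = g(0.0) + g(t)
    for i in range(1, n):
        total += g(i*h) * (4 if i % 2 else 2)
    return total * h / 3

def closed_form(t):
    # the result obtained in Exercise 13.6.4
    return 9/4 * (math.cos(2*t) - 1) + t**2 / 2

for t in (0.5, 1.0, 2.0, 3.7):
    assert abs(convolve_at(t) - closed_form(t)) < 1e-8
```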

13.7

LAPLACE TRANSFORM: SOLVING ODEs

Exercises 13.7.1.

Verify the steps in the solution of the ODE in Example 13.7.1, including the use of symbolic computation to conﬁrm that the inverse transform of Y (s) is the form given for y(t). Then check that y(t) satisﬁes both the ODE and the initial conditions.

Solution: The steps leading to the formula for Y are straightforward. To invert Y, use one of
> invlaplace((s^2+4)/s/(s-2)^2,s,t);
InverseLaplaceTransform[(s^2 + 4)/s/(s - 2)^2, s, t]
Both these commands produce y(t) = 1 + 4t e^{2t}. Setting t = 0, we clearly get y(0) = 1. Computing y′(t) = (8t + 4)e^{2t}, we confirm that y′(0) = 4. To check that the ODE is satisfied, we also need y″(t) = 16(t + 1)e^{2t}. The substitution of these quantities into the ODE yields an identity.

13.7.2.

Repeat the procedure of Exercise 13.7.1 to check the solution of the ODE in Example 13.7.2.

Solution: The procedure for obtaining Y is straightforward. We can invert the transform by hand or by using one of
> invlaplace(1/(s+2)^2, s, t);
InverseLaplaceTransform[1/(s+2)^2, s, t]
Both these commands produce y(t) = t e^{−2t}. Clearly y(0) = 0. From y′(t) = (1 − 2t)e^{−2t}, we also verify y′(0) = 1. Using in addition y″(t) = 4(t − 1)e^{−2t}, we confirm that the ODE is satisfied.

13.7.3.

Verify that the simultaneous ODEs of Example 13.7.3 have the solution given in that Example. Include a check of the initial conditions on y(0) and z(0).

Solution: From the form of y(t) and z(t), it is clear that y(0) = 0 and z(0) = 1. To check that the ODEs are satisfied, we need also
y′(x) = −2 cos(√3 x),  z′(x) = −√3 sin(√3 x) − cos(√3 x).
Satisfaction of the ODEs follows from substitution for y, y′, z, and z′.

Use Laplace transforms to solve the following ODEs (in the independent variable t) subject to the indicated initial conditions. Use symbolic computing methods where helpful. Verify that your solutions satisfy both the ODE and the initial conditions.

Solve each of these ODEs by taking its Laplace transform and solving the resulting algebraic equation for Y(s), the transform of y(t). Use the formula
y″(t) + p y′(t) + q y(t) = f(t)  −→  [s²Y(s) − s y(0) − y′(0)] + p[sY(s) − y(0)] + qY(s) = F(s),
from which we obtain
Y(s) = [F(s) + (s + p)y(0) + y′(0)] / (s² + ps + q).


Obtain F (s), the transform of f (t), by symbolic computing, using maple: > with(inttrans): > laplace(f (t),t,s); or mathematica: LaplaceTransform[f (t),t,s] Then solve for Y (s), and take the inverse transform to recover y(t). In maple, > invlaplace(Y (s),s,t); In mathematica, InverseLaplaceTransform[Y (s),s,t] 13.7.4.

y″ − 4y = 3e^{−t}, with y(0) = 1, y′(0) = −1.

Solution: Here F(s) = L{3e^{−t}} = 3/(s + 1). Then
Y(s) = [F(s) + s y(0) + y′(0)]/(s² − 4) = [3/(s + 1) + s − 1]/(s² − 4).
The transform inversion yields y(t) = (1/2)(e^{2t} + 3e^{−2t} − 2e^{−t}).
Check: y(0) = (1/2)(1 + 3 − 2) = 1;  y′(0) = (1/2)(2 − 6 + 2) = −1.

13.7.5.

y″ + 2y′ + 5y = 10 cos t, with y(0) = 1, y′(0) = 2.

Solution: Here F(s) = L{10 cos t} = 10s/(s² + 1). Then
Y(s) = [F(s) + (s + 2)y(0) + y′(0)]/(s² + 2s + 5) = [10s/(s² + 1) + s + 4]/(s² + 2s + 5).
The transform inversion yields y(t) = 2 cos t + sin t − e^{−t} cos 2t.
Check: y(0) = 2 + 0 − 1 = 1;  y′(0) = 0 + 1 + 1 = 2.

13.7.6. y″ + 2y′ + 5y = 10 cos t, with y(0) = 2, y′(0) = 1.

Solution: Here F(s) = L{10 cos t} = 10s/(s² + 1). Then
Y(s) = [F(s) + (s + 2)y(0) + y′(0)]/(s² + 2s + 5) = [10s/(s² + 1) + 2s + 5]/(s² + 2s + 5).
The transform inversion yields y(t) = 2 cos t + sin t.
Check: y(0) = 2 + 0 = 2;  y′(0) = 0 + 1 = 1.
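The checks performed above — substitute y, y′, y″ into the ODE and evaluate the initial conditions — can be mechanized numerically. A minimal plain-Python sketch (the helper `check` is ours; derivatives are coded by hand from the solutions of Exercises 13.7.4 and 13.7.5):

```python
import math

def check(y, dy, d2y, p, q, f, y0, dy0):
    # verify initial conditions and the residual of y'' + p y' + q y = f(t)
    assert abs(y(0) - y0) < 1e-12 and abs(dy(0) - dy0) < 1e-12
    for t in (0.3, 1.1, 2.4):
        assert abs(d2y(t) + p*dy(t) + q*y(t) - f(t)) < 1e-9

# Exercise 13.7.4: y'' - 4y = 3 e^{-t}
check(lambda t: (math.exp(2*t) + 3*math.exp(-2*t) - 2*math.exp(-t)) / 2,
      lambda t: (2*math.exp(2*t) - 6*math.exp(-2*t) + 2*math.exp(-t)) / 2,
      lambda t: (4*math.exp(2*t) + 12*math.exp(-2*t) - 2*math.exp(-t)) / 2,
      0, -4, lambda t: 3*math.exp(-t), 1, -1)

# Exercise 13.7.5: y'' + 2y' + 5y = 10 cos t
check(lambda t: 2*math.cos(t) + math.sin(t) - math.exp(-t)*math.cos(2*t),
      lambda t: (-2*math.sin(t) + math.cos(t)
                 + math.exp(-t)*(math.cos(2*t) + 2*math.sin(2*t))),
      lambda t: (-2*math.cos(t) - math.sin(t)
                 + math.exp(-t)*(3*math.cos(2*t) - 4*math.sin(2*t))),
      2, 5, lambda t: 10*math.cos(t), 1, 2)
```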

13.7.7. y″ + y = sin t, with y(0) = 0, y′(0) = 1.

Solution: Here F(s) = L{sin t} = 1/(s² + 1). Then
Y(s) = [F(s) + s y(0) + y′(0)]/(s² + 1) = [1/(s² + 1) + 1]/(s² + 1).
The transform inversion yields y(t) = (1/2)(3 sin t − t cos t).
Check: y(0) = (1/2)(0 − 0) = 0;  y′(0) = (1/2)(3 − 1) = 1.

13.7.8.

y″ + y = sin t, with y(0) = 1, y′(0) = 0.

Solution: Here F(s) = L{sin t} = 1/(s² + 1). Then
Y(s) = [F(s) + s y(0) + y′(0)]/(s² + 1) = [1/(s² + 1) + s]/(s² + 1).
The transform inversion yields y(t) = (1/2)[sin t − (t − 2) cos t].
Check: y(0) = (1/2)(0 + 2) = 1;  y′(0) = (1/2)(1 − 1) = 0.

13.7.9.

y″ − y = sin t, with y(0) = 0, y′(0) = 0.

Solution: Here F(s) = L{sin t} = 1/(s² + 1). Then
Y(s) = [F(s) + s y(0) + y′(0)]/(s² − 1) = [1/(s² + 1)]/(s² − 1).
The transform inversion yields y(t) = (1/2)(sinh t − sin t).
Check: y(0) = (1/2)(0 − 0) = 0;  y′(0) = (1/2)(1 − 1) = 0.

13.7.10.

y″ − y = sin t, with y(0) = 1, y′(0) = −1.

Solution: Here F(s) = L{sin t} = 1/(s² + 1). Then
Y(s) = [F(s) + s y(0) + y′(0)]/(s² − 1) = [1/(s² + 1) + s − 1]/(s² − 1).
The transform inversion yields y(t) = (1/4)(e^t + 3e^{−t} − 2 sin t).
Check: y(0) = (1/4)(1 + 3 − 0) = 1;  y′(0) = (1/4)(1 − 3 − 2) = −1.
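Since Exercises 13.7.7–13.7.10 all have f(t) = sin t, their four inversions can be checked at once with a finite-difference second derivative instead of hand-coded ones. A plain-Python sketch (the helper `residual` is ours; the step h = 10⁻⁴ is an assumption balancing truncation against roundoff):

```python
import math

def residual(y, sign, t, h=1e-4):
    # finite-difference residual of y'' + sign*y - sin t
    d2 = (y(t + h) - 2*y(t) + y(t - h)) / h**2
    return d2 + sign*y(t) - math.sin(t)

solutions = [
    (lambda t: 0.5*(3*math.sin(t) - t*math.cos(t)), +1),                  # 13.7.7
    (lambda t: 0.5*(math.sin(t) - (t - 2)*math.cos(t)), +1),              # 13.7.8
    (lambda t: 0.5*(math.sinh(t) - math.sin(t)), -1),                     # 13.7.9
    (lambda t: 0.25*(math.exp(t) + 3*math.exp(-t) - 2*math.sin(t)), -1),  # 13.7.10
]
for y, sign in solutions:
    for t in (0.4, 1.3, 2.2):
        assert abs(residual(y, sign, t)) < 1e-5
```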

13.7.11.

Use symbolic computing to check that y(x) and z(x) as found in Example 13.7.3 are the inverse Laplace transforms of Y and Z and that y and z constitute a solution to the ODE system of that Exercise (including its initial conditions).

Solution: Taking the inverse transforms of Y and Z, in maple:
> invlaplace(-2/(s^2+3),s,t);
−(2/√3) sin(√3 t)    [this is y(t)]
> invlaplace((s-1)/(s^2+3), s, t);
cos(√3 t) − (1/√3) sin(√3 t)    [this is z(t)]
In mathematica, these inverse transforms are
InverseLaplaceTransform[-2/(s^2 + 3), s, t]
InverseLaplaceTransform[(s - 1)/(s^2 + 3), s, t]
the second of which evaluates to (Sqrt[3] Cos[Sqrt[3] t] − Sin[Sqrt[3] t])/Sqrt[3].
To check that y(t) and z(t) satisfy the ODE system, form
dy/dt = −2 cos(√3 t),  dz/dt = −√3 sin(√3 t) − cos(√3 t).
Substitution of y, z, y′, and z′ into the ODEs confirms that they are satisfied. It is also easy to check that y(0) = 0 and z(0) = 1.

Use Laplace transforms to solve the following sets of simultaneous ODEs subject to the indicated initial conditions. Use symbolic computing methods where helpful. Verify that your solutions satisfy both the ODEs and the initial conditions.

13.7.12.

y ′ (t) + z(t) = 2 cos t , z ′ (t) − y(t) = 1 ,

with y(0) = 0, z(0) = 0.

Solution: Take the Laplace transforms of the equations. Use the relations L{2 cos t} = 2s/(s² + 1) and L{1} = 1/s. Also note the initial values y(0) = z(0) = 0:
sY + Z = 2s/(s² + 1),
−Y + sZ = 1/s.
These algebraic equations have solution
Y = −1/s + s/(1 + s²) + 2s²/(1 + s²)²,  Z = 1/(1 + s²) + 2s/(1 + s²)².
Now, taking the inverse Laplace transforms of Y and Z (either by table look-up or symbolic computing),

Now, taking the inverse Laplace transforms of Y and Z (either by table look-up or symbolic computing), y(t) = (1 + t) cos t + sin t − 1 ,

z(t) = (1 + t) sin t .

Checking the initial conditions, y(0) = (1 + 0) cos(0) + 0 − 1 = 0,

z(0) = (1 + 0) sin(0) = 0.

To check the ODE solutions, compute y ′ (t) = 2 cos t − (1 + t) sin t ,

z ′ (t) = sin t + (1 + t) cos t .

It is now straightforward to verify that the two ODEs are satisfied.

13.7.13. y″(t) + z″(t) − z′(t) = 0,
y′(t) + z′(t) − 2z(t) = 1 − eᵗ,
with y(0) = 0, y′(0) = 1, z(0) = 1, z′(0) = 1.

Solution: Take the Laplace transforms of the equations. Use the relation L{1 − eᵗ} = 1/s − 1/(s − 1). Also note the initial values y(0) = 0 and y′(0) = z(0) = z′(0) = 1:
[s²Y − s y(0) − y′(0)] + [s²Z − s z(0) − z′(0)] − [sZ − z(0)] = 0,
[sY − y(0)] + [sZ − z(0)] − 2Z = 1/s − 1/(s − 1),
which reduce to
s²Y + s(s − 1)Z = s + 1,
sY + (s − 2)Z = 1 + 1/s − 1/(s − 1).
Solving these equations for Y and Z, we get
Y = 1/s²,  Z = 1/(s − 1),
corresponding to y(t) = t and z(t) = eᵗ. To check the solution, we also need y′(t) = 1 and z′(t) = eᵗ.

13.7.14.

Use symbolic computing to verify that the inverse transform given in Eq. (13.71) yields the expression for y(t) given in Eq. (13.70).

Solution: In maple,
> invlaplace(s*(2*s-1)/(2*s-3)/(s+2)/(s-1),s,t);
(6/7) e^{3t/2} + (10/21) e^{−2t} − (1/3) eᵗ.
In mathematica,
InverseLaplaceTransform[s*(2*s-1)/(2*s-3)/(s+2)/(s-1), s, t]
(1/21) e^{−2t} (10 − 7 e^{3t} + 18 e^{7t/2}).

13.7.15.

Use the convolution theorem to solve 2y ′′ + y ′ − 6y = sin x , with y(0) = 2 and y ′ (0) = −1.


Solution: Following the analysis presented in Eqs. (13.65)–(13.68), we identify the roots of 2s² + s − 6 as α = −2 and β = 3/2, and write
y(x) = φ(x) ∗ sin x + 2y(0)φ′(x) + [2y′(0) + y(0)]φ(x) = φ(x) ∗ sin x + 4φ′(x).
Here, referring to Eq. (13.68),
φ(x) = (1/7)(e^{3x/2} − e^{−2x}),  φ′(x) = (1/7)[(3/2)e^{3x/2} + 2e^{−2x}].
To complete this exercise we need to evaluate the convolution φ(x) ∗ sin x:
φ(x) ∗ sin x = ∫₀ˣ (1/7)[e^{3(x−u)/2} − e^{−2(x−u)}] sin u du
  = (e^{3x/2}/7) ∫₀ˣ e^{−3u/2} sin u du − (e^{−2x}/7) ∫₀ˣ e^{2u} sin u du
  = (1/91)[4e^{3x/2} − 4 cos x − 6 sin x] − (1/35)[e^{−2x} − cos x + 2 sin x].
We now get
y(x) = φ(x) ∗ sin x + 4φ′(x) = (82/91)e^{3x/2} + (39/35)e^{−2x} − (cos x + 8 sin x)/65.

13.7.16.

By solving the following ODEs subject to y(t − t₀) = y′(t − t₀) = 0, find the response functions for
(a) y″ − 4y′ − 5y = δ(t − t₀),
(b) y″ + 7y′ + 10y = δ(t − t₀),
(c) y″ − 4y = δ(t − t₀).

Solution: The response functions are the solutions y(t) of these ODEs. Using the fact that L{δ(t)} = 1, take the Laplace transform of the ODE, solve for Y, and take the inverse transform. For these ODEs, we can invert the transform by hand if we start by factoring its denominator and making a partial-fraction decomposition. Alternatively, we can use symbolic computing. In either case, use t − t₀ as the direct-space variable.
(a) We have s²Y − 4sY − 5Y = 1. Therefore,
Y = 1/(s² − 4s − 5),  y(t − t₀) = (1/3) e^{2(t−t₀)} sinh 3(t − t₀).
(b) Here s²Y + 7sY + 10Y = 1, yielding
Y = 1/(s² + 7s + 10),  y(t − t₀) = (1/3)[e^{−2(t−t₀)} − e^{−5(t−t₀)}].
(c) Here s²Y − 4Y = 1, yielding
Y = 1/(s² − 4),  y(t − t₀) = (1/2) sinh 2(t − t₀).
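Each response function should satisfy its homogeneous ODE for t > t₀ and have zero value and unit slope at t = t₀. In the shifted variable u = t − t₀ this is easy to verify numerically; a plain-Python sketch with finite differences (helper names `fd1`, `fd2` are ours):

```python
import math

def fd1(y, u, h=1e-4):
    # central-difference first derivative
    return (y(u + h) - y(u - h)) / (2*h)

def fd2(y, u, h=1e-4):
    # central-difference second derivative
    return (y(u + h) - 2*y(u) + y(u - h)) / h**2

ya = lambda u: math.exp(2*u) * math.sinh(3*u) / 3        # (a)
yb = lambda u: (math.exp(-2*u) - math.exp(-5*u)) / 3     # (b)
yc = lambda u: math.sinh(2*u) / 2                        # (c)

for u in (0.3, 0.9):
    assert abs(fd2(ya, u) - 4*fd1(ya, u) - 5*ya(u)) < 1e-4
    assert abs(fd2(yb, u) + 7*fd1(yb, u) + 10*yb(u)) < 1e-5
    assert abs(fd2(yc, u) - 4*yc(u)) < 1e-5
# zero value and unit slope at u = 0 (i.e., at t = t0)
for y in (ya, yb, yc):
    assert abs(y(0.0)) < 1e-12 and abs(fd1(y, 0.0) - 1) < 1e-6
```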

Chapter 14

SERIES SOLUTIONS: IMPORTANT ODEs

14.1

SERIES SOLUTIONS OF ODEs

Exercises

Obtain series solutions for the following ODEs.

14.1.1.

y′ = 3x²y.

Solution: Inserting the series expansion y = Σ_{n=0}^∞ aₙ x^{n+s},
Σ_{n=0}^∞ (n + s)aₙ x^{n+s−1} = 3 Σ_{n=0}^∞ aₙ x^{n+s+2}.
Shifting the index up by three in the first summation (so both summations will show the same explicit power of x),
s a₀ x^{s−1} + (s + 1)a₁ xˢ + (s + 2)a₂ x^{s+1} + Σ_{n=0}^∞ [(n + s + 3)a_{n+3} − 3aₙ] x^{n+s+2} = 0.
The three extra terms from the first sum must individually vanish, leading to the indicial equations
s a₀ = 0,  (s + 1)a₁ = 0,  (s + 2)a₂ = 0.
Since a₀ must be nonzero, we have s = 0; the second and third of the above equations can then only be satisfied if a₁ = a₂ = 0. From the requirement that the coefficient of each power of x in the summed terms vanish, we have (setting s = 0) the recurrence formula
(n + 3)a_{n+3} − 3aₙ = 0,  or  a_{n+3} = 3aₙ/(n + 3).
We have already shown that a₁ = a₂ = 0, so this recurrence formula shows that the only nonzero coefficients will be a₀, a₃, a₆, etc., and that the general formula (with a₀ set to unity) will be
a_{3n} = 3ⁿ/[3 · 6 · · · (3n)] = 1/n!.
Our series solution is then (for general a₀)
y(x) = C Σ_{n=0}^∞ x^{3n}/n! = C e^{x³}.
For this ODE, the solution is equivalent to a closed form. We can check our solution by noting that y′(x) = 3x² C e^{x³}.

14.1.2.

2xy ′′ − y ′ + 2y = 0 .

Solution: Inserting the series expansion y = Σ_{n=0}^∞ aₙ x^{n+s},
2 Σ_{n=0}^∞ (n + s)(n + s − 1)aₙ x^{n+s−1} − Σ_{n=0}^∞ (n + s)aₙ x^{n+s−1} + 2 Σ_{n=0}^∞ aₙ x^{n+s} = 0.
Shifting the index up by one in the first two summations (so all summations will show the same explicit power of x), and combining the sums,
2s(s − 1)a₀ x^{s−1} − s a₀ x^{s−1} + Σ_{n=0}^∞ [2(n + s + 1)(n + s)a_{n+1} − (n + s + 1)a_{n+1} + 2aₙ] x^{n+s} = 0.
The extra terms from the first two sums must combine to give a vanishing result, so we have the indicial equation
s(2s − 3)a₀ = 0,
showing that the possible values of s are s = 0 and s = 3/2. Taking first s = 0, the requirement that the coefficients in the summed terms vanish leads to the recurrence formula
a_{n+1} = −2aₙ/[(n + 1)(2n − 1)].
Setting a₀ = 1, this recurrence formula yields
y₁(x) = 1 + 2x + Σ_{n=2}^∞ [(−1)^{n−1} 2ⁿ/(n!(2n − 3)!!)] xⁿ.
Now taking s = 3/2, the recurrence formula becomes
a_{n+1} = −2aₙ/[(n + 1)(2n + 5)].
Setting a₀ = 1, the recurrence formula yields
y₂(x) = Σ_{n=0}^∞ [3(−1)ⁿ 2ⁿ/(n!(2n + 3)!!)] x^{n+3/2}.
The ODE therefore has general solution C₁y₁(x) + C₂y₂(x).

14.1.3.

2xy ′′ + y ′ + 2y = 0 .

Solution: The first few steps for this ODE are similar to those of Exercise 14.1.2 except for a change of sign in the y′ term. After introducing a power-series expansion and shifting the index, we have
2s(s − 1)a₀ x^{s−1} + s a₀ x^{s−1} + Σ_{n=0}^∞ [2(n + s + 1)(n + s)a_{n+1} + (n + s + 1)a_{n+1} + 2aₙ] x^{n+s} = 0,
with indicial equation s(2s − 1)a₀ = 0, yielding s = 0 and s = 1/2. For s = 0 and a₀ = 1 we have the recurrence formula and solution
a_{n+1} = −2aₙ/[(n + 1)(2n + 1)],  y₁(x) = Σ_{n=0}^∞ [(−1)ⁿ 2ⁿ/(n!(2n − 1)!!)] xⁿ.
For s = 1/2 and a₀ = 1 we have
a_{n+1} = −2aₙ/[(2n + 3)(n + 1)],  y₂(x) = Σ_{n=0}^∞ [(−1)ⁿ 2ⁿ/(n!(2n + 1)!!)] x^{n+1/2}.
The general solution to this ODE is C₁y₁(x) + C₂y₂(x).
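Both pairs of series solutions (Exercises 14.1.2 and 14.1.3) can be checked numerically by building the coefficients directly from the recurrence formulas and testing the ODE residual with finite differences. A plain-Python sketch (the helpers `series_y` and `fd` are ours; 40 terms and h = 10⁻⁵ are assumptions):

```python
def series_y(s, rec, x, nterms=40):
    # y = sum a_n x^{n+s}, with a_0 = 1 and a_{n+1} = rec(n) * a_n
    a, total = 1.0, 0.0
    for n in range(nterms):
        total += a * x**(n + s)
        a *= rec(n)
    return total

def fd(y, x, h=1e-5):
    # central-difference first and second derivatives
    d1 = (y(x + h) - y(x - h)) / (2*h)
    d2 = (y(x + h) - 2*y(x) + y(x - h)) / h**2
    return d1, d2

# Exercise 14.1.2: 2x y'' - y' + 2y = 0
for s, rec in ((0.0, lambda n: -2.0/((n + 1)*(2*n - 1))),
               (1.5, lambda n: -2.0/((n + 1)*(2*n + 5)))):
    y = lambda t, s=s, rec=rec: series_y(s, rec, t)
    for x in (0.3, 0.8):
        d1, d2 = fd(y, x)
        assert abs(2*x*d2 - d1 + 2*y(x)) < 1e-4

# Exercise 14.1.3: 2x y'' + y' + 2y = 0
for s, rec in ((0.0, lambda n: -2.0/((n + 1)*(2*n + 1))),
               (0.5, lambda n: -2.0/((2*n + 3)*(n + 1)))):
    y = lambda t, s=s, rec=rec: series_y(s, rec, t)
    for x in (0.3, 0.8):
        d1, d2 = fd(y, x)
        assert abs(2*x*d2 + d1 + 2*y(x)) < 1e-4
```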

14.2

LEGENDRE EQUATION

Exercises 14.2.1.

Plot Pₙ(x) on the range (−1, +1) for integer n from 0 through 8 (it is recommended that you do this on separate plots).
(a) For each Pₙ, count the number of zeros on the plotted range and compare this number to n.
(b) Identify the point (or points) where each Pₙ has its maximum magnitude, and note that maximum value.
(c) Determine from your plots whether each Pₙ has a definite parity, and if so, identify that parity (even or odd).

Solution: The command to make the plots of Pₙ(x) is one of
> plot(LegendreP(n,x), x=-1 .. 1);
Plot[LegendreP[n,x], {x,-1,1}]
Substitute for n the index value for which a plot is desired.
(a) All n zeros of Pₙ(x) lie in the range −1 < x < 1.
(b) Each Pₙ(x) has maximum magnitude (equal to 1) at x = +1 and x = −1.
(c) The Pₙ of even n have even parity; those of odd n have odd parity.

14.2.2.

Find the three roots of P3 (x). Hint. One way to do this is by plotting P3 and then restricting the range of the plot to the neighborhood of a zero until its location can be read out to suﬃcient precision.

Solution: A plot that will give a rather accurate value of a nonzero root of P₃ can be produced by one of
> plot(LegendreP(3,x), x=0.774 .. 0.775);
Plot[LegendreP[3,x], {x, 0.774, 0.775}]
Such a plot shows the root to be at 0.7746. Because of the odd symmetry, the other roots are at −0.7746 and zero.

14.2.3.

Express f (x) = 3x3 − 2x2 − 3x + 5 as a linear combination of Legendre functions. Hint. Start by ﬁnding the coeﬃcient of P3 , denoted c3 . Then observe that f (x) − c3 P3 (x) is a polynomial of degree 2. Continue with the coeﬃcients of P2 , P1 , and ﬁnally P0 .

Solution: Given that P₃(x) = (5/2)x³ − (3/2)x, we have c₃ = 6/5. Because P₃ is odd, we can immediately find c₂, given that P₂(x) = (3/2)x² − 1/2: c₂ = −4/3. Then
f(x) − c₃P₃(x) − c₂P₂(x) = −(6/5)x + 13/3.
The remainder of the above is equivalent to −(6/5)P₁(x) + (13/3)P₀(x). Thus,
f(x) = (6/5)P₃(x) − (4/3)P₂(x) − (6/5)P₁(x) + (13/3)P₀(x).
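This expansion is easy to confirm numerically. The sketch below builds Pₙ(x) from the Bonnet recurrence (the helper `legendre_P` is ours, not a library call) and compares the expansion with f(x) at sample points:

```python
def legendre_P(n, x):
    # Bonnet recurrence: (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1)*x*p1 - k*p0) / (k + 1)
    return p1

coef = {3: 6/5, 2: -4/3, 1: -6/5, 0: 13/3}
for x in (-0.9, -0.2, 0.4, 0.8):
    f = 3*x**3 - 2*x**2 - 3*x + 5
    expansion = sum(c * legendre_P(n, x) for n, c in coef.items())
    assert abs(f - expansion) < 1e-12
```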

14.2.4. Differentiate the Legendre generating function, Eq. (14.19), with respect to x, and thereby establish Eqs. (14.32) and (14.33). Hint. It may be necessary to use Eq. (14.31), which was derived in the text.

Solution: Start from
∂g(x, t)/∂x = t/(1 − 2xt + t²)^{3/2} = Σ_{n=0}^∞ P′ₙ(x) tⁿ.
Rearrange to
t g(x, t) = Σₙ Pₙ(x) t^{n+1} = (1 − 2xt + t²) Σₙ P′ₙ(x) tⁿ = Σₙ [P′ₙ(x)tⁿ − 2xP′ₙ(x)t^{n+1} + P′ₙ(x)t^{n+2}].
Equating the coefficients of individual powers of t, we get the following, which we call Eq. (A):
Pₙ(x) − P′_{n+1}(x) + 2xP′ₙ(x) − P′_{n−1}(x) = 0.
To bring this equation to the forms required by this Exercise, combine it with the result of differentiating the recurrence formula, Eq. (14.31). That differentiation yields
(n + 1)P′_{n+1}(x) − (2n + 1)Pₙ(x) − (2n + 1)xP′ₙ(x) + nP′_{n−1}(x) = 0.
Adding n times the first of these equations to the second, we obtain
P′_{n+1}(x) − (n + 1)Pₙ(x) − xP′ₙ(x) = 0,
a result equivalent to Eq. (14.33). If we then add two times this equation to Eq. (A), we get a result equivalent to Eq. (14.32).

14.2.5.

Show that given either of Eqs. (14.34) and (14.35), one can derive the other.

Solution: Use Eq. (14.30) in the form (n + 1)xPn (x) = −nxPn (x) + nPn−1 (x) + (n + 1)Pn+1 (x) . When this is substituted for (n+1)xPn (x) in Eq. (14.35), one obtains Eq. (14.34). 14.2.6.

By explicitly generating the ﬁrst few terms in the expansion of the Legendre generating function, obtain an expression for P4 (x). Hint. This may require ﬁrst an expansion of (1 − 2xt + t2 )−1/2 and then a further expansion of some of the resulting terms.

Solution: Form the binomial expansion of (1 + w)^{−1/2}, where w = −2xt + t², but keep only the terms that have t⁴ dependence. There are three such terms,
(3/8)(−2xt + t²)² − (5/16)(−2xt + t²)³ + (35/128)(−2xt + t²)⁴.
Now retain only the t⁴ terms when the powers of −2xt + t² are expanded:
(3/8)t⁴ − (5/16)·3(−2x)²t⁴ + (35/128)(−2x)⁴t⁴.
This simplifies to [(35/8)x⁴ − (15/4)x² + 3/8] t⁴. The coefficient of t⁴ is P₄(x).

14.2.7.

Verify that Eq. (14.23) is an identity.

Solution: Using g_x and g_t to denote the partial derivatives of g(x, t), note that
t (∂²/∂t²)[t g(x, t)] = 2t g_t + t² g_tt,
and that
g_t = (x − t)/(1 − 2xt + t²)^{3/2},  g_tt = 3(x − t)²/(1 − 2xt + t²)^{5/2} − 1/(1 − 2xt + t²)^{3/2},
g_x = t/(1 − 2xt + t²)^{3/2},  g_xx = 3t²/(1 − 2xt + t²)^{5/2}.
The equation to be proved then corresponds to
(1 − x²)(3t²)/Y^{5/2} − 2xt/Y^{3/2} + 2t(x − t)/Y^{3/2} − t²/Y^{3/2} + 3t²(x − t)²/Y^{5/2} = 0,
where Y = 1 − 2xt + t². Collecting similar terms, the above reduces to
−3t²/Y^{3/2} + (3/Y^{5/2})(t² − 2xt³ + t⁴) = 0.
The parenthesized expression in the second term is just t²Y, establishing the desired identity.

14.2.8.

Show that P′ₙ(1) = n(n + 1)/2.

Solution: Subtract Eq. (14.33) from Eq. (14.32), with x = 1. The result can be written
P′ₙ(1) − P′_{n−1}(1) = nPₙ(1) = n.
Since we know that P′₀ = 0, the above gives
P′ₙ(1) = Σ_{j=1}^n j = n(n + 1)/2.
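The result can also be confirmed numerically with the Bonnet recurrence and a central difference at x = 1 (a plain-Python sketch; `legendre_P` is our own helper and h = 10⁻⁶ is an assumed step):

```python
def legendre_P(n, x):
    # Bonnet recurrence: (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1)*x*p1 - k*p0) / (k + 1)
    return p1

h = 1e-6
for n in range(1, 8):
    dP_at_1 = (legendre_P(n, 1 + h) - legendre_P(n, 1 - h)) / (2*h)
    assert abs(dP_at_1 - n*(n + 1)/2) < 1e-5
```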

14.2.9.

Referring to the discussion between Eqs. (14.27) and (14.28), show that for a two-charge system, if the total charge Q is zero, the dipole moment µ remains the same if the two charges are moved the same distance along the z-axis (meaning that a1 → a1 + d and a2 → a2 + d, with d arbitrary). Then show that the quadrupole moment ω remains unchanged under this position shift if both Q and µ are zero.

Solution: Given that originally µ = a₁q₁ + a₂q₂, a change of origin that causes a₁ → a₁ + d and a₂ → a₂ + d changes µ to (a₁ + d)q₁ + (a₂ + d)q₂ = (a₁q₁ + a₂q₂) + (q₁ + q₂)d; the term containing d vanishes if q₁ + q₂ = Q = 0. For the quadrupole moment, write
(a₁ + d)²q₁ + (a₂ + d)²q₂ = (a₁²q₁ + a₂²q₂) + 2d(a₁q₁ + a₂q₂) + d²(q₁ + q₂).
This expression reduces to the original value of ω if a₁q₁ + a₂q₂ = µ and q₁ + q₂ = Q are both zero.

14.2.10.

Enter into your symbolic system the recurrence-formula coding of Example 14.2.2 and verify that it works properly. Then remove the command expand or Expand (but of course keeping the expression that was to be expanded), and note how rapidly the complexity of the ﬁnal result escalates as n is increased.

Solution: Without the use of expand or Expand the complexity increases rapidly as one increases the index n of Pn . 14.2.11.

Show that Pn (−1) = (−1)n .

Solution: From the generating function with x = −1,
g(−1, t) = 1/(1 + 2t + t²)^{1/2} = 1/(1 + t) = Σ_{n=0}^∞ (−1)ⁿ tⁿ.
From this we read out Pₙ(−1) = (−1)ⁿ.

14.2.12. Show that ∫_{−1}^1 x^m Pₙ(x) dx = 0 if m < n.

Solution: Write x^m as its expansion in Legendre polynomials; the polynomial of highest degree in the expansion will be Pₘ. When inserted into the integral of the present exercise, all terms will vanish due to orthogonality.

14.2.13. Show that ∫_{−1}^1 Pₙ(x) P′_{n−1}(x) dx = 0.

Solution: P′_{n−1} is a polynomial of degree n − 2, and if it is expanded in Legendre polynomials the polynomial of highest degree in the expansion will be P_{n−2}. When inserted into the integral of the present exercise, all terms will vanish due to orthogonality.

14.2.14. Evaluate ∫_{−1}^1 Pₙ(x) dx.

Solution: Write this integral as ∫_{−1}^1 Pₙ(x) P₀(x) dx. Unless n = 0 it will vanish due to orthogonality. If n = 0 the integral can be evaluated directly; its value is 2.

14.2.15.

Using symbolic computing, write code that produces the Legendre expansion of the step function
f(x) = 0 for −1 ≤ x < 0,  f(x) = 1 for 0 < x ≤ 1,
and make calculations for enough expansion lengths to give a good indication of the rate at which the expansion converges.

Solution: Using the fact that ⟨Pₙ|Pₙ⟩ = 2/(2n + 1), the expansion coefficients and the expansion are
aₙ = [(2n + 1)/2] ∫₀¹ Pₙ(x) dx,  u(x) = Σ_{n=0}^∞ aₙ Pₙ(x).

maple code for making the expansion and plotting the result can be
> makeA := proc(nmax) local n; global a;
>   for n from 0 to nmax do
>     a[n] := (n+0.5)*int(LegendreP(n,x), x=0 .. 1)
>   end do
> end proc;
> makeU := proc(nmax,x) local n;
>   add(a[n]*LegendreP(n,x), n=0 .. nmax)
> end proc;
> nmax := 20: makeA(nmax): plot(makeU(nmax,x), x = -1 .. 1);
mathematica code can be
nmax = 20;
a = Table[0,{n,1,nmax}];
makeA[nmax_] := Module[{n,x}, a0 = 0.5;
  Do[a[[n]] = (n+0.5)*Integrate[LegendreP[n,x], {x, 0, 1}], {n, 1, nmax}] ]
makeU[nmax_,x_] := Module[{n}, a0 + Sum[a[[n]]*LegendreP[n,x], {n, 1, nmax}] ]
makeA[nmax]; Plot[makeU[nmax,x], {x, -1, 1}]
The plot produced by this code for nmax = 20 shows that the convergence of the series is extremely slow.

14.2.16.

Using symbolic computing, obtain the Legendre expansion of the function sin−1 x (this is not 1/ sin x).

Solution: The function sin⁻¹x is known in the symbolic systems as arcsin or ArcSin. Use the same coding as in Exercise 14.2.15 except for the procedure makeA, which should be replaced by one of the following:
> makeA := proc(nmax) local n; global a;
>   for n from 0 to nmax do
>     a[n] := (n+0.5)*int(arcsin(x)*LegendreP(n,x), x=-1 .. 1)
>   end do
> end proc;
makeA[nmax_] := Module[{n,x},
  a0 = 0.5*Integrate[ArcSin[x], {x, -1, 1}];
  Do[a[[n]] = (n+0.5)*Integrate[ArcSin[x]*LegendreP[n,x], {x, -1, 1}],
     {n, 1, nmax}] ]
A plot produced with nmax = 4, compared with an accurate plot of sin⁻¹x, shows that the series converges quite rapidly.

14.2.17. The Legendre expansion of a function f(x), truncated after P_N, which can be written
f(x) = Σ_{n=0}^N aₙ Pₙ(x),
is a least-squares-error approximation to f(x). This means that
I = ∫_{−1}^1 [f(x) − Σ_{n=0}^N bₙ Pₙ(x)]² dx
is a minimum when the bₙ are the coefficients in the Legendre expansion of f(x), i.e., bₙ = aₙ, n = 0, ···, N. Expanding I, we have
I = ⟨f|f⟩ − 2 Σₙ bₙ⟨Pₙ|f⟩ + Σ_{mn} bₘbₙ⟨Pₘ|Pₙ⟩.
Evaluating some of the expressions in I and then adding to I the zero quantity
Σ_{n=0}^N aₙ²⟨Pₙ|Pₙ⟩ − Σ_{n=0}^N aₙ²⟨Pₙ|Pₙ⟩,
show that I is indeed a minimum when bₙ = aₙ for n = 0, ···, N.

Solution: In the last expression for I, replace ⟨Pₙ|f⟩ by aₙ⟨Pₙ|Pₙ⟩ and remove (because they vanish) all the terms of the double sum with m ≠ n. Then, adding and subtracting the suggested quantity, we have
I = ⟨f|f⟩ − Σₙ aₙ²⟨Pₙ|Pₙ⟩ + Σₙ ⟨Pₙ|Pₙ⟩[aₙ² − 2bₙaₙ + bₙ²]
  = ⟨f|f⟩ − Σₙ aₙ²⟨Pₙ|Pₙ⟩ + Σₙ (aₙ − bₙ)²⟨Pₙ|Pₙ⟩.
This last summation consists of terms each of which is nonnegative; I is minimized by setting each bₙ equal to the corresponding aₙ.

14.3

ASSOCIATED LEGENDRE FUNCTIONS

Exercises 14.3.1.

Using LegenP (maple; Appendix K) or LegendreP (mathematica), plot Pnm (x) for several n and m values on the range −1 ≤ x ≤ 1, and from the plots (a) Count the number of zeros on the plotted range and relate this number to n and m. (b) Determine whether each Pnm has a deﬁnite parity, and if so, identify that parity (even or odd).

Solution: If using maple, start by pasting into your current session the code from Appendix K for the procedure LegenP, including the one-line procedure ONE at its end. Then run these procedures so that LegenP will be available for use. Make plots of various Pnm (x) by running one of > plot(LegenP(n, m, x), x = -1 .. 1); Plot[LegendreP[n, m, x], {x, -1, 1}] From the plots, (a) It can be seen that Pnm (x) has n − |m| zeros on −1 < x < 1. (b) Pnm is even if n − m is even and odd if n − m is odd. 14.3.2.

Verify, using symbolic computing, that the recurrence formula connecting Pnm of successive m values, Eq. (14.49), is valid for arbitrary n. Your check must include m values of both signs and cases where m + 1 and m − 1 are of opposite sign.

Solution: If a maple user, the procedure LegenP must have been executed in the current session before using the code shown here. It is convenient to deﬁne a quantity that stands for the left-hand side of the recurrence formula. Thus, run one of > RF := LegenP(n,m+1,x)+2*m*x/(1-x^2)^(1/2)*LegenP(n,m,x) >

+(n+m)*(n-m+1)*LegenP(n,m-1,x):

RF = LegendreP[n,m+1,x]+2*m*x/(1-x^2)^(1/2)*LegendreP[n,m,x]
  +(n+m)*(n-m+1)*LegendreP[n,m-1,x];
Now, for a number of values of n and m (including m = 0 and some negative m values) check to see if RF evaluates to zero. You may need to simplify your initial result. An example in maple:
> n := 4: m := -3: RF;
(1/48)(7x² − 1)(1 − x²) − (1/8)x²(1 − x²) + (1/48)(1 − x²)²
> simplify(%);
0
A mathematica example:
n = 3; m = 1; RF
−15x(−1 + x²) − 3x(−1 + 5x²) + 6(−3x + 5x³)
Simplify[%]
0

14.3.3.

Show that the substitution y(x) = (1 − x²)^{m/2} u(x) converts the associated Legendre equation, Eq. (14.40), to the form
(1 − x²)u″ − 2(m + 1)xu′ + [n(n + 1) − m(m + 1)]u = 0.

Solution: Compute
y′(x) = (1 − x²)^{m/2} u′(x) − mx(1 − x²)^{m/2−1} u(x),
y″(x) = (1 − x²)^{m/2} u″(x) − 2mx(1 − x²)^{m/2−1} u′(x) + [−m(1 − x²)^{m/2−1} + m(m − 2)x²(1 − x²)^{m/2−2}] u(x).
Then substitute these expressions and that for y(x) into the associated Legendre equation,
(1 − x²)y″(x) − 2xy′(x) + [n(n + 1) − m²/(1 − x²)] y(x) = 0.
The desired equation remains after cancellation of a common factor (1 − x²)^{m/2}.

14.3.4.

Show that the substitution x = cos θ converts the following ODE into the associated Legendre equation:
(1/sin θ)(d/dθ)[sin θ (dy/dθ)] + [l(l + 1) − m²/sin²θ] y = 0.

Solution: Note that
d/dθ = (dx/dθ)(d/dx) = −sin θ (d/dx).
Note also that sin²θ = 1 − x². The derivative term of the original ODE therefore becomes
(1/sin θ)(d/dθ)[sin θ (dy/dθ)] = (d/dx)[(1 − x²)(dy/dx)] = (1 − x²)y″ − 2xy′,
where the primes indicate derivatives with respect to x. Replacement of sin²θ by 1 − x² in the final term of the original ODE completes the conversion to an associated Legendre equation.

14.3.5.

Prove Eqs. (14.52) and (14.53).

Solution: Starting from the Rodrigues formula for P_m^m(x) and expanding the quantity to be differentiated, (x² − 1)^m, we note (as did the text) that the only term of that expansion that will survive 2m differentiations is x^{2m}, and the differentiation of that term will produce (2m)!. We then have
P_m^m(x) = (−1)^m (1 − x²)^{m/2} (2m)!/(2^m m!) = (−1)^m (1 − x²)^{m/2} (2m)!/(2m)!! = (−1)^m (1 − x²)^{m/2} (2m − 1)!!.
For P_m^{−m}(x), the Rodrigues formula gives
P_m^{−m}(x) = [(−1)^m/(2m)!!] (1 − x²)^{−m/2} (x² − 1)^m = [1/(2m)!!] (1 − x²)^{m/2}.
We have used the fact that (x² − 1)^m = (−1)^m (1 − x²)^m.

14.3.6.

Consider the expansion of Pₙ^{−m}(x) in the orthogonal functions P_j^m(x):
Pₙ^{−m}(x) = Σ_j c_j P_j^m(x),  c_j = ⟨P_j^m|Pₙ^{−m}⟩ / ⟨P_j^m|P_j^m⟩.
Evaluate the numerator of c_j by inserting the Rodrigues formulas for P_j^m and Pₙ^{−m} and carrying out repeated integrations by parts; use Eq. (14.54) to evaluate the denominator. Thereby establish the validity of Eq. (14.46) for Pₙ^{−m}.

Solution: After carrying out m integrations by parts in the direction that converts
[(d/dx)^{j+m}(x² − 1)^j] [(d/dx)^{n−m}(x² − 1)ⁿ]  into  (−1)^m [(d/dx)^j(x² − 1)^j] [(d/dx)ⁿ(x² − 1)ⁿ],
we have (−1)^m times the orthogonality integral ⟨P_j|Pₙ⟩, so
⟨P_j^m|Pₙ^{−m}⟩ = (−1)^m · 2δ_{jn}/(2n + 1).
Combining with the value of ⟨P_j^m|P_j^m⟩ as given in Eq. (14.54), we get
c_j = (−1)^m δ_{jn} (j − m)!/(j + m)!.
The expansion of Pₙ^{−m}(x) reduces to a single term, and hence to Eq. (14.46).


14.4

BESSEL EQUATION

Exercises 14.4.1.

Write symbolic code that evaluates a Bessel function from its power series, truncated after 10 terms. How successful is this truncated expansion at representing J0 (1), J0 (10), J16 (1), and J16 (10)?

Solution: In maple, a suitable procedure is
> bessT := proc(n,x) local s;
>   add((-1)^s/s!/GAMMA(n+s+1)*(x/2)^(n+2*s), s = 0 .. 9)
> end proc;
A similar procedure in mathematica is
bessT[n_,x_] := Module[{s},
  Sum[(-1)^s/s!/Gamma[n+s+1]*(x/2)^(n+2*s), {s, 0, 9}]]
By comparing with a call to BesselJ, the ten-term expansion of J₀(1) yields 0.76519768655796655138, correct except for the last two digits, while for J₀(10) the expansion yields −6.21···, which is not even qualitatively correct (J₀(10) ≈ −0.2459···). For J₁₆(x), more than 20 significant digits are obtained for x = 1; for x = 10 the expansion yields 0.001566747 while the accurate value is 0.001566756.

Exercise 14.4.1 shows that the power series will not always be a practical way to evaluate Bessel functions. Fortunately, the extensive study received by Bessel functions has led to many alternate methods for their evaluation. A useful integral representation (for integer n) is
Jₙ(x) = (1/π) ∫₀^π cos(x sin θ − nθ) dθ.
Carrying out numerical integrations, check this formula against the values given by your symbolic program for J₀(10) and J₁₆(10).

Solution: maple recognizes the integral as the representation of a Bessel function, so one does not get a numerical integration in that language unless one makes the integrand numerical. mathematica does not make that identification. Therefore, meaningful answers to this Exercise can involve one of the following codings:
> evalf(1/Pi*int(evalf(cos(x*sin(theta)-n*theta)),
>       theta = 0 .. Pi));
N[1/Pi*Integrate[Cos[x*Sin[t] - n*t], {t, 0, Pi}]]
When appropriate values of n and x are specified, one gets almost the same result as a call to BesselJ(n,x) or BesselJ[n,x].
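Both results above can be reproduced without a CAS. The following Python sketch (the helper names bess_trunc and bess_int are mine, not from the text) implements the ten-term truncated series of Exercise 14.4.1 and a Simpson's-rule evaluation of the integral representation:

```python
from math import cos, sin, pi, gamma

def bess_trunc(n, x, nterms=10):
    """Partial sum of the Bessel power series, truncated after nterms terms."""
    return sum((-1)**s / (gamma(s + 1) * gamma(n + s + 1)) * (x / 2)**(n + 2 * s)
               for s in range(nterms))

def bess_int(n, x, steps=2000):
    """J_n(x) = (1/pi) * integral_0^pi cos(x sin t - n t) dt, by Simpson's rule."""
    h = pi / steps
    acc = sum((1 if k in (0, steps) else 4 if k % 2 else 2)
              * cos(x * sin(k * h) - n * k * h) for k in range(steps + 1))
    return acc * h / (3 * pi)

# The truncation is essentially exact at x = 1 but fails completely for J0(10);
# the integral representation remains accurate there.
print(bess_trunc(0, 1), bess_int(0, 1))    # both ~0.765198
print(bess_trunc(0, 10), bess_int(0, 10))  # ~ -6.2 versus ~ -0.2459
```

The comparison shows the same pattern as the symbolic results: the truncation is useless for J0(10), while the numerical integral matches the accurate value.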

14.4.3.

Use a recurrence formula and the relation between Jn and J−n to show that J0′ (x) = −J1 (x).

Solution: Use Eq. (14.73) with ν = 0. We initially have
$$2J_0'(x) = J_{-1}(x) - J_1(x) = -J_1(x) - J_1(x)\,,$$
which leads directly to J_0'(x) = −J_1(x).

14.4.4.

Use the result from Exercise 14.4.3 to show that
$$\int_0^{\infty} J_1(x)\,dx = 1\,.$$

Solution: Write
$$\int_0^{\infty} J_1(x)\,dx = -\int_0^{\infty} J_0'(x)\,dx = -J_0(\infty) + J_0(0) = 0 + 1 = 1\,.$$

14.4.5.

It can be shown that there exists a generating function for Bessel functions of integral order corresponding to the formula
$$g(x,t) = e^{x(t - t^{-1})/2} = \sum_{n=-\infty}^{\infty} J_n(x)\,t^n\,.$$
Note that the expansion of g(x, t) contains an infinite number of terms with each power of t, so the quantities defined by the generating function will not be polynomials. By differentiating the formula for g(x, t), derive the recurrence formulas given in Eqs. (14.72) and (14.73).

Solution: To obtain the first of these recurrence formulas, differentiate the generating-function formula with respect to t:
$$\frac{\partial g(x,t)}{\partial t} = \frac{x(1 + t^{-2})}{2}\,e^{x(t-t^{-1})/2} = \sum_{n=-\infty}^{\infty} n\,J_n(x)\,t^{n-1}\,.$$
Replacing the exponential, which is g(x, t), by the equivalent summation, we reach
$$\sum_n \frac{x}{2}\left[J_n(x)\,t^n + J_n(x)\,t^{n-2}\right] = \sum_n n\,J_n(x)\,t^{n-1}\,.$$
Changing the summation indices of the left-hand terms so as to make the explicit power of t the same in all terms, and collecting all terms into a single summation,
$$\sum_n \left[\frac{x}{2}\,J_{n-1}(x) + \frac{x}{2}\,J_{n+1}(x) - n\,J_n(x)\right] t^{n-1} = 0\,.$$
This equation is to be satisfied for all t, and therefore the coefficient of each power of t must vanish. That result is Eq. (14.72). To get the second recurrence formula, differentiate g(x, t) with respect to x. Doing so and replacing the exponential by the equivalent summation,
$$\frac{\partial g(x,t)}{\partial x} = \frac{t - t^{-1}}{2}\sum_n J_n(x)\,t^n = \sum_n J_n'(x)\,t^n\,.$$

This equation can be rewritten
$$\frac{1}{2}\sum_n J_n(x)\,t^{n+1} - \frac{1}{2}\sum_n J_n(x)\,t^{n-1} = \sum_n J_n'(x)\,t^n\,.$$
Redefining the summation indices to make all terms have the same explicit power of t, we reach
$$\sum_n \left[\frac{1}{2}\,J_{n-1}(x) - \frac{1}{2}\,J_{n+1}(x) - J_n'(x)\right] t^n = 0\,.$$
This equation is to be satisfied for all t, and therefore the coefficient of each power of t must vanish. That result is Eq. (14.73).
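The two recurrences just derived can be spot-checked numerically; a Python sketch (the helpers J and Jprime are mine) verifying J_{n−1}(x) + J_{n+1}(x) = (2n/x)J_n(x) and J_{n−1}(x) − J_{n+1}(x) = 2J_n′(x) at x = 2:

```python
from math import gamma

def J(n, x, terms=40):
    """Bessel J_n(x) by its power series; 40 terms is ample for moderate x."""
    return sum((-1)**s / (gamma(s + 1) * gamma(n + s + 1)) * (x / 2)**(n + 2 * s)
               for s in range(terms))

def Jprime(n, x, h=1e-6):
    """J_n'(x) by a central difference."""
    return (J(n, x + h) - J(n, x - h)) / (2 * h)

x = 2.0
for n in range(1, 5):
    print(J(n - 1, x) + J(n + 1, x), 2 * n / x * J(n, x))   # Eq. (14.72)
    print(J(n - 1, x) - J(n + 1, x), 2 * Jprime(n, x))      # Eq. (14.73)
```

Each printed pair agrees to the precision of the finite difference.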

14.4.6.

The Bessel generating function given in Exercise 14.4.5 has the property that g(−x, 1/t) = g(−x, −t) = g(x, t). Use those relationships to show that J−n (x) = (−1)n Jn (x).

Solution: The relation g(−x, −t) = g(x, t) implies that, for all x and t,
$$\sum_n J_n(-x)\,(-t)^n = \sum_n J_n(x)\,t^n\,, \qquad\text{so that}\qquad J_n(-x) = (-1)^n J_n(x)\,.$$
The relation g(−x, 1/t) = g(x, t) implies that, for all x and t,
$$\sum_n J_n(-x)\,t^{-n} = \sum_n J_n(x)\,t^n\,.$$
Equating coefficients of t^n, we get J_{−n}(−x) = J_n(x). Replacing x by −x and then identifying J_n(−x) as (−1)^n J_n(x), we reach the desired final result.

14.4.7.

Verify that the substitution y = x^{−1/2} z converts Eq. (14.79) into Eq. (14.80).

Solution: Making the substitutions listed before Eq. (14.80) and multiplying the resulting equation by x^{1/2}, the verification is immediate.

14.4.8.

By making the substitution y = x^c Z_ν(ax^b) into the ODE
$$y'' + \frac{1-2c}{x}\,y' + \left[\left(abx^{b-1}\right)^2 + \frac{c^2 - \nu^2 b^2}{x^2}\right] y = 0$$
verify that y is a solution when Z_ν is a Bessel function of order ν.

Solution: Letting w stand for ax^b, we have
$$y' = x^{c-1}\left[b\,w\,Z'(w) + c\,Z(w)\right];$$
$$y'' = x^{c-2}\left[b^2 w^2 Z''(w) + (2bc + b^2 - b)\,w\,Z'(w) + (c^2 - c)\,Z(w)\right].$$
We now form
$$y'' + \left(\frac{1-2c}{x}\right) y' = b^2 x^{c-2}\left[w^2 Z''(w) + w\,Z'(w) - \frac{c^2}{b^2}\,Z(w)\right].$$

The final term of the ODE can be written
$$b^2 x^{c-2}\left(w^2 + \frac{c^2}{b^2} - \nu^2\right) Z(w)\,,$$
and we see that, when combined with the earlier terms, the quantities c²Z(w)/b² cancel, leaving a Bessel equation for Z, with independent variable w and order ν.

14.4.9.

Find the first two positive zeros of J1(x) by plotting it and then restricting the range of the plot to the neighborhood of a zero until its location can be read out to sufficient precision. Check your results by comparison with the output of BesselJZeros or BesselJZero.

Solution: A suitable plot, and its check value, using maple, are
> plot(BesselJ(1,x), x = 3.8315 .. 3.8320);
(from which the zero is read as 3.8317)
> BesselJZeros(1.,1);
                              3.831705970
In mathematica,
Plot[BesselJ[1, x], {x, 7.0154, 7.0159}]
(from which the zero is read as 7.0156)
BesselJZero[1.,2]
                              7.01559
These plots are shown here.
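The same zeros can be located without plotting by bisection; a Python sketch (all helper names are mine) using the integral representation of Exercise 14.4.2:

```python
from math import cos, sin, pi

def J1(x, steps=400):
    """J_1(x) by Simpson's rule on (1/pi) * integral_0^pi cos(x sin t - t) dt."""
    h = pi / steps
    acc = sum((1 if k in (0, steps) else 4 if k % 2 else 2)
              * cos(x * sin(k * h) - k * h) for k in range(steps + 1))
    return acc * h / (3 * pi)

def bisect(f, a, b, tol=1e-10):
    """Standard bisection; assumes f(a) and f(b) have opposite signs."""
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
    return 0.5 * (a + b)

print(bisect(J1, 3, 5))   # first positive zero,  ~3.831706
print(bisect(J1, 6, 8))   # second positive zero, ~7.015587
```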

14.4.10.

Show that (a) Jν−1 (x) = Jν+1 (x) at every maximum or minimum of Jν (x), (b) Jν−1 (x) = −Jν+1 (x) at every zero of Jν (x).

Solution: (a) At a minimum or maximum of J_ν(x), J_ν'(x) = 0, and the recurrence formula, Eq. (14.73), indicates that J_{ν−1}(x) = J_{ν+1}(x). (b) When J_ν(x) = 0, the recurrence formula, Eq. (14.72), indicates that J_{ν−1}(x) = −J_{ν+1}(x).

14.4.11.

Expand sin x on the interval 0 ≤ x ≤ π in the Bessel series
$$\sin x = \sum_{j=1}^{\infty} a_j\, J_1(\alpha_{1j}\, x/\pi)\,,$$
where α_{1j} is the jth zero of J1. Make a plot showing the accuracy of your expansion; do not truncate before sin x and its expansion are nearly indistinguishable on your plot. Hint. Remember to use the normalization integral appropriate for the interval (0, π).

Solution: Use symbolic computing and proceed as in Example 14.4.1. In maple, obtain the expansion coefficients c_j from the procedure
> makeC := proc(nmax) global c; local n,zn;
>    for n from 1 to nmax do
>       zn := BesselJZeros(1,n);
>       c[n] := evalf(2/Pi^2/BesselJ(2,zn)^2*
>          int(BesselJ(1,zn*x/Pi)*x*sin(x), x=0 .. Pi)) end do
> end proc;
Then carry out the expansion through nmax terms by invoking
> makeC(nmax);
> expSin := proc(nmax,x) local n;
>    add(c[n]*BesselJ(1,BesselJZeros(1,n)*x/Pi), n=1 .. nmax)
> end proc;
Check the accuracy of the expansion for any nmax by either of these plots:
> nmax := 4: plot([expSin(nmax,x),sin(x)], x=0 .. Pi);
> nmax := 4: plot(expSin(nmax,x)-sin(x), x=0 .. Pi);
In mathematica, obtain the expansion coefficients c_j from the procedure
makeC[nmax_] := Module[{zn,n},
   Do[ zn = BesselJZero[1,n];
       c[n] = N[Integrate[x*Sin[x]*BesselJ[1,zn*x/Pi], {x,0,Pi}]]
              *2/Pi^2/BesselJ[2,zn]^2,
       {n,1,nmax} ] ]
Then carry out the expansion through nmax terms by invoking
makeC[nmax]
expSin[nmax_,x_] := Module[{n},
   Sum[c[n]*BesselJ[1,BesselJZero[1,n]*x/Pi], {n,1,nmax}] ]
To check the accuracy of the expansion make one of the following plots:
Plot[{expSin[nmax,x],Sin[x]}, {x, 0, Pi}]
Plot[expSin[nmax,x]-Sin[x], {x, 0, Pi}]
Reasonable convergence is already achieved at nmax = 4. Here are plots for that nmax value: Left, expansion of sin x; right, error in the expansion.
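The expansion can also be carried out in plain Python; the sketch below (all helper names are mine) finds the first four zeros of J1 by bisection, builds the coefficients by Simpson's rule exactly as in the maple code above, and evaluates the four-term expansion:

```python
from math import cos, sin, pi

def J(n, x, steps=400):
    """J_n(x) by Simpson's rule on the integral representation of Exercise 14.4.2."""
    h = pi / steps
    acc = sum((1 if k in (0, steps) else 4 if k % 2 else 2)
              * cos(x * sin(k * h) - n * k * h) for k in range(steps + 1))
    return acc * h / (3 * pi)

def simpson(f, a, b, steps=400):
    """Composite Simpson rule (steps must be even)."""
    h = (b - a) / steps
    return h / 3 * sum((1 if k in (0, steps) else 4 if k % 2 else 2) * f(a + k * h)
                       for k in range(steps + 1))

def bisect(f, a, b, tol=1e-9):
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
    return 0.5 * (a + b)

# First four positive zeros of J1 (brackets chosen by inspection).
zeros = [bisect(lambda t: J(1, t), lo, hi)
         for lo, hi in [(3, 5), (6, 8), (9, 11), (12, 14)]]
# Coefficients c_j, exactly as in the maple code above.
c = [2 / pi**2 / J(2, z)**2 * simpson(lambda t: t * sin(t) * J(1, z * t / pi), 0, pi)
     for z in zeros]

def exp_sin(x):
    return sum(cj * J(1, z * x / pi) for cj, z in zip(c, zeros))

print(exp_sin(pi / 2))   # close to sin(pi/2) = 1
```

Even with only four terms the reconstruction of sin x is already close, consistent with the remark above about nmax = 4.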

14.4.12.

Expand the step function
$$f(x) = \begin{cases} 1, & 0 \le x \le \tfrac{1}{2}\,,\\ 0, & \tfrac{1}{2} \le x \le 1 \end{cases}$$
as a series in Bessel functions of order zero.

Solution: This is not a rapidly convergent expansion. Code for it can be the following. In maple,
> makeA := proc(nmax) global a; local n,z,x;
>    for n from 1 to nmax do
>       z := BesselJZeros(0,n);
>       a[n] := 2/BesselJ(1,z)^2*int(x*BesselJ(0,z*x), x=0 .. 0.5)
>    end do;
> end proc;
> makeF := proc(nmax,x) local n;
>    add(a[n]*BesselJ(0,BesselJZeros(0,n)*x), n=1 .. nmax)
> end proc;
> nmax:=10: makeA(nmax): plot(makeF(nmax,x), x = 0 .. 1);
In mathematica,
makeA[nmax_] := Module[{z,x,n},
   Do[z = BesselJZero[0,n];
      a[n] = 2/BesselJ[1,z]^2 * Integrate[x*BesselJ[0,z*x], {x,0,0.5}],
      {n,1,nmax}]]
makeF[nmax_, x_] := Sum[a[n]*BesselJ[0,BesselJZero[0,n]*x], {n,1,nmax}]
nmax = 10; makeA[nmax]; Plot[makeF[nmax,x], {x,0,1}]
These commands produce the following plot:

14.4.13.

Show that
$$\lim_{x\to 0}\frac{J_1(x)}{x} = \frac{1}{2}\,.$$

Solution: The leading term in the expansion of J1(x) about x = 0 is, from Eq. (14.90), x/2. Dividing this by x gives the answer to this exercise, 1/2.

14.4.14.

Find $\lim_{x\to 0} J_\nu(x)/Y_\nu(x)$.

Solution: From Eq. (14.90), for ν = 0, we divide the limiting form of J0 by that of Y0, getting π/(2 ln x). This limiting form is more informative than its limit, which is zero. For nonzero ν, the corresponding limiting form is
$$\lim_{x\to 0}\frac{J_\nu(x)}{Y_\nu(x)} = -\frac{\pi\, x^{2\nu}}{2^{2\nu}\,\Gamma(\nu)\,\Gamma(\nu+1)}\,.$$
Strictly speaking, this limit is also zero if ν > 0.

14.5  OTHER BESSEL FUNCTIONS

Exercises

14.5.1.

(a) Using the results in Eq. (14.90), find the limiting forms of H_ν^(1)(x) and H_ν^(2)(x) as x → ∞. (b) If these expressions are multiplied by e^{−iωt}, explain why the resulting function of x and t is identified as an outgoing or incoming wave. Determine which combination is "outgoing" and which is "incoming".

Solution: (a) From H_ν^(1) = J_ν + iY_ν and H_ν^(2) = J_ν − iY_ν, we have the following limiting behavior as x → ∞:
$$H_\nu^{(1)}(x) \to \sqrt{\frac{2}{\pi x}}\left[\cos(x - x_0) + i\sin(x - x_0)\right] \to \sqrt{\frac{2}{\pi x}}\;e^{i(x - x_0)}\,,$$
$$H_\nu^{(2)}(x) \to \sqrt{\frac{2}{\pi x}}\left[\cos(x - x_0) - i\sin(x - x_0)\right] \to \sqrt{\frac{2}{\pi x}}\;e^{-i(x - x_0)}\,.$$
Here x_0 = (2ν + 1)π/4.

(b) If we include a time factor e^{−iωt}, these limiting forms become
$$H_\nu^{(1)}(x)\,e^{-i\omega t} \to \sqrt{\frac{2}{\pi x}}\;e^{i(x - \omega t - x_0)}\,,\qquad
H_\nu^{(2)}(x)\,e^{-i\omega t} \to \sqrt{\frac{2}{\pi x}}\;e^{-i(x + \omega t - x_0)}\,.$$
The time-dependent form built from H^(1) is a waveform with the property that a point of constant phase (i.e., a point with a fixed value of the complex exponential) will be maintained if x increases as t is increased. Since larger x corresponds in most practical problems to points farther from a signal source, the H^(1) form is identified as an outgoing wave. On the other hand, the H^(2) form will describe a point of constant phase if x decreases as t increases; it is called an incoming wave.

14.5.2.

Show that as x → ∞, I_ν(x) and K_ν(x) have the limiting forms
$$I_\nu(x) \sim \frac{1}{\sqrt{2\pi x}}\;e^{x}\,, \qquad K_\nu(x) \sim \sqrt{\frac{\pi}{2x}}\;e^{-x}\,.$$

Solution: Form i^{−ν} times the large-x limit of J_ν(ix). Replace the cosine function in Eq. (14.90) by its exponential form and then retain only the term that dominates at large x. We get
$$I_\nu(x) \sim i^{-\nu}\sqrt{\frac{2}{\pi i x}}\;\frac{e^{-i(ix - (2\nu+1)\pi/4)}}{2}\,.$$
The complex exponential simplifies to i^{ν+1/2} e^x, so all occurrences of i cancel, leaving the claimed result for I_ν. Form
$$K_\nu(x) = \frac{\pi}{2}\,i^{\nu+1}\,H_\nu^{(1)}(ix) \sim \frac{\pi}{2}\,i^{\nu+1}\sqrt{\frac{2}{\pi i x}}\;e^{i(ix - (2\nu+1)\pi/4)}\,.$$
The complex exponential simplifies to i^{−ν−1/2} e^{−x}, so all occurrences of i cancel, leaving the claimed result for K_ν.

14.5.3.

Develop recurrence relations for In and Kn that are parallel to those given for Jn in Eqs. (14.72) and (14.73). Check your work by substitution of values of In and Kn obtained by symbolic computing.

Solution: Write Eq. (14.72) for the argument ix; the result is
$$J_{\nu-1}(ix) + J_{\nu+1}(ix) = \frac{2\nu}{ix}\,J_\nu(ix)\,.$$
Now note that J_ν(ix) = i^ν I_ν(x), and also that J_{ν±1}(ix) = i^{ν±1} I_{ν±1}(x). This converts the above equation to
$$i^{\nu-1} I_{\nu-1}(x) + i^{\nu+1} I_{\nu+1}(x) = \frac{2\nu}{ix}\,i^{\nu}\,I_\nu(x)\,,$$
which simplifies to
$$I_{\nu-1}(x) - I_{\nu+1}(x) = \frac{2\nu}{x}\,I_\nu(x)\,.$$

The same steps can be applied to Eq. (14.73). However, note that J_ν'(ix) is dJ_ν(ix)/d(ix) = i^{ν−1} I_ν'(x). We get
$$I_{\nu-1}(x) + I_{\nu+1}(x) = 2\,I_\nu'(x)\,.$$
The functions K_ν(x) are proportional to H_ν^(1)(ix), so we build the K_ν recurrence relations from those for H_ν^(1), also given by Eqs. (14.72) and (14.73). The procedure is similar to that for the I_ν, but with the substitutions
$$H_\nu^{(1)}(ix) = \frac{2}{\pi}\,i^{-\nu-1}\,K_\nu(x) \qquad\text{and}\qquad \left[H_\nu^{(1)}\right]'(ix) = \frac{2}{\pi}\,i^{-\nu-2}\,K_\nu'(x)\,.$$
Taking note of these relations, we find
$$K_{\nu-1}(x) - K_{\nu+1}(x) = -\frac{2\nu}{x}\,K_\nu(x)\,,$$
$$K_{\nu-1}(x) + K_{\nu+1}(x) = -2\,K_\nu'(x)\,.$$
Symbolic computing checks of these recurrence relations are the following, which when executed for specific values of n and decimal values of x should simplify to zero. In either language, these statements should be entered while n and x are undefined. In maple,
> RR1 := BesselI(n-1,x)-BesselI(n+1,x)-2*n/x*BesselI(n,x):
> RR2 := BesselI(n-1,x)+BesselI(n+1,x)-2*diff(BesselI(n,x),x):
> RR3 := BesselK(n-1,x)-BesselK(n+1,x)+2*n/x*BesselK(n,x):
> RR4 := BesselK(n-1,x)+BesselK(n+1,x)+2*diff(BesselK(n,x),x):
In mathematica,
RR1 = BesselI[n-1,x] - BesselI[n+1,x] - 2*n/x*BesselI[n,x];
RR2 = BesselI[n-1,x] + BesselI[n+1,x] - 2*D[BesselI[n,x], x]
RR3 = BesselK[n-1,x] - BesselK[n+1,x] + 2*n/x*BesselK[n,x];
RR4 = BesselK[n-1,x] + BesselK[n+1,x] + 2*D[BesselK[n,x], x]
mathematica recognizes RR2 and RR4 as identically zero, so they need not be confirmed numerically. The other formulas can be used as illustrated here:
maple:

> n:=2: x:=1.76: RR1;

                              1.·10^(−9)
mathematica: n = 1.7; x = 0.67; RR3
                              −8.88178×10^(−15)

14.5.4.

Referring to the answer to Exercise 14.4.8, show that the Airy differential equation,
$$y'' - xy = 0\,,$$
has solutions that can be written in the standard forms
$$\mathrm{Ai}(x) = \frac{1}{\pi}\sqrt{\frac{x}{3}}\;K_{1/3}\!\left(\tfrac{2}{3}x^{3/2}\right)\,,\qquad
\mathrm{Bi}(x) = \sqrt{\frac{x}{3}}\left[I_{-1/3}\!\left(\tfrac{2}{3}x^{3/2}\right) + I_{1/3}\!\left(\tfrac{2}{3}x^{3/2}\right)\right]\,.$$

Solution: We need only to show that the Airy equation has solutions that are modified Bessel functions of order 1/3. To bring the ODE of Exercise 14.4.8 to the form of the present differential equation, we take 1 − 2c = 0 (i.e., c = 1/2), b = 3/2, a = 2i/3, and c² − ν²b² = 0 (i.e., ν = 1/3). With these parameter values, the ODE has solutions Z_{1/3}(w), corresponding to y = x^{1/2} Z_{1/3}(w), with w = (2/3) i x^{3/2}. Because of the presence of the imaginary unit, the general solution of the ODE will be a linear combination of
$$I_{1/3}\!\left(\tfrac{2}{3}x^{3/2}\right) \qquad\text{and}\qquad K_{1/3}\!\left(\tfrac{2}{3}x^{3/2}\right).$$

14.5.5.

Plot jn (x) and yn (x) on the same graph for n = 0 and (on a diﬀerent graph) for n = 8. Extend the range of x enough that the asymptotic behavior for large x is apparent. Enlarge a portion of each graph so that the asymptotic phases of jn and yn can be read out accurately and compared with the formulas in Eq. (14.106).

Solution: In maple, load into your session and execute the definitions of the spherical Bessel procedures, given immediately following Eq. (14.106), and then make the plot with coding such as
> plot([SphBesselJ(0,x),SphBesselY(0,x)], x=21 .. 24);
In mathematica the necessary procedures are in the basic language, and the plot coding can be
Plot[{SphericalBesselJ[0,x],SphericalBesselY[0,x]}, {x,21,24}]
Shown below is a plot of j0 and y0 over an extended range and a close-up of the region between x = 21 and x = 24. The latter plot shows that the zeros of j0 and y0 are separated by π/2.

Similar plots are presented for n = 8, but the separation of the zeros only approaches π/2 = 1.57··· asymptotically at larger values of x.
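Since j0(x) = sin x/x and y0(x) = −cos x/x, the π/2 separation of their zeros in the n = 0 case can be confirmed directly; a Python sketch (not part of the original solution):

```python
from math import sin, cos, pi

def j0(x):
    """Spherical Bessel j0(x) = sin(x)/x; zeros at integer multiples of pi."""
    return sin(x) / x

def y0(x):
    """Spherical Bessel y0(x) = -cos(x)/x; zeros at half-odd multiples of pi."""
    return -cos(x) / x

def bisect(f, a, b, tol=1e-12):
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
    return 0.5 * (a + b)

# Zeros in the window (21, 24): j0 vanishes at 7*pi, y0 at 7.5*pi.
zj = bisect(j0, 21.5, 22.5)
zy = bisect(y0, 23.0, 24.0)
print(zj, zy, zy - zj)   # the separation is exactly pi/2 for n = 0
```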

14.5.6.

Verify the recurrence formula, Eq. (14.100), for both jn and yn, by using symbolic computing to check various cases.

Solution: In maple, load into your session and execute the definitions of the spherical Bessel procedures, given immediately following Eq. (14.106), and then enter the statement
> RR := SphBesselJ(n-1,x)+SphBesselJ(n+1,x)
>       -(2*n+1)/x*SphBesselJ(n,x):
RR should simplify to zero when n is specified. Check data:
> n:=3: simplify(RR);
                              0
In mathematica the necessary procedures are in the basic language, and the coding for this exercise can be
RR = SphericalBesselJ[n-1,x] + SphericalBesselJ[n+1,x]
     -(2*n+1)/x*SphericalBesselJ[n,x];
RR should simplify to zero when n and x are specified. Check data:
n=2; x=1.73; RR
                              −3.05311×10^(−16)

14.5.7.

Derive recurrence formulas for in and kn , and use symbolic computing to check your formulas.

Solution: Start from the recurrence relations for I_n and K_n in Exercise 14.5.3 and the definitions of i_n and k_n in Eqs. (14.104) and (14.105). If we replace I_ν by i_ν or K_ν by k_ν, the only change needed in the recurrence relations is the replacement of ν by ν + 1/2, so 2ν becomes 2ν + 1. Thus,
$$i_{\nu-1}(x) - i_{\nu+1}(x) = \frac{2\nu+1}{x}\,i_\nu(x)\,,$$
$$i_{\nu-1}(x) + i_{\nu+1}(x) = 2\,i_\nu'(x)\,,$$
$$k_{\nu-1}(x) - k_{\nu+1}(x) = -\frac{2\nu+1}{x}\,k_\nu(x)\,,$$
$$k_{\nu-1}(x) + k_{\nu+1}(x) = -2\,k_\nu'(x)\,.$$
To check these formulas using symbolic computing, one must (in either symbolic language) load and execute the procedure definitions for i_n and k_n. These are found in the text after Eq. (14.106). Then, in maple, enter (with n and x undefined)
> RR1 := SphBesselI(n-1,x)-SphBesselI(n+1,x)
>        -(2*n+1)/x*SphBesselI(n,x):
> RR2 := SphBesselI(n-1,x)+SphBesselI(n+1,x)
>        -2*diff(SphBesselI(n,x),x):
> RR3 := SphBesselK(n-1,x)-SphBesselK(n+1,x)
>        +(2*n+1)/x*SphBesselK(n,x):


> RR4 := SphBesselK(n-1,x)+SphBesselK(n+1,x)
>        +2*diff(SphBesselK(n,x),x):
These statements should evaluate to zero when values of n and x have been supplied. A test can take the form
> n:=3: x:=1.42: RR1;
                              0.
In mathematica, with n and x undefined,
RR1 = SphericalBesselI[n-1,x]-SphericalBesselI[n+1,x]
      -(2*n+1)/x*SphericalBesselI[n,x];
RR2 = SphericalBesselI[n-1,x]+SphericalBesselI[n+1,x]
      -2*D[SphericalBesselI[n,x],x];
RR3 = SphericalBesselK[n-1,x]-SphericalBesselK[n+1,x]
      +(2*n+1)/x*SphericalBesselK[n,x];
RR4 = SphericalBesselK[n-1,x]+SphericalBesselK[n+1,x]
      +2*D[SphericalBesselK[n,x],x];
These statements should evaluate to zero when values of n and x have been supplied. A test can take the form
n=3; x=1.42; RR1
                              −6.07153×10^(−17)

14.5.8.

Prove that y−1 (x) = j0 (x).

Solution: From Eq. (14.70), Y_{−1/2}(x) = −J_{1/2}(x)/(−1) = J_{1/2}(x). Then, using the definitions in Eqs. (14.95) and (14.96), we find
$$y_{-1}(x) = \sqrt{\frac{\pi}{2x}}\;Y_{-1/2}(x) = \sqrt{\frac{\pi}{2x}}\;J_{1/2}(x) = j_0(x)\,.$$

14.5.9.

(a) Develop a recurrence formula for j_n(x) parallel to Eq. (14.75). (b) Use the formula you found in part (a) to write j_{n+1} in terms of a derivative of j_n, then in terms of two differentiations involving j_{n−1}, then in terms of three differentiations involving j_{n−2}, etc. Generalize to prove that
$$j_n(x) = x^n\left(-\frac{1}{x}\frac{d}{dx}\right)^n j_0(x) = x^n\left(-\frac{1}{x}\frac{d}{dx}\right)^n \frac{\sin x}{x}\,.$$

Solution: (a) Write Eq. (14.75) for ν = n + 1/2; noting then that J_{n+1/2}/x^{1/2} is proportional to j_n, the result can be brought to the form
$$\frac{d}{dx}\left[x^{-n} j_n(x)\right] = -x^{-n}\,j_{n+1}(x)\,.$$
(b) Rearranging slightly the formula of part (a),
$$\left(-\frac{1}{x}\frac{d}{dx}\right)\frac{j_n(x)}{x^n} = \frac{j_{n+1}(x)}{x^{n+1}}\,,$$
we see that −(1/x) d/dx is an operator that advances n by one in the expression j_n(x)/x^n. Therefore, if we start from j_0(x)/x^0 = j_0(x) = sin x/x, we can apply this operator n times to form j_n(x)/x^n.

14.5.10.

Obtain a repeated-diﬀerentiation formula parallel to Exercise 14.5.9 for yn (x).

Solution: Because y_n satisfies the same recurrence formulas as j_n, the formula of part (a) of Exercise 14.5.9 applies with y_n everywhere replacing j_n. The explicit formula of part (b) of that exercise here becomes
$$y_n(x) = x^n\left(-\frac{1}{x}\frac{d}{dx}\right)^n y_0(x) = x^n\left(-\frac{1}{x}\frac{d}{dx}\right)^n \left(-\frac{\cos x}{x}\right).$$
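The operator formula of Exercise 14.5.9, and its y_n counterpart above, can be spot-checked by applying −(1/x) d/dx numerically; a Python sketch (helper names mine):

```python
from math import sin, cos

def apply_op(f, x, h=1e-5):
    """One application of the operator -(1/x) d/dx, by central difference."""
    return -(f(x + h) - f(x - h)) / (2 * h) / x

# j1(x) = x * [-(1/x) d/dx](sin x / x) should equal sin x/x^2 - cos x/x,
# and likewise y1(x) built from y0(x) = -cos x / x.
x = 1.7
j1_from_op = x * apply_op(lambda t: sin(t) / t, x)
j1_closed = sin(x) / x**2 - cos(x) / x
y1_from_op = x * apply_op(lambda t: -cos(t) / t, x)
y1_closed = -cos(x) / x**2 - sin(x) / x
print(j1_from_op, j1_closed)
print(y1_from_op, y1_closed)
```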

14.6  HERMITE EQUATION

Exercises

14.6.1.

Plot H_n(x) (suggested horizontal range −n/2 ≤ x ≤ n/2, vertical range −2^(2n) to 2^(2n)) for n = 2, 6, 9, and 10.

Solution: Code to make the plots can be, in maple,
> n := 2: plot(HermiteH(n,x), x = -n/2 .. n/2, -2^(2*n) .. 2^(2*n));
Code in mathematica can be
n = 6; Plot[HermiteH[n,x], {x, -n/2, n/2},
   PlotRange -> {-2^(2*n), 2^(2*n)}]

14.6.2.

Determine the values of Hn (0) for general n by carrying out suitable operations on the generating-function formula.

Solution: Expand g(x, t) = e^{2xt−t²} for x = 0:
$$g(0,t) = e^{-t^2} = \sum_{n=0}^{\infty} \frac{(-1)^n t^{2n}}{n!}\,.$$
In the generating-function formula, Eq. (14.112), the coefficient of t^{2n} is H_{2n}(x)/(2n)!. We therefore have here
$$\frac{H_{2n}(0)}{(2n)!} = \frac{(-1)^n}{n!}\,,\qquad\text{so}\qquad H_{2n}(0) = (-1)^n\,\frac{(2n)!}{n!}\,.$$
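This closed form can be confirmed by evaluating the Hermite polynomials at zero through the standard recurrence H_{k+1}(x) = 2x H_k(x) − 2k H_{k−1}(x); a Python sketch (not part of the original solution):

```python
from math import factorial

def hermite(n, x):
    """H_n(x) via the standard recurrence H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * x * h1 - 2 * k * h0
    return h1

for n in range(4):
    # H_{2n}(0) = (-1)^n (2n)!/n!
    print(2 * n, hermite(2 * n, 0.0), (-1)**n * factorial(2 * n) // factorial(n))
```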

The absence of odd powers of t in the expansion corresponds to H_{2n+1}(0) = 0.

14.6.3.

Establish the recurrence formulas, Eqs. (14.113) and (14.114), by processes that begin by diﬀerentiating the generating-function formula.


Solution: Differentiating g(x, t) = e^{2xt−t²} with respect to t:
$$\frac{\partial e^{2xt-t^2}}{\partial t} = (2x - 2t)\,g(x,t) = \sum_{n=0}^{\infty} \frac{H_n(x)}{n!}\,n\,t^{n-1}\,.$$
Inserting the generating-function expansion for g(x, t), we get
$$\sum_n 2x\,H_n(x)\,\frac{t^n}{n!} - \sum_n 2\,H_n(x)\,\frac{t^{n+1}}{n!} = \sum_n H_n(x)\,\frac{t^{n-1}}{(n-1)!}\,.$$
Shifting the index definitions of the second and third summations to cause all to have the same explicit power of t and equating the coefficient of each power, we get
$$2x\,H_n(x) - 2n\,H_{n-1}(x) = H_{n+1}(x)\,,$$
which is Eq. (14.113). Differentiating g(x, t) with respect to x, we get
$$\frac{\partial e^{2xt-t^2}}{\partial x} = 2t\,g(x,t) = \sum_n \frac{H_n'(x)}{n!}\,t^n\,.$$
Inserting the generating-function expansion for g(x, t), we get
$$\sum_n 2\,H_n(x)\,\frac{t^{n+1}}{n!} = \sum_n \frac{H_n'(x)}{n!}\,t^n\,,$$
from which, equating coefficients of the same power of t, we obtain
$$2n\,H_{n-1}(x) = H_n'(x)\,,$$
which is Eq. (14.114).

14.6.4.

Derive the normalization integral for the H_n given in Eq. (14.116) by the following steps:
1. Form a sum of orthogonality/normalization integrals:
$$\int_{-\infty}^{\infty} g(x,t)\,g(x,u)\,e^{-x^2}\,dx = \sum_{nm} \frac{t^n u^m}{n!\,m!}\,\langle H_n | H_m \rangle\,.$$
2. Eliminate terms on the right-hand side of this equation that vanish due to orthogonality.
3. Organize the left-hand side of this equation so that one of its factors within the integral is e^{−(x−t−u)²}.
4. Evaluate the left-hand-side integral.
5. Do whatever is necessary to equate the coefficients of (tu)^n on both sides of the equation, thereby finding ⟨H_n|H_n⟩.

Solution: The integrand of the left-hand side is exp(2xt − t² + 2xu − u² − x²); rewrite this as e^{2tu} e^{−(x−t−u)²}. Then evaluate the integral
$$\int_{-\infty}^{\infty} e^{-(x-a)^2}\,dx = \sqrt{\pi}\,,$$
irrespective of the value of a. The original equation of this exercise then becomes
$$e^{2tu}\sqrt{\pi} = \sum_n \frac{(tu)^n}{n!\,n!}\,\langle H_n | H_n \rangle\,.$$
Now introduce the power-series expansion
$$e^{2tu} = \sum_n \frac{2^n (tu)^n}{n!}\,.$$
Solving for the normalization integral,
$$\langle H_n | H_n \rangle = 2^n\, n!\,\sqrt{\pi}\,.$$

14.6.5.

For g(x, t) as given in Eq. (14.112), note that
$$\left(\frac{d}{dx}\right)^m g(x,t) = (2t)^m\,g(x,t)\,.$$
Show that if the coefficient of x^m in H_m(x) is c_m, then (d/dx)^m H_m(x) = m!\,c_m and c_m = 2^m.

Solution: After differentiating the generating-function formula m times, we have
$$\left(\frac{d}{dx}\right)^m g(x,t) = (2t)^m\,g(x,t) = 2^m \sum_{n=0}^{\infty} H_n(x)\,\frac{t^{n+m}}{n!} = \sum_{n=0}^{\infty} H_n^{(m)}(x)\,\frac{t^n}{n!}\,.$$
Equating the coefficients of t^m on both sides of this equation, we have
$$2^m H_0(x) = \frac{H_m^{(m)}(x)}{m!}\,,\qquad\text{i.e.,}\qquad H_m^{(m)}(x) = 2^m\,m!\,.$$
But H_m is a polynomial of degree m; when differentiated m times we get m!\,c_m, where c_m is the coefficient of x^m. Setting m!\,c_m = 2^m m!, we get c_m = 2^m.

14.6.6.

Expand f (x) = 1/ cosh 2x in Hermite polynomials, using symbolic computing to evaluate the coeﬃcients. Compare plots of f (x) and its expansions truncated after H4 and H10 . Note that even keeping terms through H10 the expansion is very poor beyond about |x| = 2.

Solution: In maple,
> makeA := proc(nmax) global a; local n,x;
>    for n from 0 to nmax do
>       a[n] := evalf(int(HermiteH(n,x)/cosh(2*x)*exp(-x^2),
>                  x=-infinity .. infinity)) /
>               evalf(int(HermiteH(n,x)^2*exp(-x^2),
>                  x=-infinity .. infinity)) end do;
> end proc:
> makeF := proc(nmax,x) local n;
>    add(a[n]*HermiteH(n,x), n=0 .. nmax) end proc;
> nmax:=10:
> makeA(nmax): plot([makeF(nmax,x),1/cosh(2*x)], x=-3.5 .. 3.5);
In mathematica,
makeA[nmax_] := Module[{n},
   Do[a[n] = N[Integrate[HermiteH[n, x]/Cosh[2*x]*E^(-x^2),
                 {x, -Infinity, Infinity}]] /
             N[Integrate[HermiteH[n, x]^2*E^(-x^2),
                 {x, -Infinity, Infinity}]],
      {n, 0, nmax}] ]
makeF[nmax_, x_] := Sum[a[n]*HermiteH[n, x], {n, 0, nmax}]
nmax = 10; makeA[nmax];
Plot[{makeF[nmax, x], 1/Cosh[2*x]}, {x, -2, 2}]
Note that a is a function defined for integer arguments. mathematica cannot do the Hermite integrals algebraically; using the function N we force numeric evaluation. Here are plots for truncation after H4 (left) and H10 (right). The function f(x) is everywhere positive and tends to zero at large |x|; its truncated expansion passes through negative values and becomes grossly in error beyond about x = n^{1/2}, where n is the last H_n before truncation.
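A CAS-free version of the same expansion; the Python sketch below (helper names mine) computes the coefficients by Simpson's rule on [−8, 8], using the normalization ⟨H_n|H_n⟩ = 2^n n!√π from Exercise 14.6.4:

```python
from math import cosh, exp, factorial, pi, sqrt

def hermite(n, x):
    """H_n(x) via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * x * h1 - 2 * k * h0
    return h1

def simpson(f, a, b, steps=4000):
    """Composite Simpson rule; [-8, 8] is effectively infinite for weight e^{-x^2}."""
    h = (b - a) / steps
    return h / 3 * sum((1 if k in (0, steps) else 4 if k % 2 else 2) * f(a + k * h)
                       for k in range(steps + 1))

nmax = 10
# a_n = <H_n|f> / <H_n|H_n>, with <H_n|H_n> = 2^n n! sqrt(pi).
a = [simpson(lambda x, n=n: hermite(n, x) / cosh(2 * x) * exp(-x * x), -8, 8)
     / (2**n * factorial(n) * sqrt(pi)) for n in range(nmax + 1)]

def expansion(x):
    return sum(a[n] * hermite(n, x) for n in range(nmax + 1))

print(expansion(0.5), 1 / cosh(1.0))   # good agreement inside |x| < 2
```

Since f is even, the odd coefficients come out numerically zero, matching the symbolic result.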

14.6.7.

To obtain the Rodrigues formula for H_n, Eq. (14.131):
(a) Start by showing that the following two operators Â and B̂ are identical:
$$\hat{A} = e^{x^2/2}\,(x - D)\,e^{-x^2/2} \qquad\text{and}\qquad \hat{B} = e^{x^2}\,(-D)\,e^{-x^2}\,.$$
Hint. Show that they lead to the same result when applied to the same arbitrary function f(x).
(b) Now form Â^n and B̂^n and apply both to f(x) = 1. Relate Â^n H_0 to Eq. (14.130) and B̂^n H_0 to Eq. (14.131).
(c) Complete the demonstration of Eq. (14.131) by showing that it produces H_n(x) at the conventional scale.

Solution: (a) Apply Â and B̂ to an arbitrary f(x):
$$\hat{A}f(x) = e^{x^2/2}\,x\,e^{-x^2/2} f(x) - e^{x^2/2}\left[(-x)\,e^{-x^2/2} f(x) + e^{-x^2/2} f'(x)\right] = 2x f(x) - f'(x)\,,$$
$$\hat{B}f(x) = e^{x^2}(2x)\,e^{-x^2} f(x) - f'(x) = 2x f(x) - f'(x)\,.$$
These expressions are equal, showing that Â = B̂.
(b) Form Â^n = e^{x²/2}(x − D)^n e^{−x²/2} and B̂^n = e^{x²}(−D)^n e^{−x²}. Noting that Eq. (14.119) gives φ_0(x) = e^{−x²/2}, we see that φ_n, as given in Eq. (14.130), is
$$\varphi_n(x) = e^{-x^2/2}\,\hat{A}^n\,1 = e^{-x^2/2}\,\hat{B}^n\,1 = e^{-x^2/2}\left[e^{x^2}(-1)^n \frac{d^n}{dx^n}\,e^{-x^2}\right].$$
However, we also know from Eq. (14.119) that φ_n(x) = e^{−x²/2} H_n(x), showing that the quantity in square brackets is equivalent (within a scale factor) to H_n(x). It is the Rodrigues formula for H_n given in Eq. (14.131).

(c) We can determine that the Rodrigues formula produces the Hermite polynomials at the conventional sign and scale by looking at the leading coefficient in H_n(x), namely the coefficient of x^n. The x^n term of the n-fold derivative will be reached if each differentiation is applied to e^{−x²}, thereby producing a factor −2x. Differentiation of powers of x produced by previous derivatives will decrease, not increase, the net power of x. Thus, the term of H_n containing x^n will be (−1)^n(−2x)^n; the coefficient is 2^n, as expected.

14.6.8.

The operators â = (x + D)/√2 and â† = (x − D)/√2 were introduced in the text just before Eq. (14.124).
(a) Show that the notations â and â† are appropriate because they are adjoints of each other in a Hilbert space for which the scalar product is defined as in Eq. (14.120).
(b) Writing
$$\langle \hat{a}\varphi_n \,|\, \hat{a}\varphi_n \rangle = \langle \varphi_n \,|\, \hat{a}^\dagger \hat{a}\,\varphi_n \rangle\,,$$
where φ_n are the functions defined in Eq. (14.119), show that this scalar product is positive if n > 0, zero when n = 0, and negative if n < 0.
(c) Since the scalar product of part (b) can never be negative (explain why!), show how the result of part (b) leads to the conclusion that eigenfunctions φ_n exist only if n is a nonnegative integer.

Solution: (a) Remembering that D stands for d/dx, we note that, if u and v are arbitrary members of a Hilbert space with the indicated scalar product, then
$$\int_{-\infty}^{\infty} u^*(xv)\,dx = \int_{-\infty}^{\infty} (xu)^* v\,dx\,,$$
$$\int_{-\infty}^{\infty} u^*(Dv)\,dx = \Big[u^* v\Big]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} (Du)^* v\,dx = -\int_{-\infty}^{\infty} (Du)^* v\,dx\,.$$
These equations imply that ⟨u|(x + D)v⟩ = ⟨(x − D)u|v⟩ for all u and v in the Hilbert space, so x − D is the adjoint of x + D, and (obviously) (x − D)/√2 is the adjoint of (x + D)/√2.
(b) The function φ_n has been defined to be an eigenfunction of L = (x² − D²)/2 with eigenvalue n + 1/2, and we know from Eq. (14.125) that also â†â φ_n = n φ_n. Thus the scalar product of the present problem has the same sign as n, and is zero when n = 0.

(c) The scalar product of part (b) is the square of the magnitude of âφ_n and therefore must be positive unless âφ_n is itself zero. Since we have already shown that â is an operator that lowers n by one unit, we see that if n is not a nonnegative integer we can by repeated application of â reach a negative value of n and therewith an inconsistency. We conclude that φ_n can exist only if n is a nonnegative integer.

14.7  LAGUERRE EQUATION

Exercises

14.7.1.

Plot Ln (x) for n = 2 over a range of x suﬃcient to include all its zeros. In particular, determine whether all the zeros occur for positive values of x. Then repeat this exercise for n = 8.

Solution: Because of the wide variation in the amplitude of the individual oscillations of the Laguerre polynomials, it may be difficult to count all the zeros of L8(x) from a single plot. One way to resolve this difficulty is to look at a succession of partial ranges; another is to multiply L8(x) by a factor that cannot be zero, such as e^{−x/2}. Whatever the strategy, the plots can be generated by code such as
> plot(LaguerreL(2,x), x = 0 .. 4);
Plot[LaguerreL[n,x], {x, x1, x2}]
All n zeros of L_n are distinct and occur for positive x.

14.7.2.

Starting from the Rodrigues formula for Ln (x), show that Ln (0) = 1.

Solution: At x = 0, the n-fold differentiation of x^n e^{−x} will produce terms that all vanish except for the single term in which x^n is differentiated n times and e^{−x} is not differentiated at all. This term of the differentiation will (for x = 0) have the value n!, and the complete expression for L_n(0) will be (1/n!)\,e^0\,(n!) = 1.

14.7.3.

Using Leibniz’ rule for the nth derivative of a product, show how Eq. (14.134) follows from Eq. (14.133).

Solution: Writing Leibniz' rule, Eq. (2.62), with e^{−x} differentiated j times and x^n differentiated n − j times,
$$\frac{d^n}{dx^n}\left(x^n e^{-x}\right) = \sum_{j=0}^{n} \binom{n}{j}(-1)^j e^{-x}\,\frac{n!\,x^j}{j!}\,,$$
leading directly from Eq. (14.133) to (14.134).

14.7.4.

Verify that Ln (x) as given in Eq. (14.134) satisﬁes the Laguerre ODE.

Solution: Identify the coefficient of x^p in each of the terms that result when the formula for L_n is substituted into the Laguerre ODE, where we write the ODE as xy'' + y' − xy' + ny = 0. Presenting the treatment of each term in tabular form (each row shows the generic term of the ODE, the value of j that yields x^p, and the resulting coefficient of x^p; the common factor n! of Eq. (14.134) is suppressed):

  xy'':   x·(−1)^j j(j−1) x^{j−2} / [(n−j)!\,j!\,j!]   with j = p+1:   (−1)^{p+1} x^p / [(n−p−1)!\,(p−1)!\,(p+1)!]
  y':     (−1)^j j x^{j−1} / [(n−j)!\,j!\,j!]          with j = p+1:   (−1)^{p+1} x^p / [(n−p−1)!\,p!\,(p+1)!]
  −xy':   −x·(−1)^j j x^{j−1} / [(n−j)!\,j!\,j!]       with j = p:     −(−1)^p x^p / [(n−p)!\,(p−1)!\,p!]
  ny:     n·(−1)^j x^j / [(n−j)!\,j!\,j!]              with j = p:     n(−1)^p x^p / [(n−p)!\,p!\,p!]

The overall coefficient of x^p adds to zero.

14.7.5.

Use the Laguerre generating function, Eq. (14.135), to establish the recurrence formulas, Eqs. (14.136) and (14.137).

Solution: Differentiating the generating function g(x, t) with respect to t,
$$\frac{\partial}{\partial t}\left(\frac{e^{-xt/(1-t)}}{1-t}\right) = \frac{1 - x - t}{(1-t)^2}\,g(x,t) = \sum_{n=0}^{\infty} n\,L_n(x)\,t^{n-1}\,.$$
Write the above equation in the form
$$(1 - x - t)\sum_n L_n(x)\,t^n = (1 - 2t + t^2)\sum_n n\,L_n(x)\,t^{n-1}$$
and set to zero the coefficient of each power of t. The result is the recurrence formula, Eq. (14.136). Differentiating g(x, t) with respect to x, we get
$$\frac{\partial g(x,t)}{\partial x} = \frac{-t}{1-t}\,g(x,t) = \sum_n L_n'(x)\,t^n\,.$$
Inserting the summation represented by g(x, t) and writing the resulting equation in the form
$$-\sum_n L_n(x)\,t^{n+1} = \sum_n L_n'(x)\,t^n - \sum_n L_n'(x)\,t^{n+1}\,,$$
we identify from the coefficient of t^{n+1} the recurrence formula
$$L_n(x) = L_n'(x) - L_{n+1}'(x)\,,$$
which we designate Formula A. To obtain Eq. (14.137), start by differentiating Eq. (14.136) with respect to x. This yields initially
$$(n+1)\,L_{n+1}'(x) = -L_n(x) + (2n+1-x)\,L_n'(x) - n\,L_{n-1}'(x)\,.$$
Now use Formula A to rewrite L_{n+1}' and L_{n−1}' in terms of L_n'. Much cancelation can then take place; what remains corresponds to Eq. (14.137).
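Both recurrence formulas can be spot-checked numerically. The Python sketch below builds L_n from the three-term form of Eq. (14.136), (n+1)L_{n+1} = (2n+1−x)L_n − nL_{n−1}, and checks the derivative formula in the standard form x L_n′ = n L_n − n L_{n−1}, which I take to be the content of Eq. (14.137) (an assumption, since that equation is not reproduced here); helper names are mine:

```python
def laguerre(n, x):
    """L_n(x) via (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    l0, l1 = 1.0, 1.0 - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2 * k + 1 - x) * l1 - k * l0) / (k + 1)
    return l1

def dlaguerre(n, x, h=1e-6):
    """L_n'(x) by central difference."""
    return (laguerre(n, x + h) - laguerre(n, x - h)) / (2 * h)

x = 2.3
for n in range(1, 6):
    # assumed form of Eq. (14.137): x L_n' = n L_n - n L_{n-1}
    print(x * dlaguerre(n, x), n * (laguerre(n, x) - laguerre(n - 1, x)))
```

As a byproduct, the recurrence confirms L_n(0) = 1 for all n, consistent with Exercise 14.7.2.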

14.7.6.

Establish the normalization of the Laguerre polynomials as follows:
1. Using the Laguerre generating function, Eq. (14.135), write
$$\int_0^{\infty} e^{-x}\,g(x,u)\,g(x,v)\,dx = \sum_{nm} \langle L_n | L_m \rangle\,u^n v^m\,.$$
2. Change the integration variable from x to (1 − uv)x/[(1 − u)(1 − v)] and carry out the left-hand-side integration.
3. Expand the result of that integration in powers of uv, and identify the normalization integrals.

Solution: Giving the new integration variable the name w, we find that the three exponentials e^{−xu/(1−u)}, e^{−xv/(1−v)}, and e^{−x} combine to yield just e^{−w}. The remainder of the integrand, dx/[(1 − u)(1 − v)], reduces to dw/(1 − uv), so the entire left-hand-side integral has the simple evaluation
$$\int_0^{\infty} e^{-x}\,g(x,u)\,g(x,v)\,dx = \frac{1}{1-uv}\int_0^{\infty} e^{-w}\,dw = \frac{1}{1-uv}\,.$$
Expanding this expression and setting it equal to the sum of powers of u and v,
$$\sum_n (uv)^n = \sum_{nm} \langle L_n | L_m \rangle\,u^n v^m\,,$$
we confirm that the L_n are orthogonal with the scalar product in use, and that the terms with m = n show that ⟨L_n|L_n⟩ = 1.

14.7.7.

In Hartree atomic units the differential equation for the radial wave functions of the hydrogen atom is
$$-\frac{1}{2}\,y'' - \frac{1}{r}\,y' - \frac{y}{r} + \frac{L(L+1)}{2r^2}\,y = E\,y\,,$$
where L is an integer (an angular quantum number), and E is an eigenvalue which for bound states must be negative. The solutions y must be finite at r = 0 and approach zero asymptotically as r → ∞.
(a) Show that the term with r^{−2} dependence is removed from the ODE by the substitution y = r^L u, yielding the equation
$$-\frac{1}{2}\,u'' - \frac{L+1}{r}\,u' - \frac{u}{r} = E\,u\,.$$
(b) Show that the further substitutions u = e^{−αr} v and E = −α²/2 remove the terms of the ODE that are dominant at large r, leaving
$$r\,v'' + (2L + 2 - 2\alpha r)\,v' + \left[2 - 2\alpha(L+1)\right] v = 0\,.$$
(c) Show that the change of variable x = 2αr, with v(r) renamed z(x), yields the equation
$$x\,z'' + (2L + 2 - x)\,z' + \left(\frac{1}{\alpha} - L - 1\right) z = 0\,.$$
(d) The ODE of part (c) is an associated Laguerre equation, and it will have solutions consistent with the requirements of the present problem only if the coefficient of z, denoted n in Eq. (14.139), is a nonnegative integer. We must therefore set 1/α to an integer value, N, and also require that N − L − 1 be nonnegative. Based on the above, write (in terms of associated Laguerre polynomials) the bound-state hydrogenic wave functions for general N and L, and state the ranges of values these parameters may assume.
(e) Write explicit formulas for the six hydrogenic radial wave functions of smallest E, identifying each by its values of N, L, and E. Write the Laguerre functions in polynomial form.


Solution: (a) From y = r^L u we get
\[ y' = L r^{L-1} u + r^L u' , \qquad y'' = L(L-1) r^{L-2} u + 2L r^{L-1} u' + r^L u'' . \]
When these quantities and the expression for y are substituted into the radial ODE and all terms are divided by r^L we get
\[ -\frac{u''}{2} - \frac{L+1}{r}\, u' - \frac{u}{r} = E u . \]

(b) Setting u = e^(−αr) v, we have
\[ u' = e^{-\alpha r}(-\alpha v + v') , \qquad u'' = e^{-\alpha r}\left( \alpha^2 v - 2\alpha v' + v'' \right) . \]
Setting also E = −α²/2 and substituting into the ODE for u, after dividing all terms by e^(−αr) we reach
\[ -\frac{1}{2}\left( \alpha^2 v - 2\alpha v' + v'' \right) - \frac{L+1}{r}\left( -\alpha v + v' \right) - \frac{v}{r} = -\frac{\alpha^2 v}{2} , \]
which simplifies to
\[ r v'' + (2L + 2 - 2\alpha r)\, v' + [\,2 - 2\alpha(L+1)\,]\, v = 0 . \]

(c) Setting v(r) = z(2αr) and defining x = 2αr, we now obtain
\[ v'(r) = 2\alpha z'(x) , \qquad v''(r) = 4\alpha^2 z''(x) . \]
Using these formulas and writing r = x/2α, our ODE becomes
\[ x z'' + (2L + 2 - x)\, z' + \left( \frac{1}{\alpha} - L - 1 \right) z = 0 . \]

(d) From Eqs. (14.139) and (14.140) we identify L^k_n(x) as polynomial solutions of the ODE
\[ x z'' + (k + 1 - x)\, z' + n z = 0 , \]
so z, as defined in the present problem, is a polynomial L^k_n with k + 1 = 2L + 2 and n = α^(−1) − L − 1. Because n is required to be a nonnegative integer we set α^(−1) = N, where N is an integer required to be at least as large as L + 1. Also, L must be a nonnegative integer. Thus, the general radial wavefunction found here is
\[ y_{NL}(r) = r^L e^{-r/N} L^{2L+1}_{N-L-1}(2r/N) . \]


(e) Inserting the value of α into the formula E = −α²/2, we get E_NL = −1/2N², which is independent of L but increases (becomes less negative) with increasing N. Thus, the state(s) of lowest E must have N = 1, and therefore also L = 0. Next in energy come the states with N = 2, for which L = 0 or L = 1. The states with N = 3 can have L = 0, L = 1, or L = 2. The six states of lowest E are therefore
\[ y_{10} = r^0 e^{-r/1} L^1_0(2r) = e^{-r} , \qquad E = -\tfrac{1}{2} , \]
\[ y_{20} = r^0 e^{-r/2} L^1_1(r) = e^{-r/2}(2 - r) , \qquad E = -\tfrac{1}{8} , \]
\[ y_{21} = r^1 e^{-r/2} L^3_0(r) = r\, e^{-r/2} , \qquad E = -\tfrac{1}{8} , \]
\[ y_{30} = r^0 e^{-r/3} L^1_2(2r/3) = e^{-r/3}\left( 3 - 2r + \frac{2r^2}{9} \right) , \qquad E = -\tfrac{1}{18} , \]
\[ y_{31} = r^1 e^{-r/3} L^3_1(2r/3) = r\, e^{-r/3}\left( 4 - \frac{2r}{3} \right) , \qquad E = -\tfrac{1}{18} , \]
\[ y_{32} = r^2 e^{-r/3} L^5_0(2r/3) = r^2 e^{-r/3} , \qquad E = -\tfrac{1}{18} . \]
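These explicit forms can be cross-checked against the general formula y_NL(r) = r^L e^(−r/N) L^(2L+1)_(N−L−1)(2r/N). The short Python check below is an addition to the text's Maple/Mathematica treatment; SciPy's `genlaguerre(n, k)` is the associated Laguerre polynomial L^k_n used here:

```python
# Verify the six explicit hydrogenic radial functions against the
# general formula y_NL(r) = r^L e^(-r/N) L^(2L+1)_(N-L-1)(2r/N).
import numpy as np
from scipy.special import genlaguerre

def y(N, L, r):
    """Radial wavefunction y_NL(r) built from the general formula."""
    return r**L * np.exp(-r / N) * genlaguerre(N - L - 1, 2 * L + 1)(2 * r / N)

r = np.linspace(0.1, 10.0, 50)
assert np.allclose(y(1, 0, r), np.exp(-r))
assert np.allclose(y(2, 0, r), np.exp(-r / 2) * (2 - r))
assert np.allclose(y(2, 1, r), r * np.exp(-r / 2))
assert np.allclose(y(3, 0, r), np.exp(-r / 3) * (3 - 2 * r + 2 * r**2 / 9))
assert np.allclose(y(3, 1, r), r * np.exp(-r / 3) * (4 - 2 * r / 3))
assert np.allclose(y(3, 2, r), r**2 * np.exp(-r / 3))
```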

14.8 CHEBYSHEV POLYNOMIALS

Exercises

14.8.1.

(a) Expand x^8 in Chebyshev polynomials Tn. Do so in two ways:
1. Find the coefficient of T8, denoted c8, that causes x^8 − c8T8(x) to have a vanishing x^8 term. Then find the coefficient of T6 that causes x^8 − c8T8(x) − c6T6(x) to have a vanishing x^6 term. Continue in the same way to find c4, c2, and c0.
2. Using symbolic computing if helpful, use the orthogonality of the Tn to develop the expansion.
(b) Check the accuracy of the expansion of x^8 if truncated after T6 by plotting x^8 and its truncated expansion on the same graph. Also plot the error in the truncated expansion.

Solution: This problem requires the use of T0 = 1, T2 = 2x² − 1, T4 = 8x⁴ − 8x² + 1, T6 = 32x⁶ − 48x⁴ + 18x² − 1, and T8 = 128x⁸ − 256x⁶ + 160x⁴ − 32x² + 1. All except T8 are in Table 14.7 of the text. All can be obtained by symbolic computing, using the code shown between Eqs. (14.150) and (14.151) of the text.

(a) Form first
\[ x^8 - \frac{T_8}{128} = 2x^6 - \frac{5}{4} x^4 + \frac{1}{4} x^2 - \frac{1}{128} . \]
Now subtract T6/16:
\[ x^8 - \frac{T_8}{128} - \frac{T_6}{16} = \frac{7}{4} x^4 - \frac{7}{8} x^2 + \frac{7}{128} . \]
Next subtract 7T4/32:
\[ x^8 - \frac{T_8}{128} - \frac{T_6}{16} - \frac{7T_4}{32} = \frac{7}{8} x^2 - \frac{21}{128} . \]
The last two steps require the subtraction of 7T2/16 and 35T0/128, giving the complete Chebyshev expansion as
\[ x^8 = \frac{1}{128} T_8(x) + \frac{1}{16} T_6(x) + \frac{7}{32} T_4(x) + \frac{7}{16} T_2(x) + \frac{35}{128} T_0(x) . \]

The expansion can also be obtained using the orthogonality of the Tn as given in Eq. (14.152). The coefficient of Tn is
\[ a_n = \frac{ \int_{-1}^{1} x^8\, T_n(x)\, (1 - x^2)^{-1/2}\, dx }{ \int_{-1}^{1} [T_n(x)]^2\, (1 - x^2)^{-1/2}\, dx } . \]
The coefficients can be obtained by one of the following symbolic codes,

> a[n]:=int(x^8*ChebyshevT(n,x)/sqrt(1-x^2),x=-1 .. 1)/
>       int(ChebyshevT(n,x)^2/sqrt(1-x^2), x=-1 .. 1);

a[n]:=Integrate[x^8*ChebyshevT[n,x]/Sqrt[1-x^2], {x, -1, 1}]/
      Integrate[ChebyshevT[n,x]^2/Sqrt[1-x^2], {x, -1, 1}]

executed for n = 0, 2, 4, 6, and 8.

(b) Assuming that a[0], a[2], a[4] and a[6] are available, define the truncated expansion s6 as a procedure, using one of

> s6 := proc(x);
>    add(a[2*n]*ChebyshevT(2*n,x), n = 0 .. 3)
> end proc;

s6[x_] := Sum[a[2*n]*ChebyshevT[2*n,x], {n,0,3}]

Then make the two plots by one of

> plot([s6(x),x^8], x = -1 .. 1);
> plot(s6(x)-x^8, x = -1 .. 1);

Plot[{s6[x],x^8}, {x, -1, 1}]
Plot[s6[x]-x^8, {x, -1, 1}]

At left below is a plot of the expansion of x^8; at right is a plot of the error in the truncated expansion.
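The expansion coefficients found in part (a) can also be confirmed with NumPy's power-to-Chebyshev conversion; this Python check is an addition (the text's coding is Maple/Mathematica):

```python
# Check that the Chebyshev coefficients of x^8 agree with the values
# 35/128, 7/16, 7/32, 1/16, 1/128 obtained above (even indices only).
import numpy as np
from numpy.polynomial import chebyshev as C

# Power-series coefficients of x^8, lowest degree first.
c = C.poly2cheb([0] * 8 + [1])
expected = [35/128, 0, 7/16, 0, 7/32, 0, 1/16, 0, 1/128]
assert np.allclose(c, expected)
```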

Chapter 15

PARTIAL DIFFERENTIAL EQUATIONS

15.1 INTRODUCTION

Exercises

15.1.1.

Prove that if the electrostatic potential ψ is given at all points of a surface ∂V enclosing a charge-free region V, the value of ψ, given as the solution of the Laplace equation for the region V subject to the given boundary values, is unique. Start by assuming that there are two distinct solutions, ψ1 and ψ2; their difference, u = ψ1 − ψ2, then must satisfy the Laplace equation ∇²u = 0 for the region V, with u = 0 on ∂V. Next apply the divergence theorem to ∫ ∇ · (u∇u) dτ to show that
\[ \int_V u\, \nabla^2 u\, d\tau + \int_V \nabla u \cdot \nabla u\, d\tau = 0 , \]
and (explain why) we can then conclude that u = 0 throughout V.

Solution: The divergence theorem tells us that
\[ \int \nabla \cdot (u \nabla u)\, d\tau = \int_{\partial V} u\, \nabla u \cdot d\sigma = 0 \]
because u vanishes everywhere on ∂V. The display equation in the statement of this exercise is the result of expanding the integrand in the left-hand side of the above equation. But the first of the two integrals in the exercise statement is zero because u satisfies the Laplace equation. That leaves
\[ \int_V \nabla u \cdot \nabla u\, d\tau = 0 . \]
The integrand of this integral is nonnegative, so the integral can vanish only if ∇u is itself zero throughout V. Then u is constant in V, and since u is zero on the boundary, it must be zero throughout V. This in turn means that ψ1 = ψ2, i.e., that the solution is unique.
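The uniqueness statement can also be illustrated numerically: relaxing a finite-difference Laplace problem from two very different interior starting guesses, with identical Dirichlet boundary values, yields the same interior solution. This Python sketch is an added illustration, not part of the analytic proof:

```python
# Jacobi relaxation of the Laplace equation on a 20x20 grid.  The
# boundary (here psi = 1 on one edge, 0 elsewhere) is held fixed;
# two different interior initializations converge to the same answer.
import numpy as np

def relax(grid, sweeps=5000):
    g = grid.copy()
    for _ in range(sweeps):
        # Replace each interior point by the average of its neighbors.
        g[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1]
                                + g[1:-1, :-2] + g[1:-1, 2:])
    return g

n = 20
boundary = np.zeros((n, n))
boundary[0, :] = 1.0          # psi = 1 on one edge of the square

g1 = boundary.copy()          # interior initially all zero
g2 = boundary.copy()
g2[1:-1, 1:-1] = 5.0          # a very different interior guess

assert np.allclose(relax(g1), relax(g2), atol=1e-6)
```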


15.2 SEPARATION OF VARIABLES

Exercises

15.2.1. Solve the vibrating string problem of Example 15.2.1 subject to the initial conditions that the string be undisplaced at time t = 0 (i.e., ψ(x, 0) = 0 for all x), but that its initial velocity distribution be of the form
\[ \left. \frac{\partial \psi(x,t)}{\partial t} \right|_{t=0} = \begin{cases} Ax, & 0 \le x \le L/2, \\ A(L-x), & L/2 \le x \le L. \end{cases} \]

Solution: Example 15.2.1 has already derived the general solution of this problem subject to the boundary conditions; it is given in Eq. (15.20). Our task here is to find a specific solution that is also consistent with the initial conditions of the present problem. The string is to be undisplaced at t = 0, which requires us to set the coefficients an to zero and to have a displacement given by
\[ \psi(x,t) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L} \sin\frac{n\pi v t}{L} . \]
We now need to impose the condition corresponding to the velocity distribution of the string at t = 0. Differentiating the above ψ(x, t) with respect to t and identifying the result at t = 0,
\[ \left. \frac{\partial \psi}{\partial t} \right|_{t=0} = \sum_{n=1}^{\infty} b_n \left( \frac{n\pi v}{L} \right) \sin\frac{n\pi x}{L} . \]
We see that the initial velocity distribution in x is given by a Fourier sine series, with the coefficient of the nth term equal to nπvbn/L. The sine series needs to describe the triangular wave given in the problem statement; this wave distribution is (except for length scale) the same as that treated in Example 12.5.1, and, scaling the solution from that example, we have
\[ \begin{cases} Ax, & 0 \le x \le L/2, \\ A(L-x), & L/2 \le x \le L, \end{cases} \quad = \sum_{n=1}^{\infty} c_n \sin\frac{n\pi x}{L} , \]
with
\[ c_n = \begin{cases} \dfrac{4AL(-1)^{(n-1)/2}}{n^2 \pi^2} , & n \text{ odd}, \\[1ex] 0, & n \text{ even}. \end{cases} \]
We now set nπvbn/L = cn, equivalent to bn = Lcn/nπv. Changing the summation index to sum only over odd n, our overall solution to this problem is
\[ \psi(x,t) = \frac{4AL^2}{\pi^3 v} \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^3} \sin\frac{(2n+1)\pi x}{L} \sin\frac{(2n+1)\pi v t}{L} . \]

15.2.2.

Repeat Exercise 15.2.1 but with the initial velocity a delta function at the point x0 (the result of striking the string with a hammer at x = x0 ).

Solution: This problem differs from Exercise 15.2.1 only in the initial velocity distribution.


We need here the Fourier sine series on the interval (0, L) of δ(x − x0). Its Fourier coefficients (here labeled cn) are
\[ c_n = \frac{2}{L} \int_0^L \delta(x - x_0) \sin\frac{n\pi x}{L}\, dx = \frac{2}{L} \sin\frac{n\pi x_0}{L} . \]
Comparing with the solution of Exercise 15.2.1, this problem has a solution of the same form, but with the current value of cn. Its overall solution is
\[ \psi(x,t) = \frac{2}{\pi v} \sum_{n=1}^{\infty} \frac{1}{n} \sin\frac{n\pi x_0}{L} \sin\frac{n\pi x}{L} \sin\frac{n\pi v t}{L} . \]

15.2.3. Compare the results of Exercises 15.2.1 and 15.2.2 (the latter for the case x0 = L/2) by computing the intensities b²n of the first 6 harmonics (the values of n from 1 to 6), and compare these intensities with the a²n that were found as the solution of Example 15.2.1.

Solution: Making the comparison by expressing the intensities as fractions of that of the fundamental, the intensities from Exercise 15.2.1 are zero for the even harmonics and 1/n⁶ for the odd harmonics. For Exercise 15.2.2, the factor sin nπx0/L (with x0 = L/2) is ±1 for the odd harmonics and zero for the even ones. The odd-harmonic intensities are 1/n². We therefore have the following table of intensities:

Harmonic          1    2    3       4    5       6
Plucked String    1    0    3^(-6)  0    5^(-6)  0
Hammered String   1    0    3^(-2)  0    5^(-2)  0

15.2.4.

Show that a change of variable from k to ik causes the space spanned by sin kx and cos kx to become a space spanned by sinh kx and cosh kx. Thus, changing the sign of k2 in the ODE ψ ′′ + k2 ψ = 0 converts its solutions from trigonometric to hyperbolic functions.

Solution: Because sin ikx = i sinh kx and cos ikx = cosh kx, any linear combination of sin kx and cos kx is transformed into a linear combination of sinh kx and cosh kx.

15.2.5. maple and mathematica both support contour plots. Create a procedure tt that computes T(x, y) as given in Eq. (15.34) and plot the temperature distribution for a unit square plate, with contours 0, 0.2, 0.4, 0.6, and 0.8 (in units of T0). Basic coding to produce the plots is:

maple:
> with(plots):
> contourplot(tt(x,y), x = 0 .. 1, y = 0 .. 1,
>             contours = [0.,.2,.4,.6,.8] );

mathematica:
ContourPlot[tt[x,y], {x,0,1}, {y,0,1}, Contours -> {.2,.4,.6,.8} ]

(The desired contours are the default values for mathematica and hence need not have been specified.) More detail regarding contour plots, including a discussion as to how to control a plot's aspect ratio, can be found in Appendix A.

Solution: In maple,

> tt := proc(nmax,x,y) local n;
>    4/Pi*add(sinh((2*n+1)*Pi*x)*sin((2*n+1)*Pi*y)/(2*n+1)
>       /sinh((2*n+1)*Pi), n=0 .. nmax)
> end proc;
> with(plots):
> nmax:=20; contourplot(tt(nmax,x,y), x=0 .. 1, y=0 .. 1);

In mathematica,

tt[nmax_,x_,y_] := Module[{n},
   4/Pi*Sum[Sinh[(2*n+1)*Pi*x]*Sin[(2*n+1)*Pi*y]/(2*n+1)
      /Sinh[(2*n+1)*Pi], {n, 0, nmax}]]
nmax = 20;
ContourPlot[tt[nmax,x,y], {x,0,1}, {y,0,1}]

The above coding should produce a plot similar to Fig. 15.1.

15.2.6.

Modify the solution for the two-dimensional Laplace equation given in Eq. (15.34) so that it applies for T = T0 on the boundary at x = 0, with T = 0 on the other boundaries of the rectangle x = (0, Lx ), y = (0, Ly ). Then (referring to Exercise 15.2.5 if helpful), for Lx = Ly = 1, (a) Make a contour plot of ψ(x, y) for this new problem. (b) Form the solution ψ(x, y) for a problem in which T = T0 on both the boundaries x = 0 and x = Lx (with T = 0 on y = 0 and y = Ly ). (c) Make a contour plot of ψ(x, y) for the problem of part (b).

Solution: This problem differs from that in Eq. (15.34) only by the reflection transformation x → Lx − x. We need only make this change in the solution and in the symbolic coding needed to make the contour plots. (a) Replacing x by (1-x) in the symbolic coding of the procedure tt in the solution of Exercise 15.2.5, we make a plot as in that Exercise. The result is shown here:

(b) Because the sum of two solutions to the Laplace equation is also a solution, with the values on the boundaries the sum of those of the two solutions, we can add the solution of the present problem to that in Eq. (15.34), reaching thereby the boundary conditions that are now desired.
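The superposition argument of part (b) is easy to check numerically. Below is a Python transcription of the series of Eq. (15.34) for the unit square with T0 = 1; this check is an addition to the text's Maple/Mathematica coding, and the sinh ratio is rewritten with exponentials to avoid floating-point overflow:

```python
# Series solution for the unit square: T = 1 on the edge x = 1, T = 0
# on the other edges.  The superposition tt(x,y) + tt(1-x,y) then has
# T = 1 on both edges x = 0 and x = 1.
import numpy as np

def tt(x, y, nmax=200):
    total = 0.0
    for m in range(nmax + 1):
        a = (2 * m + 1) * np.pi
        # sinh(a*x)/sinh(a) for 0 <= x <= 1, computed without overflow:
        ratio = np.exp(a * (x - 1)) * (1 - np.exp(-2 * a * x)) / (1 - np.exp(-2 * a))
        total += (4 / np.pi) * ratio * np.sin(a * y) / (2 * m + 1)
    return total

# Single solution: boundary values at x = 1 and x = 0.
assert abs(tt(1.0, 0.5) - 1.0) < 1e-2   # partial sum of a square wave
assert abs(tt(0.0, 0.5)) < 1e-12

# Superposition: T = 1 on both edges x = 0 and x = 1.
both = lambda x, y: tt(x, y) + tt(1 - x, y)
assert abs(both(0.0, 0.5) - 1.0) < 1e-2
assert abs(both(1.0, 0.5) - 1.0) < 1e-2
```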


(c) To make the plot, in maple replace tt(nmax,x,y) in the plot command by tt(nmax,x,y)+tt(nmax,1-x,y); in mathematica the replacement in the Plot command is tt[nmax,x,y]+tt[nmax,1-x,y]. The plot is shown here:

15.2.7.

What steady-state temperature distribution within the rectangle of Example 15.2.2 do you expect if the edges at x = 0, y = 0, and y = Ly are all kept at T = T0 while that at x = Lx is set to zero? (a) Obtain the temperature distribution ψ(x, y) for this problem, using equations motivated by Eq. (15.34). (b) For Lx = Ly = 1, make a contour plot for the distribution of part (a). (See Exercise 15.2.5 if helpful.) (c) Compare your plot with that obtained in Exercise 15.2.5.

Solution: This problem differs from that of Example 15.2.2 in that the roles of the initial conditions T = 0 and T = T0 are interchanged. (a) We need only to make the same interchange in the solution, thereby reaching
\[ \psi(x,y) = T_0 - \frac{4T_0}{\pi} \sum_{n\ \mathrm{odd}} \left[ n \sinh\frac{n\pi L_x}{L_y} \right]^{-1} \sinh\frac{n\pi x}{L_y} \sin\frac{n\pi y}{L_y} . \]

(b) Make the plot by the procedure used in Exercise 15.2.5. (c) The plot is the same as that in Fig. 15.1, but with the contours (from left to right) at 0.8, 0.6, 0.4, and 0.2 T0.

15.2.8.

Verify that Eq. (15.36) is a solution to X ′′ = 0 such that X(0) = 0 and X(L) = T0 .

Solution: Clearly X′′(x) = 0, X(0) = 0, and X(L) = T0.

15.2.9.

Either by hand or using symbolic computing, verify the integration in Eq. (15.46).

Solution: To evaluate by hand, integrate by parts to remove the factor x from the integrand. Using maple, it is convenient to use assume to restrict n to integer values. Thus,

> assume(n,integer);
> 2/L*int(T0*x/L*sin(n*Pi*x/L), x=0 .. L);

\[ -\frac{2(-1)^n T_0}{n\pi} \]

In mathematica,

2/L*Integrate[T0*x/L*Sin[n*Pi*x/L], {x, 0, L}]

\[ \frac{2 T_0 \left( -n\pi \cos n\pi + \sin n\pi \right)}{n^2 \pi^2} , \]

which simplifies to the correct result.
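The same verification can be done in Python with SymPy; this is an added alternative to the Maple/Mathematica coding above:

```python
# SymPy verification of the integration in Eq. (15.46):
# (2/L) * integral of (T0*x/L) sin(n*pi*x/L) over (0, L)
# should equal -2*(-1)^n*T0/(n*pi) for integer n.
import sympy as sp

x, L, T0 = sp.symbols('x L T0', positive=True)
n = sp.symbols('n', integer=True, positive=True)

bn = sp.simplify(2 / L * sp.integrate(T0 * x / L * sp.sin(n * sp.pi * x / L),
                                      (x, 0, L)))
assert sp.simplify(bn + 2 * (-1)**n * T0 / (n * sp.pi)) == 0
```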

15.2.10.

Two large slabs, each of thickness d, are caused to come to a steady-state temperature distribution with one side at T = 0 and the other side at T = T0. At time t = 0 they are stacked with the T = T0 sides touching each other and the outer (T = 0) sides are maintained (at all times) at temperature T = 0. Applying the diffusion equation, Eq. (15.35), (a) Find the temperature distribution within the slabs as a function of position and time. (b) Make plots of the temperature profile at the times at which the interface between the two slabs is at temperatures (i) 0.8 T0 and (ii) 0.6 T0.

Solution: (a) Writing T(x, t) = X(x)U(t) and solving the diffusion equation subject to the boundary conditions X(0) = X(2d) = 0, we find
\[ \frac{X''}{X} = \lambda , \qquad \frac{U'}{k^2 U} = \lambda . \]
The solutions Xn satisfying the boundary conditions are (at arbitrary scale)
\[ X_n = \sin\frac{n\pi x}{2d} , \qquad \text{corresponding to} \qquad \lambda_n = -\frac{n^2 \pi^2}{4d^2} . \]
Then we have
\[ \frac{U_n'}{k^2 U_n} = \lambda_n , \qquad \text{leading to} \qquad U_n(t) = e^{-n^2 k^2 \pi^2 t / 4d^2} . \]
Any linear combination of these solutions will also satisfy the diffusion equation, so we now form the combination that will reproduce the specified initial conditions, namely
\[ T(x,0) = \begin{cases} x T_0 / d, & 0 \le x < d, \\ (2d - x) T_0 / d, & d < x \le 2d. \end{cases} \]
At t = 0, U(t) = 1, so our expansion for T(x, 0) is
\[ T(x,0) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{2d} , \]
which is a Fourier sine series. The bn are calculated (making use of the symmetry about x = d) as
\[ b_{2m} = 0 , \qquad b_{2m+1} = \frac{2}{d} \int_0^d \frac{T_0 x}{d} \sin\frac{(2m+1)\pi x}{2d}\, dx = \frac{8 T_0 (-1)^m}{(2m+1)^2 \pi^2} . \]
The solution for general t is therefore
\[ T(x,t) = 8 T_0 \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m+1)^2 \pi^2} \sin\frac{(2m+1)\pi x}{2d}\; e^{-(2m+1)^2 k^2 \pi^2 t / 4d^2} . \]
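As a quick check of the coefficients, the series at t = 0 should reproduce the triangular initial profile. The following Python evaluation is an added check (units with T0 = 1, d = 1, k = 1):

```python
# Check that the series solution at t = 0 reproduces the triangular
# initial temperature profile of the two stacked slabs.
import numpy as np

def T(x, t, d=1.0, T0=1.0, k=1.0, nmax=2000):
    m = np.arange(nmax + 1)
    terms = ((-1.0)**m / ((2*m + 1)**2 * np.pi**2)
             * np.sin((2*m + 1) * np.pi * x / (2*d))
             * np.exp(-(2*m + 1)**2 * k**2 * np.pi**2 * t / (4*d**2)))
    return 8 * T0 * terms.sum()

assert abs(T(1.0, 0.0) - 1.0) < 1e-3    # interface (x = d) at T0
assert abs(T(0.5, 0.0) - 0.5) < 1e-3    # T0*x/d on the left slab
assert abs(T(1.5, 0.0) - 0.5) < 1e-3    # (2d-x)*T0/d on the right slab
assert abs(T(0.0, 0.0)) < 1e-12         # outer face at T = 0
```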


(b) The function to be plotted is, in maple (the body of the procedure parallels the mathematica coding below),

> TT := proc(x,t,nmax) local m;
>    8*T0*add((-1)^m/((2*m+1)^2*Pi^2)*sin((2*m+1)*Pi*x/(2*d))
>       *exp(-(2*m+1)^2*k^2*Pi^2*t/(4*d^2)), m=0 .. nmax)
> end proc;

To check that this gives the correct triangular temperature profile at t = 0, enter some numerical data and make a plot:

> nmax:=10: k:=1: T0:=3: d:=7:
> plot(TT(x,0,nmax), x = 0 .. 14);

To find the time t0 at which TT(d,t0) has some chosen value, we plot TT(d,t,nmax) for a range of t, adjusting the plot range until t0 has been identified with sufficient accuracy. Such a plot corresponds to the coding

> plot(TT(d,t,nmax), t = t1 .. t2);

Finally, given a value of t0, the profile of T(x, t0) as a function of x is given by

> plot(TT(x,t0,nmax), x = 0 .. 14);

In mathematica, the function to be plotted is

TT[x_,t_,nmax_] := Module[{m},
   8*T0*Sum[(-1)^m/((2*m+1)^2*Pi^2)*Sin[(2*m+1)*Pi*x/(2*d)]
      * E^(-(2*m+1)^2*k^2*Pi^2*t/(4*d^2)), {m,0,nmax}] ]

To check that this gives the correct triangular temperature profile at t = 0, enter some numerical data and make a plot:

nmax=10; k=1; T0=3; d=7;
Plot[TT[x,0,nmax], {x,0,14}]

To find the time t0 at which TT(d,t0) has some chosen value, we plot TT(d,t,nmax) for a range of t, adjusting the plot range until t0 has been identified with sufficient accuracy. Such a plot corresponds to the coding

Plot[TT[d,t,nmax], {t, t1, t2} ]

Finally, given a value of t0, the profile of T(x, t0) as a function of x is given by

Plot[TT[x,t0,nmax], {x,0,14}]

Here is the plot of the triangular initial temperature profile (with nmax = 10).

Next are two plots indicating the values of t for which the interface is at 0.8T0 and 0.6T0.

The above plots indicate that the time at which the maximum temperature is 0.8T0 is 1.539, and the time it is 0.6T0 is 6.158. These are the times used to generate the temperature proﬁles shown below.
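Instead of locating t0 graphically, one can solve T(d, t0) = 0.8 T0 directly with a root finder. The following Python sketch is an added alternative (same parameters k = 1, T0 = 3, d = 7 as above) and reproduces the quoted times:

```python
# Solve T(d, t0) = 0.8*T0 and 0.6*T0 for t0 with Brent's method,
# instead of reading the times off a plot.
import numpy as np
from scipy.optimize import brentq

def T(x, t, d=7.0, T0=3.0, k=1.0, nmax=200):
    m = np.arange(nmax + 1)
    terms = ((-1.0)**m / ((2*m + 1)**2 * np.pi**2)
             * np.sin((2*m + 1) * np.pi * x / (2*d))
             * np.exp(-(2*m + 1)**2 * k**2 * np.pi**2 * t / (4*d**2)))
    return 8 * T0 * terms.sum()

d, T0 = 7.0, 3.0
t08 = brentq(lambda t: T(d, t) - 0.8 * T0, 0.01, 50.0)
t06 = brentq(lambda t: T(d, t) - 0.6 * T0, 0.01, 50.0)
assert abs(t08 - 1.539) < 0.01
assert abs(t06 - 6.158) < 0.01
```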

15.2.11.

Repeat the calculation and plotting of Exercise 15.2.10 if the outer sides of the two slabs (initially at T = 0) are insulated (so no heat flows between them and the surroundings). The boundary condition for an insulated end at x = x0 is that ∂T/∂x vanish at x = x0. Hint. The final temperature of both slabs will be constant, at T = T0/2.

Solution: (a) Making the separation of variables as in Exercise 15.2.10, but with the present boundary condition, we have
\[ X_n = \cos\frac{n\pi x}{d} , \quad \text{with} \quad \lambda_n = -\frac{n^2 \pi^2}{d^2} \quad \text{and} \quad U_n(t) = e^{-n^2 k^2 \pi^2 t / d^2} . \]
We need to fit T(x, 0) to a series of the form
\[ T(x,0) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi x}{d} . \]
The an have the values
\[ a_0 = T_0 , \qquad a_{2m+1} = \frac{2}{d} \int_0^d \frac{T_0 x}{d} \cos\frac{(2m+1)\pi x}{d}\, dx = -\frac{4 T_0}{(2m+1)^2 \pi^2} . \]
The a2m of nonzero m vanish. The solution for general t is therefore
\[ T(x,t) = \frac{T_0}{2} - \frac{4 T_0}{\pi^2} \sum_{m=0}^{\infty} \frac{1}{(2m+1)^2} \cos\frac{(2m+1)\pi x}{d}\; e^{-(2m+1)^2 k^2 \pi^2 t / d^2} . \]


(b) The maple code for the temperature profile is (its body parallels the mathematica coding below)

> TT := proc(x,t,nmax) local m;
>    T0/2 - 4*T0/Pi^2*add(cos((2*m+1)*Pi*x/d)/(2*m+1)^2
>       * exp(-(2*m+1)^2*Pi^2*k^2*t/d^2), m=0 .. nmax)
> end proc;

The corresponding mathematica code is

TT[x_,t_,nmax_] := Module[{m},
   T0/2 - 4*T0/Pi^2*Sum[Cos[(2*m+1)*Pi*x/d]/(2*m+1)^2
      * E^(-(2*m+1)^2*Pi^2*k^2*t/d^2), {m,0,nmax}] ]

These procedures can now be used as in Exercise 15.2.10. The initial temperature profile for the present problem is shown here:

These are the plots that indicate the times at which the maximum temperature is 0.8T0 and 0.6T0 :

The above plots indicate that the time at which the maximum temperature is 0.8T0 is 1.539, and the time it is 0.6T0 is 6.948. These are the times used to generate the temperature profiles shown below.


15.2.12.

Solve the quantum problem of a particle in a cube of side a. Use dimensionless units, as in Example 15.2.4.

(a) Show that the energy eigenvalues are of the form
\[ E_{nmp} = \frac{\pi^2}{2a^2} \left[ n^2 + m^2 + p^2 \right] , \]
and tabulate all the solutions for which n² + m² + p² ≤ 14.

(b) Show that the degenerate eigenfunctions can be interconverted by carrying out symmetry operations on the cube, and that an eigenvalue is unique only when the eigenfunction has (except possibly for sign) the full cubic symmetry.

Solution: (a) Separating the variables and proceeding as in Example 15.2.4, we obtain Eq. (15.55) with a = b = c, yielding Enmp as given in the problem statement. Each distinct ordered list of n, m, p corresponds to a different eigenfunction. In order of energy, the first few possible choices of n, m, p are:

n, m, p    n² + m² + p²    ψnmp
1, 1, 1    3     sin πx/a sin πy/a sin πz/a
2, 1, 1    6     sin 2πx/a sin πy/a sin πz/a
1, 2, 1    6     sin πx/a sin 2πy/a sin πz/a
1, 1, 2    6     sin πx/a sin πy/a sin 2πz/a
2, 2, 1    9     sin 2πx/a sin 2πy/a sin πz/a
2, 1, 2    9     sin 2πx/a sin πy/a sin 2πz/a
1, 2, 2    9     sin πx/a sin 2πy/a sin 2πz/a
3, 1, 1    11    sin 3πx/a sin πy/a sin πz/a
1, 3, 1    11    sin πx/a sin 3πy/a sin πz/a
1, 1, 3    11    sin πx/a sin πy/a sin 3πz/a
2, 2, 2    12    sin 2πx/a sin 2πy/a sin 2πz/a
3, 2, 1    14    sin 3πx/a sin 2πy/a sin πz/a
3, 1, 2    14    sin 3πx/a sin πy/a sin 2πz/a
2, 3, 1    14    sin 2πx/a sin 3πy/a sin πz/a
1, 3, 2    14    sin πx/a sin 3πy/a sin 2πz/a
2, 1, 3    14    sin 2πx/a sin πy/a sin 3πz/a
1, 2, 3    14    sin πx/a sin 2πy/a sin 3πz/a

(b) The six eigenfunctions for n² + m² + p² = 14 can be interconverted by rotating the coordinate system to interchange (in all possible ways) the x, y, and z coordinates. The eigenfunctions all have the same energy because they describe situations that are similar except for spatial orientation. The sets of eigenfunctions for n² + m² + p² equal to 6, 9, or 11 occur in groups of three because half the rotations do not change the wavefunction, and the unique eigenfunctions for n² + m² + p² equal to 3 or 12 are invariant under all the coordinate interchanges.

15.2.13.

Consider a quantum particle in a two-dimensional rectangular box with sides a and 2a. Using dimensionless units, as in Example 15.2.4, find all the eigenfunctions with E ≤ 3π²/a². Identify any degeneracies.


Solution: Here
\[ E_{nm} = \frac{\pi^2}{2} \left( \frac{n^2}{a^2} + \frac{m^2}{4a^2} \right) = \frac{\pi^2}{8a^2} \left( 4n^2 + m^2 \right) . \]
The eigenfunctions requested here have 4n² + m² ≤ 24.

n, m    4n² + m²    ψnm
1, 1    5     sin πx/a sin πy/2a
1, 2    8     sin πx/a sin 2πy/2a
1, 3    13    sin πx/a sin 3πy/2a
2, 1    17    sin 2πx/a sin πy/2a
1, 4    20    sin πx/a sin 4πy/2a
2, 2    20    sin 2πx/a sin 2πy/2a

There is an “accidental degeneracy” (one not associated with symmetry) at 4n² + m² = 20.

15.2.14.

Consider a three-dimensional quantum harmonic oscillator, with wave functions described by the time-independent Schrödinger equation (in dimensionless units), with V = (x² + y² + z²)/2:
\[ -\frac{1}{2} \nabla^2 \psi(x,y,z) + \frac{1}{2} \left( x^2 + y^2 + z^2 \right) \psi = E \psi . \]
This equation must be solved subject to the condition that ψ approach zero for points far from the coordinate origin (i.e., when any of ±x, ±y, or ±z become large).

(a) We plan to solve this equation by separating the variables. Letting ψ = X(x)Y(y)Z(z), show that the separation leads to the ODEs
\[ -\frac{X''}{2} + \frac{x^2}{2} X = E_x X , \qquad -\frac{Y''}{2} + \frac{y^2}{2} Y = E_y Y , \qquad -\frac{Z''}{2} + \frac{z^2}{2} Z = E_z Z , \]
with E = Ex + Ey + Ez.

(b) The equation for X has solutions
\[ X_n(x) = e^{-x^2/2} H_n(x) , \qquad n = 0, 1, 2, \ldots , \]
where Hn(x) is the Hermite polynomial of degree n. Note that this solution vanishes in the limit of large |x|. Insert the formula for Xn(x) into the X equation and, comparing with the Hermite ODE, Eq. (14.107), verify the solution and show that the value of Ex corresponding to Xn is Ex = n + 1/2. Hint. It may be helpful to look at Eqs. (14.117) and (14.119).

(c) Extending the work of part (b) to the Y and Z equations, show that the eigenfunctions of ψ can be denoted ψnmp with Enmp = n + m + p + 3/2, and give a formula for ψnmp.

(d) List all the states with E ≤ 9/2, for each giving its values of the indices n, m, p, its eigenfunction ψnmp (expanded as an explicit polynomial times an exponential), and its value of E. Note. Save your answer to part (d); it will be needed in connection with a later Exercise.

Solution: (a) Writing ψ = X(x)Y(y)Z(z), noting that
\[ \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} , \]
and dividing through by XYZ, we reach
\[ \left[ \frac{1}{2} \left( -\frac{X''}{X} + x^2 \right) \right] + \left[ \frac{1}{2} \left( -\frac{Y''}{Y} + y^2 \right) \right] + \left[ \frac{1}{2} \left( -\frac{Z''}{Z} + z^2 \right) \right] = E . \]
Since each bracketed term depends on only one of the variables x, y, z, each must separately be constant, and we can set
\[ \frac{1}{2} \left( -\frac{X''}{X} + x^2 \right) = E_x \quad \text{etc.,} \]
with Ex (and also Ey and Ez) constants whose possible values are not yet determined, but with Ex + Ey + Ez = E. This equation (and the similar equations for Y and Z) are equivalent to those given in the Exercise.

(b) We require
\[ X_n'' = e^{-x^2/2} H_n'' - 2x\, e^{-x^2/2} H_n' + (x^2 - 1)\, e^{-x^2/2} H_n . \]
Inserting this expression into the ODE for Xn, we find, canceling e^(−x²/2),
\[ -\frac{1}{2} H_n'' + x H_n' - \left( E_x - \tfrac{1}{2} \right) H_n = 0 . \]
Comparing with the Hermite ODE, Eq. (14.107), we see that Hn satisfies this equation if it is the Hermite polynomial of degree n, with Ex − 1/2 = n. This confirms that Xn is a solution, with Ex = n + 1/2.

(c) Combining the solutions for X, Y, and Z, we find
\[ \psi_{nmp} = X_n(x) Y_m(y) Z_p(z) = H_n(x) H_m(y) H_p(z)\, e^{-(x^2 + y^2 + z^2)/2} , \qquad E = n + m + p + \tfrac{3}{2} . \]
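The eigenvalue relation Ex = n + 1/2 can also be confirmed symbolically. Here is an added SymPy check (the text argues via the Hermite ODE) that Xn = e^(−x²/2)Hn(x) satisfies −Xn''/2 + x²Xn/2 = (n + 1/2)Xn:

```python
# Symbolic check that X_n = exp(-x^2/2) H_n(x) is an eigenfunction of
# -d^2/dx^2 / 2 + x^2/2 with eigenvalue n + 1/2, for the first few n.
import sympy as sp

x = sp.symbols('x')
for n in range(5):
    X = sp.exp(-x**2 / 2) * sp.hermite(n, x)
    residual = (-sp.diff(X, x, 2) / 2 + x**2 * X / 2
                - (n + sp.Rational(1, 2)) * X)
    assert sp.simplify(residual) == 0
```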

(d) In making the following table, it is useful to note that H0(x) = 1, H1(x) = 2x, H2(x) = 4x² − 2, and H3(x) = 8x³ − 12x. We also use the notation r² = x² + y² + z².

n, m, p    ψnmp                        Enmp
0, 0, 0    e^(−r²/2)                   3/2
1, 0, 0    2x e^(−r²/2)                5/2
0, 1, 0    2y e^(−r²/2)                5/2
0, 0, 1    2z e^(−r²/2)                5/2
2, 0, 0    (4x² − 2) e^(−r²/2)         7/2
0, 2, 0    (4y² − 2) e^(−r²/2)         7/2
0, 0, 2    (4z² − 2) e^(−r²/2)         7/2
1, 1, 0    4xy e^(−r²/2)               7/2
1, 0, 1    4xz e^(−r²/2)               7/2
0, 1, 1    4yz e^(−r²/2)               7/2
3, 0, 0    (8x³ − 12x) e^(−r²/2)       9/2
0, 3, 0    (8y³ − 12y) e^(−r²/2)       9/2
0, 0, 3    (8z³ − 12z) e^(−r²/2)       9/2
2, 1, 0    (4x² − 2)(2y) e^(−r²/2)     9/2
2, 0, 1    (4x² − 2)(2z) e^(−r²/2)     9/2
1, 2, 0    (4y² − 2)(2x) e^(−r²/2)     9/2
0, 2, 1    (4y² − 2)(2z) e^(−r²/2)     9/2
1, 0, 2    (4z² − 2)(2x) e^(−r²/2)     9/2
0, 1, 2    (4z² − 2)(2y) e^(−r²/2)     9/2
1, 1, 1    8xyz e^(−r²/2)              9/2

15.3

SEPARATION OF VARIABLES IN CYLINDRICAL COORDINATES

Exercises 15.3.1.

Cylindrical problems do not always involve Bessel functions. Find the steady-state temperature distribution if the surface of a long cylinder of material (radius a) is kept at the temperature distribution ψ0(φ) (independent of z). The PDE governing this situation, expressed in cylindrical coordinates, is
\[ \frac{\partial^2 \psi}{\partial \rho^2} + \frac{1}{\rho} \frac{\partial \psi}{\partial \rho} + \frac{1}{\rho^2} \frac{\partial^2 \psi}{\partial \varphi^2} = 0 . \]

(a) Obtain the solution ψ(ρ, φ) as an expansion valid for general ψ0.

(b) Obtain a closed-form solution for the case ψ0 = T0 cos φ.

(c) Obtain an expansion giving the solution for the case
\[ \psi_0 = \begin{cases} T_0 , & -\pi/2 < \varphi < \pi/2 , \\ 0 , & \pi/2 < |\varphi| \le \pi . \end{cases} \]

Solution: In maple (the procedure headers parallel the mathematica coding below),

> makeA := proc(nmax) local n;
>    a[0]:=1;
>    for n from 0 to nmax do
>       a[2*n+2] := 0; a[2*n+1] := 2/Pi/(2*n+1)*(-1)^n end do;
> end proc;
> psi := proc(nmax,rho,phi) local n;
>    a[0]/2 + add(a[n]*(rho/aa)^n*cos(n*phi), n=1 .. 2*nmax+1)
> end proc;

The following plot checks the value of psi on the boundary.

> nmax:=40; aa:=1; makeA(nmax):
> plot(psi(nmax,aa,phi), phi=-Pi .. Pi);

In mathematica,

makeA[nmax_] := Module[ {n}, a[0]=1;
   Do[ a[2*n+2]=0; a[2*n+1]=2/Pi/(2*n+1)*(-1)^n, {n, 0, nmax} ] ]
psi[nmax_,rho_,phi_] := Module[ {n},
   a[0]/2 + Sum[a[n]*(rho/aa)^n*Cos[n*phi], {n,1,2*nmax+1}] ]

To check the value of psi on the boundary, invoke

nmax=40; aa=1; makeA[nmax];
Plot[psi[nmax,aa,phi], {phi,-Pi,Pi} ]

Both symbolic languages produce a plot similar to the following:

(e) A contour plot will be most useful if it uses the physical geometry, which in the present problem is a circular disk. We therefore need coding that will calculate ψ in Cartesian coordinates. maple code for the Cartesian


representation of ψ is

> PsiC := proc(nmax,x,y) local rho,phi;
>    rho:=sqrt(x^2+y^2); phi:=arctan(y,x);
>    rho:=min(rho,1.);
>    psi(nmax,rho,phi)
> end proc;

Now obtain the contour plot:

> with(plots):
> contourplot(PsiC(nmax,x,y), x = -1 .. 1, y = -1 .. 1,
>             contours=[.02,.2,.4,.6,.8,.98]);

maple does not make a reasonable default choice of contours for this problem. In mathematica,

psiC[nmax_,x_,y_] := Module[ {rho,phi},
   rho = Sqrt[x^2+y^2]; phi = ArcTan[y,x];
   rho = Min[rho,1.]; psi[nmax,rho,phi] ]

Now obtain the contour plot:

ContourPlot[psiC[nmax,x,y], {x,-1,1}, {y,-1,1}]

Here is a contour plot similar to that produced by the coding in either symbolic language:

The lighter shading corresponds to larger amplitudes of ψ.

15.3.2. Verify that the separation of variables in Eq. (15.69) leads to Eq. (15.70).

Solution: Drop the ∂²ψ/∂φ² term of the Laplace equation, set ψ = PZ, and divide through by PZ. We get
\[ \frac{P''}{P} + \frac{P'}{\rho P} + \frac{Z''}{Z} = 0 . \]
We now set Z''/Z = C², thereby reaching Eq. (15.70).

15.3.3.

Find the normal modes of oscillation of a circular cylindrical cavity of radius a and length h if the amplitude ψ(ρ, φ, z, t) of the oscillation is governed by the PDE
\[ \nabla^2 \psi = \frac{1}{v^2} \frac{\partial^2 \psi}{\partial t^2} , \]
subject to the boundary condition that ψ vanish on all surfaces of the cavity. Find the functional form of each linearly independent normal mode in terms of ρ, φ, and z, and identify its frequency of oscillation.

Solution: Place the cylinder with its axis along the z-axis with its end caps at z = 0 and z = h, and use cylindrical coordinates with the curved surface of the cylinder at ρ = a. Make the separation of variables ψ = P(ρ)Φ(φ)Z(z)U(t), and write the PDE in the form
\[ \frac{P''(\rho)}{P(\rho)} + \frac{P'(\rho)}{\rho P(\rho)} + \frac{\Phi''(\varphi)}{\rho^2 \Phi(\varphi)} + \frac{Z''(z)}{Z(z)} = \frac{U''(t)}{v^2 U(t)} . \]
We then set
\[ \frac{\Phi''(\varphi)}{\Phi(\varphi)} = -m^2 , \qquad \frac{Z''(z)}{Z(z)} = -\mu^2 , \qquad \text{and} \qquad \frac{U''(t)}{v^2 U(t)} = -k^2 , \]
and note that the boundary conditions on φ cause Φ(φ) to have solutions cos mφ and sin mφ, with m an integer or zero. In addition, the boundary conditions on z cause Z(z) to have solutions Zp = sin(pπz/h), with p a positive integer. The value of µ corresponding to a given p is µ = pπ/h. We do not yet have fixed values for k², but the U equation has solutions that are sinusoidal vibrations at the frequency ν = kv/2π. Substituting the above relationships into the PDE, we get
\[ \rho^2 P''(\rho) + \rho P'(\rho) + \left( k^2 - \frac{p^2 \pi^2}{h^2} \right) \rho^2 P(\rho) - m^2 P(\rho) = 0 . \]
We recognize the P equation as a Bessel ODE of order m and require a solution that is regular at ρ = 0 and zero at ρ = a. The relevant solutions are Jm(αρ/a), where α is a zero of Jm, and where k² must be such that
\[ \frac{\alpha^2}{a^2} = k^2 - \frac{p^2 \pi^2}{h^2} . \]
This condition determines the value of k² as
\[ k^2 = \frac{\alpha^2}{a^2} + \frac{p^2 \pi^2}{h^2} . \]
To make a list of the solutions, note that p can be any positive integer and that α can be any positive zero of Jm. Moreover, m can have any nonnegative integer value and (except for m = 0) there will be two independent solutions for each m. Each linearly independent solution is called a normal mode. Thus, writing αmn to denote the nth positive zero of Jm, the normal modes and their respective frequencies ν = kv/2π are the following (applicable for n, m, and p = 1, 2, 3, …):
\[ \psi_{n,0,p} = J_0(\alpha_{0n} \rho/a) \sin(p\pi z/h) , \qquad \nu = \frac{v}{2\pi} \sqrt{ \left( \frac{\alpha_{0n}}{a} \right)^2 + \left( \frac{p\pi}{h} \right)^2 } , \]
\[ \psi_{n,m,p} = J_m(\alpha_{mn} \rho/a) \begin{Bmatrix} \cos m\varphi \\ \sin m\varphi \end{Bmatrix} \sin(p\pi z/h) , \qquad \nu = \frac{v}{2\pi} \sqrt{ \left( \frac{\alpha_{mn}}{a} \right)^2 + \left( \frac{p\pi}{h} \right)^2 } . \]
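Since the frequencies depend only on the Bessel zeros αmn and on p, they are easy to tabulate numerically. The following Python sketch is an addition, using `scipy.special.jn_zeros` with illustrative values a = h = v = 1:

```python
# Compute cavity-mode frequencies nu = (v/2pi) sqrt((alpha_mn/a)^2 + (p pi/h)^2).
import numpy as np
from scipy.special import jn_zeros

def freq(m, n, p, a=1.0, h=1.0, v=1.0):
    alpha = jn_zeros(m, n)[n - 1]       # nth positive zero of J_m
    return (v / (2 * np.pi)) * np.sqrt((alpha / a)**2 + (p * np.pi / h)**2)

# The lowest mode has m = 0, n = 1, p = 1 (alpha_01 ~ 2.4048).
f = freq(0, 1, 1)
assert abs(f - np.sqrt(2.404825557695773**2 + np.pi**2) / (2 * np.pi)) < 1e-12
assert freq(0, 1, 1) < freq(1, 1, 1) < freq(2, 1, 1)
```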

15.3.4.

A semi-infinite circular cylinder of radius a is placed on the positive z-axis, with its finite end at z = 0. Find the steady-state temperature distribution within the cylinder if its curved surface is maintained at temperature T = 0 and its end at z = 0 is maintained at the temperature distribution ψ0 = T0 y = T0 ρ sin φ. The PDE governing the temperature distribution is ∇²ψ = 0. Hint. You may use symbolic methods to evaluate any necessary integrals.

Solution: For this problem the strategy can be to find a set of solutions that when written in cylindrical coordinates satisfy the boundary conditions at ρ = a and at z = ∞, and then to form the linear combination of those solutions that also satisfies the boundary condition on the disk at z = 0. Assuming separated-variable solutions to the Laplace PDE of the form ψ = P(ρ)Φ(φ)Z(z), we have

Φ″(φ)/Φ(φ) = −m²,   Z″(z)/Z(z) = µ²,

P″(ρ)/P(ρ) + P′(ρ)/[ρ P(ρ)] + µ² − m²/ρ² = 0.

The φ equation has solutions cos mφ and sin mφ, and its boundary conditions require m to be an integer or zero; the z equation has solutions e^{±µz}, with only the solution e^{−µz} (for µ assumed positive) acceptable in the present problem. The ρ equation can be identified as a Bessel equation of order m in the variable µρ. To satisfy the boundary condition on the curved surface at ρ = a we must choose µ such that µa is a zero of Jm. Denoting the nth such zero αmn, our overall solutions therefore have the form

ψnm(ρ, φ, z) = Jm(αmn ρ/a) {cos mφ or sin mφ} e^{−αmn z/a}.

We now need to take the linear combination of the ψnm such that ψ(ρ, φ, 0) = T0 ρ sin φ. We therefore include only terms with φ dependence sin φ in the expansion, so our expansion reduces to the form

ψ(ρ, φ, z) = Σ_{n=1}^∞ cn J1(α1n ρ/a) sin φ e^{−α1n z/a},

with coefficients cn such that

T0 ρ = Σ_{n=1}^∞ cn J1(α1n ρ/a).

This is a Bessel series of the type described in Eq. (14.89), with coefficients given by the integral formula

cn = (2T0 / {a² [J2(α1n)]²}) ∫₀^a J1(α1n ρ/a) ρ² dρ.

The integral can be evaluated analytically to yield a result containing a Bessel function; to do so symbolically use one of

> int(BesselJ(1,BesselJZeros(1,n)*r/a)*r^2, r=0 .. a);

−a³ BesselJ(0, BesselJZeros(1, n)) / BesselJZeros(1, n)

Integrate[BesselJ[1, BesselJZero[1, n]*r/a]*r^2, {r, 0, a}]

a³ BesselJ[2, BesselJZero[1, n]] / BesselJZero[1, n]

These results are actually in agreement because

J2(α1n) = −J0(α1n),

a result easily proved from the Bessel function recurrence formula. We therefore have the final result

ψ(ρ, φ, z) = Σ_{n=1}^∞ [2aT0 / (α1n J2(α1n))] J1(α1n ρ/a) sin φ e^{−α1n z/a}.
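As a purely numerical cross-check of the two symbolic results and of the identity J2(α1n) = −J0(α1n), the following self-contained Python sketch (our own helpers, with a = 1 and n = 1; not part of the text's Maple/Mathematica workflow) evaluates the integral by the midpoint rule:

```python
import math

def J(m, x, N=400):
    # J_m(x) via its integral representation (midpoint rule)
    h = math.pi / N
    return sum(math.cos(m * ((i + 0.5) * h) - x * math.sin((i + 0.5) * h))
               for i in range(N)) * h / math.pi

# alpha_11, the first positive zero of J1, by bisection on [3, 4.5]
lo, hi = 3.0, 4.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if J(1, lo) * J(1, mid) <= 0 else (mid, hi)
alpha = 0.5 * (lo + hi)   # ~ 3.8317

# integral_0^1 J1(alpha*rho) * rho^2 d rho, midpoint rule
M = 800
val = sum(J(1, alpha * (k + 0.5) / M) * ((k + 0.5) / M) ** 2 for k in range(M)) / M

# the integral and both closed forms agree (~ 0.105)
print(val, -J(0, alpha) / alpha, J(2, alpha) / alpha)
```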

15.4 SPHERICAL-COORDINATE PROBLEMS

Exercises

15.4.1. (a) Show that the space spanned by the three spherical harmonics Y1m (m = 1, 0, −1) is also spanned by the real basis x/r, y/r, z/r. (b) Regarding each member of the real basis identified in part (a) as a function in three-dimensional space, characterize its nodal surface(s), i.e., the set of points where the function is zero. (c) Let (x′, y′, z′) be the point to which (x, y, z) is transformed when a Cartesian coordinate system is subjected to a rotation. Show that x′/r′, y′/r′, z′/r′ span the same space as x/r, y/r, z/r. This means that each member of the primed basis can be written as a linear combination of the unprimed basis functions.

Solution: (a) We need to show that the Y1m can be formed as linear combinations of x/r, y/r, and z/r. To start, we note that Y10 is proportional to z/r. Further, we note that Y11 is proportional to (x + iy)/r and Y1−1 is proportional to (x − iy)/r. These harmonics can be formed from the linear combinations (x/r) ± i(y/r). (b) x/r is zero (has a nodal surface) on the yz-plane (defined by x = 0). The nodal surface of y/r is the xz-plane; that of z/r is the xy-plane. (c) A rotation changes x to x′, a linear combination of x, y, and z with the same value of r. Similar remarks apply to y′ and z′. Any point in the three-dimensional space can be represented in either coordinate system.

15.4.2. (a) Form a real basis for the space spanned by the spherical harmonics with l = 2 by forming linear combinations of Y2m and Y2−m. (b) Multiplying each member of your real basis by r² to produce a set of functions in three-dimensional space, describe the nodal structure of each basis member. Characterize each nodal surface by its shape (planar, spherical, conical, etc.) and position. (c) If we rotate a Cartesian coordinate system so that x′ = z, y′ = y, z′ = −x, state how the nodal structure of each basis member (in its original position) is described in the primed coordinate system. (d) Show that it is possible to write each original (unprimed) basis member as a linear combination of basis members constructed for the primed coordinate system.


Solution: (a) The function Y20 is real; for each m > 0, we can form two real functions: (−1)^m Y2m + Y2−m and [(−1)^m Y2m − Y2−m]/i.

(b) r²Y20 = A(3z² − r²) (with A a known, but currently irrelevant, constant). Its nodal surfaces are cones with vertex at the origin and with opening angles, measured from the polar (z-) axis, at cos θ = ±√(1/3). −Y21 + Y2−1 = Axz; its nodal surfaces are the planes z = 0 (the xy-plane) and x = 0 (the yz-plane). (−Y21 − Y2−1)/i = Ayz; its nodal surfaces are y = 0 (the xz-plane) and z = 0 (the xy-plane). Y22 + Y2−2 = A(x² − y²); its nodal surfaces are the planes on which x = y and x = −y. These planes are perpendicular to the xy-plane, at 45° angles from the x- and y-axes and also perpendicular to each other. Finally, (Y22 − Y2−2)/i = Axy; its nodal surfaces are the planes x = 0 (the yz-plane) and y = 0 (the xz-plane).

(c) The conical nodes of 3z² − r² now have an axis in the x′ direction. The planar nodes of xz are now the x′ = 0 and z′ = 0 planes. The planar nodes of x² − y² and xy are now perpendicular to the x′ = 0 plane.

(d) 3z² − r² → 3x′² − r² = 2x′² − y′² − z′² = −½(2z′² − x′² − y′²) + (3/2)(x′² − y′²);

zx → −x′z′;   zy → x′y′;   xy → −z′y′;

x² − y² → z′² − y′² = ½(2z′² − x′² − y′²) + ½(x′² − y′²).
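The part (d) decompositions are easily spot-checked numerically at random points; a minimal pure-Python sketch (the helper `check` is ours, not from the text):

```python
import random

def check(lhs, rhs, trials=200):
    # confirm two polynomial expressions agree at random points
    for _ in range(trials):
        x, y, z = (random.uniform(-2.0, 2.0) for _ in range(3))
        assert abs(lhs(x, y, z) - rhs(x, y, z)) < 1e-9

# 3z^2 - r^2 -> 2x'^2 - y'^2 - z'^2 = -(1/2)(2z'^2 - x'^2 - y'^2) + (3/2)(x'^2 - y'^2)
check(lambda x, y, z: 2*x*x - y*y - z*z,
      lambda x, y, z: -0.5*(2*z*z - x*x - y*y) + 1.5*(x*x - y*y))

# x^2 - y^2 -> z'^2 - y'^2 = (1/2)(2z'^2 - x'^2 - y'^2) + (1/2)(x'^2 - y'^2)
check(lambda x, y, z: z*z - y*y,
      lambda x, y, z: 0.5*(2*z*z - x*x - y*y) + 0.5*(x*x - y*y))
```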

15.4.3. Find the seven spherical harmonics with l = 3 that correspond to the definition in Eq. (15.91). Then repeat Exercise 15.4.2 for l = 3.

Solution: (a) At arbitrary scale, the real harmonics for l = 3 are

r³Y30 = 5z³ − 3zr²,   r³(−Y31 + Y3−1) = x(5z² − r²),   r³(−Y31 − Y3−1)/i = y(5z² − r²),

r³(Y32 + Y3−2) = z(x² − y²),   r³(Y32 − Y3−2)/i = xyz,

r³(−Y33 + Y3−3) = x³ − 3xy²,   r³(−Y33 − Y3−3)/i = 3x²y − y³.

(b) For r³Y30, there is a nodal plane at z = 0 and nodal cones about the z-axis with opening angles at cos θ = ±√(3/5). For r³(−Y31 + Y3−1), there is a nodal plane at x = 0 and nodal cones about the z-axis with opening angles at cos θ = ±√(1/5). For r³(−Y31 − Y3−1)/i there is a nodal plane at y = 0 and nodal cones about the z-axis with opening angles at cos θ = ±√(1/5). For r³(Y32 + Y3−2) there are nodal planes at z = 0 and perpendicular to the xy-plane through the lines x = y and x = −y. For r³(Y32 − Y3−2)/i there are nodal planes at x = 0, y = 0, and z = 0. For r³(−Y33 + Y3−3) there are nodal planes at x = 0 and, perpendicular to the xy-plane, through the lines x = ±y√3. For r³(−Y33 − Y3−3)/i, there are nodal planes at y = 0 and, perpendicular to the xy-plane, through the lines y = ±x√3.

(c) All the conical nodes now have their axis in the x′ direction. All nodes originally on the plane x = 0 are now on the z′ = 0 plane, those originally on the plane y = 0 remain on y′ = 0, and those originally on the plane z = 0 are now on the x′ = 0 plane. The nodes of Y32 + Y3−2 that are now perpendicular to the x′ = 0 plane pass through the lines y′ = z′ and y′ = −z′ in that plane. The nodes of −Y33 + Y3−3 that are now perpendicular to the x′ = 0 plane pass through z′ = ±y′√3, and those of −Y33 − Y3−3 that are now perpendicular to the x′ = 0 plane pass through y′ = ±z′√3.

(d) 5z³ − 3zr² → 2x′³ − 3x′y′² − 3x′z′² = (5/4)(x′³ − 3x′y′²) − (3/4)x′(5z′² − r²);

x(5z² − r²) → −4z′x′² + y′²z′ + z′³ = ½(5z′³ − 3z′r²) − (5/2)z′(x′² − y′²);

y(5z² − r²) → 4y′x′² − y′³ − y′z′² = −¼y′(5z′² − r²) + (5/4)(3x′²y′ − y′³);

z(x² − y²) → x′(z′² − y′²) = ¼(x′³ − 3x′y′²) + ¼x′(5z′² − r²);

xyz → −x′y′z′;

x³ − 3xy² → −z′³ + 3z′y′² = −½(5z′³ − 3z′r²) − (3/2)z′(x′² − y′²);

3x²y − y³ → 3z′²y′ − y′³ = (3/4)y′(5z′² − r²) + ¼(3x′²y′ − y′³).

15.4.4. (a) There is one monomial in x, y, z, of degree 0, namely 1. There are three such monomials of degree 1, namely x, y, and z. There are six such monomials of degree 2: x², y², z², xy, xz, and yz. Find the set of such monomials of degree 3. (b) The monomial of degree zero (divided by r⁰) is a basis for the Ylm with l = 0. The monomials of degree 1 (divided by r¹) are a basis for the Ylm with l = 1. From the six monomials of degree 2 (divided by r²), form a five-membered basis for the Ylm with l = 2 and identify the function built from this six-monomial set that is linearly independent of that five-membered basis. Is this sixth function a basis for any set of spherical harmonics? (c) Using the ideas developed in part (b), analyze the set of monomials of degree three and use them to make bases for sets of spherical harmonics. (d) Obtain the monomials of degree four and find the sets of spherical harmonics for which they can form bases. Can you extrapolate your findings to monomial sets of arbitrary degree?

Solution: (a) There are ten monomials in 3-D space of degree 3: x³, y³, z³, x²y, x²z, y²x, y²z, z²x, z²y, and xyz. (b) The sixth function of degree 2 is x² + y² + z²; it is a basis function for Y00. (c) The seven forms of degree 3 found in part (a) of Exercise 15.4.3 form a basis for the spherical harmonics with l = 3. The three forms orthogonal to those seven can be written (x² + y² + z²)x, (x² + y² + z²)y, (x² + y² + z²)z. These are a basis for the spherical harmonics with l = 1.


(d) There are 15 monomials of degree 4. Nine linear combinations of them can be identified as a basis for the spherical harmonics with l = 4; one, (x² + y² + z²)², is a basis for l = 0. The remaining five can be shown to be a basis for l = 2. The generalization of the findings of this exercise is that the monomials of degree l form bases for the spherical harmonics with that l value and with all smaller l values of the same parity (downward in steps of 2, ending at l = 1 or l = 0).
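The counting behind this generalization can be sketched in a few lines of Python: the number of degree-d monomials in three variables, (d + 1)(d + 2)/2, always equals the total dimension of the spherical-harmonic sets with l = d, d − 2, ···:

```python
from math import comb

def monomials(d):
    # number of monomials x^i y^j z^k with i + j + k = d
    return comb(d + 2, 2)

def harmonic_dims(d):
    # dimensions 2l + 1 for l = d, d - 2, ..., down to 1 or 0
    return [2*l + 1 for l in range(d, -1, -2)]

for d in range(10):
    assert monomials(d) == sum(harmonic_dims(d))

print(monomials(4), harmonic_dims(4))  # 15 [9, 5, 1]
```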

15.4.5. The angular parts of the wave functions that are solutions to quantum central-force problems are traditionally labeled by alphabetic symbols that indicate the l values of their spherical harmonics. The code is: "s" for l = 0; "p" for l = 1; "d" for l = 2; "f" for l = 3. Higher l values are assigned symbols in alphabetic order: "g", "h", ···. (a) Explain why "s" refers to a unique spatial function, while "p" refers to one or more of three functions with different spatial orientations. (b) How many functions are in a set designated "d"? In a set designated "f"? (c) Explain why the different members of a function set with a given l value will all lead to the same radial ODE in a central-force problem.

Solution: (a) Because "s" means l = 0, the only value of m is m = 0, so the function is unique. But "p" means l = 1, so there are three possible m values: 1, 0, and −1. (b) Because "d" means l = 2, there are five functions of different m; "f" means l = 3, so there are seven different m values. (c) The radial ODE depends upon the angular function only through the quantity l(l + 1), which does not depend on m.

15.4.6. A complete three-dimensional space contains no charge outside a sphere of radius r0; within that sphere are charges that cause the electrostatic potential on the sphere to be given by ψ(r0, θ, φ) = sin²(2θ) sin²φ. Find the potential at all points external to the sphere.

Solution: The potential external to the sphere must satisfy Laplace's equation with the boundary condition given above. The most general solution that vanishes at r = ∞ is, as in Eq. (15.102),

ψ(r, θ, φ) = Σ_{lm} (blm / r^{l+1}) Ylm(θ, φ).

We choose the coefficients blm so that ψ satisfies the boundary condition at r = r0. We start this process by finding the spherical harmonic expansion of ψ(r0, θ, φ). That expansion has the form

ψ(r0, θ, φ) = Σ_{lm} clm Ylm(θ, φ),   with clm = ⟨Ylm | ψ(r0)⟩.

We may use symbolic computing to find the coefficients clm. In preparation for so doing, load into a maple or mathematica session the procedure SProd, the coding for which is given in the current section of the text, and (only for maple) also load in the procedure SphY for the spherical harmonics. The harmonics are available as part of the basic language in mathematica, with name SphericalHarmonicY. In maple,

> PP := sin(2*t)^2*sin(phi)^2:
> for L from 0 to 4 do for M from -L to L do
>    c[L,M] := SProd(SphY(L,M,t,phi),PP,t,phi) end do end do;

In mathematica,

PP = Sin[2*theta]^2*Sin[phi]^2;
Do[Do[c[l,m] = SProd[SphericalHarmonicY[l,m,theta,phi],PP], {m,-l,l}], {l,0,4}]

If SProd is set to give algebraic expressions instead of numeric (remove the command evalf or N), we get the following results:

c[0,0] = 8√π/15,   c[2,0] = 8√(5π)/105,   c[2,2] = c[2,−2] = −4√(30π)/105,

c[4,0] = −32√π/105,   c[4,2] = c[4,−2] = −8√(10π)/105.

If these quantities are evaluated numerically, they are c[0,0] = 0.9453, c[2,0] = 0.3020, c[2,2] = c[2,−2] = −0.3698, c[4,0] = −0.5402, c[4,2] = c[4,−2] = −0.4270.

The expansion yielding the clm is to be used at r = r0, so the coefficients blm can be computed as clm r0^{l+1}, and the potential can be written

ψ(r, θ, φ) = Σ_{lm} clm (r0/r)^{l+1} Ylm(θ, φ)

= (r0/r) c00 Y00 + (r0/r)³ [c20 Y20 + c22 Y22 + c2,−2 Y2−2] + (r0/r)⁵ [c40 Y40 + c42 Y42 + c4,−2 Y4−2].
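The coefficient c00 (and, similarly, the others) can be cross-checked by brute-force numerical integration of ⟨Y00 | sin²(2θ) sin²φ⟩, with Y00 = 1/√(4π); a pure-Python sketch (midpoint rule, grid size our choice):

```python
import math

N = 400
dth, dph = math.pi / N, 2 * math.pi / N
total = 0.0
for i in range(N):
    th = (i + 0.5) * dth
    for j in range(N):
        ph = (j + 0.5) * dph
        # integrand of <Y00 | psi(r0)> apart from the 1/sqrt(4 pi) factor
        total += math.sin(2*th)**2 * math.sin(ph)**2 * math.sin(th)
c00 = total * dth * dph / math.sqrt(4 * math.pi)

print(c00, 8 * math.sqrt(math.pi) / 15)  # both ~ 0.9453
```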

15.4.7. A sphere of radius r0 contains no charge, but charges external to the sphere cause the electrostatic potential on it to be given by ψ(r0, θ, φ) = sin²(2θ) sin²φ. Find the potential at all points within the sphere.

Solution: This is the same distribution on the sphere as for Exercise 15.4.6, so it has the same spherical harmonic expansion:

ψ(r0, θ, φ) = Σ_{lm} clm Ylm(θ, φ),

with the clm having the values given in that exercise. However, here we must use the clm to find the coefficients in the expansion

ψ(r, θ, φ) = Σ_{lm} alm r^l Ylm(θ, φ).

This time we have alm = clm / r0^l, so our expansion is

ψ(r, θ, φ) = Σ_{lm} clm (r/r0)^l Ylm(θ, φ)

= c00 Y00 + (r/r0)² [c20 Y20 + c22 Y22 + c2,−2 Y2−2] + (r/r0)⁴ [c40 Y40 + c42 Y42 + c4,−2 Y4−2].

15.4.8. A sphere of radius r0 contains no charge. On the sphere the derivative of the potential in the outward normal direction has the position-dependent value ψr(θ, φ) = cos²θ. Find the potential at all points within the sphere. To what extent is your answer not unique?

Solution: The potential within the sphere is given by the expansion

ψ(r, θ) = Σ_l al r^l Pl(cos θ),

and its radial derivative on the sphere is

∂ψ/∂r |_{r=r0} = Σ_l al l r0^{l−1} Pl(cos θ).

We have simplified the expansion in recognition of the fact that ψ is independent of φ; it is convenient to use Pl in place of the Yl0 even though the Pl are not normalized. We can expand the radial derivative as

cos²θ = (2/3) P2(cos θ) + (1/3) P0(cos θ).

From this result we see that 2a2 r0 = 2/3, that a0 is undetermined, and that all other al vanish. Inserting this result into the formula for ψ, we find

ψ(r, θ) = a0 P0(cos θ) + (r0/3)(r/r0)² P2(cos θ).

We note that a0 P0 is an undetermined constant, and that the variable portion of ψ is the P2 term. This indeterminacy shows that specification of ∂ψ/∂r does not determine the zero of ψ.

15.4.9. A complete three-dimensional space contains no charge outside a sphere of radius r0; within that sphere are charges that cause the electrostatic potential on the sphere to be given by ψ(r0, θ, φ) = cos²θ sin²φ. Using symbolic computation if helpful, find the terms through r⁶ in the spherical harmonic expansion of the potential within the sphere.

Solution: The expansion of this ψ in spherical harmonics is an infinite series with l = 0, 2, 4, ··· and with m = −2, 0, 2. The series does not converge rapidly. Its terms are most conveniently obtained using symbolic computation. The expansion of ψ takes the form

ψ(r, θ, φ) = Σ_{lm} clm (r/r0)^l Ylm(θ, φ),   clm = ⟨Ylm | cos²θ sin²φ⟩.

Code for evaluating clm in maple can be, assuming that the procedures SProd, SphY, and LegenP have been loaded into the current computing session,

> V := (cos(t)*sin(phi))^2;
> C[L,M] := SProd(SphY(L,M,t,phi),V,t,phi);

In mathematica, with SProd loaded, we can run

V = (Cos[theta]*Sin[phi])^2;
CC[L,M] = SProd[SphericalHarmonicY[L,M,theta,phi],V]

The nonzero clm, through l = 6, are: c00 = 0.5908, c20 = 0.5284, c22 = c2,−2 = −0.1618, c42 = c4,−2 = −0.2802, c62 = c6,−2 = −0.1559.

15.4.10. The three-dimensional quantum harmonic oscillator was the topic of Exercise 15.2.14, where it was solved by the method of separation of variables in Cartesian coordinates. This problem makes use of the answer to part (d) of that earlier Exercise. If you have not done that Exercise and saved your answer, you should do it now in preparation for the present problem.

We return to the 3-D quantum oscillator, but now attack it using spherical polar coordinates. This problem uses the time-independent Schrödinger equation for the potential V = (x² + y² + z²)/2, which we now write as V = r²/2. Our Schrödinger equation therefore takes the form

−½∇²ψ + (r²/2)ψ = Eψ,   (15.110)

with ψ = ψ(r, θ, φ). This is a central-force problem, and we set ψ = R(r)Y(θ, φ). We know that the angular part of the solution will be a spherical harmonic Ylm(θ, φ).

(a) Show that the radial equation for this problem when the angular solution is the spherical harmonic Ylm is

−½R″ − (1/r)R′ + (r²/2)R + [l(l+1)/(2r²)]R = ER.   (15.111)

(b) Show that the substitution R(r) = r^l e^{−r²/2} U(r²) converts Eq. (15.111) into

r² U″(r²) + (l + 3/2 − r²) U′(r²) = −½(E − l − 3/2) U(r²).   (15.112)

Hint. Keep in mind that the notation U′ indicates the derivative of U with respect to its argument. Thus, for example, dU(r²)/dr = 2rU′(r²).

(c) We now take note that Eq. (14.139), the associated Laguerre ODE, has the following form when its dependent variable (denoted z in that equation but here called L_n^k) is a function of x²:

x² (L_n^k)″(x²) + (k + 1 − x²)(L_n^k)′(x²) = −n L_n^k(x²),   (15.113)

where n is any nonnegative integer and L_n^k is a polynomial of degree n in its argument (and therefore here an even polynomial of degree 2n in x). A comparison of the left-hand sides of Eqs. (15.112) and (15.113) shows that Eq. (15.112) becomes an identity (and is thereby solved) if U(r²) is identified as L_n^{l+1/2}(r²). Show that this identification leads to the equation

−n = −½(E − l − 3/2),

which rearranges to

E = 2n + l + 3/2.   (15.114)

(d) Based on parts (a) through (c), write expressions for the PDE solution ψnlm(r, θ, φ) and for its associated eigenvalue Enlm.

(e) Make a list of all the eigenfunctions and their eigenvalues for E ≤ 9/2. Compare with your answer to part (d) of Exercise 15.2.14, and verify that the two methods of solving this problem lead to solution sets with the same degree of degeneracy (i.e., that the two solution sets have equal numbers of eigenfunctions for each E value).

(f) Complete the task of verifying that the Cartesian and spherical-coordinate solution sets are equivalent by obtaining the explicit forms of all the individual solutions and showing that they span the same solution spaces. For the spherical-coordinate solutions, the necessary associated Laguerre polynomials are most easily obtained by symbolic computation. The fractional upper index is accepted by the symbolic procedures LaguerreL(n,l+1/2,r^2) or LaguerreL[n,l+1/2,r^2].

Solution: (a) The present PDE is a case of Eq. (15.76) with G(r) = 2E − r², and the method of separation of variables leads to an angular equation whose solutions are the spherical harmonics Ylm(θ, φ). The radial equation corresponding to any specific Ylm is of the form given in Eq. (15.81), with λ = l(l + 1). Inserting the values of G(r) and λ, the radial equation can be brought to the form

−½[R″ + (2/r)R′] + (r²/2)R + [l(l+1)/(2r²)]R = ER.

(b) Taking derivatives of R = r^l e^{−r²/2} U(r²),

R′ = (l/r)R − rR + 2r^{l+1} e^{−r²/2} U′(r²),

R″ = −(l/r²)R + (l/r)R′ − R − rR′ + [2(l+1)r^l − 2r^{l+2}] e^{−r²/2} U′(r²) + 4r^{l+2} e^{−r²/2} U″(r²).

Forming R″ + 2R′/r, we get

R″ + (2/r)R′ = [l(l+1)/r²]R + r²R − (2l+3)R + [(4l+6)r^l − 4r^{l+2}] e^{−r²/2} U′(r²) + 4r^{l+2} e^{−r²/2} U″(r²).

Inserting this into the radial equation, much cancels (including r^l e^{−r²/2} from all terms), and we are left with

−2r² U″(r²) − (2l + 3 − 2r²) U′(r²) + (l + 3/2) U(r²) = E U(r²),

equivalent to the answer required for part (b).

(c) The quantity k of Eq. (15.113) must be such that k + 1 = l + 3/2, i.e., k = l + ½. Then Eq. (15.114) follows directly from a comparison of Eqs. (15.112) and (15.113).

(d) ψnlm = r^l L_n^{l+1/2}(r²) e^{−r²/2} Ylm(θ, φ),   Enlm = 2n + l + 3/2.

(e)

n  l   m    ψnlm                                      E
0  0    0   L_0^{1/2}(r²) e^{−r²/2} Y00(θ, φ)         3/2
0  1    1   r L_0^{3/2}(r²) e^{−r²/2} Y11(θ, φ)       5/2
0  1    0   r L_0^{3/2}(r²) e^{−r²/2} Y10(θ, φ)       5/2
0  1   −1   r L_0^{3/2}(r²) e^{−r²/2} Y1−1(θ, φ)      5/2
1  0    0   L_1^{1/2}(r²) e^{−r²/2} Y00(θ, φ)         7/2
0  2    2   r² L_0^{5/2}(r²) e^{−r²/2} Y22(θ, φ)      7/2
0  2    1   r² L_0^{5/2}(r²) e^{−r²/2} Y21(θ, φ)      7/2
0  2    0   r² L_0^{5/2}(r²) e^{−r²/2} Y20(θ, φ)      7/2
0  2   −1   r² L_0^{5/2}(r²) e^{−r²/2} Y2−1(θ, φ)     7/2
0  2   −2   r² L_0^{5/2}(r²) e^{−r²/2} Y2−2(θ, φ)     7/2
1  1    1   r L_1^{3/2}(r²) e^{−r²/2} Y11(θ, φ)       9/2
1  1    0   r L_1^{3/2}(r²) e^{−r²/2} Y10(θ, φ)       9/2
1  1   −1   r L_1^{3/2}(r²) e^{−r²/2} Y1−1(θ, φ)      9/2
0  3    3   r³ L_0^{7/2}(r²) e^{−r²/2} Y33(θ, φ)      9/2
0  3    2   r³ L_0^{7/2}(r²) e^{−r²/2} Y32(θ, φ)      9/2
0  3    1   r³ L_0^{7/2}(r²) e^{−r²/2} Y31(θ, φ)      9/2
0  3    0   r³ L_0^{7/2}(r²) e^{−r²/2} Y30(θ, φ)      9/2
0  3   −1   r³ L_0^{7/2}(r²) e^{−r²/2} Y3−1(θ, φ)     9/2
0  3   −2   r³ L_0^{7/2}(r²) e^{−r²/2} Y3−2(θ, φ)     9/2
0  3   −3   r³ L_0^{7/2}(r²) e^{−r²/2} Y3−3(θ, φ)     9/2

The degeneracy is the same as in the answer to Exercise 15.2.14.

(f) For obtaining explicit forms of the ψnlm, we need the following expressions for Laguerre polynomials. Either by hand or using the command given in the Exercise (for maple, using also the command simplify to get an explicit result), we find

L_0^{1/2} = L_0^{3/2} = L_0^{5/2} = L_0^{7/2} = 1,   L_1^{1/2}(r²) = 3/2 − r²,   L_1^{3/2}(r²) = 5/2 − r².

We also need explicit forms (in terms of x, y, and z) of the spherical harmonics. Discarding scale factors, Y00 ∼ 1; the three rY1m span the same space as x, y, z; the five r²Y2m span the same space as xy, xz, yz, x² − y², and 2z² − x² − y²; and the seven r³Y3m span the same space as x³ − 3xy², 3x²y − y³, z(x² − y²), xyz, (5z² − r²)x, (5z² − r²)y, and 5z³ − 3zr².

The unique eigenfunction with E = 3/2 is the same in both coordinate systems. The three eigenfunctions with l = 1 reduce to x e^{−r²/2}, y e^{−r²/2}, z e^{−r²/2}, and correspond to the E = 5/2 eigenfunctions in Cartesian coordinates. The eigenfunction with E = 7/2 and l = 0, (3/2 − r²) e^{−r²/2}, is proportional to the sum of the Cartesian eigenfunctions (2, 0, 0), (0, 2, 0), and (0, 0, 2). Three of the five eigenfunctions with l = 2 correspond to the xy, xz, and yz Cartesian functions; the other two are linear combinations of the Cartesian functions (2, 0, 0), (0, 2, 0), and (0, 0, 2). The ten spherical-coordinate functions with E = 9/2 can also be made to correspond by forming appropriate linear combinations of the Cartesian functions.
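The degeneracy agreement can be confirmed by direct counting. With E = N + 3/2, the Cartesian solutions are the triples nx + ny + nz = N, while the spherical solutions contribute 2l + 1 states for each (n, l) with 2n + l = N; a short pure-Python check (function names ours):

```python
def cartesian(N):
    # number of (nx, ny, nz) >= 0 with nx + ny + nz = N
    return sum(1 for nx in range(N + 1) for ny in range(N + 1 - nx))

def spherical(N):
    # sum of 2l + 1 over n, l >= 0 with 2n + l = N
    return sum(2*l + 1 for l in range(N, -1, -2))

for N in range(12):
    assert cartesian(N) == spherical(N)

print([spherical(N) for N in range(4)])  # [1, 3, 6, 10]
```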

15.4.11. The hydrogen atom is a quantum central-force problem that (in dimensionless units) is described by the Schrödinger equation

−½∇²ψ − (1/r)ψ = Eψ.   (15.115)

Here r is the distance from the origin (the position of the hydrogen nucleus).

(a) Using spherical polar coordinates, we know that this central-force problem has angular solutions Ylm(θ, φ). Show, writing ψ = R(r)Ylm(θ, φ), that the radial equation for any nonnegative integer l takes in spherical polar coordinates the form

−½R″ − (1/r)R′ − (1/r)R + [l(l+1)/(2r²)]R = ER.   (15.116)

(b) Show that the substitution R(r) = r^l e^{−r/n} U(2r/n) (where n is to be determined later) converts Eq. (15.116) into

−(2r/n) U″(2r/n) − (2l + 2 − 2r/n) U′(2r/n) − (n − l − 1) U(2r/n) − (r/2n) U(2r/n) = nrE U(2r/n).   (15.117)

(c) The associated Laguerre equation, Eq. (14.139), takes the following form when its independent variable is 2r/n, the n in that equation is renamed p, and the dependent variable is identified as the associated Laguerre polynomial L_p^k(2r/n):

(2r/n)(L_p^k)″(2r/n) + (k + 1 − 2r/n)(L_p^k)′(2r/n) + p L_p^k(2r/n) = 0.   (15.118)

Show by comparison of the left-hand side of Eq. (15.118) with the first three terms of Eq. (15.117) that those three terms will combine to yield zero if U(2r/n) is taken to be L_{n−l−1}^{2l+1}(2r/n). We will not prove it, but the bound-state solutions of the hydrogen atom correspond to L_{n−l−1}^{2l+1} with n a positive integer that is at least as large as l + 1. These associated Laguerre functions are polynomials of degree n − l − 1, and were discussed in Section 14.7.

(d) Show that, after canceling the terms that combine to yield zero, the remaining terms of Eq. (15.117) become an identity if E = −1/2n².

(e) Summarize the work of parts (a) through (d) by showing that (i) the smallest value of n consistent with the present development is n = 1, and in that case l must be zero; (ii) all larger integer values of n are acceptable, but the possible values of l range from zero to n − 1; and (iii) the value of E depends only on n.

(f) List all the hydrogen-atom solutions through n = 3, including all relevant l and m values, and (using symbolic computing if helpful) write each wave function ψnlm as an explicit radial function times a spherical harmonic (which can be left as a symbol Ylm). Identify the value of E for each wave function. Note. The Laguerre functions can be accessed symbolically as LaguerreL(p,q,x) or LaguerreL[p,q,x].

(g) Using the notations s, p, d, etc. to represent l values (see Exercise 15.4.5), label the wave functions of part (f) as n followed by the l code, as illustrated by 1s or 3d. Explain why the following labels will not occur, even in a more complete list of hydrogenic wave functions: 1p, 2d, 4g.

Solution: (a) Taking ψ = R(r)Y(θ, φ) and following the procedure leading to Eq. (15.90) with G(r) = 2(E + r⁻¹), we can rearrange that equation to the form

−½R″ − (1/r)R′ − (1/r)R + [l(l+1)/(2r²)]R = ER.

(b) Taking derivatives of R = r^l e^{−r/n} U(2r/n),

R′ = (l/r)R − (1/n)R + (2/n) r^l e^{−r/n} U′(2r/n),

R″ = (l/r − 1/n)R′ − (l/r²)R + (2/n)(l/r − 1/n) r^l e^{−r/n} U′(2r/n) + (4/n²) r^l e^{−r/n} U″(2r/n).

Forming R″ + 2R′/r, we get

R″ + (2/r)R′ = [l(l+1)/r² + 1/n² − 2(l+1)/(nr)]R + r^l e^{−r/n} {[4(l+1)/(nr) − 4/n²] U′(2r/n) + (4/n²) U″(2r/n)}.

If this is substituted into Eq. (15.116), we recover Eq. (15.117).

(c) We need to set k = 2l + 1 and p = n − l − 1 in the Laguerre equation.

(d) With these choices of k and p, Eq. (15.117) reduces to

−(r/2n) L_{n−l−1}^{2l+1}(2r/n) = nrE L_{n−l−1}^{2l+1}(2r/n).

This equation will be satisfied for all r only if E = −1/2n².

(e) The lower index of the Laguerre polynomials must be a nonnegative integer. The smallest possible l is l = 0, thereby requiring n to be at least 1. From n − l − 1 ≥ 0 we see that for general positive integers n, l cannot exceed n − 1. Finally, note that the formula for E is manifestly dependent only on n.
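The eigenvalue claim can be spot-checked by inserting low-lying radial functions into Eq. (15.116) with finite differences; a stdlib-only Python sketch (function names ours, not from the text):

```python
import math

def residual(R, E, l, r, h=1e-4):
    # left side minus right side of  -(1/2)R'' - (1/r)R' - (1/r)R + l(l+1)/(2 r^2) R = E R
    d1 = (R(r + h) - R(r - h)) / (2 * h)
    d2 = (R(r + h) - 2 * R(r) + R(r - h)) / (h * h)
    return -0.5 * d2 - d1 / r - R(r) / r + l * (l + 1) / (2 * r * r) * R(r) - E * R(r)

R2s = lambda r: (2 - r) * math.exp(-r / 2)   # n = 2, l = 0, E = -1/8
R2p = lambda r: r * math.exp(-r / 2)         # n = 2, l = 1, E = -1/8

for r in (0.5, 1.0, 2.0, 5.0):
    assert abs(residual(R2s, -1/8, 0, r)) < 1e-5
    assert abs(residual(R2p, -1/8, 1, r)) < 1e-5
```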

(f)

n  l   m    label   ψnlm                                     E
1  0    0   1s      e^{−r} Y00(θ, φ)                         −1/2
2  0    0   2s      (2 − r) e^{−r/2} Y00(θ, φ)               −1/8
2  1    1   2p      r e^{−r/2} Y11(θ, φ)                     −1/8
2  1    0   2p      r e^{−r/2} Y10(θ, φ)                     −1/8
2  1   −1   2p      r e^{−r/2} Y1−1(θ, φ)                    −1/8
3  0    0   3s      (3 − 2r + (2/9)r²) e^{−r/3} Y00(θ, φ)    −1/18
3  1    1   3p      r(4 − (2/3)r) e^{−r/3} Y11(θ, φ)         −1/18
3  1    0   3p      r(4 − (2/3)r) e^{−r/3} Y10(θ, φ)         −1/18
3  1   −1   3p      r(4 − (2/3)r) e^{−r/3} Y1−1(θ, φ)        −1/18
3  2    2   3d      r² e^{−r/3} Y22(θ, φ)                    −1/18
3  2    1   3d      r² e^{−r/3} Y21(θ, φ)                    −1/18
3  2    0   3d      r² e^{−r/3} Y20(θ, φ)                    −1/18
3  2   −1   3d      r² e^{−r/3} Y2−1(θ, φ)                   −1/18
3  2   −2   3d      r² e^{−r/3} Y2−2(θ, φ)                   −1/18

(g) The labels are included in the list of part (f). The labels 1p, 2d, and 4g violate the requirement that l ≤ n − 1 (1p: n = 1, l = 1; 2d: n = 2, l = 2; 4g: n = 4, l = 4).
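A last sanity check on the list: for each n there are Σ_{l&lt;n}(2l + 1) = n² wave functions, matching the 1, 4, and 9 rows tabulated above for n = 1, 2, 3; a short Python check:

```python
def states(n):
    # (l, m) pairs allowed for principal quantum number n
    return [(l, m) for l in range(n) for m in range(-l, l + 1)]

for n in (1, 2, 3, 4, 5):
    assert len(states(n)) == n * n

print([len(states(n)) for n in (1, 2, 3)])  # [1, 4, 9]
```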

15.5 INHOMOGENEOUS PDEs

Exercises

15.5.1. A charge outside a sphere of radius a and its image within the sphere lie on a line that passes through the center of the sphere. Using that line as the z-axis of a Cartesian coordinate system, place a unit charge at (x, y, z) = (0, 0, r′) and place its image, of strength −a/r′, at (x, y, z) = (0, 0, a²/r′). Show that the sum of the potentials produced by these charges is zero at an arbitrary point on the sphere, at (x, y, z) = (a sin θ cos φ, a sin θ sin φ, a cos θ), where θ and φ are the usual spherical polar angular coordinates.

Solution: Let R′ = |r − r′| be the distance from r′ to a point on the sphere at a polar angle θ from the z-axis, and let R″ = |r − r″| be the distance to the same point on the sphere from the image point r″ = a²/r′. Referring to the figure below and using the law of cosines,

R′² = r′² + a² − 2ar′ cos θ,

R″² = (a²/r′)² + a² − 2a(a²/r′) cos θ.

[Figure: a unit charge q = 1 at z = r′ outside the sphere of radius a, its image q = −a/r′ at z = a²/r′ inside, and the distances R′ and R″ from the two charges to a point on the sphere at polar angle θ.]

Placing a unit charge at r′ and a charge −a/r′ at the image point, the condition that their potentials cancel at a point of arbitrary θ on the sphere is (equating 1/V² for the two charges)

(r′²/a²) R″² = R′²,   i.e.,   r′² + a² − 2ar′ cos θ = (r′²/a²) [(a²/r′)² + a² − 2a(a²/r′) cos θ].

This is an identity.
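The cancellation can also be confirmed numerically; a minimal Python sketch placing the unit charge at r′ = 2.5 outside a sphere of radius a = 1 (values our choice):

```python
import math

def total_potential(a, rp, theta):
    # unit charge at z = rp plus image -a/rp at z = a^2/rp, evaluated on the sphere
    Rp = math.sqrt(rp*rp + a*a - 2*a*rp*math.cos(theta))
    Rpp = math.sqrt((a*a/rp)**2 + a*a - 2*a*(a*a/rp)*math.cos(theta))
    return 1.0 / Rp - (a / rp) / Rpp

a, rp = 1.0, 2.5
for k in range(13):
    assert abs(total_potential(a, rp, math.pi * k / 12)) < 1e-12
```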

15.5.2. A unit charge is located at the point r′ within a sphere of radius a. Find the position and magnitude of an image charge that will cause the combined potential of the charge and its image to vanish on the sphere.

Solution: One can solve this problem by inserting appropriate data into the solution of Exercise 15.5.1. Letting r″ be the position of the charge outside the sphere in that exercise (and the image position in the present problem), we set r′ = a²/r″, finding r″ = a²/r′. To make the inner charge a unit charge, both charges of Exercise 15.5.1 must be scaled by the reciprocal of −a/r″; the outer (image) charge is therefore −r″/a = −a/r′. It will lie on the extended spherical radius that passes through r′.
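The interior image prescription (image charge −a/r′ at a²/r′) can be verified the same way; a small Python sketch with the unit charge inside at r′ = 0.4 (values our choice):

```python
import math

a, rp = 1.0, 0.4              # unit charge inside the sphere, at z = rp
rim, qim = a*a/rp, -a/rp      # image position and strength

for k in range(13):
    th = math.pi * k / 12
    Rin = math.sqrt(rp*rp + a*a - 2*a*rp*math.cos(th))
    Rout = math.sqrt(rim*rim + a*a - 2*a*rim*math.cos(th))
    assert abs(1.0/Rin + qim/Rout) < 1e-12   # potentials cancel on the sphere
```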

15.5.3. Find the Coulomb Green's function G(r, r′) for the portion of three-dimensional space with x > 0 and y > 0, subject to the boundary condition that G vanish when r is on the boundary of that region (i.e., on the xz- and yz-planes).

Solution: This Green's function corresponds to the potential produced by a unit charge at a point r′ = (x′, y′, z′) (with x′ > 0 and y′ > 0), together with image charges as needed to make the potential vanish when r is on the xz- or yz-plane. By symmetry, the image charges needed to satisfy the boundary conditions are negative unit charges at (−x′, y′, z′) and (x′, −y′, z′) and a positive unit charge at (−x′, −y′, z′). The original charge and its images form a rectangular array that produces a zero potential on the planes that are perpendicular to the rectangle and bisect its opposite sides.
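The four-charge construction can be confirmed numerically; a short Python sketch with an arbitrarily chosen source point (coordinates our choice):

```python
import math

def G4(x, y, z, xp, yp, zp):
    # unit charge at (xp, yp, zp), negative images at (-xp, yp, zp) and (xp, -yp, zp),
    # positive image at (-xp, -yp, zp)
    def pot(cx, cy, s):
        return s / math.sqrt((x - cx)**2 + (y - cy)**2 + (z - zp)**2)
    return pot(xp, yp, 1) + pot(-xp, yp, -1) + pot(xp, -yp, -1) + pot(-xp, -yp, 1)

xp, yp, zp = 0.7, 1.2, 0.3
for t in (0.2, 0.9, 1.7, 3.1):
    assert abs(G4(0.0, t, 1.0, xp, yp, zp)) < 1e-12   # on the yz-plane (x = 0)
    assert abs(G4(t, 0.0, -0.6, xp, yp, zp)) < 1e-12  # on the xz-plane (y = 0)
```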

15.6 INTEGRAL TRANSFORM METHODS

Exercises

15.6.1. Verify that Ψ(x, s) as given by Eq. (15.127) is a solution to Eq. (15.126) that satisfies the boundary conditions on Ψ(0, s) and Ψ(∞, s).


Solution: By twice differentiating Ψ(x, s) with respect to x we confirm that the ODE, Eq. (15.126), is satisfied. To check the boundary conditions, we set x = 0, obtaining Ψ(0, s); letting x → ∞, we also get Ψ(∞, s) = 0.

15.6.2. (a) Using symbolic computation or otherwise, verify that Ψ(k, 0) as given in Eq. (15.131) is the Fourier sine transform of ψ(x, 0) = x/(x² + 1). (b) Verify that ψ(x, y) as given in Eq. (15.133) is the inverse Fourier sine transform of Ψ(k, y), Eq. (15.132). (c) Verify that ψ(x, y), as given in Eq. (15.133), is a solution of the two-dimensional Laplace equation, Eq. (15.129).

Solution: (a) Using symbolic computing, execute one of
> fouriersin(x/(x^2+1), x, k);
FourierSinTransform[x/(x^2+1), x, k]
Both give a result equivalent to √(π/2) e⁻ᵏ. (b) maple will not process the requested transform unless we first establish that y > 0. mathematica imposes no such requirement. Thus, execute one of
> assume(y>0);
>

fouriersin(sqrt(Pi/2)*exp(-k*(y+1)), k, x);

FourierSinTransform[Sqrt[Pi/2]*E^(-k*(y + 1)), k, x]
Both give a result equivalent to x/(x² + (1 + y)²). (c) Differentiating,

ψ_xx = −6x/(x² + (y + 1)²)² + 8x³/(x² + (y + 1)²)³ ,

ψ_yy = −2x/(x² + (y + 1)²)² + 8x(y + 1)²/(x² + (y + 1)²)³ .
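The same conclusion can be reached numerically; the following sketch (in Python rather than the text's maple/mathematica, with sample points chosen arbitrarily) applies a central-difference Laplacian to ψ(x, y) = x/(x² + (1 + y)²):

```python
# Numerical spot-check of the Laplace equation for psi(x,y) = x/(x^2 + (1+y)^2),
# using a five-point central-difference approximation to psi_xx + psi_yy.
def psi(x, y):
    return x / (x**2 + (1.0 + y)**2)

def laplacian(f, x, y, h=1e-4):
    # central-difference approximation to f_xx + f_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

for x, y in [(0.5, 0.5), (1.0, 2.0), (2.0, 0.1)]:   # arbitrary test points
    assert abs(laplacian(psi, x, y)) < 1e-4
```

The residuals are at the level of finite-difference noise, consistent with ψ being harmonic.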

Analytically, the two derivatives combine to give a zero result. 15.6.3.

Use the Laplace transform method to solve (for all t > 0) the PDE

∂y(x, t)/∂x + x ∂y(x, t)/∂t = x

subject to the conditions y(0, t) = 1 and y(x, 0) = 1. Hint. Both because we need a solution only for t > 0 and because the dependence of the PDE on t is simpler than its dependence on x, take the transform with respect to the variable t. Feel free to use symbolic methods to obtain the inverse transform.

Solution: Taking the Laplace transform with respect to t, with s the transform variable,

∂Y(x, s)/∂x + x [sY(x, s) − y(x, 0)] = x L{1} = x/s .

The condition y(0, t) = 1 becomes Y(0, s) = 1/s.

CHAPTER 15. PARTIAL DIFFERENTIAL EQUATIONS

Substituting y(x, 0) = 1, the differential equation becomes

∂Y/∂x + sxY = x (1 + 1/s) .

The homogeneous equation has solution e^(−sx²/2), and a particular solution of the complete inhomogeneous equation is Y = (s + 1)/s².

Combining this result with the amount of the homogeneous solution that is needed to recover the condition on Y(0, s), we have

Y(x, s) = 1/s + 1/s² − (1/s²) e^(−sx²/2) .
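This Y(x, s) can be checked directly against the transformed ODE and the boundary condition; a quick numerical sketch (Python, with arbitrarily chosen sample points):

```python
import math

# Check that Y(x,s) = 1/s + 1/s^2 - exp(-s*x^2/2)/s^2 satisfies the
# transformed ODE dY/dx + s*x*Y = x*(1 + 1/s) and the condition Y(0,s) = 1/s.
def Y(x, s):
    return 1/s + 1/s**2 - math.exp(-s * x**2 / 2) / s**2

h = 1e-6
for x, s in [(0.5, 1.0), (1.5, 2.5), (2.0, 0.7)]:   # arbitrary test points
    dYdx = (Y(x + h, s) - Y(x - h, s)) / (2 * h)    # central difference
    assert abs(dYdx + s * x * Y(x, s) - x * (1 + 1/s)) < 1e-7
    assert abs(Y(0.0, s) - 1/s) < 1e-12
```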

Taking the inverse transform,

y(x, t) = 1 + t − u(t − x²/2)(t − x²/2) ,

where u is the Heaviside (unit step) function. 15.6.4.

Use the Laplace transform method to solve (for all positive x and t) the PDE

∂y/∂x + ∂y/∂t = 0

subject to the conditions y(0, t) = 0 and y(x, 0) = sin x. This PDE describes a source that had been transmitting a sinusoidal oscillation toward positive x for an indefinitely long interval before it was turned off at t = 0. Explain how your answer describes what then happens.

Solution: Take the Laplace transform with respect to the variable x. In the transform space (with transform variable s), the PDE becomes the ODE

sY(s, t) − y(0, t) + ∂Y(s, t)/∂t = 0 ,

subject to the condition Y(s, 0) = L{sin x} = 1/(1 + s²). Setting y(0, t) = 0, we integrate, getting

Y(s, t) = C e^(−ts) ,  with  C = 1/(1 + s²) .

Taking the inverse transform, which is easy if we use Formula 15 of Table 13.1,

y(x, t) = u(x − t) sin(x − t) ,

where u is the Heaviside (unit step) function. This result indicates that the wave train continues to move at unit velocity toward positive x, but is not added to at the source (which is at x = 0) after t = 0. The wave amplitude is therefore zero for x < t.
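The stated properties of this solution can be spot-checked numerically (a Python sketch, not part of the text's solution):

```python
import math

# y(x,t) = u(x-t)*sin(x-t): check the initial condition y(x,0) = sin x,
# the boundary condition y(0,t) = 0, and the PDE y_x + y_t = 0 away from
# the front x = t (where the step function makes y non-smooth).
def y(x, t):
    w = x - t
    return math.sin(w) if w > 0 else 0.0

for x in [0.3, 1.0, 2.5]:
    assert abs(y(x, 0.0) - math.sin(x)) < 1e-12   # initial condition
for t in [0.1, 1.0, 5.0]:
    assert y(0.0, t) == 0.0                        # boundary condition

h = 1e-6
for x, t in [(2.0, 0.5), (0.5, 2.0)]:              # points off the front
    yx = (y(x + h, t) - y(x - h, t)) / (2 * h)
    yt = (y(x, t + h) - y(x, t - h)) / (2 * h)
    assert abs(yx + yt) < 1e-6                     # PDE residual
```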

Chapter 16

CALCULUS OF VARIATIONS

16.1

INTRODUCTION

Exercises 16.1.1.

The velocity of light in vacuum is c; in a medium of refractive index n, it is c/n. When a ray of light reaches a boundary between two media of diﬀerent refractive indices n1 and n2 , the direction of the ray is bent at the boundary in a way that depends upon n1 , n2 , and the angle of incidence of the light ray (relative to the direction normal to the media boundary). See the right panel of Fig. 16.2. Apply Fermat’s principle (that the path actually taken requires a travel time that is stationary with respect to inﬁnitesimally neighboring paths) to ﬁnd the formula connecting n1 , n2 , θ1 , and θ2 . This relationship is known as Snell’s Law of refraction. Hint. The path within each medium is a straight line.

Solution: Referring to Fig. 16.2, the length of the path from (x1, y1) to the point (x, 0) is d1 = √(y1² + (x − x1)²), and the time for light to travel that path is t1 = n1 d1/c.

Figure 16.2: Refraction of light, Exercise 16.1.1.


Similarly, the distance between (x, 0) and (x2, y2) is d2 = √(y2² + (x − x2)²), with travel time t2 = n2 d2/c. We now ask for the value of x that makes the total travel time stationary with respect to infinitesimal changes in x, i.e., the value of x such that d(t1 + t2)/dx = 0. We compute

d(t1 + t2)/dx = n1(x − x1)/(c d1) + n2(x − x2)/(c d2) = 0 .

We now identify (x − x1)/d1 = sin θ1 and (x − x2)/d2 = − sin θ2, so the above equation is seen to be equivalent to n1 sin θ1 = n2 sin θ2. That is the equation known as Snell's Law.
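Fermat's principle can also be exercised numerically: minimize the travel time over the crossing point x and compare the result with Snell's Law. The sketch below is in Python; the endpoints and refractive indices are invented for illustration:

```python
import math

# Hypothetical geometry: source at (0, 1) in medium n1 (y > 0), receiver at
# (1, -1) in medium n2 (y < 0); the ray crosses the boundary y = 0 at (x, 0).
n1, n2 = 1.0, 1.5
x1, y1, x2, y2 = 0.0, 1.0, 1.0, -1.0

def travel_time(x):            # (n1*d1 + n2*d2)/c, with c = 1
    d1 = math.hypot(x - x1, y1)
    d2 = math.hypot(x - x2, y2)
    return n1 * d1 + n2 * d2

# minimize the (convex) travel time by ternary search on [x1, x2]
lo, hi = x1, x2
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

# Snell's Law: n1*sin(theta1) = n2*sin(theta2) at the stationary point
sin1 = (x - x1) / math.hypot(x - x1, y1)
sin2 = (x2 - x) / math.hypot(x2 - x, y2)
assert abs(n1 * sin1 - n2 * sin2) < 1e-6
```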

16.2

EULER EQUATION

Exercises 16.2.1.

Find and solve the Euler equation that makes each of the following integrals stationary. Hint. Note that y is missing in all these problems. Use that fact to identify the ODEs you generate as first-order equations in the dependent variable yₓ.

(a) ∫ from x1 to x2 of (yₓ² + yₓ) dx   (b) ∫ from x1 to x2 of √(1 + yₓ²)/x dx

(c) ∫ from x1 to x2 of x√(1 − yₓ²) dx   (d) ∫ from x1 to x2 of √(x(1 + yₓ²)) dx

Solution: In each case the Euler equation reduces to ∂F/∂yₓ = C, which is an ODE that can be solved.

(a) ∂F/∂yₓ = 2yₓ + 1 = C, equivalent to yₓ = C. This ODE has general solution y = Cx + C′.

(b) ∂F/∂yₓ = yₓ/(x√(1 + yₓ²)) = C.

Solving for yₓ, we get yₓ = Cx/√(1 − C²x²). This ODE is separable and can be integrated to yield

y = −√(1/C² − x²) + C′ ,  which can be rearranged to  x² + (y − C′)² = 1/C² .

(c) ∂F/∂yₓ = −xyₓ/√(1 − yₓ²) = C. Solving for yₓ, we get yₓ = C/√(x² + C²). This ODE is separable and can be integrated to yield

y = C ln(x + √(x² + C²)) + C′ = C sinh⁻¹(x/C) + C″  (with C″ = C′ + C ln C),

which can be rearranged to

x = C sinh((y − C″)/C) .

(d) ∂F/∂yₓ = yₓ√x/√(1 + yₓ²) = C.

Solving for yₓ (after renaming C² as C), we get yₓ = √(C/(x − C)). This ODE is separable and can be integrated to yield

y = 2√(Cx − C²) + C′ ,  which can be rearranged to  (y − C′)²/(4C) = x − C .

16.2.2.

Prove Eq. (16.12). Hint. Be careful to distinguish between ∂/∂x and d/dx and watch for an opportunity to use the original form of the Euler equation to make a simpliﬁcation.

Solution: Apply the total derivative:

dF/dx = ∂F/∂x + (∂F/∂y)(dy/dx) + (∂F/∂yₓ)(dyₓ/dx)
      = ∂F/∂x + yₓ ∂F/∂y + yₓₓ ∂F/∂yₓ ;

d/dx [yₓ ∂F/∂yₓ] = (dyₓ/dx)(∂F/∂yₓ) + yₓ d/dx(∂F/∂yₓ)
                 = yₓₓ ∂F/∂yₓ + yₓ d/dx(∂F/∂yₓ) .

Substituting these expressions into Eq. (16.12) and simplifying where possible, we reach

−yₓ ∂F/∂y + yₓ d/dx(∂F/∂yₓ) = 0 ,

which is simply −yₓ times the original Euler equation. 16.2.3.

Complete the solution of the geodesic for the cone of Example 16.2.2 by specializing to the situation that the path is from (ρ1, θ1) to (ρ2, θ2) with θ1 = 0, 0 < θ2 < π, and ρ1 > ρ2. With those parameter choices, the value of C2 is in the range (0, π) and the arccos function of Eq. (16.18) must be assigned a value in the range (−π, 0). Proceed as follows: (a) Write Eq. (16.19) for the two endpoints of the path and solve for C2 as a function of ρ1, ρ2, and θ2, (b) Determine C1 and thereby have a useful expression for θ(ρ), (c) Plot θ as a function of ρ for ρ1 = 2, ρ2 = 1, and several values of θ2.

Solution: (a) From the instances of Eq. (16.19) for the two endpoints, we have

ρ1 cos(−C2/2) = ρ2 cos((θ2 − C2)/2) .

Expanding the cosine on the right-hand side, we get

ρ1 cos(C2/2) = ρ2 cos(θ2/2) cos(C2/2) + ρ2 sin(θ2/2) sin(C2/2) ,

from which we obtain

tan(C2/2) = (ρ1 − ρ2 cos(θ2/2)) / (ρ2 sin(θ2/2)) ,

which can be solved graphically or otherwise for C2/2.

(b) Now given C2/2, we can find C1 from C1 = ρ1 cos(C2/2).

(c) We illustrate with ρ1 = 2, ρ2 = 1, θ2 = 2π/3 = 120°. Here cos(θ2/2) = 1/2 and sin(θ2/2) = √3/2, so

tan(C2/2) = (2 − (1)(1/2)) / ((1)(√3/2)) = √3 ,

showing that C2/2 = π/3 (i.e., 60°). From this result we have cos(C2/2) = 1/2 and C1 = 2(1/2) = 1. With these values of C1 and C2/2, we can now use Eq. (16.18), with the arccosine in the range (−π, 0). The plot can be produced by one of the following commands:
> plot(-2*arccos(1/rho)+2*Pi/3, rho=1 .. 2);
Plot[-2*ArcCos[1/rho]+2*Pi/3, {rho, 1, 2}]
In both symbolic languages the arccosine function produces a principal value in the range (0, π). The minus sign in the coding switches its range as required here. The plot looks something like this:
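The constants found above can be confirmed numerically; the following Python sketch (not from the text, which uses maple/mathematica) takes Eq. (16.19) in the form ρ cos((θ − C2)/2) = C1, as used in this solution:

```python
import math

# Data of part (c): rho1 = 2, rho2 = 1, theta1 = 0, theta2 = 2*pi/3.
rho1, rho2, theta2 = 2.0, 1.0, 2 * math.pi / 3

# tan(C2/2) = (rho1 - rho2*cos(theta2/2)) / (rho2*sin(theta2/2))
C2 = 2 * math.atan((rho1 - rho2 * math.cos(theta2 / 2))
                   / (rho2 * math.sin(theta2 / 2)))
C1 = rho1 * math.cos(C2 / 2)

assert abs(C2 - 2 * math.pi / 3) < 1e-12      # C2/2 = pi/3
assert abs(C1 - 1.0) < 1e-12
# both endpoints satisfy rho*cos((theta - C2)/2) = C1
assert abs(rho1 * math.cos((0.0 - C2) / 2) - C1) < 1e-12
assert abs(rho2 * math.cos((theta2 - C2) / 2) - C1) < 1e-12
```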

The plot is not a straight line because it takes less distance to change θ if done where ρ is smaller. 16.2.4.

(a) Plot a cycloid that passes through the coordinate origin, with x the horizontal axis and y the vertical axis. Use the parametric description given as Eqs. (16.28) and (16.29). (b) Explain how the parametric equations of the cycloid correspond to the trajectory of a point on the circumference of a rolling circle. (c) To make the cycloid of this problem describe a brachistochrone whose endpoints are at the origin and (x, y) = (5, 1) (with the y axis in the downward direction), solve Eq. (16.30) numerically for the θ value of its nonzero endpoint, and thereby determine the parameter C1 . Make an x–y plot of the cycloid, with the graph restricted to the x values between the endpoints. (d) The graph of part (c) shows that the fastest trajectory passes through y values lower than the endpoint. Find another endpoint that will cause the trajectory to only pass through y values above the endpoint.


(e) Find the condition on y/x that determines whether the trajectory will pass through a y value lower than the endpoint.

Solution: (a) The cycloid will pass through the origin if C2 = 0. Then both x and y are proportional to 1/(2C1²), which we now call R. Then one lobe of the cycloid (with the y-axis in the negative direction) can be plotted (for R = 1) by one of the following commands:
> plot([t-sin(t), -(1-cos(t)), t=0 .. 2*Pi],
>

scaling=constrained);

ParametricPlot[{t-Sin[t], -(1-Cos[t])}, {t, 0, 2*Pi}] The plot is shown here.

(b) The variable θ can be interpreted as the angle through which a circle on the x-axis has rolled; the center of the circle will be at (x0, y0) = (Rθ, R). A point on the circumference of the circle which was at the origin when θ = 0 will be (for general θ) at (x0, y0) + R(−sin θ, −cos θ), which is consistent with the parametric equations for the cycloid.

(c) The value of θ corresponding to y/x = 1/5 can be found via a graphical solution to Eq. (16.30). Adjust the range in θ of a plot of the form given below until the θ value satisfying Eq. (16.30) can be read to sufficient precision. Use one of
> plot((1-cos(t))/(t-sin(t))-1/5, t = 4.59 .. 4.60);
Plot[(1-Cos[t])/(t-Sin[t])-1/5, {t, 4.59, 4.60}]
The graph corresponding to either of these commands looks something like this:

From the graph, we ﬁnd θ = 4.595. We need also R = 1/(1 − cos(4.595)) = 0.8952. We can then plot our cycloid, using one of > R:=0.8952; plot([R*(t-sin(t)), -R*(1-cos(t)), t=0 .. 4.595], >

scaling=constrained);

R=0.8952; ParametricPlot[{R*(t-Sin[t]), -R*(1-Cos[t])}, {t, 0, 4.595}]

Either command produces a plot similar to that shown here:

(d) One possibility is (x, y) = (1, 1). For this point θ = 2.412, R = 1/(1 − cos(2.412)) = 0.5729, and the resulting plot looks like this:
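The graphical solutions of parts (c) and (d) can also be obtained by bisection; a Python sketch (not from the text):

```python
import math

# Solve (1 - cos t)/(t - sin t) = y/x by bisection, then recover
# R = 1/(1 - cos t); endpoints (5, 1) for part (c) and (1, 1) for part (d).
def solve_theta(ratio, lo, hi):
    f = lambda t: (1 - math.cos(t)) / (t - math.sin(t)) - ratio
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

t_c = solve_theta(1.0 / 5.0, 4.0, 5.0)    # part (c): y/x = 1/5
t_d = solve_theta(1.0, 2.0, 3.0)          # part (d): y/x = 1
R_c = 1 / (1 - math.cos(t_c))
R_d = 1 / (1 - math.cos(t_d))

assert abs(t_c - 4.595) < 2e-3 and abs(R_c - 0.8952) < 2e-3
assert abs(t_d - 2.412) < 2e-3 and abs(R_d - 0.5729) < 2e-3
```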

(e) Inserting θ = π into Eq. (16.30) (the θ value where y is a maximum), we ﬁnd that this equation is then satisﬁed when y/x = 2/π. Since y/x on the cycloid decreases monotonically as x increases, all values of y/x less than 2/π will be reached at x values beyond the maximum in y, i.e., beyond the lowest value of y. 16.2.5.

Find an algebraic equation whose solution makes stationary the integral

∫ from θ1 to θ2 of √(r² rθ² + r⁴) dθ .

Solution: Writing this integral as ∫ from θ1 to θ2 of F dθ with F = r√(rθ² + r²),

the Euler equation for this problem is

F − rθ ∂F/∂rθ = C ,

which evaluates to

r√(rθ² + r²) − r rθ²/√(rθ² + r²) = r³/√(rθ² + r²) = C .

Solving this equation for rθ, we get

rθ = r√(r⁴/C² − 1)  or  ∫ dr / (r√(r⁴/C² − 1)) = θ + C′ .

Evaluation of the integral yields

−(1/2) cot⁻¹(√(r⁴/C² − 1)) = θ + C′ ,

which can be converted to the form r² sin(2θ + C′) = C.
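A numerical check (a Python sketch with arbitrarily chosen constants) confirms that curves of the form r² sin(2θ + C′) = C satisfy the first integral r³/√(rθ² + r²) = C:

```python
import math

# For r(theta) defined by r^2*sin(2*theta + Cp) = C, verify numerically
# that r^3/sqrt(r_theta^2 + r^2) = C, differentiating by central differences.
C, Cp = 1.3, 0.4            # arbitrary constants (sin must stay positive)

def r(theta):
    return math.sqrt(C / math.sin(2 * theta + Cp))

h = 1e-6
for theta in [0.3, 0.6, 1.0]:
    rt = (r(theta + h) - r(theta - h)) / (2 * h)
    first_integral = r(theta)**3 / math.sqrt(rt**2 + r(theta)**2)
    assert abs(first_integral - C) < 1e-6
```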

16.2.6.

Apply Fermat’s principle to obtain an algebraic equation for the path traveled by a light ray when the index of refraction is proportional to (a) ex

(b)

1 2y + 1

(c) (y + 2)1/2

Solution: (a) The quantity to be made stationary is

∫ from x1 to x2 of eˣ √(1 + yₓ²) dx .

Because the integrand is independent of y, the Euler equation is

∂/∂yₓ [eˣ√(1 + yₓ²)] = eˣ yₓ/√(1 + yₓ²) = C .

Solving for yₓ,

yₓ = C/√(e²ˣ − C²) ,  equivalent to  dy = C dx/√(e²ˣ − C²) .

Integrating, we find

y = tan⁻¹(√(e²ˣ − C²)/C) + C′  or  C tan(y − C′) = √(e²ˣ − C²) .

(b) The quantity to be made stationary is

∫ from x1 to x2 of F dx ,  with  F = √(1 + yₓ²)/(2y + 1) .

Because the integrand is independent of x, the Euler equation is

F − yₓ ∂F/∂yₓ = √(1 + yₓ²)/(2y + 1) − yₓ²/((2y + 1)√(1 + yₓ²)) = 1/((2y + 1)√(1 + yₓ²)) = C .

Solving for yₓ, we get

yₓ = √(1 − C²(2y + 1)²)/(C(2y + 1)) ,  equivalent to  C(2y + 1) dy/√(1 − C²(2y + 1)²) = dx .

Integrating this ODE,

−(1/(2C))√(1 − C²(2y + 1)²) = x + C′ ,  equivalent to  (x + C′)² + (y + 1/2)² = 1/(2C)² .

(c) The quantity to be made stationary is

∫ from x1 to x2 of F dx ,  with  F = (y + 2)^(1/2) √(1 + yₓ²) .

Because the integrand is independent of x, the Euler equation is

F − yₓ ∂F/∂yₓ = (y + 2)^(1/2)√(1 + yₓ²) − (y + 2)^(1/2) yₓ²/√(1 + yₓ²) = (y + 2)^(1/2)/√(1 + yₓ²) = C .

Solving for yₓ, we get

yₓ = √(y + 2 − C²)/C ,  equivalent to  C dy/√(y + 2 − C²) = dx .

Integrating,

2C√(y + 2 − C²) = x − C′  or  (x − C′)² = 4C²(y + 2 − C²) .

16.3

MORE GENERAL PROBLEMS

Exercises 16.3.1.

Following the method used for the original derivation of the Euler equation, ﬁnd its generalization for the situation that the integrand of Eq. (16.7) also contains an explicit dependence on the second derivative of y, yxx . It will be necessary to carry out two integrations by parts on the equation analogous to Eq. (16.9).

Solution: The equation analogous to Eq. (16.8) must include yₓₓ when y(x) is replaced by y(x) + αη(x), i.e., yₓₓ + αηₓₓ. Including such a term, we start from

dW/dα = d/dα ∫ from x1 to x2 of F(y + αη, yₓ + αηₓ, yₓₓ + αηₓₓ, x) dx = 0 .

Evaluating at α = 0, the equation analogous to Eq. (16.9) is

[dW/dα] at α=0 = ∫ from x1 to x2 of ( (∂F/∂y)η + (∂F/∂yₓ)ηₓ + (∂F/∂yₓₓ)ηₓₓ ) dx = 0 .

The second term in the integrand is treated as in the original derivation; it is integrated by parts, integrating ηₓ and differentiating ∂F/∂yₓ. The endpoint terms vanish because η(x1) = η(x2) = 0. We integrate the third term by parts twice; the first such integration has no endpoint contributions if we add the additional requirement that ηₓ(x1) = ηₓ(x2) = 0, i.e., that the function y(x) have given values of y′ (as well as y) at x1 and x2. The second integration by parts restores a plus sign to the integral, which now contains the second derivative of ∂F/∂yₓₓ. The overall result is

[dW/dα] at α=0 = ∫ from x1 to x2 of η(x) ( ∂F/∂y − d/dx(∂F/∂yₓ) + d²/dx²(∂F/∂yₓₓ) ) dx = 0 .

The corresponding Euler equation is

∂F/∂y − d/dx(∂F/∂yₓ) + d²/dx²(∂F/∂yₓₓ) = 0 .

Solve the following mechanics problems using the stationary property of the time integral of the Lagrangian, introduced in Example 16.3.1. Use the fact that in Cartesian coordinates the kinetic energy has the form m[x˙ 2 + y˙ 2 + z˙ 2 ]/2, using the dot to indicate diﬀerentiation with respect to time.

16.3.2.

(a) Find the equation of motion for a particle of mass m moving without friction on the surface of the paraboloid z = x2 + y 2 and under the inﬂuence of gravity, which exerts a force mg in the −z direction. This problem is easiest if solved in cylindrical coordinates. (b) Find a solution to the equation of motion corresponding to circular motion at constant z, and determine the angular velocity of the particle.

Solution: (a) In cylindrical coordinates, ẋ² + ẏ² = ρ̇² + ρ²φ̇²; this result follows immediately from the relationships hρ = 1 and hφ = ρ or from application of the chain rule to x = ρ cos φ and y = ρ sin φ. Using the constraint equation z = ρ², we also have ż² = 4ρ²ρ̇². We therefore can write

T = (m/2)(ẋ² + ẏ² + ż²) = (m/2)[(1 + 4ρ²)ρ̇² + ρ²φ̇²] ,

V = mgz = mgρ² ,

L = T − V = (m/2)[(1 + 4ρ²)ρ̇² + ρ²φ̇²] − mgρ² .

Making the time integral of L stationary and discarding the common factor m (irrelevant for this purpose), we have Euler equations in which

∂L/∂ρ = 4ρρ̇² + ρφ̇² − 2gρ ,  ∂L/∂ρ̇ = (1 + 4ρ²)ρ̇ ,  ∂L/∂φ = 0 ,  ∂L/∂φ̇ = ρ²φ̇ .

We also have

d/dt (∂L/∂ρ̇) = 8ρρ̇² + (1 + 4ρ²)ρ̈ .

The ρ Euler equation, one of the equations of motion of this system, then takes the form

∂L/∂ρ − d/dt(∂L/∂ρ̇) = 4ρρ̇² + ρφ̇² − 2gρ − 8ρρ̇² − (1 + 4ρ²)ρ̈
                     = −(1 + 4ρ²)ρ̈ − 4ρρ̇² − 2gρ + ρφ̇² = 0 .

Because ∂L/∂φ = 0, the other Euler equation can be written

∂L/∂φ̇ = C ,  i.e.,  ρ²φ̇ = C .
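These equations of motion can be sanity-checked by numerical integration; the Python sketch below (not part of the text's solution; the initial data are invented) uses the conserved angular momentum ℓ = ρ²φ̇, verifies that the energy stays constant under an RK4 integration of the ρ equation, and checks that a circular orbit (ρ̇ = ρ̈ = 0) requires φ̇² = 2g:

```python
import math

g = 9.81
ell = 2.0                      # conserved angular momentum rho^2 * phidot

def accel(rho, rhodot):
    # rhoddot from the rho Euler equation, with phidot = ell/rho^2
    phidot = ell / rho**2
    return (rho * phidot**2 - 4 * rho * rhodot**2 - 2 * g * rho) / (1 + 4 * rho**2)

def energy(rho, rhodot):
    phidot = ell / rho**2
    return 0.5 * ((1 + 4 * rho**2) * rhodot**2 + rho**2 * phidot**2) + g * rho**2

rho, rhodot, dt = 1.0, 0.3, 1e-4    # arbitrary initial conditions
E0 = energy(rho, rhodot)
for _ in range(20000):              # RK4 integration over 2 time units
    k1r, k1v = rhodot, accel(rho, rhodot)
    k2r, k2v = rhodot + 0.5*dt*k1v, accel(rho + 0.5*dt*k1r, rhodot + 0.5*dt*k1v)
    k3r, k3v = rhodot + 0.5*dt*k2v, accel(rho + 0.5*dt*k2r, rhodot + 0.5*dt*k2v)
    k4r, k4v = rhodot + dt*k3v, accel(rho + dt*k3r, rhodot + dt*k3v)
    rho    += dt * (k1r + 2*k2r + 2*k3r + k4r) / 6
    rhodot += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6

assert abs(energy(rho, rhodot) - E0) / E0 < 1e-8
# a circular orbit requires phidot^2 = 2g, i.e. rho^2 = ell/sqrt(2g)
assert abs(accel(math.sqrt(ell / math.sqrt(2 * g)), 0.0)) < 1e-9
```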

(b) For circular motion at constant z (and therefore also constant ρ), we set ρ̇ = ρ̈ = 0 and the first Euler equation reduces to φ̇² = 2g, while the second (because ρ is constant) becomes φ̇ = constant, providing no new information. From the first Euler equation, the angular velocity of the particle is ±√(2g), indicating angular motion (in either direction) at a constant angular velocity (and also frequency) irrespective of the value of z. Of course, at larger z the linear velocity will be larger, as it is ±ρ√(2g). 16.3.3.

Two particles of mass m are connected by a string of ﬁxed length that passes through a small hole in a horizontal circular table. One of the particles can move on the surface of the table; the other is subject to the gravitational force mg and can move vertically up or down. Assume all motion is without friction. See Fig. 16.6.

Figure 16.6: Mechanical system of Exercise 16.3.3.

(a) Find the equations of motion for this system, expressed in terms of the polar coordinates of the particle on the table. (b) Find a solution to the equations of motion corresponding to uniform circular motion of the particle on the table.

Solution: (a) The kinetic energy of the mass on the table is (m/2)(ρ̇² + ρ²φ̇²) (see the solution of Exercise 16.3.2); that of the mass below the table is (m/2)ρ̇². The potential energy of the system (measured from zero when the mass on the table is at the hole) is mgρ. Thus, setting m = 1, the Lagrangian is

L = ρ̇² + (1/2)ρ²φ̇² − gρ .

The derivatives needed for the Euler equation are ∂L = ρφ˙ 2 − g , ∂ρ

∂L = 2ρ˙ , ∂ ρ˙

∂L = 0, ∂φ

∂L = ρ2 φ˙ . ∂ φ˙

The ρ Euler equation is (ρφ˙ 2 − g) −

d (2ρ) ˙ = ρφ˙ 2 − g − 2¨ ρ = 0. dt

The other Euler equation reduces to

∂L/∂φ̇ = ρ²φ̇ = constant .

(b) Uniform circular motion corresponds to ρ̇ = ρ̈ = 0 and the two Euler equations reduce to

ρφ̇² = g ,  ρ²φ̇ = C .

Because ρ is constant, the second equation simply states that φ̇ is constant; the first equation relates its value to the constant value of ρ: φ̇ = ±√(g/ρ). 16.3.4.

One end of a string is wound around a ﬁxed horizontal cylinder of radius a. A mass m, attached to the free end of the string, executes oscillations in a vertical plane, with the string winding or unwinding as the mass swings back and forth. Assume that the unwound part of the string forms a straight line tangent to the surface of the cylinder, and that when the unwound string is vertical, its unwound length is L. See Fig. 16.7. Write the Lagrangian in terms of the angular coordinate θ (in the ﬁgure) and obtain the equation of motion for this pendulum.

Figure 16.7: Mechanical system of Exercise 16.3.4.

Solution: To avoid notational collisions, designate the Lagrangian in this problem by the symbol 𝓛. The unwound portion of the string has length L + aθ; the point from which the unwound string is suspended (relative to the center of the cylinder) is (x, y) = (a cos θ, a sin θ). The position of the mass at the end of the string is therefore x = a cos θ + (L + aθ) sin θ ,

y = a sin θ − (L + aθ) cos θ .

While we could compute the kinetic energy of the moving mass as a function of θ̇ as m(ẋ² + ẏ²)/2, it is easier to note that the motion is always perpendicular to the string and therefore has kinetic energy m(L + aθ)²θ̇²/2. One who doubts this can verify it by explicit computation in Cartesian coordinates. The potential energy is V = mgy, where y is given above. The Lagrangian for this problem therefore takes the form

𝓛 = (m/2)(L + aθ)²θ̇² − mg[a sin θ − (L + aθ) cos θ] .

For the Euler equation we need (setting m = 1)

∂𝓛/∂θ = (L + aθ)aθ̇² − g(L + aθ) sin θ ,
∂𝓛/∂θ̇ = (L + aθ)²θ̇ ,
d/dt(∂𝓛/∂θ̇) = 2(L + aθ)aθ̇² + (L + aθ)²θ̈ .

The Euler equation takes the form

(L + aθ)(aθ̇² − g sin θ) − 2(L + aθ)aθ̇² − (L + aθ)²θ̈ = 0 ,

which reduces to

(L + aθ)θ̈ + aθ̇² + g sin θ = 0 .


16.4

VARIATION WITH CONSTRAINTS

Exercises 16.4.1.

Prove that there is no real value of C1 permitting Eq. (16.47) to be satisﬁed if L < 2x0 . We are therefore prevented from obtaining a “solution” when the problem is physically unrealizable.

Solution: Expanding the right-hand side of Eq. (16.47), we get

L = 2C1 [ x0/C1 + (1/3!)(x0³/C1³) + ··· ] = 2x0 [ 1 + (1/3!)(x0²/C1²) + ··· ] .

Every term in the series within the square brackets is positive for all real values of C1, so no real value of C1 will solve this equation if L < 2x0. 16.4.2.

Plot the catenary with the parameters used for Fig. 16.9 and add to the plot a parabola that passes through the same endpoints and value of y(0).

Solution: The catenary has equation y = C[cosh(x/C) − cosh(x0 /C)], with C = 0.2297 and x0 = 0.5. The corresponding parabola has formula y = C[1 − cosh(x0 /C)](1 − x2 /x20 ) . Code to display both these functions can be one of > plot([C*(cosh(x/C)-cosh(x0/C)), >

C*(1-cosh(x0/C))*(1-x^2/x0^2)],x = -x0 .. x0);

Plot[{c*(Cosh[x/c]-Cosh[x0/c]), c*(1-Cosh[x0/c])*(1-x^2/x0^2)}, {x,-x0,x0}] These codes produce a plot similar to that shown below. The solid curve is the catenary; the dashed curve is the corresponding parabola.

16.4.3.

(a) Find an algebraic equation (not an ODE) for the curve of arc length L that connects the points x1 and x2 on the x-axis and encloses the maximum area above the x-axis. (b) Show that this problem has no solution if L < |x2 − x1 |. (c) Find and plot a speciﬁc solution for x1 = 1, x2 = 2, and L = 3.


Solution: (a) The variational problem is

δ ∫ from x1 to x2 of [ y + λ√(1 + yₓ²) ] dx = 0 ,  with  L = ∫ from x1 to x2 of √(1 + yₓ²) dx .

The integrand F is independent of x, so the Euler equation for this problem is

F − yₓ ∂F/∂yₓ = C1 ,  i.e.,  y + λ√(1 + yₓ²) − λyₓ²/√(1 + yₓ²) = C1 .

Solving this equation for yₓ, we obtain

yₓ = √(λ² − (y − C1)²) / (y − C1) .

Writing this first-order ODE in the form

(y − C1) dy / √(λ² − (y − C1)²) = dx ,

we see that it has general solution

−√(λ² − (y − C1)²) = x − C2 ,  equivalent to  (x − x0)² + (y − y0)² = λ² ,

where we have given the constants the new names x0 and y0. The solution is a circle of radius λ with center at (x0, y0). The points (x1, 0) and (x2, 0) must lie on this circle, requiring that (xi − x0)² = λ² − y0² for both i = 1 and i = 2. This requirement indicates that x0 = (x1 + x2)/2 and that √(λ² − y0²) = (x2 − x1)/2.

It remains to choose λ such that the path length is L. Returning to the expression found for yₓ, we note that

yₓ² + 1 = λ²/(y − y0)² ,

and if we write the integral for L entirely in terms of x, we get

L = ∫ from x1 to x2 of λ dx / √(λ² − (x − x0)²) .

Performing the integration,

L = 2λ sin⁻¹( (x2 − x1)/(2λ) ) .
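This arc-length relation can be verified numerically; the Python sketch below (parameters invented for illustration) integrates the arc-length element with Simpson's rule:

```python
import math

# Verify L = 2*lambda*arcsin((x2-x1)/(2*lambda)) by direct quadrature of
# lambda/sqrt(lambda^2 - (x - x0)^2) over [x1, x2] with Simpson's rule.
lam, x1, x2 = 0.8, 1.0, 2.0    # arbitrary choices with lam > (x2-x1)/2
x0 = (x1 + x2) / 2

def integrand(x):
    return lam / math.sqrt(lam**2 - (x - x0)**2)

n = 100000                      # even number of Simpson panels
h = (x2 - x1) / n
s = integrand(x1) + integrand(x2)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(x1 + i * h)
length = s * h / 3

assert abs(length - 2 * lam * math.asin((x2 - x1) / (2 * lam))) < 1e-5
```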

Given L, x1, and x2, we can solve this algebraic equation for λ and therefrom obtain y0 and a complete solution. However, the process is not without some subtleties. To understand them, look at the following figures, which illustrate (reading from left to right) situations in which L < π(x2 − x1)/2, L = π(x2 − x1)/2, and L > π(x2 − x1)/2.

In the first of these scenarios, the center of the circle lies below the x-axis, and: (i) the integral for L is expressed in terms of the principal value of the arcsin; (ii) the path from x1 to x2 is entirely in a direction toward positive x. In the second scenario, the center of the circle is on the x-axis and the path is a semicircle. In the third scenario: (i) the parts of the path below its horizontal diameter are described using the principal value of the arcsin, but those above that diameter require use of a different arcsin value; (ii) the path from x1 to x2 involves retrograde motion to an x value less than x1 and also passage through x values larger than x2. The most convenient way to handle this situation is to compute the length of the entire circle and then subtract from it the direct curve (below the x-axis) from x1 to x2.

The discussion of the previous paragraph suggests the following procedure:

• If L < π(x2 − x1)/2, determine the λ value giving the path length using the principal value of 2λ sin⁻¹[(x2 − x1)/2λ].
• If L = π(x2 − x1)/2, take λ = (x2 − x1)/2.
• If L > π(x2 − x1)/2, determine the λ value giving the path length using the principal value of the arcsin, but with the formula 2λ(π − sin⁻¹[(x2 − x1)/2λ]).

(b) To determine what happens if L < x2 − x1, introduce a series expansion in the formula for L. We get

L = 2λ [ (x2 − x1)/(2λ) + (1/6)((x2 − x1)/(2λ))³ + (3/40)((x2 − x1)/(2λ))⁵ + ··· ]
  = (x2 − x1) + (x2 − x1)³/(24λ²) + additional positive terms ,

showing that this equation will have no solution for real λ if L < x2 − x1.

(c) For x1 = 1, x2 = 2, L = 3 we have x0 = 1.5 but must find the values of λ and y0. Because L > π(x2 − x1)/2, find (graphically) the value of λ satisfying the third scenario of part (a). In maple, this is done by adjusting the plot range in code such as
> plot(3 - 2*lambda*(Pi-arcsin(1/(2*lambda))),
>
lambda=0.65820 .. 0.65824);


To determine λ using mathematica, a corresponding plot is produced using
Plot[3 - 2*lambda*(Pi-ArcSin[1/2/lambda]), {lambda,0.65820,0.65824}]
Either of these commands produces a plot similar to that shown at left below; the plot confirms that the value of λ is 0.6582. We then compute y0 = √(λ² − 0.25) = 0.4282, set x0 = 1.5, and plot the circle (which has 3 units of arc length between x1 and x2). This plot can be produced by code such as one of:
> plot([y0+sqrt(lambda^2-(x-x0)^2),
>

y0-sqrt(lambda^2-(x-x0)^2)],

>

x=0.5 .. 2.5,scaling=constrained);

Plot[{y0 + Sqrt[lambda^2 - (x - x0)^2], y0 - Sqrt[lambda^2 - (x - x0)^2]}, {x,0.5,2.5}] This code produces a plot similar to that shown at right below. The part of the circle above the x-axis is the solution to our problem.
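The graphical determination of λ can be replaced by a bisection; a Python sketch (not from the text):

```python
import math

# Solve 3 = 2*lambda*(pi - arcsin(1/(2*lambda))) for lambda (third scenario,
# since L = 3 > pi*(x2 - x1)/2), then compute y0 = sqrt(lambda^2 - 0.25).
def f(lam):
    return 2 * lam * (math.pi - math.asin(1 / (2 * lam))) - 3.0

lo, hi = 0.55, 1.0             # f changes sign in this bracket
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam = (lo + hi) / 2
y0 = math.sqrt(lam**2 - 0.25)

assert abs(lam - 0.6582) < 1e-3
assert abs(y0 - 0.4282) < 1e-3
```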

16.4.4.

A ﬁxed volume of water is in a vertical cylindrical tube of radius r0 ; the tube (and the water inside) are rotating about the rotation axis of the tube at a constant angular frequency ω. The rotation generates a centrifugal potential energy per unit mass −ω 2 r2 /2, where r is the distance from the rotation axis. Find the curve that describes the location of the water surface as a function of r, considering the combined eﬀect of the centrifugal and gravitational energy.

Solution: Points on the curve defining the water surface will all be at the same potential energy. Letting y(r) be the height of the surface at r, the gravitational potential at y(r) is gy (where g is the acceleration due to gravity), and the total potential at y(r) is gy − ω²r²/2. At equilibrium this quantity must be constant on the surface, so

gy − ω²r²/2 = constant ,  with solution  y − C = ω²r²/(2g) .

We do not need to use an Euler equation because the quantity to be made stationary involves no derivatives.

16.4.5.

Show that for a perimeter of fixed length the plane figure of maximum area is a circle.

Solution: Let the perimeter have length L. Refer to Exercise 16.4.3, and consider a situation where the points x1 and x2 on the x-axis are close to each other. Form a curve of any length less than L connecting x1 and x2 on a path above the x-axis. Based on the analysis of Exercise 16.4.3, the curve will enclose maximum area above the x-axis if it is a circular arc of some radius λ, where the center of the circle is directly above (or below) the point (x1 + x2 )/2. Then form a similar curve of arbitrary length less than L connecting x1 and x2 in the lower half-plane. To enclose maximum area below the x-axis, that curve must also be a circular arc of some radius λ′ . But these two curves will connect with slope discontinuities at x1 and x2 unless λ′ = λ and the arcs are concentric, in which case they form a complete circle of radius λ. We therefore choose a λ value such that 2πλ = L. We can remove the requirement that the circle passes through two given points by taking the limit x2 → x1 ; then one of the arcs becomes a complete circle while the other vanishes, with the circle of radius λ = L/2π.

Chapter 17

COMPLEX VARIABLE THEORY

17.1

ANALYTIC FUNCTIONS

Exercises 17.1.1.

Use the Cauchy-Riemann equations to determine whether each of the following functions is analytic.

(a) y + ix   (b) x² − y² + 2ixy   (c) x² − y² − 2ixy
(d) e^(x+iy)   (e) (y − ix)/(x² + y²)   (f) (x − iy)/(x² + y²)
(g) ln |z|   (h) 1/z   (i) z² − (z*)²

Solution: Answer is “Yes” if analytic, “No” if not.

(a) u = y, v = x: ∂u/∂y ≠ −∂v/∂x. No.
(b) u = x² − y², v = 2xy: ∂u/∂x = ∂v/∂y = 2x, ∂u/∂y = −∂v/∂x = −2y. Yes.
(c) u = x² − y², v = −2xy: ∂u/∂x ≠ ∂v/∂y. No.
(d) u = eˣ cos y, v = eˣ sin y: ∂u/∂x = ∂v/∂y = eˣ cos y, ∂u/∂y = −∂v/∂x = −eˣ sin y. Yes.
(e) u = y/(x² + y²), v = −x/(x² + y²): ∂u/∂x = −2xy/(x² + y²)², ∂v/∂y = 2xy/(x² + y²)²; ∂u/∂x ≠ ∂v/∂y. No.
(f) u = x/(x² + y²), v = −y/(x² + y²): ∂u/∂x = (y² − x²)/(x² + y²)² = ∂v/∂y, ∂u/∂y = −2xy/(x² + y²)² = −∂v/∂x. Yes.
(g) u = (1/2) ln(x² + y²), v = 0: ∂u/∂x = x/(x² + y²) ≠ ∂v/∂y. No.
(h) 1/z = 1/(x + iy) = (x − iy)/(x² + y²). Same as (f). Yes.
(i) (x + iy)² − (x − iy)² = 4ixy, so u = 0, v = 4xy: ∂u/∂x ≠ ∂v/∂y. No.
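The answers above can be confirmed by a numerical Cauchy-Riemann test; a Python sketch (sample points chosen arbitrarily, away from singularities):

```python
import math

# Test the Cauchy-Riemann equations u_x = v_y, u_y = -v_x with central
# differences at a few sample points; compare with the Yes/No answers.
def cr_holds(u, v, pts=((0.7, 0.3), (1.2, -0.8)), h=1e-5, tol=1e-6):
    ok = True
    for x, y in pts:
        ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
        uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
        vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
        vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
        ok = ok and abs(ux - vy) < tol and abs(uy + vx) < tol
    return ok

cases = {
    "a": (lambda x, y: y, lambda x, y: x, False),
    "b": (lambda x, y: x*x - y*y, lambda x, y: 2*x*y, True),
    "c": (lambda x, y: x*x - y*y, lambda x, y: -2*x*y, False),
    "d": (lambda x, y: math.exp(x)*math.cos(y),
          lambda x, y: math.exp(x)*math.sin(y), True),
    "e": (lambda x, y: y/(x*x + y*y), lambda x, y: -x/(x*x + y*y), False),
    "f": (lambda x, y: x/(x*x + y*y), lambda x, y: -y/(x*x + y*y), True),
    "g": (lambda x, y: 0.5*math.log(x*x + y*y), lambda x, y: 0.0, False),
    "i": (lambda x, y: 0.0, lambda x, y: 4*x*y, False),
}
for name, (u, v, analytic) in cases.items():
    assert cr_holds(u, v) == analytic, name
```

Case (h) is omitted because its real and imaginary parts coincide with case (f).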

17.1.2.

Use the two-dimensional Laplace equation to determine whether each of the following can be the real part of an analytic function.

(a) xy   (b) x³   (c) x² − y²
(d) sinh y sin x   (e) y   (f) y/((1 − x)² + y²)
(g) x² + y²   (h) sin x sin y   (i) x²y − y²x

Solution: “Yes” if can be real part of analytic function, “No” if not.

(a) ∂²(xy)/∂x² = ∂²(xy)/∂y² = 0. Yes.
(b) ∂²(x³)/∂x² + ∂²(x³)/∂y² = 6x + 0 ≠ 0. No.
(c) ∂²(x² − y²)/∂x² + ∂²(x² − y²)/∂y² = 2 − 2 = 0. Yes.
(d) Let f = sinh y sin x. ∂²f/∂x² = −f, ∂²f/∂y² = f; −f + f = 0. Yes.
(e) ∂²y/∂x² = ∂²y/∂y² = 0. Yes.
(f) Let f = y/((1 − x)² + y²). Then

∂²f/∂x² = 2y(3 − 6x + 3x² − y²)/((1 − x)² + y²)³ ,
∂²f/∂y² = −2y(3 − 6x + 3x² − y²)/((1 − x)² + y²)³ .

These sum to zero. Yes.
(g) ∂²(x² + y²)/∂x² = ∂²(x² + y²)/∂y² = 2; 2 + 2 ≠ 0. No.
(h) Let f = sin x sin y. ∂²f/∂x² = ∂²f/∂y² = −f; −f − f ≠ 0. No.
(i) ∂²(x²y − y²x)/∂x² + ∂²(x²y − y²x)/∂y² = 2y − 2x ≠ 0. No.

17.1.3.

For each of the quantities in Exercise 17.1.2 that can be the real part of an analytic function, ﬁnd the corresponding imaginary part and write the analytic function in terms of z = x + iy alone.

Solution: Let C denote an arbitrary constant, and F and G arbitrary functions. (a) From

∂xy ∂v = y, we have = y. ∂x ∂y

17.1. ANALYTIC FUNCTIONS Also, from

317

∂xy ∂v = x, we have = −x. Integrating, ∂y ∂x ∫ y2 v = y dy = + F (x) , 2 ∫ v=−

x dx = −

x2 + G(y) . 2

y 2 − x2 + C, so 2 ( 2 ) y 2 − x2 z w = u + iv = xy + i + iC = −i −C . 2 2

These expressions are consistent only if v =

(c) Using the Cauchy-Riemann equations as in part (a), we integrate
\[ v = \int \frac{\partial v}{\partial y}\,dy = \int 2x\,dy = 2xy + F(x), \qquad v = \int \frac{\partial v}{\partial x}\,dx = \int 2y\,dx = 2xy + G(y). \]
These are consistent if $v = 2xy + C$, so $w = u+iv = x^2 - y^2 + 2ixy = z^2$.

(d) Using the Cauchy-Riemann equations as in part (a), we integrate
\[ v = \int \sinh y \cos x\,dy = \cosh y \cos x + F(x), \qquad v = -\int \cosh y \sin x\,dx = \cosh y \cos x + G(y). \]
These are consistent if $v = \cosh y \cos x + C$, so
\[ w = u+iv = \sinh y \sin x + i\cosh y \cos x + iC = i(\cos z + C). \]

(e) Using the Cauchy-Riemann equations as in part (a), we integrate
\[ v = \int 0\,dy = F(x), \qquad v = -\int 1\,dx = -x + G(y). \]
These are consistent if $v = -x + C$, so $w = u+iv = y - ix + iC = -i(z - C)$.

(f) Using the Cauchy-Riemann equations as in part (a), we integrate
\[ v = \int \frac{2y(1-x)}{((1-x)^2+y^2)^2}\,dy = \frac{x-1}{(1-x)^2+y^2} + F(x), \]
\[ v = -\int \frac{(1-x)^2-y^2}{((1-x)^2+y^2)^2}\,dx = -\frac{1-x}{(1-x)^2+y^2} + G(y). \]
These are consistent if $v = \dfrac{x-1}{(1-x)^2+y^2} + C$, so
\[ w = u+iv = \frac{y + i(x-1)}{(x-1)^2+y^2} + iC = i\left(\frac{1}{z-1} + C\right). \]
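The closed forms obtained above can be verified numerically; the sketch below (helper name `check` is mine, not the manual's) confirms that the real parts of the reconstructed analytic functions reproduce the given harmonic functions at a few sample points:

```python
def check(w, u, z):
    """|Re w(z) - u(x, y)| for a candidate analytic function w with Re w = u."""
    return abs(w(z).real - u(z.real, z.imag))

pts = [0.3 + 0.4j, -1.2 + 0.9j, 2.5 - 0.7j]
# part (a): w = -i z^2 / 2 (constant C = 0) should have real part xy
err_a = max(check(lambda z: -0.5j * z * z, lambda x, y: x * y, z) for z in pts)
# part (f): w = i/(z - 1) should have real part y / ((1 - x)^2 + y^2)
err_f = max(check(lambda z: 1j / (z - 1),
                  lambda x, y: y / ((1 - x)**2 + y**2), z) for z in pts)
```

Both errors are at the level of floating-point roundoff, as expected for exact identities.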

17.1.4. By introducing $\Delta z$ and letting it approach zero, using techniques similar to those the text used to show that $z^2$ is analytic, prove that if $f(z)$ and $g(z)$ are analytic,
\[ \text{(a)}\ \frac{d}{dz}\bigl[f(z)g(z)\bigr] = f'(z)g(z) + f(z)g'(z), \qquad \text{(b)}\ \frac{d}{dz}\left[\frac{f(z)}{g(z)}\right] = \frac{f'g - fg'}{g^2}. \]

Solution: (a) Write
\[ f(z+\Delta z) = f(z) + f'(z)\Delta z + O([\Delta z]^2), \qquad g(z+\Delta z) = g(z) + g'(z)\Delta z + O([\Delta z]^2). \]
Then form
\[ f(z+\Delta z)g(z+\Delta z) = f(z)g(z) + [f'(z)g(z) + f(z)g'(z)]\Delta z + O([\Delta z]^2), \]
so
\[ \frac{f(z+\Delta z)g(z+\Delta z) - f(z)g(z)}{\Delta z} = f'(z)g(z) + f(z)g'(z) + O(\Delta z), \]
which in the limit $\Delta z \to 0$ becomes $f'(z)g(z) + f(z)g'(z)$.

(b) Start by writing
\[ \frac{1}{g(z+\Delta z)} = \frac{1}{g(z)}\,\frac{1}{1 + (g'/g)\Delta z + O([\Delta z]^2)} = \frac{1}{g(z)}\Bigl[1 - (g'/g)\Delta z + O([\Delta z]^2)\Bigr] = \frac{1}{g} - \frac{g'}{g^2}\,\Delta z + O([\Delta z]^2). \]
Insert this result into
\[ \frac{f(z+\Delta z)}{g(z+\Delta z)} = \Bigl[f(z) + f'(z)\Delta z + O([\Delta z]^2)\Bigr]\left[\frac{1}{g} - \frac{g'}{g^2}\,\Delta z + O([\Delta z]^2)\right] = \frac{f}{g} + \left[\frac{f'}{g} - \frac{fg'}{g^2}\right]\Delta z + O([\Delta z]^2), \]
and form
\[ \left[\frac{f(z+\Delta z)}{g(z+\Delta z)} - \frac{f(z)}{g(z)}\right]\frac{1}{\Delta z} = \frac{f'g - fg'}{g^2} + O(\Delta z), \]
which in the limit $\Delta z \to 0$ becomes $(f'g - fg')/g^2$.

17.1.5. Find the Cauchy-Riemann equations in polar coordinates.
Hint. Set $z = re^{i\theta}$ and work with $f(z) = u(r,\theta) + iv(r,\theta)$. The chain rule may be helpful.

Solution: Use the chain rule to express the Cartesian Cauchy-Riemann equations in terms of derivatives with respect to the polar coordinates, using the relationships
\[ \frac{\partial r}{\partial x} = \frac{x}{r}, \qquad \frac{\partial r}{\partial y} = \frac{y}{r}, \qquad \frac{\partial \theta}{\partial x} = -\frac{y}{r^2}, \qquad \frac{\partial \theta}{\partial y} = \frac{x}{r^2}. \]
We get
\[ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \;\longrightarrow\; \left(\frac{\partial u}{\partial r}\right)\frac{x}{r} - \left(\frac{\partial u}{\partial \theta}\right)\frac{y}{r^2} = \left(\frac{\partial v}{\partial r}\right)\frac{y}{r} + \left(\frac{\partial v}{\partial \theta}\right)\frac{x}{r^2}, \]
\[ \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} \;\longrightarrow\; \left(\frac{\partial u}{\partial r}\right)\frac{y}{r} + \left(\frac{\partial u}{\partial \theta}\right)\frac{x}{r^2} = -\left(\frac{\partial v}{\partial r}\right)\frac{x}{r} + \left(\frac{\partial v}{\partial \theta}\right)\frac{y}{r^2}. \]
Multiplying the first of these equations by $x$ and the second by $y$ and adding them produces the first of the equations given below. The other results when we multiply the first of the above equations by $y$ and the second by $x$ and combine appropriately:
\[ \frac{\partial u}{\partial r} = \frac{1}{r}\,\frac{\partial v}{\partial \theta}, \qquad \frac{\partial v}{\partial r} = -\frac{1}{r}\,\frac{\partial u}{\partial \theta}. \]
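The polar-coordinate Cauchy-Riemann equations can be tested numerically. In this sketch (the helper `polar_cr_residuals` is an illustrative name of mine), the residuals $u_r - v_\theta/r$ and $v_r + u_\theta/r$ are computed by central differences; they vanish for an analytic function and not for a non-analytic one:

```python
import cmath

def polar_cr_residuals(f, r, theta, h=1e-6):
    """Return (u_r - v_theta/r, v_r + u_theta/r) for f(z), z = r e^{i theta}."""
    def u(rr, tt): return f(rr * cmath.exp(1j * tt)).real
    def v(rr, tt): return f(rr * cmath.exp(1j * tt)).imag
    ur = (u(r + h, theta) - u(r - h, theta)) / (2 * h)
    vr = (v(r + h, theta) - v(r - h, theta)) / (2 * h)
    ut = (u(r, theta + h) - u(r, theta - h)) / (2 * h)
    vt = (v(r, theta + h) - v(r, theta - h)) / (2 * h)
    return ur - vt / r, vr + ut / r

res_analytic = polar_cr_residuals(lambda z: z**3, 1.7, 0.8)          # analytic
res_not = polar_cr_residuals(lambda z: abs(z)**2 + 0j, 1.7, 0.8)     # |z|^2, part (e): not analytic
```

For $|z|^2$ the first residual equals $2r \ne 0$, in agreement with the "No" verdict in Exercise 17.1.6(e).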

17.1.6. Using the Cauchy-Riemann equations in polar coordinates, determine whether the following functions are analytic.
(a) $|z|$  (b) $\ln z$  (c) $\sqrt{z}$  (d) $e^{|z|}$  (e) $|z|^2$  (f) $|z|^{1/2}e^{i\theta/2}$

Solution: The answer is "Yes" if analytic, "No" if not.

(a) $|z| = r$: $u = r$, $v = 0$. Then $\partial u/\partial r = 1$ but $\partial v/\partial \theta = 0$. No.

(b) $\ln z = \ln r + i\theta$: $u = \ln r$, $v = \theta$. Then $\partial u/\partial r = 1/r$, $\partial v/\partial \theta = 1$, so $\partial u/\partial r = (1/r)\,\partial v/\partial \theta$; also $\partial u/\partial \theta = \partial v/\partial r = 0$. Yes.

(c) $\sqrt{z} = \sqrt{r}\,e^{i\theta/2}$: $u = \sqrt{r}\cos(\theta/2)$, $v = \sqrt{r}\sin(\theta/2)$.
\[ \frac{\partial u}{\partial r} = \frac{\cos(\theta/2)}{2\sqrt{r}}, \quad \frac{\partial v}{\partial \theta} = \frac{\sqrt{r}\cos(\theta/2)}{2}, \ \text{so}\ \frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial \theta}. \]
Also $\dfrac{\partial u}{\partial \theta} = -\dfrac{\sqrt{r}\sin(\theta/2)}{2}$ and $\dfrac{\partial v}{\partial r} = \dfrac{\sin(\theta/2)}{2\sqrt{r}}$, so $\dfrac{\partial v}{\partial r} = -\dfrac{1}{r}\dfrac{\partial u}{\partial \theta}$. Yes.

(d) $e^{|z|} = e^r$: $u = e^r$, $v = 0$. Then $\partial u/\partial r = e^r$ but $\partial v/\partial \theta = 0$. No.

(e) $|z|^2 = r^2$: $u = r^2$, $v = 0$. Then $\partial u/\partial r = 2r$ but $\partial v/\partial \theta = 0$. No.

(f) $|z|^{1/2}e^{i\theta/2} = \sqrt{r}\,e^{i\theta/2}$. Same as part (c). Yes.

17.1.7. If $u(x,y)$ and $v(x,y)$ are the real and imaginary parts of the same analytic function of $z = x+iy$, show that in a plot using Cartesian coordinates, the lines of constant $u$ intersect the lines of constant $v$ at right angles.

Solution: With all derivatives evaluated at the same point of the complex plane, the slopes $dy/dx$ of a line of constant $u$ and of a line of constant $v$ are
\[ \left(\frac{\partial y}{\partial x}\right)_u = -\frac{(\partial u/\partial x)_y}{(\partial u/\partial y)_x}, \qquad \left(\frac{\partial y}{\partial x}\right)_v = -\frac{(\partial v/\partial x)_y}{(\partial v/\partial y)_x}. \]
Using the Cauchy-Riemann equations to replace the derivatives of $u$ with those of $v$ (namely $u_x = v_y$ and $u_y = -v_x$), the first slope becomes $v_y/v_x$, the negative reciprocal of the second. The two slopes therefore satisfy
\[ \left(\frac{\partial y}{\partial x}\right)_u \left(\frac{\partial y}{\partial x}\right)_v = -1, \]
which indicates that the curves are perpendicular at their intersections.

17.2

SINGULARITIES

Exercises 17.2.1.

Identify all the singularities of the following functions, including any that may be at infinity. For each pole or branch point, also specify its order.
(a) $z^2 + \tfrac12$  (b) $\cosh z$  (c) $(z-1)^{-1/2}$  (d) $ze^{1/z}$  (e) $\dfrac{\tan z}{z}$  (f) $\ln(1+z)$  (g) $\left(\dfrac{3}{z+z^{-1}}\right)^2$  (h) $\dfrac{1}{z^3-3z^2+2z}$  (i) $\dfrac{1}{z^4+2z^2+1}$

Solution:
(a) Pole of order 2 at infinity.
(b) Essential singularity at infinity (expand in terms of exponentials).
(c) Branch points of order 2 at $z=1$ and infinity.
(d) Essential singularity at $z=0$; pole of order 1 at infinity.
(e) Poles of order 1 at $z = (n+\frac12)\pi$, for all positive and negative integers $n$.
(f) Branch points of infinite order at $z=-1$ and infinity.
(g) Poles of order 2 at $z=i$ and $z=-i$.
(h) Poles of order 1 at $z=0$, $z=1$, and $z=2$.
(i) Poles of order 2 at $z=i$ and $z=-i$.

17.2.2. If the function
\[ f(z) = \frac{(z^2+1)^{1/2}}{z} \]
is made single-valued by making the branch cuts shown in Fig. 17.4, and we take a branch on which $f(1) = \sqrt{2}$, find the values of $f(-1)$, $f(2i)$, and (on each side of the branch cut) $f(-2i)$.

[Figure 17.4: Branch cuts for Exercise 17.2.2; the cuts run along the imaginary axis, upward from $i$ and downward from $-i$.]

Solution: Write $f(z) = (r_1 r_2)^{1/2}\, z^{-1}\, e^{i(\varphi_1+\varphi_2)/2}$, where $r_1, \varphi_1$ are the magnitude and angle of $z-i$ and $r_2, \varphi_2$ those of $z+i$.

When $z=1$ the angles have the values $\varphi_1 = -\pi/4$, $\varphi_2 = +\pi/4$. Note that we do not take $\varphi_1 = 7\pi/4$. With these definitions of the angles, we are on the branch with $f(1) = +\sqrt{2}$.

We now move to the point $z=-1$ by a path that does not cross the branch cuts; $\varphi_1$ then becomes $-3\pi/4$ and $\varphi_2$ becomes $+3\pi/4$, so
\[ f(-1) = \sqrt{2}\left(\frac{1}{-1}\right)e^{0} = -\sqrt{2}. \]
Continuing to $z=2i$, we have $\varphi_1 = -3\pi/2$ (not $\pi/2$) and $\varphi_2 = \pi/2$. Then we also have $r_1 = 1$, $r_2 = 3$, so
\[ f(2i) = \sqrt{3}\left(\frac{1}{2i}\right)e^{-i\pi/2} = -\frac{\sqrt{3}}{2}. \]
At $z = -2i+\varepsilon$, $\varphi_1 = \varphi_2 = -\pi/2$, so
\[ f(-2i+\varepsilon) = \sqrt{3}\left(\frac{1}{-2i}\right)e^{-i\pi/2} = \frac{\sqrt{3}}{2}. \]
But at $z = -2i-\varepsilon$, $\varphi_1 = -\pi/2$ while $\varphi_2 = 3\pi/2$, so
\[ f(-2i-\varepsilon) = \sqrt{3}\left(\frac{1}{-2i}\right)e^{+i\pi/2} = -\frac{\sqrt{3}}{2}. \]
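A quick consistency check (mine, not the manual's): whatever the branch, $f(z)^2$ must equal $(z^2+1)/z^2$, so each value found above can be squared against that target; the sign of $f$ itself is what distinguishes the two sheets across the cut.

```python
import math

def target(z):
    """Branch-independent square of f: (z^2 + 1)/z^2."""
    return (z * z + 1) / (z * z)

f1, fm1, f2i = math.sqrt(2), -math.sqrt(2), -math.sqrt(3) / 2
err1 = abs(f1**2 - target(1 + 0j))     # f(1) = sqrt(2)
err2 = abs(fm1**2 - target(-1 + 0j))   # f(-1) = -sqrt(2)
err3 = abs(f2i**2 - target(2j))        # f(2i) = -sqrt(3)/2
# on the two sides of the lower cut, +sqrt(3)/2 and -sqrt(3)/2 both square to 3/4
err4 = abs((math.sqrt(3) / 2)**2 - target(-2j))
```

All four errors vanish to machine precision, confirming that the tabulated values lie on (the two sheets of) the same two-valued function.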

17.2.3. (a) Show that the function
\[ f(z) = \frac{1}{[z^2(1-z)]^{1/3}} \]
is regular at $z=\infty$, and prove that $f(z)$ can be made single-valued by a branch cut that connects its singularities.
(b) Using the branch cut you found in part (a) and choosing a branch such that $f(-1)$ is real, find the values of $f(i)$, $f(-i)$, and $f(2)$.

Solution: (a) Write $z = re^{i\varphi}$ and $1-z = r_1 e^{i(\varphi_1+\pi)}$, so that $\varphi_1$ is the angle of $z-1$. Then
\[ f(z) = (r^2 r_1)^{-1/3}\, e^{-i(2\varphi+\varphi_1)/3}\, e^{-i\pi/3}. \]
In the limit of large $r$ we can set $\varphi_1 \approx \varphi$, and the exponential remains unchanged if both angles are increased by any multiple of $2\pi$ (then $2\varphi+\varphi_1$ increases by a multiple of $6\pi$). There is therefore no branch point at $z=\infty$ (and no other type of singularity there either). We need a branch cut to prevent the possibility of increasing either angle by $2\pi$ without a corresponding increase in the other angle; this can be accomplished by a cut connecting the two branch points at $z=0$ and $z=1$.

(b) Different branches of this function result from increases in $\varphi_1$ (but not $\varphi$) by multiples of $2\pi$. If we take $-1$ to be at $\varphi = \varphi_1 = \pi$, $f(-1)$ will not be real. We can make it real by adding $2\pi$ to $\varphi_1$ (equivalent to leaving $\varphi_1$ in the range $(0,2\pi)$ and including $e^{-2\pi i/3}$ as an additional factor in $f$). This branch of $f$ then has the form $f(z) = -(r^2 r_1)^{-1/3}\, e^{-i(2\varphi+\varphi_1)/3}$, and
\[ f(-1) = -2^{-1/3} e^{-i\pi} = 1/2^{1/3}. \]
We can now make computations:
For $z=i$: $\varphi = \pi/2$, $\varphi_1 = 3\pi/4$, so $f(i) = -2^{-1/6} e^{-7\pi i/12} = 2^{-1/6} e^{5\pi i/12}$.
For $z=-i$: $\varphi = 3\pi/2$, $\varphi_1 = 5\pi/4$, so $f(-i) = -2^{-1/6} e^{-17\pi i/12} = 2^{-1/6} e^{-5\pi i/12}$.
For $z=2$: $\varphi = \varphi_1 = 0$, so $f(2) = -4^{-1/3} e^{0} = -1/2^{2/3}$.

17.3

POWER SERIES EXPANSIONS

Exercises 17.3.1.

For each of the following functions,
- identify its singularity nearest to $z=0$,
- obtain (by methods such as were developed in Chapter 2) its power-series expansion about $z=0$, and
- determine the radius of its disk of convergence, verifying that this radius is equal to the distance from the origin to the nearest singularity.

(a) $\dfrac{1}{z-3i}$  (b) $\dfrac{1}{1+z}$  (c) $\dfrac{z}{z^2+16}$  (d) $\sin z$  (e) $\ln(1-z)$  (f) $e^{-iz}$  (g) $(1+z^2)^{1/3}$  (h) $\cosh(z-1)$  (i) $\tan^{-1} z$

Solution:
(a) Nearest singularity to zero is at $z=3i$.
\[ \frac{1}{z-3i} = \frac{i}{3} \sum_{n=0}^\infty \left(\frac{z}{3i}\right)^n. \]
This series converges for $|z|<3$.

(b) Nearest singularity to zero is at $z=-1$.
\[ \frac{1}{1+z} = \sum_{n=0}^\infty (-1)^n z^n. \]
This series converges for $|z|<1$.

(c) Nearest singularities to zero are at $z=\pm 4i$.
\[ \frac{z}{z^2+16} = \frac{1}{4}\sum_{n=0}^\infty (-1)^n \left(\frac{z}{4}\right)^{2n+1}. \]
This series converges for $|z|<4$.

(d) Nearest singularity to zero is at infinity.
\[ \sin z = \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{(2n+1)!}. \]
This series converges for all finite $z$.

(e) Nearest singularity to zero is at $z=1$.
\[ \ln(1-z) = -\sum_{n=1}^\infty \frac{z^n}{n}. \]
This series converges for $|z|<1$.

(f) Nearest singularity to zero is at infinity.
\[ e^{-iz} = \sum_{n=0}^\infty \frac{(-iz)^n}{n!}. \]
This series converges for all finite $z$.

(g) Nearest singularity to zero is a branch point at $z=i$ (with one equally distant at $z=-i$).
\[ (1+z^2)^{1/3} = \sum_{n=0}^\infty \binom{1/3}{n} z^{2n}. \]
This series converges for $|z|<1$.

(h) Nearest singularity to zero is at infinity.
\[ \cosh(z-1) = \sum_{n=0}^\infty \left[\frac{\cosh(-1)\,z^{2n}}{(2n)!} + \frac{\sinh(-1)\,z^{2n+1}}{(2n+1)!}\right]. \]
This series converges for all finite $z$.

(i) Nearest singularities to zero are branch points at $z=\pm i$; to check this, look at the formula for the arctangent in terms of logarithms.
\[ \tan^{-1} z = \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{2n+1}. \]
This series converges for $|z|<1$.
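Partial sums of the expansions above converge rapidly well inside each disk of convergence. A brief numerical check (the test point and term counts are my choices):

```python
z = 1.0 + 1.0j   # |z| ~ 1.41, inside the disks for parts (a) and (c)

# part (a): 1/(z - 3i) = (i/3) * sum_n (z/3i)^n, valid for |z| < 3
t = (1j / 3) * sum((z / 3j)**n for n in range(200))
err_a = abs(t - 1 / (z - 3j))

# part (c): z/(z^2 + 16) = (1/4) * sum_n (-1)^n (z/4)^(2n+1), valid for |z| < 4
s = sum((-1)**n * (z / 4)**(2 * n + 1) for n in range(60)) / 4
err_c = abs(s - z / (z * z + 16))
```

Both partial sums agree with the closed forms to roughly machine precision, since the term ratios ($|z|/3$ and $|z|^2/16$) are well below 1.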

17.3.2. Find Laurent expansions about $z=0$ for each of the following functions (valid for a region near $z=0$), and determine the range of $z$ for which each expansion converges.
(a) $\dfrac{z-1}{z^2}$  (b) $\dfrac{e^{z+1}}{z^3}$  (c) $\dfrac{1}{z(z-3)^2}$  (d) $(z+2)\sin\left(\dfrac{1}{z}\right)$  (e) $\dfrac{\cosh z}{z^5}$  (f) $\dfrac{1}{z(z+2)}$

Solution:
(a) $\dfrac{z-1}{z^2} = -\dfrac{1}{z^2} + \dfrac{1}{z}$. This is a finite expansion, so there is no convergence issue.

(b) $\dfrac{e^{z+1}}{z^3} = e \displaystyle\sum_{n=0}^\infty \frac{z^{n-3}}{n!}$. This expansion converges for all $|z|>0$.

(c) Expand $(z-3)^{-2}$ in a binomial series. This leads to
\[ \frac{1}{z(z-3)^2} = \sum_{n=0}^\infty \frac{(n+1)\,z^{n-1}}{3^{n+2}}. \]
This series converges for $0<|z|<3$.

(d) $(z+2)\sin\left(\dfrac{1}{z}\right) = \displaystyle\sum_{n=0}^\infty a_n z^{-n}$, with
\[ a_n = \begin{cases} \dfrac{(-1)^{n/2}}{(n+1)!}, & n \text{ even}, \\[2ex] \dfrac{2(-1)^{(n-1)/2}}{n!}, & n \text{ odd}. \end{cases} \]
This series converges for all $|z|>0$.

(e) $\dfrac{\cosh z}{z^5} = \displaystyle\sum_{n=0}^\infty \frac{z^{2n-5}}{(2n)!}$. This expansion converges for all $|z|>0$.

(f) Make a partial fraction decomposition and then expand $1/(z+2)$. We get
\[ \frac{1}{z(z+2)} = \frac{1}{2z} - \frac{1}{4}\sum_{n=0}^\infty (-1)^n \left(\frac{z}{2}\right)^n. \]
This expansion converges for $0<|z|<2$. This convergence limit occurs because there is a singularity at $z=-2$.

17.3.3. For each of the expansions in Exercise 17.3.2 that does not converge for $z\to\infty$, find another Laurent expansion about $z=0$ that converges for larger $|z|$ than the expansion found in that Exercise.

Solution: There are two series in Exercise 17.3.2 with finite convergence limits.

(c) We can make a Laurent expansion about $z=0$ that is valid at radii outside the singularity at $z=3$. Write
\[ \frac{1}{z(z-3)^2} = \frac{1}{z^3(1-3/z)^2} \]
and expand $(1-3/z)^{-2}$ in powers of $1/z$. This leads to
\[ \frac{1}{z(z-3)^2} = \sum_{n=0}^\infty \frac{(n+1)\,3^n}{z^{n+3}}, \]
which converges for $|z|>3$.

(f) We can make a Laurent expansion about $z=0$ that is valid at radii outside the singularity at $z=-2$. Make a partial fraction decomposition and expand $1/(z+2)$ in powers of $1/z$:
\[ \frac{1}{z(z+2)} = \frac{1}{2z}\left(1 - \frac{1}{1+2z^{-1}}\right). \]
The result is
\[ \frac{1}{z(z+2)} = \frac{1}{4}\sum_{n=0}^\infty (-1)^n \left(\frac{2}{z}\right)^{n+2}. \]
This expansion converges for $|z|>2$.

17.4

CONTOUR INTEGRALS

Exercises 17.4.1.

Compute by explicit evaluation of the line integrals, for a square path connecting (in the order given) the points $(x=0, y=0)$, $(1,0)$, $(1,1)$, $(0,1)$, $(0,0)$:
\[ \text{(a)}\ \oint (2iy-1)\,dx, \qquad \text{(b)}\ \oint (x-iy)(dx+i\,dy), \qquad \text{(c)}\ \oint (z^2-1)\,dz. \]

Solution: (a) The integrals over the four segments are, in order,
\[ \int_0^1 (-1)\,dx = -1, \qquad 0, \qquad \int_1^0 (2i-1)\,dx = 1-2i, \qquad 0. \]
These add to $-2i$.

(b) The integrals over the four segments are, in order,
\[ \int_0^1 x\,dx = \frac12, \quad \int_0^1 (1-iy)\,i\,dy = i+\frac12, \quad \int_1^0 (x-i)\,dx = -\frac12+i, \quad \int_1^0 (-iy)\,i\,dy = -\frac12. \]
These add to $2i$.

(c) The integrals over the four segments are, in order,
\[ \int_0^1 (x^2-1)\,dx = -\frac23, \qquad \int_0^1 \bigl[(1+iy)^2-1\bigr]\,i\,dy = \int_0^1 (-y^2+2iy)\,i\,dy = -1-\frac{i}{3}, \]
\[ \int_1^0 \bigl[(i+x)^2-1\bigr]\,dx = \int_1^0 (x^2+2ix-2)\,dx = -i+\frac53, \qquad \int_1^0 (-y^2-1)\,i\,dy = \frac{4i}{3}. \]
These add to zero.

17.4.2. Comment on the relation between your answers to Exercise 17.4.1 and the analyticity of the integrands involved.

Solution: Part (a) is irrelevant to analyticity. The nonzero integral of part (b) confirms that $x-iy$ is not analytic within the unit square. The zero integral of part (c) is consistent with the fact that $z^2-1$ is analytic within the unit square.
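The two $dz$ integrals of Exercise 17.4.1 can be confirmed by brute-force numerical integration; in this sketch the midpoint discretization and the helper name `square_integral` are mine, not the manual's:

```python
def square_integral(g, n=4000):
    """Midpoint-rule integral of g(z) dz around the unit square, counterclockwise."""
    corners = [0, 1, 1 + 1j, 1j, 0]
    total = 0.0
    for a, b in zip(corners, corners[1:]):
        dz = (b - a) / n
        total += sum(g(a + (k + 0.5) * dz) for k in range(n)) * dz
    return total

i_b = square_integral(lambda z: z.real - 1j * z.imag)   # part (b): x - iy, expect 2i
i_c = square_integral(lambda z: z * z - 1)              # part (c): expect 0
```

The nonzero result for the non-analytic integrand and the vanishing result for the analytic one match the comment above.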

17.4.3. Compute by explicit evaluation the following contour integrals, for a counterclockwise traverse of the unit circle. Work in polar coordinates, for which $z = re^{i\theta}$:
(a) $x^3 - 3xy^2$,  (b) $3x^2y - y^3$,  (c) $z^3$.
(d) Find the real and imaginary parts of $z^3$ and comment on the relevance of those quantities to your answers for parts (a) through (c).

Solution: In polar coordinates, with $z = e^{i\theta}$, $dz = ie^{i\theta}\,d\theta = i(\cos\theta + i\sin\theta)\,d\theta$, $x = \cos\theta$, and $y = \sin\theta$,

(a)
\[ \oint (x^3-3xy^2)\,dz = i\int_0^{2\pi} (\cos^3\theta - 3\cos\theta\sin^2\theta)(\cos\theta + i\sin\theta)\,d\theta \]
\[ = i\int_0^{2\pi} (\cos^4\theta - 3\cos^2\theta\sin^2\theta)\,d\theta - \int_0^{2\pi} (\cos^3\theta\sin\theta - 3\cos\theta\sin^3\theta)\,d\theta. \]
Inserting the values of these integrals, we get
\[ \oint (x^3-3xy^2)\,dz = i\left(\frac{3\pi}{4} - 3\,\frac{\pi}{4}\right) - (0-0) = 0. \]

(b)
\[ \oint (3x^2y-y^3)\,dz = i\int_0^{2\pi} (3\cos^3\theta\sin\theta - \sin^3\theta\cos\theta)\,d\theta - \int_0^{2\pi} (3\cos^2\theta\sin^2\theta - \sin^4\theta)\,d\theta \]
\[ = i(0+0) - \left(3\,\frac{\pi}{4} - \frac{3\pi}{4}\right) = 0. \]

(c)
\[ \oint z^3\,dz = i\int_0^{2\pi} e^{3i\theta}\,e^{i\theta}\,d\theta = i\int_0^{2\pi} e^{4i\theta}\,d\theta = 0. \]

(d) Noting that $z^3 = x^3 - 3xy^2 + i(3x^2y - y^3)$, we see that the answers to parts (a) and (b) correspond to the separate vanishing of the integrals over a closed path of the real and imaginary parts of the analytic function $z^3$. The answer to part (c) is a direct confirmation that $z^3$ is analytic within the unit circle.

17.4.4. The integral
\[ \oint_C \frac{dz}{z-2} \]
is zero when the contour $C$ is the square defined in Exercise 17.4.1. Confirm that this is the case by explicit evaluation of the line integral, and explain why this result is to be expected.
Note. Assume all the logarithms you may encounter are to be evaluated on the same branch.

Solution: Write the integral in the form
\[ \oint \frac{dx + i\,dy}{x+iy-2}. \]
Break it into four straight-line segments, written in the order of a counterclockwise path starting and ending at the origin:
\[ \int_0^1 \frac{dx}{x-2} + \int_0^1 \frac{i\,dy}{iy-1} + \int_1^0 \frac{dx}{x+i-2} + \int_1^0 \frac{i\,dy}{iy-2}. \]
Evaluating each of the above (all logarithms on the same branch):
\[ [\ln(-1)-\ln(-2)] + [\ln(-1+i)-\ln(-1)] + [\ln(-2+i)-\ln(-1+i)] + [\ln(-2)-\ln(-2+i)]. \]
These add to zero. That result is expected because the integrand is an analytic function in the entire area enclosed by the contour.

17.4.5.

For C the unit circle, evaluate (in any legitimate way) the following contour integrals: I I I 3z 2 dz 4z 2 − 1 (a) (b) dz (c) (z 2 − 3z + 2) dz 2 C z − 3i C (2z − 1) C I (d) C

(z + 1) sin z dz z2

I (e) C

z 2 dz (6z − 1)(z + 6i)

I (f) C

sinh z dz z(2z + i)

Solution: (a) This integral is zero; the only singularity lies outside the unit circle. (b) If we diﬀerentiate Cauchy’s integral formula, Eq. (17.13), we get I f (z) 1 dz = f ′ (z0 ) . 2πi c (z − z0 )2

328

CHAPTER 17. COMPLEX VARIABLE THEORY Applying this formula, we have here z0 = 1/2, f (z) = z 2 − 1/4, so f ′ (z0 ) = 2z0 = 1. Our integral therefore has the value 2πi. (c) This integral is zero. The integrand is analytic at all ﬁnite z. (d) This integral can be evaluated by Cauchy’s integral theorem because the quantity (z + 1) sin z/z is analytic; its value at z = 0 is 1, so the integral has the value 2πi. (e) Do a partial fraction decomposition of the denominator. The contour integral then becomes ] I [ z2 1 z2 − dz . 1 + 36i z − 1/6 z + 6i The second of these integrals vanishes because the integrand is only singular outside the contour. The ﬁrst integral can be evaluated using Cauchy’s integral theorem. The result is ( )2 I z 2 dz 2πi 1 = . (6z − 1)(z + 6i) 1 + 36i 6 C (f) Make a partial fraction expansion of the denominator. The integral becomes ] I I [ sinh z −i sinh z 2i sinh z dz = + dz . z 2z + i C z(2z + i) The ﬁrst term does not contribute to the integral because it is analytic; the second term can be evaluated using Cauchy’s integral formula. The result is I i sinh z dz = −2π sinh(−i/2) . z + i/2 This expression simpliﬁes to 2iπ sin(1/2).
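The unit-circle integrals above can be checked by discretizing the contour; the trapezoid rule in $\theta$ is spectrally accurate for these smooth integrands. A sketch (helper name mine):

```python
import cmath, math

def circle_integral(g, n=4096):
    """Trapezoid-rule contour integral of g(z) dz over the unit circle."""
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        z = cmath.exp(1j * k * h)
        total += g(z) * 1j * z * h       # dz = i z d(theta)
    return total

i_a = circle_integral(lambda z: 3 * z * z / (z - 3j))                # expect 0
i_b = circle_integral(lambda z: (4 * z * z - 1) / (2 * z - 1)**2)    # expect 2 pi i
i_f = circle_integral(lambda z: cmath.sinh(z) / (z * (2 * z + 1j)))  # expect 2 pi i sin(1/2)
```

All three agree with the residue-based answers to high precision.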

17.5

THE RESIDUE THEOREM

Exercises 17.5.1.

Using any convenient method of hand computation, find the residues of the following functions at all their finite singularities. Then check your work using symbolic computing.

(a) $\dfrac{1}{z(2z-3)}$  (b) $\dfrac{1}{(z-1)(z-5)}$  (c) $\dfrac{2z+1}{z^2-z-2}$  (d) $\dfrac{z-3}{z^2+1}$  (e) $\left(\dfrac{1+z}{1-z}\right)^2$  (f) $\dfrac{(z-1)^2}{(z+2)^3}$

(g) $\dfrac{z^2}{(z^2+1)^2}$  (h) $\dfrac{z^2}{(1-3z)(z-3)}$  (i) $\dfrac{\sin z}{z^3}$  (j) $\dfrac{1}{e^{2z}+1}$  (k) $\dfrac{\sin z}{\pi-2z}$  (l) $\dfrac{z^6+1}{z^3(z-2)}$

(m) $\dfrac{\cosh z - 1}{z^7}$  (n) $\dfrac{e^{-z}}{3\cosh z - 5}$  (o) $\dfrac{e^{2z}-2z-1}{z^3}$  (p) $\dfrac{e^z}{z^3+1}$  (q) $\dfrac{\cos z + 1}{(\pi-z)^3}$  (r) $\dfrac{e^{iz}}{4z^2+1}$

Solution: To check the residue of $f(z)$ at $z=z_0$ by symbolic computing, use one of
> residue(f(z), z = z0);
Residue[ f(z), {z, z0}]
The following indicate methods for hand computation:

(a) Write the denominator as $2z(z-\frac32)$. The poles are first order, so we can delete the singular factor and substitute $z_0$ in the remainder. Poles are at $z=0$, residue $-1/3$, and at $z=3/2$, residue $1/3$.

(b) Can do as in part (a). There are first-order poles, at $z=1$, residue $-1/4$, and at $z=5$, residue $1/4$.

(c) Factor the denominator into $(z+1)(z-2)$. Can do as in part (a). There are first-order poles, at $z=-1$, residue $1/3$, and at $z=2$, residue $5/3$.

(d) Factor the denominator into $(z-i)(z+i)$. Can do as in part (a). There are first-order poles, at $z=i$, residue $(1+3i)/2$, and at $z=-i$, residue $(1-3i)/2$.

(e) There is a second-order pole at $z=1$. Compute $\frac{d}{dz}(1+z)^2 = 2(1+z)$ and evaluate at $z=1$. The residue is 4.

(f) There is a pole of order 3 at $z=-2$. Compute $\frac{1}{2!}\frac{d^2}{dz^2}(z-1)^2 = 1$. The residue is 1.

(g) In factored form, the denominator is $(z-i)^2(z+i)^2$, so there are poles of order 2 at $z=i$ and $z=-i$. For $z=i$ compute
\[ \frac{d}{dz}\left[\frac{z^2}{(z+i)^2}\right]_{z=i} = \left[\frac{2z}{(z+i)^2} - \frac{2z^2}{(z+i)^3}\right]_{z=i} = -\frac{i}{4}. \]
A similar process for the pole at $z=-i$ shows its residue to be $i/4$.

(h) Write as $-z^2/[3(z-\frac13)(z-3)]$ and proceed as in part (a). There are first-order poles at $z=1/3$, residue $1/72$, and at $z=3$, residue $-9/8$.

(i) The function $\sin z$ has no singularities at finite $z$, so the only singularity is at $z=0$. Expand $\sin z$ as a power series in $z$. The expansion contains only odd powers, so $\sin z/z^3$ contains only even powers and its residue must be zero.

(j) There are singularities where $2z = (2n+1)\pi i$, for $n = 0, \pm1, \pm2, \ldots$. Define these as $z_n = (n+\frac12)\pi i$. For each singularity we compute (using l'Hôpital's rule)
\[ \lim_{z\to z_n} \frac{z-z_n}{e^{2z}+1} = \frac{1}{2e^{2z_n}} = -\frac12. \]
Thus there are first-order poles at all $z_n$, each with residue $-1/2$.

(k) Write as $-(\sin z)/[2(z-\pi/2)]$ and proceed as in part (a). There is a first-order pole at $z=\pi/2$, with residue $-\sin(\pi/2)/2 = -1/2$.

(l) There is a pole of order 3 at $z=0$ and a first-order pole at $z=2$. For the first-order pole, proceed as in part (a); the residue is $65/8$. For the pole of order 3,
\[ \frac{1}{2!}\frac{d^2}{dz^2}\left(\frac{z^6+1}{z-2}\right)_{z=0} = -\frac18. \]

330

We have found singularities at $z=2$, residue $65/8$, and at $z=0$, residue $-1/8$.

(m) The only singularity is a pole of order 7 at $z=0$. A good approach here is to look at the power-series expansion of $\cosh z$ and to note that the coefficient of $z^6$ in $\cosh z - 1$ is $1/6!$. Thus the residue at $z=0$ is $1/6! = 1/720$.

(n) Rearrange to $\dfrac{2}{3(e^z-3)(e^z-\frac13)}$. There are poles arising from $e^z-3$ at $z_n = \ln 3 + 2n\pi i$, and additional poles arising from $e^z - 1/3$ at $\zeta_n = -\ln 3 + 2n\pi i$, in both cases for all integer $n$. For the set of poles at $z=z_n$, using l'Hôpital's rule to evaluate the limit (and afterwards replacing $e^{z_n}$ by 3),
\[ \text{residue} = \lim_{z\to z_n} \frac{2(z-z_n)}{3(e^z-3)(e^z-\frac13)} = \frac{2}{3\bigl[e^{z_n}(e^{z_n}-\frac13) + (e^{z_n}-3)e^{z_n}\bigr]} = \frac{2}{3\bigl[3(3-\frac13)+0\bigr]} = \frac{1}{12}. \]
For the set of poles at $z=\zeta_n$,
\[ \text{residue} = \lim_{z\to\zeta_n} \frac{2(z-\zeta_n)}{3(e^z-3)(e^z-\frac13)} = \frac{2}{3\bigl[e^{\zeta_n}(e^{\zeta_n}-\frac13) + (e^{\zeta_n}-3)e^{\zeta_n}\bigr]} = \frac{2}{3(\frac13-3)(\frac13)} = -\frac34. \]

(o) There is a pole of order 3 at $z=0$. The residue is most easily found by expanding $e^{2z}$; the coefficient of $z^2$ in $e^{2z}-2z-1$ is 2, so the singularity at $z=0$ has residue 2.

(p) Factor the denominator into $(z+1)(z-z_1)(z-z_2)$, with $z_1 = (1+i\sqrt3)/2 = e^{i\pi/3}$ and $z_2 = (1-i\sqrt3)/2 = e^{-i\pi/3}$; we identify three first-order poles. Proceeding as in part (a), the pole at $z_0 = -1$ has residue $e^{z_0}/[(z_0-z_1)(z_0-z_2)] = 1/3e$. The pole at $z=z_1$ has residue
\[ \frac{e^{z_1}}{(z_1-z_0)(z_1-z_2)} = \frac{2}{3}\,\frac{e^{(1+i\sqrt3)/2}}{i\sqrt3-1}. \]
The pole at $z=z_2$ has residue $-\dfrac{2}{3}\,\dfrac{e^{(1-i\sqrt3)/2}}{i\sqrt3+1}$.

(q) There is a pole of order 3 at $z=\pi$. The residue is given by
\[ \frac{1}{2!}\,\frac{d^2}{dz^2}\bigl[-(\cos z + 1)\bigr]_{z=\pi} = \frac12\cos\pi = -\frac12. \]
The singularity at $z=\pi$ has residue $-1/2$.

(r) Write the denominator as $4(z-i/2)(z+i/2)$. There are two first-order poles; treat as in part (a). For the pole at $z=i/2$, the residue is $e^{-1/2}/4i = -(i/4)e^{-1/2}$. For the pole at $z=-i/2$, the residue is $e^{1/2}/(-4i) = (i/4)e^{1/2}$.
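Several of these residues are easy to confirm numerically from the defining contour integral, $\operatorname{Res}_{z_0} f = \frac{1}{2\pi i}\oint f\,dz$ on a small circle about $z_0$. A sketch (the helper `residue` is my name; the radius and point count are arbitrary choices):

```python
import cmath, math

def residue(f, z0, r=0.1, n=4096):
    """Numerical residue: (1/2 pi i) * contour integral of f on a small circle around z0."""
    h = 2 * math.pi / n
    s = 0.0
    for k in range(n):
        w = r * cmath.exp(1j * k * h)
        s += f(z0 + w) * 1j * w * h      # dz = i w d(theta)
    return s / (2j * math.pi)

r_a = residue(lambda z: 1 / (z * (2 * z - 3)), 0)                         # expect -1/3
r_i = residue(lambda z: cmath.sin(z) / z**3, 0)                           # expect 0
r_m = residue(lambda z: (cmath.cosh(z) - 1) / z**7, 0)                    # expect 1/720
r_q = residue(lambda z: (cmath.cos(z) + 1) / (math.pi - z)**3, math.pi)   # expect -1/2
```

The trapezoid rule extracts the $c_{-1}$ Laurent coefficient essentially exactly here, so all four values match the hand computations.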

17.5. THE RESIDUE THEOREM 17.5.2.

331

Use the residue theorem to evaluate the contour integral of each of the functions in Exercise 17.5.1, with the contour a circle of radius 5/2 with center at z = 0.

Solution: Each contour integral has the value 2πi× (sum of the residues at the singularities enclosed by the contour). In particular, calling the contour integral I, (a) Both singularities are within the contour; the residues at these points add to zero, so I = 0. (b) The pole at z = 1 (residue −1/4) is within the contour; that at z = 5 is not. Therefore I = 2πi(−1/4) = −πi/2. (c) Both poles are within the contour. Therefore I = 2πi(1/3 + 5/3) = 4πi. (d) Both poles are within the contour. Therefore [ ] 1 + 3i 1 − 3i I = 2πi + = 2πi . 2 2 (e) The pole is within the contour. I = 2πi(4) = 8πi. (f) The pole is within the contour. I = 2πi. (g) Both poles are within the contour. The residues at these poles add to zero, so I = 0. (h) The pole at z = 1/3 (residue 1/72) is within the contour; that at z = 3 is not. Therefore I = 2πi(1/72) = πi/36. (i) The residue at the only singularity is zero. Therefore I = 0. (j) zn = (n + 12 )πi has a magnitude of about 1.57 for n = 0 and also for n = −1. However, for all other n it has a magnitude greater than π, so only the poles at z0 and z−1 are within the contour. Each has residue −1/2; the residues sum to −1, so I = −2πi. (k) The pole at π/2 is within the contour. I = 2πi(−1/2) = −πi. (l) Both poles are within the contour. Adding their residues (65/8 − 1/8 = 8), we get I = 2πi(8) = 16πi. (m) The pole is within the contour. I = 2πi(1/720) = πi/360. (n) Note that | ± ln 3| < 2.5 while |2πi| > 2.5. Therefore the only poles within the contour are at z0 = ln 3 and at ζ0 = − ln 3. Using the residues from these poles, ] [ 3 4πi 1 − =− . I = 2πi 12 4 3 (o) The pole is within the contour. I = 2πi(2) = 4πi. (p) All the singularities are on the unit circle and are therefore within the contour. Adding the three residues, multiplying by 2πi, and simplifying, I=

] √ √ √ 2πi [ 1 − e3/2 cos( 3/2) + 3 e3/2 sin( 3/2) . 3e

(q) The singularity is outside the contour, so I = 0.

332

CHAPTER 17. COMPLEX VARIABLE THEORY (r) Both poles are within the contour. Their residues sum to i e1/2 − e−1/2 i sinh(1/2) = . 2 2 2 ( ) i sinh(1/2 Therefore, I = 2πi = −π sinh(1/2) . 2

17.5.3.

A multiple-valued function may have singularities that occur only on specific branches. Functions containing $z^{1/2}$ have two branches, which we can distinguish by their values of $z^{1/2}$ at $z=1$. Identify the value of $z$ and the branch(es) at which the following functions have poles, and for each pole compute its residue.
\[ \text{(a)}\ \frac{z^2}{2z^{1/2}+1}, \qquad \text{(b)}\ \frac{z^{3/2}}{z-z^{1/2}-2}, \qquad \text{(c)}\ \frac{z^{1/2}}{3z-1}. \]

Solution:
(a) The singularity occurs when $z^{1/2} = -1/2$, which happens only on the branch for which $z^{1/2} = -1$ at $z=1$. The singularity is then a first-order pole at $z=1/4$; its residue, $z_0^2/(z_0^{-1/2})$ evaluated with $z_0^{1/2} = -1/2$, is $(1/16)/(-2) = -1/32$.

(b) There is a pole on each branch of $z^{1/2}$; on the branch for which $z^{1/2}=1$ it is at $z=4$; on the other branch it is at $z=1$. Letting $z_0$ be the location of the pole, the residue is the limit, taken using l'Hôpital's rule,
\[ \lim_{z\to z_0} \frac{(z-z_0)\,z^{3/2}}{z - z^{1/2} - 2} = \frac{z_0^{3/2}}{1 - \tfrac12 z_0^{-1/2}}. \]
For $z_0=4$ (and $z^{1/2}=2$) the residue is $32/3$; for $z_0=1$ (and $z^{1/2}=-1$) it is $-2/3$.

(c) The singularity occurs at $z=1/3$, irrespective of the branch of $z^{1/2}$, but the value of the residue depends upon the branch. The residue has the value $z^{1/2}/3$ with $z=1/3$. On the branch for which $z^{1/2}=1$ at $z=1$, the residue is $+3^{-3/2}$; on the other branch the residue is $-3^{-3/2}$.

17.5.4.

The only finite-$z$ singularities of the function $\Gamma(z)$ are at zero and the negative integers. Find a general formula for the residues of $\Gamma(z)$ at $z=-n$, $n = 0, 1, 2, \ldots$.
Hint. Relate $\Gamma(-n+z)$ to $\Gamma(1+z)$.

Solution: The residue of $\Gamma(z)$ at $z=-n$ is given by the limit
\[ \lim_{z\to -n} (z+n)\,\Gamma(z). \]
We can use the recurrence formula for the gamma function to evaluate this limit; set
\[ \Gamma(z) = \frac{\Gamma(z+n+1)}{z(z+1)\cdots(z+n)}, \]
leading, after canceling $z+n$, to
\[ \lim_{z\to -n} \frac{\Gamma(z+n+1)}{z(z+1)\cdots(z+n-1)}. \]
There are no longer any singular factors; the limit (and thus the residue) is
\[ \frac{\Gamma(1)}{(-n)(-n+1)\cdots(-1)} = \frac{(-1)^n}{n!}. \]
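The formula $(-1)^n/n!$ can be checked with a real-axis limit; the sketch below (my construction, using the standard-library gamma function) evaluates $(z+n)\Gamma(z)$ at $z = -n+\varepsilon$ for small $\varepsilon$:

```python
import math

def gamma_residue(n, eps=1e-7):
    """Approximate the residue of Gamma at z = -n via (z + n) Gamma(z), z = -n + eps."""
    return eps * math.gamma(-n + eps)

errs = [abs(gamma_residue(n) - (-1)**n / math.factorial(n)) for n in range(5)]
```

The discrepancy is $O(\varepsilon)$, so the computed values agree with $(-1)^n/n!$ to roughly seven digits.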

17.6
EVALUATION OF DEFINITE INTEGRALS

Exercises 17.6.1.

Evaluate the following definite integrals:
\[ \text{(a)}\ \int_0^{2\pi} \frac{d\theta}{5+4\cos\theta} \qquad \text{(b)}\ \int_0^{2\pi} \frac{d\theta}{13-5\sin\theta} \qquad \text{(c)}\ \int_0^{2\pi} \frac{d\theta}{3+2\cos\theta+\sin\theta} \]
\[ \text{(d)}\ \int_0^{2\pi} \frac{d\theta}{(5-3\sin\theta)^2} \qquad \text{(e)}\ \int_0^{2\pi} \frac{\cos 3\theta\,d\theta}{5-4\cos\theta} \qquad \text{(f)}\ \int_0^{2\pi} \frac{d\theta}{(2-\cos\theta)^2} \]

Solution: Set $z = e^{i\theta}$, so $d\theta = dz/iz$ and the integral is over the unit circle in $z$. Also, $\cos\theta = (z+z^{-1})/2$ and $\sin\theta = (z-z^{-1})/2i$. Then use the residue theorem to evaluate the integral.

(a) The above change of variable yields
\[ I = \oint \frac{1}{5+2(z+z^{-1})}\,\frac{dz}{iz} = \frac{1}{2i}\oint \frac{dz}{z^2+\frac52 z+1} = \frac{1}{2i}\oint \frac{dz}{(z+\frac12)(z+2)}. \]
It is now apparent that the integrand has first-order poles at $z=-1/2$ and at $z=-2$. Only the pole at $z=-1/2$ is within the contour:
\[ (\text{residue of integrand at } z=-\tfrac12) = \frac{1}{-\frac12+2} = \frac23. \]
Therefore $I = (2\pi i)\left(\dfrac{1}{2i}\right)\dfrac23 = \dfrac{2\pi}{3}$.

(b) The above change of variable yields
\[ I = \oint \frac{1}{13-5(z-z^{-1})/2i}\,\frac{dz}{iz} = -\frac25\oint \frac{dz}{z^2 - \frac{26}{5}iz - 1} = -\frac25\oint \frac{dz}{(z-5i)(z-i/5)}. \]
It is now apparent that the integrand has first-order poles at $z=5i$ and at $z=i/5$. Only the pole at $z=i/5$ is within the contour:
\[ (\text{residue of integrand at } z=i/5) = \frac{1}{\frac{i}{5}-5i} = \frac{5i}{24}. \]
Therefore $I = (2\pi i)\left(-\dfrac25\right)\dfrac{5i}{24} = \dfrac{\pi}{6}$.

(c) The above change of variable yields
\[ I = \oint \frac{1}{3+z+z^{-1}+(z-z^{-1})/2i}\,\frac{dz}{iz} = \frac{2}{2i+1}\oint \frac{dz}{z^2 + \dfrac{6i}{2i+1}\,z + \dfrac{2i-1}{2i+1}} = \frac{2}{2i+1}\oint \frac{dz}{\left(z+\dfrac{5i}{2i+1}\right)\left(z+\dfrac{i}{2i+1}\right)}. \]
Only the pole at $z = -i/(2i+1)$ is within the contour. The residue at this first-order pole is $(2i+1)/4i$, and therefore
\[ I = (2\pi i)\left(\frac{2}{2i+1}\right)\left(\frac{2i+1}{4i}\right) = \pi. \]

(d) The above change of variable yields
\[ I = \oint \frac{1}{[5-3(z-z^{-1})/2i]^2}\,\frac{dz}{iz} = \frac1i\left(\frac{2i}{3}\right)^2\oint \frac{z\,dz}{\left(z^2-\frac{10i}{3}z-1\right)^2} = -\frac{4}{9i}\oint \frac{z\,dz}{(z-3i)^2(z-i/3)^2}. \]
Only the second-order pole at $z=i/3$ is within the contour. The residue at this pole is computed as
\[ \text{residue} = \frac{d}{dz}\left[\frac{z}{(z-3i)^2}\right]_{z=i/3} = \left[\frac{1}{(z-3i)^2} - \frac{2z}{(z-3i)^3}\right]_{z=i/3} = -\frac{45}{256}. \]
Therefore $I = (2\pi i)\left(-\dfrac{4}{9i}\right)\left(-\dfrac{45}{256}\right) = \dfrac{5\pi}{32}$.

(e) Here also use the relation $\cos 3\theta = (z^3+z^{-3})/2$:
\[ I = \oint \frac{(z^3+z^{-3})/2}{5-2(z+z^{-1})}\,\frac{dz}{iz} = -\frac{1}{4i}\oint \frac{(z^6+1)\,dz}{z^3\left(z^2-\frac52 z+1\right)} = -\frac{1}{4i}\oint \frac{(z^6+1)\,dz}{z^3(z-\frac12)(z-2)}. \]
The third-order pole at $z=0$ and the first-order pole at $z=1/2$ are within the contour.
\[ \text{residue at } z=0: \ \frac{1}{2!}\frac{d^2}{dz^2}\left[\frac{z^6+1}{(z-\frac12)(z-2)}\right]_{z=0} = \frac{21}{4}, \qquad \text{residue at } z=\frac12: \ \left[\frac{z^6+1}{z^3(z-2)}\right]_{z=1/2} = -\frac{65}{12}. \]
Therefore $I = (2\pi i)\left(-\dfrac{1}{4i}\right)\left(\dfrac{21}{4}-\dfrac{65}{12}\right) = \dfrac{\pi}{12}$.

(f) The above change of variable yields
\[ I = \oint \frac{1}{[2-(z+z^{-1})/2]^2}\,\frac{dz}{iz} = \frac4i\oint \frac{z\,dz}{(z^2-4z+1)^2} = \frac4i\oint \frac{z\,dz}{(z-2-\sqrt3)^2(z-2+\sqrt3)^2}. \]
There are two second-order poles; only that at $z_0 = 2-\sqrt3$ is within the contour. We therefore calculate
\[ \text{residue} = \frac{d}{dz}\left[\frac{z}{(z-2-\sqrt3)^2}\right]_{z=z_0} = \frac{1}{6\sqrt3}. \]
Therefore $I = (2\pi i)\left(\dfrac4i\right)\left(\dfrac{1}{6\sqrt3}\right) = \dfrac{4\pi}{3\sqrt3}$.

17.6.2. Consider the integral
\[ I = \int_0^{2\pi} \frac{d\theta}{1+a\cos\theta}, \]
with $a$ a real constant restricted to the range $0<a<1$.
(a) Show that when converted to a contour integral in $z=e^{i\theta}$, the integrand has two poles. Prove that one pole lies within the unit circle, while the other is outside.
(b) Show that the integral has the value $I = \dfrac{2\pi}{\sqrt{1-a^2}}$.

Solution:
\[ I = \oint \frac{1}{1+(a/2)(z+z^{-1})}\,\frac{dz}{iz} = \frac{2}{ia}\oint \frac{dz}{(z-z_1)(z-z_2)}, \]
where $z_1 = (-1-\sqrt{1-a^2})/a$ and $z_2 = (-1+\sqrt{1-a^2})/a$.

(a) It is clear that if $0<a<1$, $z_1$ has a magnitude that is larger than $1/a$ and that therefore $|z_1|>1$. We next note that $z_2 = -1$ when $a=1$ and has the binomial expansion
\[ z_2 = \frac1a\left[-1 + 1 + \frac{\frac12(-a^2)}{1} + \frac{\frac12(-\frac12)(-a^2)^2}{2!} + \frac{\frac12(-\frac12)(-\frac32)(-a^2)^3}{3!} + \cdots\right]. \]
From this expansion (which converges for $0<a<1$), we see that $\lim_{a\to0} z_2 = 0$ and that, after canceling the initial 1 and $-1$, all the surviving terms of the expansion are negative and increase in magnitude as $a$ increases. Therefore $z_2$ decreases monotonically from zero to $-1$ as $a$ increases from zero to 1. This behavior indicates that $z_2$ will be inside the unit circle for all $a$ in the range $0<a<1$.

(b) With the result from part (a) in hand, we need only the residue at $z=z_2$, which is
\[ \frac{1}{z_2-z_1} = \frac{a}{2\sqrt{1-a^2}}. \]
Therefore $I = (2\pi i)\left(\dfrac{2}{ia}\right)\left(\dfrac{a}{2\sqrt{1-a^2}}\right) = \dfrac{2\pi}{\sqrt{1-a^2}}$.
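The residue-theorem answers for these trigonometric integrals are easy to confirm by direct quadrature; the trapezoid rule is spectrally accurate for smooth periodic integrands. A sketch (helper name and sample values mine):

```python
import math

def theta_integral(f, n=100000):
    """Trapezoid rule on [0, 2*pi); spectrally accurate for smooth periodic f."""
    h = 2 * math.pi / n
    return h * sum(f(k * h) for k in range(n))

i_a = theta_integral(lambda t: 1 / (5 + 4 * math.cos(t)))       # 17.6.1(a): expect 2*pi/3
i_f = theta_integral(lambda t: 1 / (2 - math.cos(t))**2)        # 17.6.1(f): expect 4*pi/(3*sqrt(3))
i_half = theta_integral(lambda t: 1 / (1 + 0.5 * math.cos(t)))  # 17.6.2 with a = 1/2
```

All three quadratures reproduce the closed forms, including $2\pi/\sqrt{1-a^2}$ at $a = 1/2$.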

17.6.3. Verify the final formula, Eq. (17.37), given for the integral $I$ in Example 17.6.3.

Solution: Curve $C_1$ (which closes the contour in the upper half-plane) is appropriate for the integral containing $e^{iz}$ because that factor becomes exponentially small when the imaginary part of $z$ is large and positive. This contour encloses only the pole at $z=i$, which is encircled in the counterclockwise (mathematically positive) direction. The residue there gives the first expression within the braces of Eq. (17.37). Curve $C_2$ encloses only the pole at $z=-i$, and does so on a clockwise (mathematically negative) path. This corresponds to the second term within the braces of Eq. (17.37) (including the prepended minus sign).

17.6.4. Evaluate, using contour integration methods: ∫

(a) 0

(c) 0

(b)

dx (x2 + 1)2

(d)

(e) −∞

(g) 0

dx x4 + 1

(x2

x2 dx x4 + 16

dx x4 + x2 + 1

cos 2x dx 9x2 + 4

cos x dx 1 + x2 + x4

0

∫ 0

dx + 4x + 5)2

(f) 0

x sin x dx x2 + 1

(h) 0

Solution: (a) The integrand is even, so change the integration interval to (−∞, ∞) and introduce a factor 1/2. Then consider the contour integral I 1 dz I= 2 z4 + 1 on a contour that includes the entire real axis and is closed by a large arc in the upper half-plane. The arc does not contribute to the integral, so I is the integral we seek to evaluate. Now factor the denominator of the integrand, leading to I 1 dz I= . 2 (z + eiπ/4 )(z − eiπ/4 )(z + e3iπ/4 )(z − e3iπ/4 ) We see that the integrand has four ﬁrst-order poles, with those at z = eiπ/4 and z = e3iπ/4 within the contour and encircled counterclockwise. (residue at z = eiπ/4 ) =

1 2eiπ/4 (eiπ/4

+

e3iπ/4 )(eiπ/4

− e3iπ/4 )

1 (e3iπ/4 + eiπ/4 )(e3iπ/4 − eiπ/4 )(2e3iπ/4 ) √ √ Noting that eiπ/4 + e3iπ/4 = i 2 and that eiπ/4 − e3iπ/4 = 2, these two residues sum to [ ] √ 1 1 2 1 . − 3iπ/4 = 4i eiπ/4 4i e √ ( ) (√ ) π 2 1 2 = . Therefore, I = (2πi) 2 4i 4 (residue at z = e3iπ/4 ) =

17.6. EVALUATION OF DEFINITE INTEGRALS

(b) The integrand is even, so change the integration interval to $(-\infty,\infty)$ and introduce a factor $1/2$. Then consider the contour integral
$$I = \frac12 \oint \frac{z^2\,dz}{z^4+16}$$
on a contour that includes the entire real axis and is closed by a large arc in the upper half-plane. The arc does not contribute to the integral, so $I$ is the integral we seek to evaluate. Now factor the denominator of the integrand, leading to
$$I = \frac12 \oint \frac{z^2\,dz}{(z+2e^{i\pi/4})(z-2e^{i\pi/4})(z+2e^{3i\pi/4})(z-2e^{3i\pi/4})}\,.$$
We see that the integrand has four first-order poles, with those at $z=2e^{i\pi/4}$ and $z=2e^{3i\pi/4}$ within the contour and encircled counterclockwise.
$$(\text{residue at } z=2e^{i\pi/4}) = \frac{(2e^{i\pi/4})^2}{4e^{i\pi/4}\,(2e^{i\pi/4}+2e^{3i\pi/4})(2e^{i\pi/4}-2e^{3i\pi/4})}\,,$$
$$(\text{residue at } z=2e^{3i\pi/4}) = \frac{(2e^{3i\pi/4})^2}{(2e^{3i\pi/4}+2e^{i\pi/4})(2e^{3i\pi/4}-2e^{i\pi/4})(4e^{3i\pi/4})}\,.$$
These expressions simplify as in part (a), with a sum equal to $\sqrt2/8i$.
Therefore, $I = \dfrac12\,(2\pi i)\left(\dfrac{\sqrt2}{8i}\right) = \dfrac{\pi\sqrt2}{8}$.

(c) The integrand is even, so change the integration interval to $(-\infty,\infty)$ and introduce a factor $1/2$. Then consider the contour integral
$$I = \frac12 \oint \frac{dz}{(z^2+1)^2}$$
on a contour that includes the entire real axis and is closed by a large arc in the upper half-plane. The arc does not contribute to the integral, so $I$ is the integral we seek to evaluate. Now factor the denominator of the integrand, leading to
$$I = \frac12 \oint \frac{dz}{(z+i)^2(z-i)^2}\,.$$
We see that the integrand has two second-order poles, with only the pole at $z=i$ within the contour and encircled counterclockwise.
$$(\text{residue at } z=i) = \left.\frac{d}{dz}\left(\frac{1}{z+i}\right)^{\!2}\,\right|_{z=i} = \frac{1}{4i}\,.$$
Therefore $I = \dfrac12\,(2\pi i)\left(\dfrac{1}{4i}\right) = \dfrac{\pi}{4}$.

(d) The integrand is even, so change the integration interval to $(-\infty,\infty)$ and introduce a factor $1/2$. Then consider the contour integral
$$I = \frac12 \oint \frac{dz}{z^4+z^2+1}$$
on a contour that includes the entire real axis and is closed by a large arc in the upper half-plane. The arc does not contribute to the integral, so $I$ is the integral we seek to evaluate. Now factor the denominator of the integrand, leading to
$$I = \frac12 \oint \frac{dz}{(z-e^{i\pi/3})(z+e^{i\pi/3})(z-e^{-i\pi/3})(z+e^{-i\pi/3})}\,.$$
We see that the integrand has four first-order poles, with those at $z=e^{i\pi/3}$ and $z=-e^{-i\pi/3}$ within the contour and encircled counterclockwise.
$$(\text{residue at } z=e^{i\pi/3}) = \frac{1}{(2e^{i\pi/3})(e^{i\pi/3}-e^{-i\pi/3})(e^{i\pi/3}+e^{-i\pi/3})}\,,$$
$$(\text{residue at } z=-e^{-i\pi/3}) = \frac{1}{(-e^{-i\pi/3}-e^{i\pi/3})(-e^{-i\pi/3}+e^{i\pi/3})(-2e^{-i\pi/3})}\,.$$
Setting $e^{i\pi/3}+e^{-i\pi/3}=1$ and $e^{i\pi/3}-e^{-i\pi/3}=i\sqrt3$, the sum of these residues simplifies to
$$\frac{1}{2i\sqrt3}\left[\frac{1}{e^{i\pi/3}}+\frac{1}{e^{-i\pi/3}}\right] = \frac{1}{2i\sqrt3}\,.$$
Therefore $I = \dfrac12\,(2\pi i)\left(\dfrac{1}{2i\sqrt3}\right) = \dfrac{\pi}{2\sqrt3}$.

(e) Consider the contour integral
$$I = \oint \frac{dz}{(z^2+4z+5)^2}$$
on a contour that includes the entire real axis and is closed by a large arc in the upper half-plane. The arc does not contribute to the integral, so $I$ is the integral we seek to evaluate. Now factor the denominator of the integrand, leading to
$$I = \oint \frac{dz}{(z+2+i)^2(z+2-i)^2}\,.$$
We see that the integrand has two second-order poles, with only that at $z=-2+i$ within the contour and encircled counterclockwise.
$$(\text{residue at } z=-2+i) = \left.\frac{d}{dz}\,\frac{1}{(z+2+i)^2}\right|_{z=-2+i} = -\frac{i}{4}\,.$$
Therefore $I = (2\pi i)\left(-\dfrac{i}{4}\right) = \dfrac{\pi}{2}$.

(f) The integrand is even, so change the integration interval to $(-\infty,\infty)$ and introduce a factor $1/2$. It is also useful to write $\cos 2x = (e^{2ix}+e^{-2ix})/2$. We therefore consider the contour integral
$$I = \frac14 \oint \frac{e^{2iz}+e^{-2iz}}{9z^2+4}\,dz\,.$$
We need to separate this integral into its two terms, closing the contour for the $e^{2iz}$ term by an arc in the upper half-plane and that for the $e^{-2iz}$ term by an arc in the lower half-plane. These choices are necessary to cause the exponentials to become small on the arc, so $I$ is then the integral whose value we seek. Also factor the denominator of the integrand, leading to
$$I = \frac{1}{36}\oint_{C_1}\frac{e^{2iz}\,dz}{(z-2i/3)(z+2i/3)} + \frac{1}{36}\oint_{C_2}\frac{e^{-2iz}\,dz}{(z-2i/3)(z+2i/3)}\,.$$
The contour $C_1$ encircles only the first-order pole at $z=2i/3$ in the positive (counterclockwise) direction, but note that $C_2$ encircles only the pole at $z=-2i/3$ in the negative (clockwise) direction. Thus, from
$$(\text{residue at } z=2i/3) = \frac{e^{2i(2i/3)}}{4i/3} = \frac{3e^{-4/3}}{4i}\,,\qquad
(\text{residue at } z=-2i/3) = \frac{e^{-2i(-2i/3)}}{-4i/3} = \frac{3e^{-4/3}}{-4i}\,,$$
we form the first residue minus the second, reaching $3e^{-4/3}/2i$. Thus,
$$I = (2\pi i)\left(\frac{1}{36}\right)\left(\frac{3e^{-4/3}}{2i}\right) = \frac{\pi e^{-4/3}}{12}\,.$$

(g) The integrand is even, so change the integration interval to $(-\infty,\infty)$ and introduce a factor $1/2$. It is also useful to write $\sin x = (e^{ix}-e^{-ix})/2i$. We therefore consider the contour integral
$$I = \frac{1}{4i}\oint \frac{z\,(e^{iz}-e^{-iz})}{z^2+1}\,dz\,.$$
We need to separate this integral into its two terms, closing the contour for the $e^{iz}$ term by an arc in the upper half-plane and that for the $e^{-iz}$ term by an arc in the lower half-plane. These choices are necessary to cause the exponentials to become small on the arc, so $I$ is then the integral whose value we seek. Also factor the denominator of the integrand, leading to
$$I = \frac{1}{4i}\oint_{C_1}\frac{z\,e^{iz}\,dz}{(z-i)(z+i)} - \frac{1}{4i}\oint_{C_2}\frac{z\,e^{-iz}\,dz}{(z-i)(z+i)}\,.$$
The contour $C_1$ encircles only the first-order pole at $z=i$ in the positive (counterclockwise) direction, but note that $C_2$ encircles only the pole at $z=-i$ in the negative (clockwise) direction. Thus, from
$$(\text{residue at } z=i) = \left.\frac{z\,e^{iz}}{z+i}\right|_{z=i} = \frac{1}{2e}\,,\qquad
(\text{residue at } z=-i) = \left.\frac{z\,e^{-iz}}{z-i}\right|_{z=-i} = \frac{1}{2e}\,,$$
keeping in mind that the second integral has both a minus sign and a clockwise path, we add the two residues, reaching $1/e$. Thus,
$$I = (2\pi i)\left(\frac{1}{4i}\right)\left(\frac{1}{e}\right) = \frac{\pi}{2e}\,.$$
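A direct numerical check of part (e) is easy because that integrand decays like $x^{-4}$. The following Python sketch (an addition, not from the manual) applies composite Simpson quadrature on a wide finite interval:

```python
import math

# Part (e): integral over (-inf, inf) of dx/(x^2+4x+5)^2 should equal pi/2.
def f(x):
    return 1.0 / (x*x + 4*x + 5)**2

a, b, n = -400.0, 400.0, 400000      # tails beyond +-400 contribute ~1e-9
h = (b - a) / n
s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
I = s * h / 3                        # composite Simpson's rule (n even)
assert abs(I - math.pi/2) < 1e-6
```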

(h) The integrand is even, so change the integration interval to $(-\infty,\infty)$ and introduce a factor $1/2$. It is also useful to write $\cos x = (e^{ix}+e^{-ix})/2$. We therefore consider the contour integral
$$I = \frac14 \oint \frac{e^{iz}+e^{-iz}}{1+z^2+z^4}\,dz\,.$$
We need to separate this integral into its two terms, closing the contour for the $e^{iz}$ term by an arc in the upper half-plane and that for the $e^{-iz}$ term by an arc in the lower half-plane. These choices are necessary to cause the exponentials to become small on the arc, so $I$ is then the integral whose value we seek. Also factor the denominator of the integrand, leading to
$$I = \frac14\oint_{C_1}\frac{e^{iz}\,dz}{(z-e^{i\pi/3})(z+e^{i\pi/3})(z-e^{-i\pi/3})(z+e^{-i\pi/3})}
+ \frac14\oint_{C_2}\frac{e^{-iz}\,dz}{(z-e^{i\pi/3})(z+e^{i\pi/3})(z-e^{-i\pi/3})(z+e^{-i\pi/3})}\,.$$
The contour $C_1$ encircles the first-order poles at $z=e^{i\pi/3}$ and $z=-e^{-i\pi/3}$ in the positive (counterclockwise) direction, and $C_2$ encircles the first-order poles at $z=-e^{i\pi/3}$ and $z=e^{-i\pi/3}$ in the negative (clockwise) direction. Thus, noting that $e^{\pm i\pi/3}=(1\pm i\sqrt3)/2$, we have for the first integral
$$(\text{residue at } z=e^{i\pi/3}) = \frac{e^{(i-\sqrt3)/2}}{(2e^{i\pi/3})(e^{i\pi/3}-e^{-i\pi/3})(e^{i\pi/3}+e^{-i\pi/3})} = \frac{e^{(i-\sqrt3)/2}}{2i\sqrt3\,e^{i\pi/3}}\,,$$
$$(\text{residue at } z=-e^{-i\pi/3}) = \frac{e^{(-i-\sqrt3)/2}}{(-e^{-i\pi/3}-e^{i\pi/3})(-e^{-i\pi/3}+e^{i\pi/3})(-2e^{-i\pi/3})} = \frac{e^{(-i-\sqrt3)/2}}{2i\sqrt3\,e^{-i\pi/3}}\,.$$
Combining these residues, the first integral has the value
$$\frac14\,(2\pi i)\left(\frac{1}{2i\sqrt3}\right)e^{-\sqrt3/2}\left[e^{i(1/2-\pi/3)}+e^{i(-1/2+\pi/3)}\right]
= \frac{\pi}{2\sqrt3}\,e^{-\sqrt3/2}\cos\!\left(\frac{\pi}{3}-\frac12\right).$$
Similar steps reveal that the second integral has the same value as the first, so our final result is
$$I = \frac{\pi}{\sqrt3}\,e^{-\sqrt3/2}\cos\!\left(\frac{\pi}{3}-\frac12\right).$$
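Because the integrand of part (h) decays like $x^{-4}$, the closed form can also be confirmed by brute-force quadrature. A Python sketch (an addition, not from the manual):

```python
import math

# Part (h): integral 0..inf of cos(x)/(1+x^2+x^4) dx vs the closed form.
def f(x):
    return math.cos(x) / (1 + x*x + x**4)

a, b, n = 0.0, 200.0, 400000        # tail beyond 200 is below ~1e-7
h = (b - a) / n
s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
I = s * h / 3                       # composite Simpson's rule
exact = (math.pi/math.sqrt(3)) * math.exp(-math.sqrt(3)/2) * math.cos(math.pi/3 - 0.5)
assert abs(I - exact) < 1e-6
```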

17.6.5. One can evaluate
$$I = \int_0^\infty \frac{\ln(x^2+1)}{x^2+1}\,dx$$
by writing it as
$$I = \frac12\int_{-\infty}^\infty \frac{\ln(x+i)}{x^2+1}\,dx + \frac12\int_{-\infty}^\infty \frac{\ln(x-i)}{x^2+1}\,dx\,,$$
and then converting each of the above integrals into a contour integral in which the branch point of its integrand is outside the contour (and the branch cut can also be chosen to be entirely outside). In addition, for both integrals, one should choose the branch of the logarithm that causes it to have real values on the real line.

Following the above suggestions, show that $I = \pi\ln 2$.

Solution: Decomposing $I$ as suggested in the problem statement, the first integrand has a branch point at $z=-i$ and its branch cut can be directed toward infinity in the lower half-plane. Its integral can be evaluated on a contour $C_1$ that is closed with a large arc in the upper half-plane. The second integrand has a branch point at $z=i$ and its integral can be evaluated on a contour $C_2$ that is closed with a large arc in the lower half-plane. Writing the denominator in factored form, we have
$$I = \frac12\oint_{C_1}\frac{\ln(z+i)\,dz}{(z-i)(z+i)} + \frac12\oint_{C_2}\frac{\ln(z-i)\,dz}{(z-i)(z+i)}\,.$$
The contour $C_1$ encloses, in the positive (counterclockwise) direction, a first-order pole at $z=i$; the contour $C_2$ encloses, in the negative (clockwise) direction, a first-order pole at $z=-i$. The residue of the first integrand at $z=i$ and that of the second integrand at $z=-i$ are
$$(\text{first residue, at } z=i) = \frac{\ln 2i}{2i}\,,\qquad (\text{second residue, at } z=-i) = \frac{\ln(-2i)}{-2i}\,.$$
Including a minus sign for the second integral, we have (noting that $\ln(-i) = -\ln i$)
$$I = \frac12\,(2\pi i)\left(\frac{\ln 2+\ln i}{2i}\right) - \frac12\,(2\pi i)\left(\frac{\ln 2+\ln(-i)}{-2i}\right) = \pi\ln 2\,.$$
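A quick numerical confirmation (an addition, not from the manual): substituting $x=\tan\theta$ turns $I$ into $-2\int_0^{\pi/2}\ln\cos\theta\,d\theta$, which a midpoint rule handles despite the mild logarithmic endpoint singularity:

```python
import math

# Check: -2 * integral 0..pi/2 of ln(cos(theta)) d(theta) should equal pi*ln(2).
n = 200000
h = (math.pi/2) / n
# midpoint rule: samples never hit theta = pi/2, where ln(cos) diverges
I = -2 * sum(math.log(math.cos((k + 0.5)*h)) for k in range(n)) * h
assert abs(I - math.pi*math.log(2)) < 1e-3
```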

17.6.6. Use contour integration to evaluate the following:
$$\text{(a)}\ \int_0^\infty\frac{x\,dx}{x^3+1}\,,\qquad\qquad \text{(b)}\ \int_0^\infty\frac{dx}{x^5+1}\,.$$

Solution: (a) Use a sector contour, as in Fig. 17.12, with angle $2\pi/3$. The large arc will not contribute in the present problem. On a ray at this angle, $z^3=r^3$, and, letting $I$ denote the integral whose value we seek, the two straight-line portions of the following contour integral (denoted $J$) have these values:
$$J = \oint\frac{z\,dz}{z^3+1} = \int_0^\infty\frac{x\,dx}{x^3+1} + \int_\infty^0\frac{r e^{2i\pi/3}}{r^3+1}\,e^{2i\pi/3}\,dr = I - e^{4i\pi/3}I\,.$$

[Figure 17.12: Contour for Exercise 17.6.6 — a sector of opening angle $2\pi/3$ (segments A, B closed by arc C), with the points $e^{\pi i/3}$, $-1$, and $e^{-\pi i/3}$ marked.]

We now evaluate $J$ using the residue theorem. The poles of the integrand are at the cube roots of $-1$, namely at $z_0=e^{i\pi/3}$, $z_1=e^{-i\pi/3}$, and at $z_2=-1$. Only $z_0$ is within the contour. We find the residue by using l'Hôpital's rule to evaluate the limit
$$(\text{residue at } z=z_0) = \lim_{z\to z_0}\frac{(z-z_0)z}{z^3+1} = \left.\frac{2z-z_0}{3z^2}\right|_{z=z_0} = \frac{1}{3z_0} = \frac{1}{3e^{i\pi/3}}\,.$$
We now get
$$I = \frac{J}{1-e^{4i\pi/3}} = (2\pi i)\left(\frac{1}{3e^{i\pi/3}}\right)\frac{1}{1-e^{4i\pi/3}}
= \frac{2\pi i}{3(e^{i\pi/3}-e^{5i\pi/3})} = \frac{2\pi i}{3(e^{i\pi/3}-e^{-i\pi/3})} = \frac{2\pi i}{3[2i\sin(\pi/3)]} = \frac{2\pi}{3\sqrt3}\,.$$
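The part (a) result is easy to verify numerically. Mapping $x\to 1/x$ on $(1,\infty)$ (a manipulation added here, not in the text) folds the integral into $\int_0^1(1+x)\,dx/(1+x^3) = \int_0^1 dx/(x^2-x+1)$, a smooth integrand on a finite interval:

```python
import math

# Check: integral 0..inf of x/(x^3+1) dx = integral 0..1 of dx/(x^2-x+1) = 2*pi/(3*sqrt(3)).
def f(x):
    return 1.0 / (x*x - x + 1)

n = 10000
h = 1.0 / n
s = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k*h) for k in range(1, n))
I = s * h / 3                        # composite Simpson's rule
assert abs(I - 2*math.pi/(3*math.sqrt(3))) < 1e-10
```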

(b) Use a sector contour with angle $2\pi/5$. The large arc will not contribute in the present problem. On a ray at this angle, $z^5=r^5$, and, letting $I$ denote the integral whose value we seek, the two straight-line portions of the following contour integral (denoted $J$) have these values:
$$J = \oint\frac{dz}{z^5+1} = \int_0^\infty\frac{dx}{x^5+1} + \int_\infty^0\frac{e^{2i\pi/5}\,dr}{r^5+1} = I - e^{2i\pi/5}I\,.$$
We now evaluate $J$ using the residue theorem. The poles of the integrand are at the fifth roots of $-1$; the only root within the contour is at $z_0=e^{i\pi/5}$. Using l'Hôpital's rule to find the residue,
$$(\text{residue at } z=z_0) = \lim_{z\to z_0}\frac{z-z_0}{z^5+1} = \left.\frac{1}{5z^4}\right|_{z=z_0} = \frac{1}{5e^{4i\pi/5}}\,.$$
We now get
$$I = \frac{J}{1-e^{2i\pi/5}} = (2\pi i)\left(\frac{1}{5e^{4i\pi/5}}\right)\frac{1}{1-e^{2i\pi/5}}
= \frac{2\pi i}{5(e^{4i\pi/5}-e^{6i\pi/5})} = \frac{2\pi i}{5(-e^{-i\pi/5}+e^{i\pi/5})} = \frac{2\pi i}{10i\sin(\pi/5)} = \frac{\pi}{5\sin(\pi/5)}\,.$$

[Figure 17.13: Contour for Exercise 17.6.7 — a contour in the first quadrant of the $(x,y)$ plane.]

17.6.7.

The Fresnel integrals of infinite argument,
$$F_c(\infty) = \int_0^\infty \cos u^2\,du\,,\qquad F_s(\infty) = \int_0^\infty \sin u^2\,du\,,$$
arise in optics problems. One way to obtain $F_c(\infty)$ and $F_s(\infty)$ starts by recognizing that
$$I = F_c(\infty) + iF_s(\infty) = \int_0^\infty\frac{e^{ix}}{2x^{1/2}}\,dx\,.$$
This integral can be evaluated using the contour shown in Fig. 17.13. The integration along the positive real axis evaluates to $I$; the integration along the imaginary axis can be identified as proportional to $\Gamma(1/2)$. Verify that $I=F_c(\infty)+iF_s(\infty)$, show that the small and large arcs of the contour do not contribute to $I$, and complete the evaluation of $F_c(\infty)$ and $F_s(\infty)$.

Solution: To verify that $I=F_c(\infty)+iF_s(\infty)$, rewrite $I$, making the substitution $u^2=x$, with $du = dx/2u = dx/2x^{1/2}$. Then $\cos u^2+i\sin u^2$ becomes $\cos x+i\sin x = e^{ix}$, completing the verification. We now evaluate $I$ by considering the contour integral
$$J = \oint\frac{e^{iz}\,dz}{2z^{1/2}}\,.$$
Jordan's lemma, Eq. (17.36), shows that the $z^{-1/2}$ dependence of the integrand suffices to cause a zero contribution to the integral from the large arc. The small arc does not contribute, as can be seen by writing the integrand in polar form. The straight line along the real axis contributes $I$ to the integral, and the line directed toward zero on the imaginary axis has the value shown here:
$$J = I + \int_{y=\infty}^0\frac{e^{i(iy)}\,i\,dy}{2(iy)^{1/2}} = I - \frac{i^{1/2}}{2}\int_0^\infty y^{-1/2}e^{-y}\,dy = I - \frac{i^{1/2}}{2}\,\Gamma(1/2) = I - \left(\frac{1+i}{2\sqrt2}\right)\sqrt\pi\,.$$
The contour for $J$ encloses no singularities and $J$ is therefore zero. Thus,
$$I = \frac{\sqrt\pi}{2\sqrt2} + i\,\frac{\sqrt\pi}{2\sqrt2}\,,$$
showing that $F_c(\infty) = F_s(\infty) = \dfrac{\sqrt\pi}{2\sqrt2}$.

17.6.8. Evaluate by contour integration, using a contour similar to that of Fig. 17.14:
$$I = \int_{-\infty}^\infty\frac{e^{sx}\,dx}{1+e^x}\,,$$
where $0<s<1$. Hint. Place the upper horizontal line of the contour at a value of $y$ that makes the integral on this line proportional to the integral along the real axis.

Solution: Consider the contour integral
$$J = \oint\frac{e^{sz}\,dz}{1+e^z}\,,$$
with the upper horizontal line of the contour at $y=2\pi$ so that $e^z$ (in the denominator) will still be equal to $e^x$. The vertical parts of the contour do not contribute to the integral, and we have
$$J = I + e^{2i\pi s}\int_\infty^{-\infty}\frac{e^{sx}\,dx}{1+e^x} = I\,(1-e^{2i\pi s})\,.$$
The integrand has a first-order pole at $z=i\pi$. Using l'Hôpital's rule,
$$(\text{residue at } z=i\pi) = \lim_{z\to i\pi}\frac{(z-i\pi)e^{sz}}{1+e^z} = \lim_{z\to i\pi}\frac{e^{sz}+s(z-i\pi)e^{sz}}{e^z} = \frac{e^{i\pi s}}{e^{i\pi}} = -e^{i\pi s}\,,$$
so $J = (2\pi i)(-e^{i\pi s})$. Using this value of $J$, we find
$$I = \frac{-2\pi i\,e^{i\pi s}}{1-e^{2i\pi s}} = \frac{-2\pi i}{e^{-i\pi s}-e^{i\pi s}} = \frac{\pi}{\sin\pi s}\,.$$
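The result $\pi/\sin\pi s$ can be confirmed numerically for a sample $s$; the integrand decays exponentially on both tails. A Python sketch (an addition, not from the manual), with a numerically stable form of the integrand on each tail:

```python
import math

# Check: integral -inf..inf of e^{s x}/(1+e^x) dx should equal pi/sin(pi*s).
s = 0.3
def f(x):
    if x > 0:                                    # rewrite to avoid overflow of e^x
        return math.exp((s-1)*x) / (1 + math.exp(-x))
    return math.exp(s*x) / (1 + math.exp(x))

a, b, n = -200.0, 200.0, 400000
h = (b - a) / n
tot = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
I = tot * h / 3                                  # composite Simpson's rule
assert abs(I - math.pi/math.sin(math.pi*s)) < 1e-8
```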

17.6.9. Evaluate by contour integration, using the contour of Fig. 17.14:
$$\int_0^\infty\frac{\cos px}{\cosh x}\,dx\,,$$
where $-1<p<1$.

Solution: Consider the contour integral
$$J = \oint\frac{\cos pz}{\cosh z}\,dz\,.$$
The integrand is even, so the integral from $-\infty$ to $\infty$ along the real axis is $2I$, where $I$ denotes the integral whose evaluation is sought. The vertical portions of the contour do not contribute to the integral. The contribution from the negatively directed line at $y=\pi$ can be evaluated by noting that $\cosh(x+i\pi) = -\cosh x$, while the even portion of $\cos p(x+i\pi)$ is $\cos px\cosh p\pi$. We therefore get
$$J = 2I + 2I\cosh p\pi = 4I\cosh^2(p\pi/2)\,.$$
The integrand of $J$ has a first-order pole at $z=i\pi/2$. Using l'Hôpital's rule,
$$(\text{residue at } z=i\pi/2) = \lim_{z\to i\pi/2}\frac{(z-i\pi/2)\cos pz}{\cosh z} = \left.\frac{\cos pz}{\sinh z}\right|_{z=i\pi/2} = \frac{\cos(ip\pi/2)}{\sinh(i\pi/2)} = \frac{\cosh(p\pi/2)}{i}\,.$$
Thus, $J = (2\pi i)\left(\dfrac{\cosh(p\pi/2)}{i}\right) = 2\pi\cosh(p\pi/2)$.

We now have $4I\cosh^2(p\pi/2) = 2\pi\cosh(p\pi/2)$, so $I = \dfrac{\pi}{2\cosh(p\pi/2)}$.
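A numerical spot-check of this result at one value of $p$ (a Python addition, not from the manual); the $1/\cosh x$ factor makes the tail negligible beyond $x\approx 50$:

```python
import math

# Check: integral 0..inf of cos(p x)/cosh(x) dx should equal pi/(2*cosh(p*pi/2)).
p = 0.5
def f(x):
    return math.cos(p*x) / math.cosh(x)

a, b, n = 0.0, 50.0, 100000
h = (b - a) / n
tot = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
I = tot * h / 3                      # composite Simpson's rule
assert abs(I - math.pi/(2*math.cosh(p*math.pi/2))) < 1e-10
```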

17.6.10. Use contour integration to evaluate the following integrals. The parameter $s$ can be assumed to be in the range $0<s<1$.
$$\text{(a)}\ \int_0^\infty\frac{x^s\,dx}{x^2+1}\qquad\qquad \text{(b)}\ \int_0^\infty\frac{x^s\,dx}{(x+1)^2}$$
$$\text{(c)}\ \int_0^\infty\frac{\ln x\,dx}{x^2+1}\qquad\qquad \text{(d)}\ \int_0^\infty\frac{x^s\ln x\,dx}{x^2+1}$$

Solution: (a) Consider the contour integral
$$J = \oint\frac{z^s\,dz}{z^2+1} = \oint\frac{z^s\,dz}{(z+i)(z-i)}\,,$$
where the contour includes the entire real axis, closed by a large arc in the upper half-plane. The arc does not contribute to $J$. For the real range $(0,\infty)$, the contribution to $J$ is $I$ (the integral whose value we seek), and for $(-\infty,0)$ write $z=re^{i\pi}$, with $dz=e^{i\pi}\,dr$ and $z^2+1=r^2+1$. Then we have
$$J = \int_\infty^0\frac{(re^{i\pi})^s}{r^2+1}\,e^{i\pi}\,dr + I = e^{i\pi s}\int_0^\infty\frac{r^s\,dr}{r^2+1} + I = (1+e^{i\pi s})\,I\,.$$
The integrand of $J$ has a first-order pole at $z=i=e^{i\pi/2}$ that is encircled by the contour;
$$(\text{residue at } z=i) = \frac{e^{i\pi s/2}}{2i}\,.$$
Thus
$$J = (2\pi i)\left(\frac{e^{i\pi s/2}}{2i}\right) = (1+e^{i\pi s})\,I\,;$$
solving for $I$, we get
$$I = \frac{\pi\,e^{i\pi s/2}}{1+e^{i\pi s}} = \frac{\pi}{2\cos(\pi s/2)}\,.$$
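At $s=1/2$ this result can be checked numerically. Folding $(1,\infty)$ back onto $(0,1)$ via $x\to 1/x$ and then substituting $x=t^2$ (manipulations added here, not in the text) gives a smooth finite-range integrand:

```python
import math

# Check at s = 1/2:  integral 0..inf of sqrt(x)/(x^2+1) dx
#   = integral 0..1 of 2*(1+t^2)/(1+t^4) dt  =  pi/(2*cos(pi/4)) = pi/sqrt(2).
def f(t):
    return 2*(1 + t*t) / (1 + t**4)

n = 10000
h = 1.0 / n
tot = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k*h) for k in range(1, n))
I = tot * h / 3                      # composite Simpson's rule
assert abs(I - math.pi/math.sqrt(2)) < 1e-10
```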

(b) Consider the contour integral
$$J = \oint\frac{z^s\,dz}{(z+1)^2}\,,$$
where the contour is that shown in the text (a keyhole contour wrapped around the positive real axis). The small and large arcs do not contribute to the integral, and the outward line just above the positive real axis produces a contribution $I$ (the value of the integral we seek). The inward line just below the positive real axis corresponds to $z=re^{2\pi i}$, so $(z+1)^2=(r+1)^2$ and $dz=dr$, but $z^s=e^{2i\pi s}r^s$. Thus,
$$J = I + e^{2i\pi s}\int_\infty^0\frac{r^s\,dr}{(r+1)^2} = (1-e^{2i\pi s})\,I\,.$$
We now evaluate $J$ using the residue theorem. Its integrand has a second-order pole at $z=-1$.
$$(\text{residue at } z=-1) = \left.\frac{d}{dz}\,z^s\right|_{z=e^{i\pi}} = s\,e^{i\pi(s-1)} = -s\,e^{i\pi s}\,.$$
Thus, $J = (2\pi i)(-s\,e^{i\pi s})$, and
$$I = -\frac{2\pi i\,s\,e^{i\pi s}}{1-e^{2i\pi s}} = \frac{\pi s}{\sin\pi s}\,.$$

(c) and (d) Consider the contour integral
$$J = \oint\frac{z^s\ln z\,dz}{z^2+1} = \oint\frac{z^s\ln z\,dz}{(z+i)(z-i)}\,,$$
with the contour given in the text. The small arc is needed to avoid the branch point of $\ln z$; neither it nor the large arc contribute to the integral. The portion of the contour along the positive real axis has contribution $I$, the value of the integral we seek in part (d). To evaluate the contribution from the line along the negative real axis, set $z=e^{i\pi}r$, $dz=e^{i\pi}\,dr$, $z^2+1=r^2+1$, $\ln z = \ln r+i\pi$, $z^s=r^se^{i\pi s}$, and therefore
$$J = \int_\infty^0\frac{(\ln r+i\pi)\,r^se^{i\pi s}}{r^2+1}\,e^{i\pi}\,dr + I = e^{i\pi s}I + i\pi e^{i\pi s}\int_0^\infty\frac{r^s\,dr}{r^2+1} + I$$
$$= (1+e^{i\pi s})\,I + i\pi e^{i\pi s}\left(\frac{\pi}{2\cos(\pi s/2)}\right)
= e^{i\pi s/2}\left[\,2I\cos(\pi s/2) + \frac{i\pi^2 e^{i\pi s/2}}{2\cos(\pi s/2)}\right].$$
The integral that was evaluated in the last step above is that occurring in part (a) of this Exercise. We now use the residue theorem to evaluate $J$. Its integrand has a first-order pole within the contour at $z=i=e^{i\pi/2}$:
$$(\text{residue at } z=i) = \frac{e^{i\pi s/2}\,(i\pi/2)}{2i}\,,\qquad\text{so}\qquad J = (2\pi i)\left(\frac{e^{i\pi s/2}\,(i\pi/2)}{2i}\right) = \frac{i\pi^2 e^{i\pi s/2}}{2}\,.$$
Equating the two expressions for $J$, we find
$$2I\cos(\pi s/2) + \frac{i\pi^2 e^{i\pi s/2}}{2\cos(\pi s/2)} = \frac{i\pi^2}{2}\,,$$
which we solve for $I$:
$$I = \frac{i\pi^2\left[\cos(\pi s/2)-e^{i\pi s/2}\right]}{4\cos^2(\pi s/2)} = \frac{i\pi^2\left(-i\sin(\pi s/2)\right)}{4\cos^2(\pi s/2)} = \frac{\pi^2\sin(\pi s/2)}{4\cos^2(\pi s/2)}\,.$$
We obtained $-i\sin(\pi s/2)$ by writing $e^{i\pi s/2}$ in trigonometric form. This is the result for part (d). Setting $s=0$, we get for part (c) the simple answer $I=0$.
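The final algebra above is the step most prone to sign errors, and it can be checked with complex arithmetic (a Python addition, not from the manual): with $I$ set to the closed form, the relation obtained by equating the two expressions for $J$ must hold identically.

```python
import cmath, math

# Verify:  2*I*cos(pi*s/2) + i*pi^2*e^{i pi s/2}/(2*cos(pi*s/2)) == i*pi^2/2
for s in (0.1, 0.3, 0.5, 0.7, 0.9):
    c = math.cos(math.pi*s/2)
    I = math.pi**2 * math.sin(math.pi*s/2) / (4*c*c)    # closed form for part (d)
    lhs = 2*I*c + 1j*math.pi**2*cmath.exp(1j*math.pi*s/2) / (2*c)
    assert abs(lhs - 1j*math.pi**2/2) < 1e-12
```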

17.7 EVALUATION OF INFINITE SERIES

Exercises

17.7.1.

Use contour integration methods to evaluate the following sums.

$$\text{(a)}\ \sum_{n=0}^\infty\frac{1}{(2n+1)^2} = \frac{1}{1^2}+\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\cdots$$
$$\text{(b)}\ \sum_{n=1}^\infty\frac{1}{n^4} = \frac{1}{1^4}+\frac{1}{2^4}+\frac{1}{3^4}+\frac{1}{4^4}+\cdots$$
$$\text{(c)}\ \sum_{n=1}^\infty\frac{1}{n^6} = \frac{1}{1^6}+\frac{1}{2^6}+\frac{1}{3^6}+\frac{1}{4^6}+\cdots$$
$$\text{(d)}\ \sum_{n=0}^\infty\frac{1}{(2n+1)^4} = \frac{1}{1^4}+\frac{1}{3^4}+\frac{1}{5^4}+\frac{1}{7^4}+\cdots$$
$$\text{(e)}\ \sum_{n=1}^\infty\frac{1}{n(n+2)} = \frac{1}{1\cdot3}+\frac{1}{2\cdot4}+\frac{1}{3\cdot5}+\frac{1}{4\cdot6}+\cdots$$
$$\text{(f)}\ \sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^2} = \frac{1}{1^2}-\frac{1}{2^2}+\frac{1}{3^2}-\frac{1}{4^2}+\cdots$$
$$\text{(g)}\ \sum_{n=1}^\infty\frac{1}{(n^2+1)^2} = \frac{1}{2^2}+\frac{1}{5^2}+\frac{1}{10^2}+\frac{1}{17^2}+\cdots$$

Solution: Let $S$ denote the sum to be evaluated, with $S'$ or $S''$ a related sum that is of a form appropriate to the use of a contour integration.

(a) Start by noting that
$$\sum_{n=1}^\infty\frac{1}{(-2n+1)^2} = \frac{1}{(-1)^2}+\frac{1}{(-3)^2}+\frac{1}{(-5)^2}+\frac{1}{(-7)^2}+\cdots = S\,,$$
so
$$S' = \sum_{n=-\infty}^\infty\frac{1}{(2n+1)^2} = 2S\,.$$
Next observe that
$$S'' = 4S' = \sum_{n=-\infty}^\infty\frac{1}{(n+\frac12)^2} = 8S\,.$$
We see that $S''$ is of the form $\sum_{n=-\infty}^\infty f(n+\frac12)$, with $f(z)=1/z^2$. $S''$ can therefore be evaluated as the sum of the residues of $\pi\tan(\pi z)f(z)$ at the singularities of $f(z)$. The only singularity of $f(z)$ is at $z=0$, so we have $S''$ = residue of $\pi\tan(\pi z)/z^2$ at $z=0$. Writing $\tan(\pi z) = \pi z + O(z^3)$, we identify the residue as having the value $\pi^2$, yielding $S = S''/8 = \pi^2/8$.

(b) The summation $\sum_{n=1}^\infty 1/(-n)^4$ also has value $S$, and
$$S' = {\sum_{n=-\infty}^{\infty}}{}'\ \frac{1}{n^4} = 2S\,,$$
where the prime on the summation indicates omission of the term for which $n=0$. Therefore, following the analysis at Eq. (17.52), we identify $S'$ as the sum of the residues of $F(z)=\pi\cot(\pi z)/z^4$ at all the nonzero integers, so that $S'$ is also minus the residue at the only other singularity of $F$, namely that at $z=0$. The residue of $F$ at $z=0$ is most easily found by noting that the $z^3$ term in the expansion of $\cot\pi z$ is $-\pi^3z^3/45$. The residue of $F$ is therefore $\pi(-\pi^3/45)$; the negative of this is $S' = \pi^4/45$. Thus, $S = S'/2 = \pi^4/90$.

(c) This problem is similar to that of part (b) in that $S' = 2S$ and that the value of $S'$ is minus the residue of $F(z)$ at $z=0$. But here $F(z) = \pi\cot(\pi z)/z^6$, and the residue is most easily obtained by noting that the $z^5$ term in the expansion of $\cot\pi z$ is $-2\pi^5z^5/945$. The residue of $F$ is $-2\pi^6/945$, so $S' = 2\pi^6/945$ and $S = S'/2 = \pi^6/945$.

(d) This problem is similar to that of part (a) but differs in that (in addition to extending the sum to $-\infty$) we must multiply the terms by $2^4$ to obtain
$$S' = \sum_{n=-\infty}^\infty f(n+\tfrac12) = 32S\,,\qquad\text{with}\qquad f(z)=\frac{1}{z^4}\,.$$
$S'$ can now be evaluated as the sum of the residues of $\pi\tan(\pi z)f(z)$ at the singularities of $f(z)$. The only singularity of $f(z)$ is at $z=0$; the simplest way of finding the residue there is to introduce the power-series expansion $\tan(\pi z) = \pi z + (\pi z)^3/3 + O(z^5)$. The residue of $F$ is $\pi(\pi^3/3)$, so $S' = \pi^4/3$. Therefore $S = S'/32 = \pi^4/96$.

(e) Note that
$$\left.\frac{1}{n(n+2)}\right|_{n=-3} = \frac{1}{(-3)(-1)}\,,\qquad \left.\frac{1}{n(n+2)}\right|_{n=-4} = \frac{1}{(-4)(-2)}\,,\ \cdots\,,$$
so that
$$\sum_{n=-\infty}^{-3}\frac{1}{n(n+2)} = S\,.$$
For $n=0$ and $n=-2$ the summand $1/n(n+2)$ is singular, but for $n=-1$ we have $1/(-1)(1) = -1$. From the above information we conclude that
$$S' = {\sum_{n=-\infty}^{\infty}}{}'\ \frac{1}{n(n+2)} = 2S-1\,,$$
where the prime on the summation sign indicates the exclusion of the singular points $n=0$ and $n=-2$ from the sum. Based on the discussion at Eq. (17.52), $S'$ will be minus the sum of the residues of $F(z) = \pi\cot(\pi z)/z(z+2)$ at the singularities of $1/z(z+2)$, i.e., minus the sum of the residues of $F(z)$ at $z=0$ and $z=-2$. The cotangent has a simple pole at each of these values of $z$, so the singularity of $F$ at each point is a pole of order 2. Finding the residues,
$$\text{residue at }z=0:\quad \lim_{z\to0}\frac{d}{dz}\,\frac{\pi z\cot(\pi z)}{z+2} = -\frac14\,,$$
$$\text{residue at }z=-2:\quad \lim_{z\to-2}\frac{d}{dz}\,\frac{\pi(z+2)\cot(\pi z)}{z} = -\frac14\,.$$
Therefore $S' = -\left(-\dfrac14-\dfrac14\right) = \dfrac12$, and $S = (S'+1)/2 = 3/4$.

(f) Because
$$\sum_{n=-\infty}^{-1}\frac{(-1)^{n-1}}{n^2} = S\,,\qquad S' = {\sum_{n=-\infty}^{\infty}}{}'\ \frac{(-1)^{n-1}}{n^2} = 2S\,.$$
The prime on the summation indicates that the term for $n=0$ is to be omitted from the sum. $S'$ is a summation of the type
$$\sum_{n=-\infty}^\infty(-1)^nf(n)\,,\qquad\text{with}\qquad f(z) = -\frac{1}{z^2}\,;$$
the minus sign in $f(z)$ arises from the sign factor $(-1)^{n-1}$ in $S'$. This sum can be evaluated using $F(z) = -\pi\csc(\pi z)/z^2$. Following the analysis in Eq. (17.52), $S'$ will have a value equal to minus the residue of $F(z)$ at $z=0$, the only singularity of $f(z)$. Note that the $z$ term in the expansion of $\csc\pi z$ is $\pi z/6$, so the residue of $F(z)$ is $-\pi^2/6$, $S' = \pi^2/6$, and $S = S'/2 = \pi^2/12$.

(g) Because
$$\sum_{n=-\infty}^{-1}\frac{1}{(n^2+1)^2} = S\qquad\text{and}\qquad \left[\frac{1}{(n^2+1)^2}\right]_{n=0} = 1\,,$$
$$S' = \sum_{n=-\infty}^\infty\frac{1}{(n^2+1)^2} = 2S+1\,.$$
We now evaluate $S'$ as minus the sum of the residues of $F(z) = \pi\cot(\pi z)/(z^2+1)^2$ at the singularities of $1/(z^2+1)^2$, which are at $z=\pm i$. The denominator of $F$ factors into $(z-i)^2(z+i)^2$, so both poles are second order. The residue at $z=i$ can be calculated from
$$\left.\frac{d}{dz}\,\frac{\pi\cot(\pi z)}{(z+i)^2}\right|_{z=i} = \frac14\left(\pi^2-\pi^2\coth^2\pi-\pi\coth\pi\right).$$
The residue at $z=-i$ has the same value. Therefore
$$S' = -\frac12\left(\pi^2-\pi^2\coth^2\pi-\pi\coth\pi\right),$$
$$S = \frac{S'-1}{2} = -\frac12-\frac{\pi^2}{4}\left(1-\coth^2\pi\right)+\frac{\pi}{4}\coth\pi\,.$$
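Partial sums converge quickly enough that several of these closed forms can be confirmed directly (a Python addition, not from the manual, which does this with maple/mathematica in the next exercise):

```python
import math

N = 200000
a = sum(1/(2*n+1)**2 for n in range(N))          # part (a): pi^2/8, tail ~ 1/(4N)
assert abs(a - math.pi**2/8) < 1e-5
b = sum(1/n**4 for n in range(1, N))             # part (b): pi^4/90, tail ~ 1/(3N^3)
assert abs(b - math.pi**4/90) < 1e-12
e = sum(1/(n*(n+2)) for n in range(1, N))        # part (e): 3/4, tail ~ 1/N
assert abs(e - 0.75) < 1e-4
f = sum((-1)**(n-1)/n**2 for n in range(1, N))   # part (f): pi^2/12, alternating tail ~ 1/N^2
assert abs(f - math.pi**2/12) < 1e-9
coth = math.cosh(math.pi) / math.sinh(math.pi)   # part (g) closed form
g_exact = -0.5 - (math.pi**2/4)*(1 - coth**2) + (math.pi/4)*coth
g = sum(1/(n*n+1)**2 for n in range(1, N))
assert abs(g - g_exact) < 1e-12
```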

17.7.2. Check your answers for Exercise 17.7.1 using your symbolic computing system. The command for the formal evaluation of the sum of terms $u_n$ for $n$ from $n_1$ to $n_2$ is

(maple)         sum( un , n = n1 .. n2 )
(mathematica)   Sum[ un , {n, n1, n2} ]

Possible values for summation limits include infinity (maple) or Infinity (mathematica).

Solution: (a) Use one of
> sum(1/(2*n+1)^2, n=0 .. infinity);
Sum[1/(2*n + 1)^2, {n, 0, Infinity}]
Both give $\pi^2/8$.

(b) Use one of
> sum(1/n^4, n=1 .. infinity);
Sum[1/n^4, {n, 1, Infinity}]
Both give $\pi^4/90$.

(c) Use one of
> sum(1/n^6, n=1 .. infinity);
Sum[1/n^6, {n, 1, Infinity}]
Both give $\pi^6/945$.

(d) Use one of
> sum(1/(2*n+1)^4, n=0 .. infinity);
Sum[1/(2*n + 1)^4, {n, 0, Infinity}]
Both give $\pi^4/96$.

(e) Use one of
> sum(1/n/(n+2), n=1 .. infinity);
Sum[1/n/(n + 2), {n, 1, Infinity}]
Both give $3/4$.

(f) Use one of
> sum((-1)^(n-1)/n^2, n=1 .. infinity);
Sum[(-1)^(n - 1)/n^2, {n, 1, Infinity}]
Both give $\pi^2/12$.

(g) The maple command
> sum(1/(n^2+1)^2, n=1 .. infinity);
returns
$$-\frac{\pi^2}{4} + \frac{\pi^2\coth(\pi)^2}{4} - \frac12 + \frac{\pi\coth(\pi)}{4}\,,$$
while the mathematica command
Sum[1/(n^2 + 1)^2, {n, 1, Infinity}]
returns
$$\frac14\left(-2+\pi\,\text{Coth}[\pi]+\pi^2\,\text{Csch}[\pi]^2\right).$$
Both these expressions are equivalent to that given in Exercise 17.7.1.

17.8 CAUCHY PRINCIPAL VALUE

Exercises

17.8.1. Evaluate the following principal-value integrals. Assume the quantities $k$ and $a$ to be real, and restrict $s$ to the range $0<s<1$.
$$\text{(a)}\ \int_0^\infty\frac{\cos kx}{x^2-a^2}\,dx\qquad\qquad \text{(b)}\ \int_0^\infty\frac{x\sin kx}{x^2-a^2}\,dx$$
$$\text{(c)}\ \int_0^\infty\frac{x^{-s}}{x-1}\,dx\qquad\qquad \text{(d)}\ \int_{-\infty}^\infty\frac{\sin x\,dx}{(3x-a)(x^2+a^2)}$$

Solution: (a) Extend the principal-value integral $I$ to $-\infty$ as an even function and identify that integral (which is cut in two places) as $2I$. Then replace $\cos kx$ by $e^{ikx}$ and retain only the real part of our answer. Finally, consider a contour integral that passes along the real axis but avoids the poles at $z=a$ and $z=-a$ by passing them with small clockwise arcs, closing the contour with a large arc in the upper half-plane. This contour encloses no singularities and therefore defines an integral of value zero. We therefore have
$$2I + \text{Re}\left[\int_{\text{arc at }z=-a}\frac{e^{ikz}\,dz}{z^2-a^2} + \int_{\text{arc at }z=a}\frac{e^{ikz}\,dz}{z^2-a^2}\right] = 0\,.$$
Remembering that the arcs around the poles are clockwise half-circles,
$$\int_{\text{arc at }z=-a}\frac{e^{ikz}\,dz}{z^2-a^2} = -i\pi\,(\text{residue at }z=-a) = -\frac{i\pi\,e^{-ika}}{-2a}\,,$$
$$\int_{\text{arc at }z=a}\frac{e^{ikz}\,dz}{z^2-a^2} = -i\pi\,(\text{residue at }z=a) = -\frac{i\pi\,e^{ika}}{2a}\,.$$
We get
$$2I = \text{Re}\left[\frac{i\pi}{2a}\left(e^{ika}-e^{-ika}\right)\right] = \text{Re}\left[\frac{i\pi}{2a}\,(2i\sin ka)\right],\qquad\text{or}\qquad I = -\frac{\pi}{2a}\sin(ka)\,.$$

(b) We relate this principal-value integral to a contour integral by the same steps that were taken for part (a), except that we need the imaginary part of $e^{ikx}$. We have
$$2I + \text{Im}\left[\int_{\text{arc at }z=-a}\frac{z\,e^{ikz}\,dz}{z^2-a^2} + \int_{\text{arc at }z=a}\frac{z\,e^{ikz}\,dz}{z^2-a^2}\right] = 0\,.$$
Remembering that the arcs around the poles are clockwise half-circles,
$$\int_{\text{arc at }z=-a}\frac{z\,e^{ikz}\,dz}{z^2-a^2} = -i\pi\,(\text{residue at }z=-a) = -i\pi\,\frac{(-a)e^{-ika}}{-2a} = -\frac{i\pi\,e^{-ika}}{2}\,,$$
$$\int_{\text{arc at }z=a}\frac{z\,e^{ikz}\,dz}{z^2-a^2} = -i\pi\,(\text{residue at }z=a) = -i\pi\,\frac{a\,e^{ika}}{2a} = -\frac{i\pi\,e^{ika}}{2}\,.$$
We get
$$2I = \text{Im}\left[\frac{i\pi}{2}\left(e^{ika}+e^{-ika}\right)\right] = \text{Im}\left[\frac{i\pi}{2}\,(2\cos ka)\right],\qquad\text{or}\qquad I = \frac{\pi}{2}\cos ka\,.$$

(c) Relate this principal-value integral $I$ to a contour integral that passes along the entire real axis except for a small clockwise semicircle about $z=1$, with the contour closed by a large arc in the upper half-plane. This contour encloses no singularities and the contour integral is zero. The contour includes a contribution from $z=-\infty$ to $z=0$ that is not in the integral $I$, in addition to the small clockwise arc. We therefore have
$$\oint\frac{z^{-s}\,dz}{z-1} = I + \int_{-\infty}^0\frac{z^{-s}\,dz}{z-1} + \int_{\text{arc at }z=1}\frac{z^{-s}\,dz}{z-1} = 0\,.$$
The integral over negative $z$ can be rearranged and then identified as proportional to that discussed in Example 17.6.6, which was there shown to have the value $\pi/\sin s\pi$:
$$\int_{-\infty}^0\frac{z^{-s}\,dz}{z-1} = -e^{-i\pi s}\int_0^\infty\frac{r^{-s}\,dr}{r+1} = -e^{-i\pi s}\left(\frac{\pi}{\sin\pi s}\right) = -(\cos\pi s-i\sin\pi s)\,\frac{\pi}{\sin\pi s} = -\pi\cot\pi s + i\pi\,.$$
Remembering that the arc around the pole is a clockwise half-circle,
$$\int_{\text{arc at }z=1}\frac{z^{-s}\,dz}{z-1} = -i\pi\,(1^{-s}) = -i\pi\,.$$
Inserting these results into the expression involving $I$, we find $I = \pi\cot\pi s$.

(d) Relate this principal-value integral $I$ to the imaginary part of a contour integral in which $\sin z$ has been replaced by $e^{iz}$. The contour passes along the real line except for a small clockwise arc at $z=a/3$, and is closed by a large arc in the upper half-plane. In contrast to the situation in parts (a) through (c), the contour encloses a first-order pole at $z=ia$. Therefore,
$$I + \text{Im}\left[\int_{\text{arc at }z=a/3}\frac{e^{iz}\,dz}{(3z-a)(z^2+a^2)}\right] = \text{Im}\left[2\pi i\,(\text{residue at }z=ia)\right].$$
Remembering that the arc around the pole is a clockwise half-circle,
$$\int_{\text{arc at }z=a/3}\frac{e^{iz}\,dz}{(3z-a)(z^2+a^2)} = -\pi i\,\frac{e^{ia/3}}{3[(a/3)^2+a^2]} = -\frac{3i\pi\,e^{ia/3}}{10a^2}\,,$$
$$(\text{residue at }z=ia) = \frac{e^{-a}}{(3ia-a)(2ia)} = \frac{(i-3)e^{-a}}{20a^2}\,.$$
Inserting these results, taking the imaginary parts, and solving for $I$,
$$I = \frac{3\pi}{10a^2}\left[\cos(a/3)-e^{-a}\right].$$
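The residue bookkeeping in part (d) can be verified with complex arithmetic (a Python addition, not from the manual), here for the sample value $a=1$:

```python
import cmath, math

a = 1.0
# residue of e^{iz}/((3z-a)(z^2+a^2)) at z = i*a, i.e. e^{iz}/((3z-a)(z+i*a)) there
res_ia = cmath.exp(1j*(1j*a)) / ((3j*a - a) * (2j*a))
assert abs(res_ia - (1j - 3)*math.exp(-a)/(20*a*a)) < 1e-12

arc = -1j*math.pi * cmath.exp(1j*a/3) / (3*((a/3)**2 + a*a))   # clockwise half-circle at z = a/3
I = (2j*math.pi*res_ia).imag - arc.imag                        # I + Im(arc) = Im(2*pi*i*res)
assert abs(I - (3*math.pi/(10*a*a))*(math.cos(a/3) - math.exp(-a))) < 1e-12
```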

17.8.2. Write symbolic code that provides a numerical verification of the result of part (c) of Exercise 17.8.1. Your code should calculate directly
$$\int_0^{1-\varepsilon}\frac{x^{-s}\,dx}{x-1} + \int_{1+\varepsilon}^\infty\frac{x^{-s}\,dx}{x-1}\,,$$
and you should run it for a value of $\varepsilon$ sufficiently small that the integral sum approaches a limit. Your result should then be compared numerically for several values of $s$ with the principal values obtained when you carried out part (c).

Solution: In maple, code for a typical case is
> s:=0.7: eps:=0.0001:
> int(x^(-s)/(x-1), x = 0 .. 1-eps)
>    + int(x^(-s)/(x-1), x = 1+eps .. infinity);
−2.282360672
In mathematica, typical code is
s = 0.7;  eps = 0.00001;
Integrate[x^(-s)/(x - 1), {x, 0, 1 - eps}] +
  Integrate[x^(-s)/(x - 1), {x, 1 + eps, Infinity}]
−2.28249 − 2.44914 × 10^(−11) i
An accurate value of this principal-value integral is $\pi\cot\pi s = -2.282500671$.

17.8.3. Use contour integral methods to evaluate
$$I = \int_0^\infty\frac{x\,dx}{\sinh x}\,.$$
Use a contour similar to that illustrated in Fig. 17.14, but modify it to allow for the fact that there is a pole at $z=i\pi$. Explain why there is not a pole at $z=0$.

Solution: Move the upper horizontal line of the contour to $y=\pi/2$; there are then no singularities on or within the contour. The part of the contour along the real axis contributes $2I$ to the integral. On the line at $y=\pi/2$, note that
$$\sinh(x+i\pi/2) = \sinh x\cosh(i\pi/2) + \cosh x\sinh(i\pi/2) = i\cosh x\,,$$
and that portion of the integral can be written
$$\int_\infty^{-\infty}\frac{x+i\pi/2}{i\cosh x}\,dx = -\left(\frac{\pi}{2}\right)\pi\,.$$
In obtaining this result we have noted that $x/\cosh x$ is an odd function whose integral vanishes and that the integral of $1/\cosh x$ was found in Example 17.6.5 to have the value $\pi$. The vertical connections at $x=\pm\infty$ do not contribute to the contour integral, so we have
$$2I - \frac{\pi^2}{2} = 0\,,\qquad\text{and}\qquad I = \frac{\pi^2}{4}\,.$$
There is not a pole at $z=0$ because $\lim_{z\to0}(z/\sinh z) = 1$.
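A direct numerical check of $I=\pi^2/4$ (a Python addition, not from the manual); the limiting value $1$ is supplied at $x=0$:

```python
import math

# Check: integral 0..inf of x/sinh(x) dx should equal pi^2/4.
def f(x):
    return x / math.sinh(x) if x != 0 else 1.0   # x/sinh(x) -> 1 as x -> 0

a, b, n = 0.0, 60.0, 120000                      # integrand ~ 2x e^{-x}; tail beyond 60 negligible
h = (b - a) / n
tot = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
I = tot * h / 3                                  # composite Simpson's rule
assert abs(I - math.pi**2/4) < 1e-10
```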

17.8.4. The dispersion relations can be expressed as integrals on the range $(0,\infty)$ if $u(x)$ is an even function and $v(x)$ is odd. Assuming that $u$ and $v$ have these properties, show that Eqs. (17.63) can be expressed as integrals on the range $(0,\infty)$ and brought to the following forms:
$$u(x_0) = \frac{1}{\pi}\int_0^\infty\frac{2x\,v(x)}{x^2-x_0^2}\,dx\,,\qquad v(x_0) = -\frac{1}{\pi}\int_0^\infty\frac{2x_0\,u(x)}{x^2-x_0^2}\,dx\,.$$
These equations are known as the Kronig-Kramers relations.

Solution: To obtain the first K-K relation, note that
$$\frac{2x}{x^2-x_0^2} = \frac{1}{x-x_0} + \frac{1}{x+x_0}$$
and that
$$\int_0^\infty\frac{v(x)\,dx}{x+x_0} = -\int_{-\infty}^0\frac{v(-x)\,dx}{x-x_0}\,.$$
Because $v(x)$ is assumed to be odd, this substitution and rearrangement converts the first K-K relation into
$$u(x_0) = \frac{1}{\pi}\left[\int_0^\infty\frac{v(x)\,dx}{x-x_0} + \int_{-\infty}^0\frac{v(x)\,dx}{x-x_0}\right].$$
Only the integral for the partial range including $x_0$ has a principal-value singularity; when they are combined they yield the first of Eqs. (17.63). The second K-K relation is obtained starting from
$$\frac{2x_0}{x^2-x_0^2} = \frac{1}{x-x_0} - \frac{1}{x+x_0}\qquad\text{and}\qquad -\int_0^\infty\frac{u(x)\,dx}{x+x_0} = \int_{-\infty}^0\frac{u(-x)\,dx}{x-x_0}\,.$$
Because $u(x)$ is assumed to be even, the second K-K relation becomes
$$v(x_0) = -\frac{1}{\pi}\left[\int_0^\infty\frac{u(x)\,dx}{x-x_0} + \int_{-\infty}^0\frac{u(x)\,dx}{x-x_0}\right].$$
This formula is equivalent to the second of Eqs. (17.63).

17.9 INVERSE LAPLACE TRANSFORM

Exercises

17.9.1.

Show that Jordan’s lemma, Eq. (17.36), can be identiﬁed as applicable to the large arc to the left that is needed to close the contour for the Bromwich integral.

Solution: In the original statement of Jordan's lemma, replace $y$ by $-x$ and $x$ by $y$, causing the arc on which the integral vanishes to be in the left half-plane (i.e., a curve toward large negative $x$).

17.9.2. Use the Bromwich integral to find the inverse Laplace transforms of the following functions:
$$\text{(a)}\ \frac{1}{s(s+1)}\qquad \text{(b)}\ \frac{1}{s^4-1}\qquad \text{(c)}\ \frac{s}{s^4-1}$$
$$\text{(d)}\ \frac{1}{s^2-3s+2}\qquad \text{(e)}\ \frac{s}{(s^2+4)^2}\qquad \text{(f)}\ \frac{s^2}{(s^2-1)(s^2-4)}$$
$$\text{(g)}\ \frac{s^5}{s^6+2^6}\qquad \text{(h)}\ \frac{s}{(s+1)(s^2+4)}\qquad \text{(i)}\ \frac{(s-1)^2}{s(s+1)^2}$$

Solution: The functions of this exercise have only poles as singularities, so the inverse transform will be the sum of the residues of $e^{tz}F(z)$.

(a) $F(z)$ has first-order poles at $z=0$ and $z=-1$; the respective residues of $e^{tz}F(z)$ are $e^0/1$ and $e^{-t}/(-1)$, so $f(t) = 1-e^{-t}$.

(b) $F(z)$ has first-order poles at $z_0=1$, $i$, $-1$, and $-i$; the residue of $e^{tz}F(z)$ at $z=z_0$ is (applying l'Hôpital's rule)
$$\lim_{z\to z_0}\frac{(z-z_0)e^{tz}}{z^4-1} = \frac{e^{tz_0}}{4z_0^3}\,.$$
Evaluating these residues:
$$z_0=1:\ \frac{e^t}{4}\,;\qquad z_0=-1:\ \frac{e^{-t}}{-4}\,;\qquad z_0=i:\ \frac{e^{it}}{4i^3}\,;\qquad z_0=-i:\ \frac{e^{-it}}{4(-i)^3}\,.$$
Combining the residues: $f(t) = \frac12(\sinh t-\sin t)$.

(c) Here $F(z)$ has the singularities already found in part (b), with residues
$$\lim_{z\to z_0}\frac{(z-z_0)z\,e^{tz}}{z^4-1} = \frac{z_0e^{tz_0}}{4z_0^3} = \frac{e^{tz_0}}{4z_0^2}\,.$$
Evaluating these residues for the $z_0$ values found in part (b),
$$z_0=1:\ \frac{e^t}{4}\,;\qquad z_0=-1:\ \frac{e^{-t}}{4}\,;\qquad z_0=i:\ \frac{e^{it}}{4i^2}\,;\qquad z_0=-i:\ \frac{e^{-it}}{4i^2}\,.$$
Combining the residues: $f(t) = \frac12(\cosh t-\cos t)$.

(d) Here $F(z) = 1/(z-2)(z-1)$, so $F$ has first-order poles at $z_0=2$ and $z_0=1$. Evaluating the residues of $e^{tz}F(z)$, we get
$$z_0=2:\ \frac{e^{2t}}{1}\,;\qquad z_0=1:\ \frac{e^t}{-1}\,.$$
Combining the residues: $f(t) = e^{2t}-e^t$.

(e) We note that $(z^2+4)^2 = (z+2i)^2(z-2i)^2$, so $F(z)$ has second-order poles at $z_0=2i$ and $z_0=-2i$. The inverse transform will be the sum of the residues of $e^{tz}z/(z^2+4)^2$ at these poles. At $z=2i$, the residue is $\dfrac{ite^{2it}}{-8}$; at $z=-2i$, the residue is $\dfrac{ite^{-2it}}{8}$. Adding these and recognizing the result as a trigonometric function, we reach $f(t) = (t\sin 2t)/4$.

(f) Here $F(z)$ has first-order poles at $z=1$, $z=-1$, $z=2$, and $z=-2$. Evaluating the residues of $\dfrac{z^2e^{tz}}{(z-1)(z+1)(z-2)(z+2)}$, we get
$$z=1:\ \frac{e^t}{2(-1)3}\,;\qquad z=-1:\ \frac{e^{-t}}{(-2)(-3)1}\,;\qquad z=2:\ \frac{4e^{2t}}{1(3)4}\,;\qquad z=-2:\ \frac{4e^{-2t}}{(-3)(-1)(-4)}\,.$$
Combining the residues: $f(t) = \frac13(2\sinh 2t-\sinh t)$.

(g) Here $F(z)$ has first-order poles at $z_0=2i$, $z_0=-2i$, $z_0=\sqrt3+i$, $z_0=\sqrt3-i$, $z_0=-\sqrt3+i$, and $z_0=-\sqrt3-i$. The residue of $e^{tz}F(z)$ at $z=z_0$ is (using l'Hôpital's rule)
$$\lim_{z\to z_0}\frac{z^5(z-z_0)e^{tz}}{z^6+2^6} = \frac{z_0^5e^{tz_0}}{6z_0^5} = \frac{e^{tz_0}}{6}\,.$$
Evaluating these residues,
$$z_0=2i:\ \frac{e^{2it}}{6}\,;\qquad z_0=-2i:\ \frac{e^{-2it}}{6}\,;\qquad z_0=\sqrt3\pm i:\ \frac{e^{(\sqrt3\pm i)t}}{6}\,;\qquad z_0=-\sqrt3\pm i:\ \frac{e^{(-\sqrt3\pm i)t}}{6}\,.$$
Combining the residues: $f(t) = \frac13\cos 2t + \frac23\cosh(\sqrt3\,t)\cos t$.

(h) Writing $(s+1)(s^2+4) = (s+1)(s+2i)(s-2i)$, we see that $F(z)$ has first-order poles at $z=-1$, $z=2i$, and $z=-2i$.

The residue of $e^{tz}F(z)$ at $z=-1$ is $-\dfrac{e^{-t}}{5}$. The residue at $z=2i$ is $\dfrac{e^{2it}}{2(1+2i)}$. The residue at $z=-2i$ is $\dfrac{e^{-2it}}{2(1-2i)}$.

Combining these, we get
$$f(t) = -\frac{e^{-t}}{5} + \frac15\left[\cos 2t + 2\sin 2t\right].$$

(i) Here $F(z)$ has a first-order pole at $z=0$ and a second-order pole at $z=-1$. The residues of $e^{tz}F(z)$ are
$$z=0:\ \frac{(-1)^2}{1^2}\,;\qquad z=-1:\ \left.\frac{d}{dz}\,\frac{(z-1)^2e^{tz}}{z}\right|_{z=-1} = -4te^{-t}\,.$$
Therefore, $f(t) = 1-4t\,e^{-t}$.

17.9.3.
1 (2 sinh 2t − sinh t) . 3 √ (g) Here √ F (z) has ﬁrst-order poles at z0 = √ √ 2i, z0 = −2i, z0 = tz 3 + i, z0 = 3 − i, z0 = − 3 + i, and z0 = − 3 − i. The residue of e F (z) at z = z0 is (using l’Hˆopital’s rule) Combining the residues: f (t) =

lim

z→z0

z 5 (z − z0 )etz z 5 etz etz0 = = . 6 6 5 z +2 6z 6

Evaluating these residues, z0 = 2i :

e2it ; 6

z0 = −2i :

√ z0 = 3 ± i :

e−2i ; 6

√ z0 = − 3 ± i :

e(−

e(

3±i)t

6

;

3±i)t

6

.

√ 2 cosh 3 t cos(2t/3) . 3 (h) Writing (s + 1)(s2 + 4) = ((s + 1)(s + 2i)(s − 2i), we see that F (z) has ﬁrst-order poles at z = −1, z = 2i, and z = −2i. Combining the residues: f (t) =

The residue of etz F (z) at z = −1 is − The residue at z = 2i is

e−t . 5

e2it . 2(1 + 2i)

The residue at z = −2i is

e−2it . 2(1 − 2i)

Combining these, we get f (t) = −

] e−t 1[ + cos 2t + 2 sin 2t . 5 5

(i) Here F (z) has a ﬁrst-order pole at z = 0 and a second-order pole at z = −1. The residues of etz F (z) are (−1)2 d (z − 1)2 etz z=0: ; z = −1 : = −4te−t 12 dz z z=−1 Therefore, f (t) = 1 − 4t e−t . 17.9.3.

Use the Bromwich integral to find the inverse Laplace transform of
$$F(s) = \frac{1}{(s^2+a^2)^{1/2}}.$$
Proceed as follows: Close the contour for the Bromwich integral by a large arc to the left, and note that the contour then encloses two branch points (at $z=\pm ia$) which can be connected by a branch cut. To evaluate the contour integral, shrink the contour so that it goes up one side of the branch cut and down the other, and reduce the contour integral to an ordinary integral from $-a$ to $a$. Use your symbolic computing system to evaluate this integral, and organize the result in a way that solves this exercise.

Solution: The proper branch for $F(s)$ is that which is positive for real $s$. Then, taking $s=iy$, the integral from $-a$ to $a$ on the right-hand side of the branch cut (with $a>0$) is
$$\frac{1}{2\pi i}\int_{-a}^{a} \frac{e^{iyt}}{(a^2-y^2)^{1/2}}\,(i\,dy).$$
The contour then connects by a small loop around $z=ia$ to a straight-line segment along the left-hand side of the branch cut from $a$ to $-a$ and is then closed by a small loop around $z=-ia$. The small loops do not contribute to the integral. A minus sign arising because the left-hand straight line is traversed in the downward direction is offset by the fact that the circuit of the branch point changes $(a^2-y^2)^{1/2}$ into $-(a^2-y^2)^{1/2}$, so the entire contour integral is twice that of the right-hand segment, and
$$f(t) = \frac{1}{\pi}\int_{-a}^{a} \frac{e^{iyt}}{(a^2-y^2)^{1/2}}\,dy.$$
Evaluating this integral using maple, explicitly assuming $a$ to be positive,

> assume(a > 0);
> 1/Pi*int(exp(I*y*t)/sqrt(a^2 - y^2), y = -a .. a);

BesselJ(0, ta)

In mathematica,

1/Pi*Integrate[E^(I*y*t)/Sqrt[a^2 - y^2], {y, -a, a}]

ConditionalExpression[BesselJ[0, at], a > 0]

mathematica decided to restrict $a$ to positive values when evaluating the integral. Either way, $f(t) = J_0(at)$.
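As a numerical cross-check (a sketch, not part of the original solution): the substitution $y = a\sin\theta$ removes the endpoint singularity and turns the result into the standard integral representation $(1/\pi)\int_{-\pi/2}^{\pi/2}\cos(at\sin\theta)\,d\theta = J_0(at)$, which is easy to verify in Python with SciPy. The values of a and t below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def f_bromwich(a, t):
    # f(t) = (1/pi) * Integral_{-a}^{a} e^{iyt}/sqrt(a^2 - y^2) dy.
    # With y = a*sin(theta) this becomes
    # (1/pi) * Integral_{-pi/2}^{pi/2} cos(a*t*sin(theta)) dtheta
    # (the imaginary part cancels by symmetry), which is smooth.
    val, _ = quad(lambda th: np.cos(a * t * np.sin(th)), -np.pi / 2, np.pi / 2)
    return val / np.pi

a, t = 1.3, 0.7              # illustrative test values (assumed, not from the text)
print(f_bromwich(a, t))
print(j0(a * t))             # should agree to machine precision
```

Both printed values agree, supporting the identification $f(t) = J_0(at)$.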

Chapter 18

PROBABILITY AND STATISTICS

18.1 DEFINITIONS AND SIMPLE PROPERTIES

Exercises

18.1.1. For the two-coin tosses of the present section,
(a) Explain why the set {ht, (at least one head), tt} is not a valid sample set.
(b) Find the probability of each member of the following sample set: {hh, (at least one tail)}.

Solution: (a) The members of the set are not mutually exclusive. (b) hh, 1/4; (at least one tail), 3/4.

18.1.2.

A standard deck of playing cards has four suits (spades, hearts, diamonds, clubs), each with 13 cards (2 to 10, jack, queen, king, ace). If two cards are drawn "at random" (i.e., from a well-shuffled deck), what is the probability that they are of the same suit? Hint. Don't forget that when the second card is drawn, the deck is then missing the first card.

Solution: The first draw can be any card (probability = 1). There are 12 of the remaining 51 cards that are of the same suit as the first card; the probability of drawing one of these is 12/51.

18.1.3.

For a shuffled standard deck of playing cards, compute the probability that
(a) A single card that is drawn is a spade,
(b) A single card that is drawn is an ace,
(c) A single card that is drawn is a spade or an ace,
(d) When three cards are drawn, exactly two are of the same suit,
(e) When three cards are drawn, none is an ace.

Solution:
(a) 13/52.
(b) 4/52.
(c) There are 13 spades and 3 other aces: the probability is 16/52.
(d) If s and s′ denote different suits, the three mutually exclusive success events are ss s′, s s′s, and s′ss. We can add their probabilities: P(sss′) = (1)(12/51)(39/50), P(ss′s) = (1)(39/51)(12/50), P(s′ss) = (1)(39/51)(12/50). These are all equal, so the overall probability is (3 · 12 · 39)/(51 · 50).
(e) First card not an ace: 48/52; second card not an ace: 47/51; third also not an ace: 46/50. Overall probability: (48 · 47 · 46)/(52 · 51 · 50).

18.1.4. A card is drawn from a standard deck of playing cards, its identity is noted, and it is returned to the deck, which is then reshuffled and a second drawing is carried out. Compute the probability that
(a) Both cards are spades,
(b) Both cards are of the same suit,
(c) The two cards are of different suits,
(d) The same card was drawn twice.
(e) Can you obtain the result for part (b) or for part (c) from that for part (a)? If so, explain why.

Solution:
(a) (13/52)² = 1/16.
(b) (1)(13/52) = 1/4.
(c) (1)(39/52) = 3/4.
(d) (1)(1/52).
(e) P(b) is four times P(a) because there are four mutually exclusive suits. Because (b) and (c) constitute a division of the entire sample space into two mutually exclusive parts, P(c) = 1 − P(b).

18.1.5. A box contains a white balls and b black balls; a second box contains c white balls and d black balls. A ball is transferred at random (without observing its color) from the first box to the second box. If a ball is then withdrawn from the second box, what is the probability that it will be white?

Solution: Letting P₁ refer to probabilities for the first box and P₂ to those for the second box, the overall probability that the draw from the second box will be white is the sum of the probabilities of the two mutually exclusive events (white transferred, white drawn) and (black transferred, white drawn). Using conditional-probability notation, this sum is
$$P = P_1(w)P_2(w,w) + P_1(b)P_2(b,w).$$

The first-box probabilities are P₁(w) = a/(a + b) and P₁(b) = b/(a + b). The second-box conditional probabilities are P₂(w, w) = (c + 1)/(c + d + 1) and P₂(b, w) = c/(c + d + 1). Thus,
$$P = \left(\frac{a}{a+b}\right)\left(\frac{c+1}{c+d+1}\right) + \left(\frac{b}{a+b}\right)\left(\frac{c}{c+d+1}\right) = \frac{a + ac + bc}{(a+b)(c+d+1)}.$$

18.1.6. Two boxes are of identical appearance. The first contains 2 white balls and 3 red balls; the second contains 6 white balls and 4 red balls. One box is selected at random and a ball is withdrawn from it. What is the probability that this ball is white?

Solution: Each box is selected with probability 1/2. The conditional probability of drawing a white ball from Box 1 is 2/5; that of drawing a white ball from Box 2 is 6/10 = 3/5. The overall probability of drawing a white ball is
$$\left(\frac{1}{2}\right)\left(\frac{2}{5}\right) + \left(\frac{1}{2}\right)\left(\frac{3}{5}\right) = \frac{1}{2}.$$

18.1.7. A for-profit school prepares high-school students for a college admission test. At the end of the preparatory process the students take a practice test and then go on to take the college admission test. The school reports that its students have 80% success rates on both the practice test and the college admission test, and that 60% of its students who fail the practice test also fail the college admission test. What fraction of the students who pass the practice test fail the college admission test?

Solution: First find the fraction of the total population that fails both the practice and the admission tests; it is 0.20 × 0.60 = 0.12. The fraction of the total population failing the admission test is 1 − 0.80 = 0.20, consisting of 0.12 who also fail the practice test and 0.20 − 0.12 = 0.08 who pass the practice test. Because 0.08 of the 0.80 fraction that pass the practice test fail the admission test, the probability of this occurrence is 0.08/0.80 = 10%.

18.1.8.

(a) What is the probability that two people have different birthdays (assume a 365-day year and exclude February 29)?
(b) What is the probability that three people all have different birthdays?
(c) What is the smallest number of people for which the probability that at least two have the same birthday exceeds 0.5? Note. Consider writing symbolic code to make this computation.

Solution:
(a) 364/365.
(b) (364/365)(363/365).
(c) The probability that n people (n > 1) all have different birthdays is
$$P_n = \frac{364}{365}\cdot\frac{363}{365}\cdots\frac{364-(n-2)}{365} = \frac{(364)!}{(365-n)!\,(365)^{n-1}},$$
and the probability that at least two people have the same birthday is Q_n = 1 − P_n. Coding this quantity symbolically, we can search for the smallest n for which Q_n exceeds 1/2. In maple,

> Q := 1. - 364!/(365-n)!/365^(n-1):
> n:=22: Q; n:=23: Q;

0.475695, 0.507297

In mathematica,

Q[n_] := 1. - 364!/(365 - n)!/365^(n - 1)

Q[22]
0.475695

Q[23]
0.507297
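The same search can be sketched in plain Python (an illustrative version, not part of the book's Maple/Mathematica code):

```python
def q_shared_birthday(n):
    """Probability that among n people at least two share a birthday."""
    p_distinct = 1.0
    for i in range(1, n):              # multiply (365-1)/365, (365-2)/365, ...
        p_distinct *= (365 - i) / 365
    return 1.0 - p_distinct

# Search for the smallest n with Q_n > 1/2.
n = 2
while q_shared_birthday(n) <= 0.5:
    n += 1
print(n, round(q_shared_birthday(n), 6))  # -> 23 0.507297
```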

The probability of having the same birthday exceeds 1/2 when there are as few as 23 people!

18.1.9.

You are dealt five cards from a standard 52-card deck of playing cards. What is the probability that
(a) Four are aces?
(b) You have any four-of-a-kind?
(c) All five are the same suit (a flush)?
(d) All five are spades?

Solution:
(a) There are five equally probable mutually exclusive ways to get four aces in five cards. Taking five times the probability that the first four cards are aces:
$$5\cdot\frac{4}{52}\cdot\frac{3}{51}\cdot\frac{2}{50}\cdot\frac{1}{49} = \frac{5!\,48!}{52!} = 0.0000185.$$
(b) The other 12 four-of-a-kind possibilities are mutually exclusive relative to four aces and of equal probability, so we multiply the result of part (a) by 13, reaching 0.0000185 · 13 = 0.000240.
(c) We need the second through fifth cards to be of the same suit as the first card, so the probability is
$$\frac{12}{51}\cdot\frac{11}{50}\cdot\frac{10}{49}\cdot\frac{9}{48} = 0.001981.$$
(d) If all five cards are to be spades, the net probability is 1/4 that of part (c): 0.001981/4 = 0.000495.

18.1.10. How many different ways can a committee consisting of a designated chair and four other members be selected from among 12 candidates?

Solution: The chair can be selected in 12 ways. The remainder of the committee consists of four people chosen from 11, which can occur in C(11, 4) ways. Thus, the total number of distinct committees with a designated chair is
$$12\binom{11}{4} = 3960.$$
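A one-line check in Python (illustrative, not from the text); the second line verifies the equivalent count obtained by first choosing the five members and then designating the chair:

```python
from math import comb

# 12 choices of chair, then 4 ordinary members chosen from the remaining 11.
n_committees = 12 * comb(11, 4)
print(n_committees)          # -> 3960

# Equivalent: choose 5 members from 12, then pick the chair among them.
print(5 * comb(12, 5))       # -> 3960
```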

18.1.11. Cards are drawn at random from a standard 52-card deck. What is the probability that exactly 8 cards are drawn before obtaining the first ace?

Solution: The eight non-aces are drawn with probability
$$\frac{48}{52}\cdot\frac{47}{51}\cdots\frac{41}{45}.$$
An ace is then drawn, with probability 4/44. The overall probability is 0.045585.
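This product is easy to check exactly with Python's Fraction type (a sketch, not part of the text):

```python
from fractions import Fraction

# Eight non-aces in a row: 48/52 * 47/51 * ... * 41/45, then an ace: 4/44.
p = Fraction(1)
for k in range(8):
    p *= Fraction(48 - k, 52 - k)
p *= Fraction(4, 44)
print(float(p))  # approximately 0.045585
```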

18.1.12. What is the probability that if a fair coin is tossed 100 times, it will land "heads" exactly 50 times? Hint. Use symbolic computing or Stirling's formula.

Solution: We have 50 heads, each at probability 1/2, and 50 tails, each at probability 1/2. They can occur in C(100, 50) different orders. The overall probability is therefore
$$\frac{1}{2^{100}}\binom{100}{50} = 0.0796.$$
Symbolic computation of this expression can be carried out using one of

> 1./2^100 * binomial(100,50);

1./2^100 * Binomial[100,50]

18.1.13. A computer byte consists of n consecutive bits (a bit has two values, 0 and 1). Most modern computers have 8-bit bytes. There were once computers with 6-bit bytes.
(a) How many different 6-bit bytes are there?
(b) How many 6-bit bytes contain (anywhere) the sequence 1001?

Solution: (a) Each bit can have two values: 2⁶ = 64. (b) The three possibilities are xy1001, x1001y, and 1001xy. Each of these can occur with four choices for x and y: 12 bytes.

18.1.14.

By explicit enumeration, determine the number of ways 2 identical particles can be put into 4 states using Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein statistics. Compare your results with the general formulas given earlier in this section of the text.

Solution: In the following tables, the columns for Maxwell-Boltzmann statistics list (by number) the distinguishable particles in the indicated states (A, B, C, D); the columns for Fermi-Dirac and Bose-Einstein statistics list the numbers of indistinguishable particles in the indicated states.

Maxwell-Boltzmann:

  A    B    C    D
  1,2  0    0    0
  0    1,2  0    0
  0    0    1,2  0
  0    0    0    1,2
  1    2    0    0
  2    1    0    0
  1    0    2    0
  2    0    1    0
  1    0    0    2
  2    0    0    1
  0    1    2    0
  0    2    1    0
  0    1    0    2
  0    2    0    1
  0    0    1    2
  0    0    2    1

Fermi-Dirac:

  A  B  C  D
  1  1  0  0
  1  0  1  0
  1  0  0  1
  0  1  1  0
  0  1  0  1
  0  0  1  1

Bose-Einstein:

  A  B  C  D
  2  0  0  0
  1  1  0  0
  1  0  1  0
  1  0  0  1
  0  2  0  0
  0  1  1  0
  0  1  0  1
  0  0  2  0
  0  0  1  1
  0  0  0  2

Here we have N = 2 particles in m = 4 states. We see that there are 16 Maxwell-Boltzmann states; the general formula for the number of these states is m^N = 4² = 16, so our count is correct. There are 6 Fermi-Dirac states; the general formula is C(m, N) = C(4, 2) = 6, the number we found. There are 10 Bose-Einstein states; the general formula is C(N + m − 1, N) = C(5, 2) = 10, also in agreement with our table.

18.1.15. Each of n balls is placed at random in any one of n boxes (without regard for whether there is already another ball there). Show that in the limit of large n the probability that each box will contain one ball is (2πn)^{1/2} e^{−n}.

Solution: The number of ways n balls can be placed with one in each of n boxes is n!. The number of possible placements with no occupancy limitations is n^n. The probability of each box having one ball is therefore n!/n^n. In the limit of large n this ratio can be computed using Stirling's formula. Referring to Eq. (9.27), we have
$$\frac{n!}{n^n} = \frac{\Gamma(n+1)}{n^n} \approx \frac{\sqrt{2\pi}\,n^{n+1/2}e^{-n}}{n^n} = \sqrt{2\pi n}\;e^{-n}.$$

18.1.16.

What is the probability of getting 20 points with 6 dice?

Solution: This is a problem best approached with symbolic computing. Let's write code that runs through all possibilities for the points on six dice and accumulates the number of times each total occurs.
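Before the Maple and Mathematica versions below, the same enumeration can be sketched in Python, where itertools replaces the six nested loops (illustrative, not from the text):

```python
from itertools import product
from collections import Counter

# Enumerate all 6^6 outcomes for six dice and tally the totals.
counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=6))

assert sum(counts.values()) == 6**6      # 46656 outcomes in all
print(counts[20], counts[20] / 6**6)     # 4221 outcomes total 20 points
```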

In maple,

> NN := Array(6 .. 36):                    # counters for the totals, initialized to zero
> for i1 from 1 to 6 do for i2 from 1 to 6 do
> for i3 from 1 to 6 do for i4 from 1 to 6 do
> for i5 from 1 to 6 do for i6 from 1 to 6 do
>   j := i1+i2+i3+i4+i5+i6:                # number of points
>   NN[j] := NN[j]+1:                      # increment counter
> end do end do end do end do end do end do:

Check the total number of possibilities (should be 6⁶ = 46656):

> add(NN[i], i = 6 .. 36);

46656

Get the number of times 20 occurs and compute the probability:

> NN[20], NN[20]/46656.;

4221, 0.0904707

In mathematica,

NN = Table[0, {i, 36}]                     (counters for the totals, initialized to zero)
Do[Do[Do[Do[Do[Do[
  j = i1+i2+i3+i4+i5+i6;                   (number of points)
  NN[[j]] = NN[[j]] + 1,                   (increment counter)
  {i6,1,6}],{i5,1,6}],{i4,1,6}],{i3,1,6}],{i2,1,6}],{i1,1,6}]

Check the total number of possibilities (should be 6⁶ = 46656):

Sum[NN[[i]], {i, 6, 36}]

46656

Get the number of times 20 occurs and compute the probability:

NN[[20]]
4221

NN[[20]]/46656.
0.0904707

18.1.17.

A red, a blue, a green, and a white ball are placed one each, at random, into boxes that are also colored red, blue, green, and white. What is the probability that no ball is in the box of the same color?

Solution: If the blue ball is placed in the red box, each of the three possible assignments of the red ball leads to a unique way of placing the remaining two balls in boxes not of their own colors. There are likewise three acceptable assignments when each of the other balls (green and white) is placed in the red box. Thus, the total number of assignments with no color match is 9. Because each box receives exactly one ball, the total number of possible assignments is 4! = 24, so the probability of no color match is 9/24 = 3/8.

18.1.18.

A box contains a white balls and b black balls. Balls are withdrawn until all the balls remaining in the box are the same color. What is the probability that this color is white?

Solution: Assuming that all the balls are lined up in the random order in which they are to be drawn, this problem is equivalent to asking for the probability that the last ball is white. That probability is a/(a + b).

18.1.19. Two dice were thrown 36 times. What is the probability that one or more of these throws produced a double-six?

Solution: The probability of not getting a double-six on a single throw is 35/36; the probability of no double-six in 36 consecutive throws is (35/36)³⁶, so the probability that a double-six occurs at least once is 1 − (35/36)³⁶ = 0.637.
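A quick numerical check in Python (illustrative):

```python
# Probability of at least one double-six in 36 throws of a pair of dice.
p_no_double_six = (35 / 36) ** 36
p_at_least_one = 1 - p_no_double_six
print(round(p_at_least_one, 3))  # -> 0.637
```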

18.2 RANDOM VARIABLES

Exercises

18.2.1. Find the mean, the variance, and the standard deviation for the sum obtained from throws of a pair of dice.

Solution: The numbers of ways nᵢ in which the various sums xᵢ can be formed are

  xi | 2  3  4  5  6  7  8  9  10  11  12
  ni | 1  2  3  4  5  6  5  4   3   2   1

The mean is ⟨X⟩ = Σ nᵢxᵢ / Σ nᵢ = 7.
The value of ⟨X²⟩ is Σ nᵢxᵢ² / Σ nᵢ = 329/6.
The variance is σ² = ⟨X²⟩ − ⟨X⟩² = 35/6.
The standard deviation is σ = √(σ²) = 2.415.

18.2.2.

The following table gives nᵢ, the number of times each value xᵢ of a random variable X occurred in a collection of data:

  xi | 0  1  2  3   4   5  6  7  8
  ni | 1  0  1  10  20  5  1  1  1

(a) Find the mean, the variance, and the standard deviation of X.
(b) Find the number of data points further from the mean than 2σ and compare with the predictions of Chebyshev's inequality.
(c) Repeat part (b) for points further from the mean than 3σ.

Solution:
(a) The mean is ⟨X⟩ = Σ nᵢxᵢ / Σ nᵢ = 3.95. The value of ⟨X²⟩ is Σ nᵢxᵢ² / Σ nᵢ = 17.2. The variance is σ² = ⟨X²⟩ − ⟨X⟩² = 1.5975, and the standard deviation is σ = √(σ²) = 1.2639.
(b) The region within 2σ of the mean is (1.422, 6.478); three of the 40 data points (total probability 0.075) lie outside this region. Chebyshev's inequality states that the points outside 2σ will not occur with probability greater than 1/2² = 0.25. The inequality is satisfied.
(c) The region within 3σ of the mean is (0.158, 7.742); two of the 40 data points (total probability 0.05) lie outside this region. The bound from Chebyshev's inequality is 1/3² = 0.11. The inequality is satisfied.

18.2.3.

Repeat the analysis of Example 18.2.3 for a box that originally contained 4 black balls and 2 red balls.

Solution: Following the terminology of the Example, we establish a sample space with the following four members, which have differing probabilities:
$$x_1=1,\ y_1=1,\ p_1=\frac{4}{6}\cdot\frac{3}{5}=\frac{6}{15};\qquad x_2=1,\ y_2=0,\ p_2=\frac{4}{6}\cdot\frac{2}{5}=\frac{4}{15};$$
$$x_3=0,\ y_3=1,\ p_3=\frac{2}{6}\cdot\frac{4}{5}=\frac{4}{15};\qquad x_4=0,\ y_4=0,\ p_4=\frac{2}{6}\cdot\frac{1}{5}=\frac{1}{15}.$$
We then compute
$$\langle X\rangle = \frac{6}{15}x_1 + \frac{4}{15}x_2 = \frac{2}{3},\qquad \langle X^2\rangle = \frac{6}{15}x_1^2 + \frac{4}{15}x_2^2 = \frac{2}{3},$$
$$\langle Y\rangle = \frac{6}{15}y_1 + \frac{4}{15}y_3 = \frac{2}{3},\qquad \langle Y^2\rangle = \frac{6}{15}y_1^2 + \frac{4}{15}y_3^2 = \frac{2}{3}.$$
From these data,
$$\sigma_x^2 = \sigma_y^2 = \frac{2}{3} - \frac{4}{9} = \frac{2}{9}.$$
For the covariance, we need
$$\langle XY\rangle = \frac{6}{15}x_1 y_1 = \frac{6}{15},$$
$$\mathrm{cov}(X,Y) = \langle XY\rangle - \langle X\rangle\langle Y\rangle = \frac{6}{15} - \frac{4}{9} = -\frac{2}{45},\qquad \mathrm{corr}(X,Y) = \frac{-2/45}{2/9} = -\frac{1}{5}.$$
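These values can be confirmed by brute-force enumeration of all equally likely orderings of the six balls (a Python sketch; the "B"/"R" labels are names chosen here, not from the text):

```python
from itertools import permutations

balls = ["B"] * 4 + ["R"] * 2              # 4 black, 2 red
samples = []
for order in permutations(range(6)):       # all 720 equally likely orderings
    x = 1 if balls[order[0]] == "B" else 0     # first draw black?
    y = 1 if balls[order[1]] == "B" else 0     # second draw black?
    samples.append((x, y))

n = len(samples)
ex = sum(x for x, _ in samples) / n
exy = sum(x * y for x, y in samples) / n
var_x = sum(x * x for x, _ in samples) / n - ex**2
cov = exy - ex * ex                        # <Y> = <X> by symmetry
corr = cov / var_x                         # sigma_x = sigma_y here
print(ex, cov, corr)   # 2/3, -2/45, -1/5
```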

18.2.4. Two cards are drawn from a standard 52-card deck of playing cards. The first card is not returned to the deck before drawing the second card. Let X be a random variable with value 1 if the first card is a spade and zero otherwise, let Y be a random variable with value 1 if the second card is a spade and zero otherwise, and let Z be a random variable with value 1 if the second card is red and zero otherwise.
(a) Calculate the mean values and variances of X, Y, and Z.
(b) Calculate the covariance and correlation of each pair of two variables (X, Y), (X, Z), and (Y, Z).
(c) Make comments that provide qualitative explanations for the results of part (b).

Solution:
(a) The mean values and variances are the same as if calculated for a single card draw: ⟨X⟩ = ⟨X²⟩ = 1/4, ⟨Y⟩ = ⟨Y²⟩ = 1/4, and ⟨Z⟩ = ⟨Z²⟩ = 1/2. From these data, σx² = σy² = 1/4 − 1/16 = 3/16 and σz² = 1/2 − 1/4 = 1/4.
(b) For the covariances,
$$\langle XY\rangle = \frac{1}{4}\cdot\frac{12}{51} = \frac{3}{51},\qquad \langle XZ\rangle = \frac{1}{4}\cdot\frac{26}{51} = \frac{13}{102},\qquad \langle YZ\rangle = 0.$$
From these data and the results of part (a),
$$\mathrm{cov}(X,Y) = \frac{3}{51}-\frac{1}{16},\qquad \mathrm{cov}(X,Z) = \frac{13}{102}-\frac{1}{8},\qquad \mathrm{cov}(Y,Z) = 0-\frac{1}{8} = -\frac{1}{8}.$$
These results lead to
$$\mathrm{corr}(X,Y) = \frac{\mathrm{cov}(X,Y)}{\sigma_x\sigma_y} = -0.020,\qquad \mathrm{corr}(X,Z) = \frac{\mathrm{cov}(X,Z)}{\sigma_x\sigma_z} = 0.011,\qquad \mathrm{corr}(Y,Z) = \frac{\mathrm{cov}(Y,Z)}{\sigma_y\sigma_z} = -0.577.$$
(c) The correlation corr(X, Y) is negative because the draw of a first spade decreases the probability of a second; the effect is small because only one of the 13 spades is unavailable for the second draw. The correlation corr(X, Z) is slightly positive because the draw of a first spade increases the proportion of red cards in the remainder of the deck. The large negative correlation corr(Y, Z) arises because it is not possible for the second card to be both red and a spade.

18.2.5.

Repeat Exercise 18.2.4 for a situation in which the first card drawn is returned to the deck and the deck is reshuffled before drawing the second card. Your answer should include the comments demanded in part (c).

Solution:
(a) The results are the same as for Exercise 18.2.4.
(b) Because Y is now independent of X, ⟨XY⟩ = ⟨X⟩⟨Y⟩ and therefore cov(X, Y) = corr(X, Y) = 0. Because Z is independent of X, we also have cov(X, Z) = corr(X, Z) = 0. The result for cov(Y, Z) is the same as for Exercise 18.2.4.
(c) The return of the first card to the deck makes Y and Z independent of X but has no effect on the obvious dependence between Y and Z.

18.2.6. For the continuous probability density
$$p(x) = \begin{cases} xe^{-x}, & x \ge 0,\\ 0, & x < 0,\end{cases}$$
(a) Make a plot of p(x) for a range of x at least as large as (−1, +5).
(b) Locate on your graph the most probable x value and verify that it corresponds to that found by differentiation of p(x).

Solution:
(a) [Plot of p(x): zero for x < 0, rising from zero at x = 0 to a maximum near x = 1, then decaying exponentially.]
(b) p′(x) = (1 − x)e⁻ˣ, which is zero at x = 1.

18.2.7.

For the probability density of Exercise 18.2.6,
(a) Verify that p(x) is normalized, i.e., that its integral over all x is unity.
(b) By evaluating appropriate integrals, find µ, σ², and the standard deviation σ.
(c) Obtain an integral representing the cumulative distribution function and use it to find the probabilities that
$$x > 1,\qquad |x-\mu| < \sigma,\qquad |x-\mu| < 2\sigma,\qquad |x-\mu| < 3\sigma.$$
(d) Verify that the results of part (c) are consistent with Chebyshev's inequality, Eq. (18.30).

Solution:
(a) $\displaystyle\int_0^\infty xe^{-x}\,dx = \Gamma(2) = 1! = 1.$
(b) We need the following integrals:
$$\langle X\rangle = \int_0^\infty x\,p(x)\,dx = \int_0^\infty x^2 e^{-x}\,dx = 2! = 2,$$
$$\langle X^2\rangle = \int_0^\infty x^2\,p(x)\,dx = \int_0^\infty x^3 e^{-x}\,dx = 3! = 6.$$
From the above, $\mu = \langle X\rangle = 2$; $\sigma^2 = \langle X^2\rangle - \langle X\rangle^2 = 6 - 2^2 = 2$, and $\sigma = \sqrt{2}$.
(c) Let $P(a,b) = \displaystyle\int_a^b xe^{-x}\,dx = \Big[-(1+x)e^{-x}\Big]_a^b.$
For x > 1, we have P(1, ∞) = 2e⁻¹ = 0.736.
For |x − µ| < σ, we have P(2 − √2, 2 + √2) = 0.7375.
For |x − µ| < 2σ and |x − µ| < 3σ, we must take note that µ − 2σ < 0, so for both these cases the lower limit of the integral for P(a, b) must be zero. Thus,
for |x − µ| < 2σ, we have P(0, 2 + 2√2) = 0.9534;
for |x − µ| < 3σ, we have P(0, 2 + 3√2) = 0.9859.
(d) Here the probability that |x − µ| > σ is 1 − 0.7375 = 0.2625; the probability that |x − µ| > 2σ is 1 − 0.9534 = 0.0466; the probability that |x − µ| > 3σ is 1 − 0.9859 = 0.0141. For each |x − µ| > kσ, these cumulative probabilities are smaller than the respective Chebyshev bounds 1/k².

18.2.8.

For the probability density of Exercise 18.2.6, find the average of X for the portion of the probability distribution that lies outside the range (µ − σ, µ + σ), i.e., for the x values more than one standard deviation from the mean. Hint. Don't forget to allow for the fact that the total probability for X to be in only a part of its range is not unity.

Solution: From Exercise 18.2.7 we know that µ = 2 and σ = √2. Therefore the average required here is
$$\frac{1}{0.2625}\left[\int_0^{2-\sqrt{2}} x\,p(x)\,dx + \int_{2+\sqrt{2}}^{\infty} x\,p(x)\,dx\right] = \frac{0.0435 + 0.6741}{0.2625} = 2.73.$$
The number 0.2625 is the probability that |x − µ| > σ; it was calculated in Exercise 18.2.7. The integrals appearing in the present exercise can be evaluated either analytically (using $\int x^2 e^{-x}dx = -(x^2+2x+2)e^{-x}$) or numerically, using symbolic computing.

18.2.9.

The probability density for a pair of random variables X and Y has the following form:
$$p(x,y) = \begin{cases} 2e^{-(x+y)^2}, & x, y \ge 0,\\ 0, & \text{otherwise.}\end{cases}$$
(a) Verify that p(x, y) is normalized, i.e., that its integral over all x and y is unity. Hint. Use symbolic computing to evaluate the double integral. It is probably easiest to do so in two steps: integrate over x for an undefined value of y; then integrate the result over y.
(b) Compute the mean and variance of X and Y.
(c) Compute the covariance and correlation of X and Y.
(d) Comment on the qualitative significance of your result for part (c).

Solution:
(a) maple code for the integral $\int_0^\infty dx\int_0^\infty dy\,e^{-(x+y)^2}$ can be

> int(exp(-(x+y)^2), x = 0 .. infinity):
> int(%, y = 0 .. infinity);

mathematica code for this integral can be

Integrate[E^(-(x + y)^2), {x, 0, Infinity}];
Integrate[%, {y, 0, Infinity}]

Either of the above code sequences produces the result 1/2; the prefactor 2 in p then brings the integral to unity, consistent with the claim that p(x, y) is normalized.
(b) To get the mean and variance of X (the values for Y will be the same), we need (adding a factor x or x² in the above codes and including the prefactor 2)
$$\langle X\rangle = 2\int_0^\infty dx\int_0^\infty dy\; x\,e^{-(x+y)^2} = \frac{\sqrt{\pi}}{4},$$
$$\langle X^2\rangle = 2\int_0^\infty dx\int_0^\infty dy\; x^2 e^{-(x+y)^2} = \frac{1}{3}.$$
From the above, we find $\sigma_x^2 = \dfrac{1}{3} - \dfrac{\pi}{16} = 0.1370.$
(c) For this part we need the integral (which we can obtain by inserting xy as a factor in the codes of part (a)):
$$\langle XY\rangle = 2\int_0^\infty dx\int_0^\infty dy\; xy\,e^{-(x+y)^2} = \frac{1}{6}.$$
The covariance and correlation are
$$\mathrm{cov} = \langle XY\rangle - \langle X\rangle\langle Y\rangle = \frac{1}{6} - \frac{\pi}{16} = -0.0297,\qquad \mathrm{corr} = \frac{-0.0297}{0.1370} = -0.217.$$
(d) The probability cannot be factored into p(x)p(y); the cross term −2xy in the exponent penalizes configurations in which x and y are simultaneously large, so the variables are negatively correlated.
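Because the moments of this density are easy to get wrong by a constant factor, a numerical cross-check is worthwhile. The SciPy sketch below (truncating the infinite range at 10, where the integrand is negligible) yields ⟨X⟩ = √π/4 ≈ 0.443, ⟨X²⟩ = 1/3, ⟨XY⟩ = 1/6, and a negative correlation ≈ −0.217; any printed values differing from these should be rechecked.

```python
import numpy as np
from scipy.integrate import dblquad

def p(y, x):
    # dblquad expects f(y, x); density p(x, y) = 2*exp(-(x+y)^2), x, y >= 0.
    return 2.0 * np.exp(-((x + y) ** 2))

L = 10.0  # truncation of the infinite range; the integrand is ~e^-100 there

norm, _ = dblquad(p, 0, L, 0, L)
ex, _ = dblquad(lambda y, x: x * p(y, x), 0, L, 0, L)
ex2, _ = dblquad(lambda y, x: x * x * p(y, x), 0, L, 0, L)
exy, _ = dblquad(lambda y, x: x * y * p(y, x), 0, L, 0, L)

var_x = ex2 - ex**2
cov = exy - ex * ex              # <Y> = <X> by the x <-> y symmetry
corr = cov / var_x
print(norm)                      # ~1.0 (p is normalized)
print(ex, np.sqrt(np.pi) / 4)    # both ~0.4431
print(cov, corr)                 # ~-0.0297, ~-0.217
```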

18.3 BINOMIAL DISTRIBUTION

Exercises

18.3.1. Enter into a workspace the procedure BinomialB for your symbolic computing system and verify that it gives correct values for all s when n = 3 and p = 1/4.

Solution: Enter the symbolic code from the text. Check values for n = 3 and p = 1/4:
$$B(3,0,1/4) = \frac{27}{64},\qquad B(3,1,1/4) = \frac{27}{64},\qquad B(3,2,1/4) = \frac{9}{64},\qquad B(3,3,1/4) = \frac{1}{64}.$$

18.3.2.

Referring to the material in Appendix A on point plots and using the symbolic code for B(n, s, p), make plots of the following binomial distributions (giving the probability of each s for given values of n and p). For n = 20, take p = 0.1, 0.4, and 0.7. For n = 100 and for n = 1000, take p = 0.01, 0.1, and 0.4.

Solution: maple code for these plots:

> n:=100: p:=0.1:                          # set to desired values
> pointplot([seq([s, BinomialB(n,s,p)], s = 0 .. n)]);

mathematica code for these plots:

n=100; p=0.1;                              (set to desired values)
T = Table[{s, BinomialB[n, s, p]}, {s, 0, n}]          (make table of points)
ListPlot[T, PlotStyle->{PointSize[Medium]}, PlotRange->{0,0.2}]

It may be necessary to adjust PlotRange (the vertical range of the plot) to get the entire graph displayed.

18.3.3.

Write a formula for (x + y)ⁿ using the binomial theorem, and from those results show how it can be concluded that the binomial distribution is normalized, i.e., that
$$\sum_{s=0}^{n} B(n,s,p) = 1,$$
irrespective of the value of p.

Solution: The binomial formula for $[p + (1-p)]^n = 1$ is
$$\sum_{s=0}^{n} \binom{n}{s} p^s (1-p)^{n-s} = 1.$$
The left-hand side is precisely $\sum_{s=0}^{n} B(n,s,p)$, since $B(n,s,p) = \binom{n}{s}p^s(1-p)^{n-s}$.

18.3.4.

Form the difference B(n, s + 1, p) − B(n, s, p) and from it show that the s value of maximum probability (for given n and p) is within one unit of np.

Solution:
$$B(n,s+1,p) - B(n,s,p) = \binom{n}{s+1}p^{s+1}(1-p)^{n-s-1} - \binom{n}{s}p^{s}(1-p)^{n-s}$$
$$= \frac{n!\,p^s(1-p)^{n-s}}{s!\,(n-s)!}\left[\left(\frac{n-s}{s+1}\right)\left(\frac{p}{1-p}\right) - 1\right].$$
Combining the two terms within the brackets over a common denominator, we find
$$\left(\frac{n-s}{s+1}\right)\left(\frac{p}{1-p}\right) - 1 = \frac{np - s + p - 1}{(s+1)(1-p)}.$$
We see that B(n, s + 1, p) > B(n, s, p) if and only if np − s + p − 1 > 0. This condition is satisfied when s < np − (1 − p), an s value that is less than one unit smaller than np. We thus see that B(n, s, p) increases until s reaches some point between np − 1 and np and decreases for all larger s. This behavior causes the maximum in B(n, s, p) to be within one unit of np.

18.3.5.

Write symbolic code that returns, as a decimal, the probability of the s value of a binomial distribution that is closest to np. Remember that s must be an integer. Don't worry about the special case in which np is exactly halfway between two integers. Hint. The integer closest to an arbitrary number is obtained by calling round (maple) or Round (mathematica).

Solution: Assuming that BinomialB (a procedure in Section 18.3 of the text), n, and p (given as a decimal) have already been defined in the current computing session, in maple write

> s := round(n*p):
> BinomialB(n,s,p);

In mathematica,

s = Round[n*p];
BinomialB[n,s,p]

18.3.6.

Write symbolic code that computes the cumulative probability distribution for n trials, single-trial probability p, and numbers of successes within the integer range s1 ≤ s ≤ s2. Test your code by seeing if it gives correct results for various cases that you can compute easily by hand.

Solution: Evaluate by summing the probabilities for the included set of s values:
$$BB(n, s_1, s_2, p) = \sum_{s=s_1}^{s_2} B(n,s,p).$$
Here is code to do this, assuming that the procedure BinomialB from Section 18.3 and the values of n, s1, s2, and p have been loaded into your symbolic computing session. In maple,

> BinCum := proc(n,s1,s2,p)
>   local s;
>   add(BinomialB(n,s,p), s = s1 .. s2);
> end proc;

In mathematica,

BinCum[n_,s1_,s2_,p_] := Module[{s},
  Sum[BinomialB[n,s,p], {s, s1, s2}] ]

Some test values: BB(n, 0, n, p) = 1 for any n and p; BB(4, 1, 3, 0.5) = 0.875; BB(4, 0, 2, 0.5) = 0.6875; BB(5, 1, 3, 0.24) = 0.733.

18.3.7.

(a) For n = 20 and p = 0.4, compute the mean and standard deviation of the binomial distribution.
(b) Identify the range of successes s1 through s2 that fall within one standard deviation of the mean (they must be integers), and compute the fraction of the distribution that is within this range of s. The code needed for this calculation was the topic of Exercise 18.3.6.
(c) Repeat the analysis of part (b) for regions within 2 and 3 standard deviations about the mean. Verify that these results are consistent with the Chebyshev inequality.

Solution:
(a) Letting X be the random variable subject to a binomial distribution with n trials of individual probability p,
$$\mu = \langle X\rangle = np,\qquad \sigma^2 = np(1-p).$$
Substituting n = 20, p = 0.4, we get µ = 8 and σ² = 4.8.
(b) The standard deviation is √4.8 = 2.19, so the range of successes s consists of the integers between µ − 2.19 and µ + 2.19: in the present case, 6 through 10. Using the code of Exercise 18.3.6, we execute one of

> BinCum(20,6,10,0.4);

BinCum[20,6,10,0.4]

From either of these, we get 0.74688.
(c) For 2 standard deviations about the mean, the range of s consists of the integers between 8 − 2(2.19) and 8 + 2(2.19), i.e., 4 to 12. For 3 standard deviations, the range is the integers within 8 ± 3(2.19), i.e., 2 to 14. Inserting these arguments into BinCum, we find
BB(20, 4, 12, 0.4) = 0.96301,  BB(20, 2, 14, 0.4) = 0.997864.

The respective probabilities outside these ranges are about 0.04 and 0.002, both smaller than the Chebyshev limits of 1/2² = 0.25 and 1/3² ≈ 0.11.

18.3.8.
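As a cross-check of parts (a)–(c), the whole calculation fits in a few lines of Python (a sketch; bin_cum mirrors the BinCum procedure above):

```python
from math import comb, sqrt, ceil, floor

def bin_cum(n, s1, s2, p):
    # cumulative binomial probability over s1 <= s <= s2
    return sum(comb(n, s) * p**s * (1 - p)**(n - s) for s in range(s1, s2 + 1))

n, p = 20, 0.4
mu = n * p                      # mean, np = 8
sigma = sqrt(n * p * (1 - p))   # standard deviation, sqrt(4.8)

def within_k_sigma(k):
    # probability of an integer success count within k standard deviations of the mean
    return bin_cum(n, max(0, ceil(mu - k * sigma)), min(n, floor(mu + k * sigma)), p)
```

The probabilities outside the 2σ and 3σ ranges then come out below the Chebyshev bounds 1/k², as the solution asserts.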

An advanced digital circuit board contains 100 individual microprocessors. The manufacturing process has been pushed toward its technical limits, with the result that each microprocessor has a 1% chance of being defective. (a) What fraction of the circuit boards can be, on average, expected to be entirely defectfree? (b) How low would the individual-microprocessor defect rate need to be to cause the fraction of defect-free circuit boards to reach 50%?

Solution: (a) The 1% defect rate corresponds to p = 0.99. With this value of p and n = 100, we require B(100, 100, 0.99), which is 0.366, or 36.6%. (b) An easy way to answer this question is to compute B(100, 100, p) for various values of p (between 0.99 and 1) to find a value of p for which B(100, 100, p) just exceeds 0.5. Using the symbolic code of Exercise 18.3.6, we find B(100, 100, 0.9931) = 0.500 ··· . This corresponds to an individual-processor defect rate of 1 − 0.9931 = 0.0069, which is 0.69%.
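Part (b) also has a closed-form answer, since the defect-free fraction is just p^100: setting p^100 = 0.5 gives p = 0.5^(1/100), in agreement with the trial-and-error value. A brief Python sketch:

```python
# fraction of boards with all 100 microprocessors good, at a 1% defect rate
p_board_ok = 0.99 ** 100

# individual success probability needed for a 50% defect-free board rate
p_needed = 0.5 ** (1 / 100)
defect_rate = 1 - p_needed
```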

18.4

POISSON DISTRIBUTION

Exercises 18.4.1.

Establish the formula for Fn (t) of general n, Eq. (18.61), by mathematical induction (see Appendix F). Values for n = 0 and n = 1 are available, so it is suﬃcient to show that Fn can be obtained from Fn−1 for general integer n ≥ 2.

Solution: The quantities Fn (t) are the solutions to the set of simultaneous ODEs given in Eq. (18.59) with the initial conditions F0 (0) = 1, Fn (0) = 0 (n > 0). These conditions assert that no events have occurred when no time has elapsed. To use mathematical induction we ﬁrst need to show that, given the formula for Fn−1 (t), we can verify the formula for Fn (t).


Taking the asserted form for Fn(t), we differentiate it, obtaining

dFn/dt = d/dt [ (νt)ⁿ e^(−νt) / n! ] = ν (νt)^(n−1) e^(−νt) / (n−1)! − ν (νt)ⁿ e^(−νt) / n! = ν [ Fn−1(t) − Fn(t) ] .

This analysis shows that the asserted Fn is a particular solution of Eq. (18.59) with an inhomogeneous term that is the given value of Fn−1(t). We note that this Fn is the solution we want because it satisfies the required initial condition. To complete the establishment of the Fn, we now note that we already have a valid formula for F0, so our above analysis confirms a value for F1, therefrom a value for F2, and we may continue this process, obtaining values for all Fn of larger integer n.

18.4.2.
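The induction step can also be spot-checked numerically; the sketch below compares a finite-difference derivative of F_n(t) = (νt)ⁿ e^(−νt)/n! with ν[F_{n−1}(t) − F_n(t)] (function names are illustrative):

```python
from math import exp, factorial

def F(n, t, nu):
    # F_n(t) = (nu t)^n e^(-nu t) / n!
    return (nu * t) ** n * exp(-nu * t) / factorial(n)

def ode_residual(n, t, nu, h=1e-6):
    # |dF_n/dt - nu*(F_{n-1} - F_n)| using a central difference
    deriv = (F(n, t + h, nu) - F(n, t - h, nu)) / (2 * h)
    return abs(deriv - nu * (F(n - 1, t, nu) - F(n, t, nu)))
```

The residual is at the level of the finite-difference error for any n ≥ 1, and the initial conditions F0(0) = 1, Fn(0) = 0 hold by inspection.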

Enter into a workspace the procedure PoissonP for your symbolic computing system and verify that it gives correct results for several diﬀerent values of n and nu.

Solution: There are check values in the text. 18.4.3.

Referring to the material in Appendix A on point plots and using the symbolic code for P (n, ν), make plots of the following Poisson distributions (showing the probability of each n from zero to a value such that P (n, ν) is negligible). Take ν = 3, 10, 100, and 1000.

Solution: Load the procedure PoissonP into your symbolic computing session. You may then use the following code for the plots. In maple:

> nmax := 20:  nu := 10:       # set to desired values
> pointplot([seq([n, PoissonP(n,nu)], n = 0 .. nmax)]);

In mathematica:

nmax = 20;  nu = 10;           (* set to desired values *)
T = Table[{n, PoissonP[n,nu]}, {n, 0, nmax}]      (* make table of points *)
ListPlot[T, PlotStyle -> {PointSize[Medium]}, PlotRange -> {0, 0.2}]

It may be necessary to adjust PlotRange (the vertical range of the plot) to get the entire graph displayed.

18.4.4.

If you receive spam e-mail at an average rate of ﬁve messages per day (an unrealistically small number), how many days per 30-day month would you expect to receive (a) exactly ﬁve spam messages; (b) more than ﬁve; (c) more than 10; (d) exactly one; (e) none at all?

Solution: The probability of receiving n messages in any day is P(n, 5), where P(n, ν) is given by Eq. (18.62). You may evaluate P(n, ν) in your symbolic computing system by calling PoissonP(n,nu) or PoissonP[n,nu].

(a) The average number of days per month with exactly five spam messages is 30P(5, 5) = 5.26.

(b) The average number of days per month with more than five spam messages is most easily calculated as 30 − (days with 5 or fewer):

30 [ 1 − Σ_{n=0}^{5} P(n, 5) ] = 11.52 .

(c) The average number of days per month with more than ten spam messages is most easily calculated as 30 − (days with 10 or fewer):

30 [ 1 − Σ_{n=0}^{10} P(n, 5) ] = 0.41 .

(d) The average number of days per month with exactly one message is given by 30P(1, 5) = 1.01.

(e) The average number of days per month with no messages is given by 30P(0, 5) = 0.20.

18.4.5.

Calculate the probability that exactly two of a group of 500 randomly chosen people have January 1 as their birthday, assuming a 365-day year and excluding February 29, in the following two ways: 1. Using the binomial distribution (which is exact) and 2. Using the Poisson distribution (which assumes that persons whose birthdays have already been identiﬁed are not removed from subsequent probability calculations).

Solution: 1. The probability that a random person has a January 1 birthday is 1/365. The probability of two January 1 birthdays in 500 random trials (people) is B(500, 2, 1/365) = 0.2388. 2. The mean number of people in a random group of 500 with January 1 birthdays is 500/365. The probability that two people in the group will have that birthday (in the approximation of the Poisson distribution) is P (2, 500/365) = 0.2385.
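Both Poisson exercises above can be cross-checked with a short Python sketch (poisson_p is an illustrative stand-in for the text's PoissonP):

```python
from math import exp, factorial, comb

def poisson_p(n, nu):
    # Poisson probability of exactly n events when the mean count is nu
    return nu ** n * exp(-nu) / factorial(n)

# Exercise 18.4.4: expected days per 30-day month at 5 messages/day
days_exactly_five = 30 * poisson_p(5, 5)
days_more_than_five = 30 * (1 - sum(poisson_p(n, 5) for n in range(6)))
days_more_than_ten = 30 * (1 - sum(poisson_p(n, 5) for n in range(11)))

# Exercise 18.4.5: exact binomial versus the Poisson approximation
birthday_binomial = comb(500, 2) * (1 / 365) ** 2 * (364 / 365) ** 498
birthday_poisson = poisson_p(2, 500 / 365)
```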

18.5

GAUSS NORMAL DISTRIBUTION

Exercises 18.5.1.

Verify as follows that the Gauss normal probability distribution given in Eq. (18.67) has variance σ²:

(a) Write integrals for ⟨X⟩ and ⟨X²⟩.

(b) Rewrite the integrals of part (a) in terms of t = (x − µ)/(σ√2) and use symmetry to simplify the resulting expressions.

(c) Evaluate any integrals that remain after you have completed part (b). Integration by parts may be helpful here.

Solution: Assuming that we know that g(x) is normalized, write

⟨X⟩ = ∫_{−∞}^{∞} x g(x) dx ,    ⟨X²⟩ = ∫_{−∞}^{∞} x² g(x) dx .


Insert the formula for g(x) and make the substitution x = √2 σt + µ, with dx = √2 σ dt. The above integrals become (dropping terms of odd symmetry from the integrands because they vanish upon integration)

⟨X⟩ = [1/(σ√(2π))] ∫_{−∞}^{∞} (√2 σt + µ) e^(−t²) (√2 σ) dt = (µ/√π) ∫_{−∞}^{∞} e^(−t²) dt ,

⟨X²⟩ = [1/(σ√(2π))] ∫_{−∞}^{∞} (√2 σt + µ)² e^(−t²) (√2 σ) dt = (1/√π) ∫_{−∞}^{∞} (2σ²t² + µ²) e^(−t²) dt .

The above expressions include the known integrals

∫_{−∞}^{∞} e^(−t²) dt = √π   and   ∫_{−∞}^{∞} t² e^(−t²) dt = (1/2) ∫_{−∞}^{∞} t (2t e^(−t²)) dt = (1/2) ∫_{−∞}^{∞} e^(−t²) dt = √π / 2 .

The third member of this equation was obtained by carrying out an integration by parts in which we differentiated t and integrated 2t e^(−t²). Using the values of these integrals, we reach

⟨X⟩ = µ ,    ⟨X²⟩ = σ² + µ² .

We see that the parameter µ is appropriately named, as it is in fact the mean of the distribution. The variance is ⟨X²⟩ − ⟨X⟩² = σ², confirming that σ² also has its expected meaning.

18.5.2.
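The moment values just derived can be confirmed by direct numerical quadrature, as in this Python sketch (a simple midpoint rule over µ ± 12σ; all names are illustrative):

```python
from math import exp, pi, sqrt

def g(mu, sigma, x):
    # Gauss normal density, Eq. (18.67)
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def moment(k, mu, sigma, width=12.0, steps=100000):
    # midpoint-rule estimate of the integral of x^k g(x) over mu +/- width*sigma
    a = mu - width * sigma
    h = 2 * width * sigma / steps
    return h * sum((a + (i + 0.5) * h) ** k * g(mu, sigma, a + (i + 0.5) * h)
                   for i in range(steps))
```

For any µ and σ the zeroth, first, and second moments come out 1, µ, and σ² + µ², as the derivation requires.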

Enter into a workspace the procedure GaussG for your symbolic computing system and verify its correctness by comparing its output with a hand computation.

Solution: The code and several check values are in the text. Additional check values, if desired, can be obtained by evaluating Eq. (18.67). 18.5.3.

Using the procedure GaussG, make plots of the following Gauss normal distributions (showing a range of width at least 3σ on either side of the maximum at x = µ):

g(1, 0.707, x) ,    g(−1, 0.5, x) ,    g(0, 0.2, x) ,    g(4, 1, x) ,

where

g(µ, σ, x) = [1/(σ√(2π))] exp[ −(x − µ)²/(2σ²) ] .

Solution: Assuming the procedure GaussG has been loaded into your current symbolic session, execute one of the following with appropriate input:

> plot(GaussG(µ, σ, x), x = x1 .. x2);
Plot[GaussG[µ, σ, x], {x, x1, x2}]

The plots to be produced are shown here. They can be individually identified by the locations in x of their maxima (their values of µ).


18.5.4.

Enter into a workspace the procedure GaussPhi for your symbolic computing system and verify that it gives results consistent with Eq. (18.72) for one of the cases given after that equation.

Solution: The coding can be entered by copying it from the text. To check GaussPhi we can take any convenient values of µ, σ, and k, and then make computations for the x values µ ± kσ. For example, taking µ = 1, σ = 1.5, and k = 2, we must use x1 = −2 and x2 = 4, and compute one of

> GaussPhi(1, 1.5, 4) - GaussPhi(1, 1.5, -2)
GaussPhi[1, 1.5, 4] - GaussPhi[1, 1.5, -2]

These results can be compared with erf(2/√2), which is given in the text as 0.954.

18.5.5.

Using the procedure GaussPhi, calculate the following Gauss normal cumulative probabilities:

(a) For µ = 2, σ = 2, probability that x is in the interval 1 ≤ x ≤ 4.

(b) For µ = 0, σ = 0.5, probability that x is in the interval 0.25 ≤ x ≤ 0.75.

(c) For µ = 1, σ = √2, probability that x is in the interval 0 ≤ x ≤ 2.

(d) Probability that x is in the interval µ ≤ x ≤ µ + σ.

Solution: The calculations can be made after GaussPhi has been loaded into the current symbolic computing session. (a) Execute one of > GaussPhi(2,2,4)-GaussPhi(2,2,1); GaussPhi[2,2,4]-GaussPhi[2,2,1] The cumulative probability is 0.533.


(b) Execute one of

> GaussPhi(0,0.5,0.75)-GaussPhi(0,0.5,0.25);
GaussPhi[0,0.5,0.75]-GaussPhi[0,0.5,0.25]

The cumulative probability is 0.242.

(c) Execute one of

> GaussPhi(1,sqrt(2.),2)-GaussPhi(1,sqrt(2.),0);
GaussPhi[1,Sqrt[2.],2]-GaussPhi[1,Sqrt[2.],0]

The cumulative probability is 0.520.

(d) The result does not depend upon the numerical values of µ and σ, so choose µ = 0, σ = 1, making the interval in x (0, 1). Thus, execute one of

> GaussPhi(0,1,1)-GaussPhi(0,1,0);
GaussPhi[0,1.,1]-GaussPhi[0,1.,0]

The cumulative probability is 0.341.

18.5.6.
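If GaussPhi is the usual cumulative normal distribution, Φ(µ, σ, x) = ½[1 + erf((x − µ)/(σ√2))] (an assumption consistent with the checks above), the four results can be reproduced in Python:

```python
from math import erf, sqrt

def gauss_phi(mu, sigma, x):
    # cumulative Gauss normal distribution (assumed form of the text's GaussPhi)
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

p_a = gauss_phi(2, 2, 4) - gauss_phi(2, 2, 1)
p_b = gauss_phi(0, 0.5, 0.75) - gauss_phi(0, 0.5, 0.25)
p_c = gauss_phi(1, sqrt(2), 2) - gauss_phi(1, sqrt(2), 0)
p_d = gauss_phi(0, 1, 1) - gauss_phi(0, 1, 0)
```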

Use your symbolic computing system to solve the transcendental equations for k and k′ in Example 18.5.2.

Hint. It is easier to substitute trial values of k into erf(k/√2) or to solve graphically than it is to seek a formal inversion of the equation.

Solution:

One way to proceed is to plot erf(k/√2) − 0.80 against k. Plots of this function and that with 0.90 in place of 0.80 are shown here. They show k = 1.282, k′ = 1.645.

18.5.7.

For a standard normal distribution (σ = 1, µ = 0), find the value of x0 (with x0 > µ) such that the probability that x is in the interval µ ≤ x ≤ µ + x0 is (a) 0.1, (b) 0.25, (c) 0.5.

Solution:

The probability for the range from 0 to x0 is (1/2) erf(x0/√2); it is half the probability of the range (−x0, x0). These values of x0 are most easily found graphically.

For part (a), use a plot to find the zero of (1/2) erf(x0/√2) − 0.1. This value of x0 is 0.253.

For part (b), find the zero of (1/2) erf(x0/√2) − 0.25. This value of x0 is 0.674.

For part (c), no computation is necessary; probability 1/2 is reached when x0 = ∞.
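Instead of reading the zeros off a plot, a short bisection in Python locates the same values of x0 (and, with targets 0.40 and 0.45, also reproduces k and k′ of Exercise 18.5.6); a sketch:

```python
from math import erf, sqrt

def x0_for_probability(target, lo=0.0, hi=10.0):
    # bisection for the zero of (1/2) erf(x/sqrt(2)) - target on [lo, hi]
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * erf(mid / sqrt(2)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```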

Here are plots showing the results for parts (a) and (b).

18.5.8.

(a) For n = 100 and p = 0.4 create a binomial distribution plot that is assigned to the symbol GraphB. If needed, consult Exercises 18.3.1 and 18.3.2 for tips on making this plot. (b) Determine the mean and variance of the binomial distribution that was plotted in part (a) and plot a Gauss normal distribution of that mean and variance, storing it by assignment to the symbol GraphG. If needed, consult Exercises 18.5.2 and 18.5.3 for tips on making this plot. (c) Referring to the material in Appendix A on combination plots, combine GraphB and GraphG into a single graph so that they can be compared. (d) If the combined plot of part (c) does not give a good picture of the difference between the two plots, plot their difference.

Solution: (a) The binomial distribution plot can be obtained and stored by executing one of the following:

> GraphB := plot(BinomialB(100,x,0.4), x = 0 .. 100);
GraphB = Plot[BinomialB[100,x,0.4], {x,0,100}]

(b) The binomial distribution of this problem has mean µ = 100(0.4) = 40 and variance σ² = 100(0.4)(1 − 0.4) = 24, or σ = √24. Using these data, the corresponding Gauss normal distribution can be plotted and stored by one of the following commands:

> GraphG := plot(GaussG(40,sqrt(24),x), x = 0 .. 100);
GraphG = Plot[GaussG[40,Sqrt[24],x], {x,0,100}]

(c) To combine these distributions on a single plot, execute one of

> with(plots):
> display(GraphB, GraphG);
Show[GraphB, GraphG]

The graphs are virtually indistinguishable. The combined plot is shown here.


(d) A difference plot can be obtained from one of

> plot(GaussG(40,sqrt(24),x)-BinomialB(100,x,0.4),
>      x = 0 .. 100);

Plot[GaussG[40,Sqrt[24],x]-BinomialB[100,x,0.4],
     {x, 0, 100}, PlotRange -> {-0.0008, 0.0008}]

The difference plot is shown here.
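The close agreement can also be quantified without a plot; the following Python sketch finds the largest pointwise difference between the binomial probabilities and the matching normal density (names are illustrative):

```python
from math import comb, exp, pi, sqrt

n, p = 100, 0.4
mu, var = n * p, n * p * (1 - p)   # 40 and 24

def binom_pmf(s):
    # exact binomial probability of s successes
    return comb(n, s) * p ** s * (1 - p) ** (n - s)

def gauss(x):
    # normal density with the same mean and variance
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

max_diff = max(abs(binom_pmf(s) - gauss(s)) for s in range(n + 1))
```

The largest difference is on the order of 10⁻³ or less, consistent with the vertical scale of the difference plot.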

18.5.9.

Proceed as in Exercise 18.5.8 to compare a Poisson distribution with ν = 80 and a corresponding Gauss normal distribution.

Solution: (a) The Poisson distribution plot can be obtained and stored by executing one of the following:

> GraphP := plot(PoissonP(n,80), n = 40 .. 120);
GraphP = Plot[PoissonP[n,80], {n, 40, 120}]

(b) The Poisson distribution has µ = σ² = 80. Using these data, the corresponding Gauss normal distribution can be plotted and stored by one of the following commands:

> GraphG := plot(GaussG(80,sqrt(80),x), x = 40 .. 120);
GraphG = Plot[GaussG[80,Sqrt[80],x], {x, 40, 120}]

(c) To combine these distributions on a single plot, execute one of

> display(GraphG, GraphP);
Show[GraphG, GraphP]

This is the result:

(d) It is just possible to see a diﬀerence when the two graphs are plotted together. A diﬀerence plot is not needed.

18.6

STATISTICS AND APPLICATIONS TO EXPERIMENT

Exercises 18.6.1.

For each of the following data sets containing values of a random variable X, ﬁnd an estimate of the mean value ⟨X⟩, the variance σ 2 (X) of the individual values of X relative to the mean of the parent distribution, and the variance of ⟨X⟩. (a) The ﬁve data points 5.9, 6.0, 6.1, 6.2, 6.5. (b) The data of part (a) plus additional data points 6.5, 6.2, 6.1, 6.0, 5.9. (c) The data of part (a) plus ﬁve additional data points, all 5.9.

Solution: If there are n measurements,

⟨X⟩ = (1/n) Σᵢ xᵢ ,    σ²(X) = [1/(n − 1)] Σᵢ (xᵢ − ⟨X⟩)² ,    σ²(⟨X⟩) = (1/n) σ²(X) .

(a) n = 5;    ⟨X⟩ = (5.9 + 6.0 + 6.1 + 6.2 + 6.5)/5 = 6.14 ,

σ²(X) = (1/4) [ (5.9 − 6.14)² + (6.0 − 6.14)² + (6.1 − 6.14)² + (6.2 − 6.14)² + (6.5 − 6.14)² ] = 0.053 ,

σ²(⟨X⟩) = (1/5)(0.053) = 0.0106 .

(b) n = 10;    ⟨X⟩ = 2(5.9 + 6.0 + 6.1 + 6.2 + 6.5)/10 = 6.14 ,

σ²(X) = (1/9) · 2 [ (5.9 − 6.14)² + (6.0 − 6.14)² + (6.1 − 6.14)² + (6.2 − 6.14)² + (6.5 − 6.14)² ] = 0.047 ,

σ²(⟨X⟩) = (1/10)(0.047) = 0.0047 .

(c) n = 15;    ⟨X⟩ = [ 2(5.9 + 6.0 + 6.1 + 6.2 + 6.5) + 5(5.9) ] / 15 = 6.06 ,

σ²(X) = (1/14) [ (5.9 − 6.06)² + (6.0 − 6.06)² + ··· ] = 0.044 ,

σ²(⟨X⟩) = (1/15)(0.044) = 0.0029 .
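These three estimates follow directly from the formulas above; here is a Python sketch (using the same n = 15 reading of part (c) as the computation just shown):

```python
def sample_stats(data):
    # sample mean, sample variance (n - 1 denominator), and variance of the mean
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return mean, var, var / n

set_a = [5.9, 6.0, 6.1, 6.2, 6.5]
set_b = set_a + [6.5, 6.2, 6.1, 6.0, 5.9]
set_c = set_b + [5.9] * 5
```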

18.6.2.

Prove Eq. (18.85). Remember that it is valid when X and Y are independent.

Solution: Square the expansion for f(x, y), Eq. (18.83). In anticipation of later taking the average value, we omit the explicit display of the terms that are linear in ∆x or ∆y, as well as the cross term in ∆x∆y (whose average vanishes because X and Y are independent). The relevant part of the expansion, through second order, is

[f(x, y)]² = [f(⟨X⟩, ⟨Y⟩)]² + (∂f/∂x)² (∆x)² + (∂f/∂y)² (∆y)² + f(⟨X⟩, ⟨Y⟩) [ (∂²f/∂x²)(∆x)² + (∂²f/∂y²)(∆y)² ] + ··· ,

and its average, with ⟨(∆x)²⟩ and ⟨(∆y)²⟩ written as σ²(X) and σ²(Y), is

⟨f²⟩ = [f(⟨X⟩, ⟨Y⟩)]² + (∂f/∂x)² σ²(X) + (∂f/∂y)² σ²(Y) + f(⟨X⟩, ⟨Y⟩) [ (∂²f/∂x²) σ²(X) + (∂²f/∂y²) σ²(Y) ] + ··· .

We also need ⟨f⟩². From Eq. (18.84), we get, through second order,

⟨f⟩² = [f(⟨X⟩, ⟨Y⟩)]² + f(⟨X⟩, ⟨Y⟩) [ (∂²f/∂x²) σ²(X) + (∂²f/∂y²) σ²(Y) ] + ··· .

Subtracting ⟨f⟩² from ⟨f²⟩, we get

σ²(f) = (∂f/∂x)² σ²(X) + (∂f/∂y)² σ²(Y) + ··· ,

in agreement with Eq. (18.85).

18.6.3.

If X and Y are independent random variables, and Z = XY, then ⟨Z⟩ = ⟨X⟩⟨Y⟩. Show that

σ²(Z)/⟨Z⟩² ≈ σ²(X)/⟨X⟩² + σ²(Y)/⟨Y⟩² .

Solution: Use Eq. (18.85) with f = XY. Then, remembering that the partial derivatives are to be evaluated at ⟨X⟩, ⟨Y⟩, we have

∂f/∂x = ⟨Y⟩ ,    ∂f/∂y = ⟨X⟩ .

Writing XY = Z and ⟨Z⟩ = ⟨X⟩⟨Y⟩, substitution into Eq. (18.85) yields the desired result.
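For independent X and Y the product variance is available exactly, Var(XY) = σ²(X)σ²(Y) + σ²(X)⟨Y⟩² + σ²(Y)⟨X⟩², so the quality of the approximation can be seen directly; a small numerical sketch with illustrative values:

```python
# illustrative means and variances for independent X and Y (not from the text)
mu_x, var_x = 10.0, 0.04
mu_y, var_y = 5.0, 0.09

# exact relative variance of Z = XY versus the approximate addition rule
exact_rel = (var_x * var_y + var_x * mu_y ** 2 + var_y * mu_x ** 2) / (mu_x * mu_y) ** 2
approx_rel = var_x / mu_x ** 2 + var_y / mu_y ** 2

# the two differ only by the small cross term var_x*var_y/(mu_x*mu_y)**2
```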

18.6.4.

Prove that the result of Exercise 18.6.3 also applies if Z = X/Y.

Solution: Proceed as in Exercise 18.6.3, but now with f = X/Y, so

∂f/∂x = 1/⟨Y⟩ ,    ∂f/∂y = −⟨X⟩/⟨Y⟩² .

Writing X/Y = Z and ⟨Z⟩ ≈ ⟨X⟩/⟨Y⟩ (acceptable when σ²(Y) is not too large), Eq. (18.85) leads to the desired result.

18.6.5.

The following are values of f (x) for integer values of x from 1 through 10: 3.00, 3.50, 4.33, 4.75, 5.80, 6.17, 6.14, 6.63, 6.78, 7.10 (a) Find the least-squares straight-line ﬁt to these data. (b) Find the quadratic function that is a least-squares ﬁt to these data. (c) Plot the original data and both ﬁts on the same plot.

Solution: (a) This curve-ﬁtting is best done using symbolic computing. Input the data, using one of > data := [[1,3.00],[2,3.50],[3,4.33],[4,4.75],[5,5.80], >

[6,6.17],[7,6.14],[8,6.63],[9,6.78],[10,7.10]];

data = {{1,3.00},{2,3.50},{3,4.33},{4,4.75},{5,5.80},
        {6,6.17},{7,6.14},{8,6.63},{9,6.78},{10,7.10}};

If using maple, load its curve-fitting package by executing

> with(CurveFitting):

Then obtain a straight-line fit by invoking one of

> SLfit := LeastSquares(data, x);
SLfit = Fit[data, {1, x}, x]

Either of these commands produces the fit SLfit = 2.89 + 0.46x.

(b) A quadratic fit to the same data is obtained from one of

> Quadfit := LeastSquares(data, x, curve=a*x^2+b*x+c);
Quadfit = Fit[data, {1, x, x^2}, x]

The result is equivalent to Quadfit = 2.0275 + 0.89125x − 0.0392045x².

(c) To plot the points using maple,

> with(plots);
> G0 := pointplot(data);

The presence of the assignment to G0 causes the display of the point plot to be suppressed. In mathematica, the points are plotted with

g0 = ListPlot[data]


This command produces a plot (which we do not show). The fits to the points are obtained using the ordinary plotting command, i.e., one of the sets

> G1 := plot(SLfit, x = 0 .. 10);
> G2 := plot(Quadfit, x = 0 .. 10);

g1 = Plot[SLfit, {x, 0, 10}]
g2 = Plot[Quadfit, {x, 0, 10}]

Then all three graphs can be displayed together using one of

> display([G0, G1, G2]);
Show[g0, g1, g2]

These commands produce plots similar to that shown here:
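The straight-line coefficients can be verified from the normal equations alone; a Python sketch:

```python
xs = list(range(1, 11))
ys = [3.00, 3.50, 4.33, 4.75, 5.80, 6.17, 6.14, 6.63, 6.78, 7.10]

# least-squares straight line y = intercept + slope*x from the normal equations
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
```

With these data the slope and intercept come out exactly 0.46 and 2.89.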

18.6.6.

Repeat Exercise 18.6.5 for the following data set: f (1) = 1, f (2) = 3, f (3) = 5, f (4) = 3, f (5) = 1.

Solution: Proceeding as in the referenced Exercise (but without comments here), in maple we have

> data := [[1,1],[2,3],[3,5],[4,3],[5,1]]:
> SLfit := LeastSquares(data, x);
> Quadfit := LeastSquares(data, x, curve=a*x^2+b*x+c);
> G0 := pointplot(data);
> G1 := plot(SLfit, x = 1 .. 5);
> G2 := plot(Quadfit, x = 1 .. 5);
> display([G0, G1, G2]);

In mathematica,

data = {{1,1},{2,3},{3,5},{4,3},{5,1}}
SLfit = Fit[data, {1, x}, x]
Quadfit = Fit[data, {1, x, x^2}, x]
g0 = ListPlot[data]
g1 = Plot[SLfit, {x, 1, 5}]
g2 = Plot[Quadfit, {x, 1, 5}]
Show[g0, g1, g2]

Here are the graphs for this exercise:

APPENDICES A

METHODS FOR MAKING PLOTS

B

PRINTING TABLES OF FUNCTION VALUES

Exercises B.1.

Modify the procedure MakeTable so that both the values of x and the function values are given as ordinary decimals, with the function values given to eight decimal places and in a ﬁeld of width suﬃcient to accommodate all numbers with |x| < 1012 .

Solution: In maple, the only change needed is to replace the printf line by > printf(" %8.3f %22.8f\n",x,func(x)); In mathematica, the While statement that contains the print command (in the text, three lines ending in a semicolon) must be replaced by the following: While[(x (If[ # < 12, Null, #] &)]]; B.2.

Modify the procedure MakeTable so that it can accept functions of two variables (x and y) with each variable running independently on a ﬁnite equally-spaced grid. It is not required to expend eﬀort on the graph title or the column labeling, but if you do so it may be easier to use the code after you have forgotten how you constructed it.

Solution: Here is the code in maple. Much of the complexity arises in dealing with the formatting of a variable number of data columns.

MakeTable2 := proc(func,row1,nrow,deltarow,col1,ncol,deltacol)
  # Maple procedure. Makes table of func(x,y) for
  #   nrow rows x, starting at x=row1 in steps of deltarow
  #   ncol columns y, starting at y=col1 in steps of deltacol
  # copyright 2013 by Frank E. Harris
  local fstr,x1str,xnstr,dxstr,y1str,ynstr,dystr,x,y,yy,
    headform,bodyform,i,j;
  fstr := convert(func,string);
  x1str := convert(row1,string);
  xnstr := convert(row1+(nrow-1)*deltarow,string);
  dxstr := convert(deltarow,string);
  y1str := convert(col1,string);
  ynstr := convert(col1+(ncol-1)*deltacol,string);
  dystr := convert(deltacol,string);
  printf("Values of "||fstr||"(x,y) for x = "||x1str||"("
    ||dxstr||")"||xnstr||" and y = "||y1str||"("||dystr
    ||")"||ynstr||"\n\n");
  yy := col1-deltacol;
  headform := " ";
  bodyform := " %8.3f";
  for i from 1 to ncol do
    yy := yy + deltacol;
    y[i] := yy;
    headform := cat(headform, " %13.3f");
    bodyform := cat(bodyform, " %13.6E");
  end do;
  headform := cat(headform,"\n\n");
  bodyform := cat(bodyform,"\n");
  printf(headform, seq(y[i], i=1 .. ncol));
  for j from 1 to nrow do
    x := row1 + (j-1)*deltarow;
    printf(bodyform, x, seq(func(x,y[i]), i=1 .. ncol));
  end do;
end proc;

Here is mathematica code. The command Row permits the buildup of the multi-item strings forming the print lines.

MakeTable2[func_,row1_,nrow_,deltarow_,col1_,ncol_,deltacol_] :=
  (* Mathematica procedure. Makes table of func(x,y) for
       nrow rows x, starting at x=row1 in steps of deltarow
       ncol columns y, starting at y=col1 in steps of deltacol
     copyright 2013 by Frank E. Harris *)
  Module[{fstr,x1str,xnstr,dxstr,y1str,ynstr,dystr,yy,x,y},
    fstr = ToString[func]; x1str = ToString[row1];
    xnstr = ToString[row1 + (nrow - 1)*deltarow];
    dxstr = ToString[deltarow]; y1str = ToString[col1];
    ynstr = ToString[col1 + (ncol - 1)*deltacol];
    dystr = ToString[deltacol];
    Print["Values of ", fstr, "[x,y] for x = ", x1str, "(", dxstr,
      ")", xnstr, " and y = ", y1str, "(", dystr, ")", ynstr];
    yy = " ";
    Do[ y[i] = col1 + (i-1)*deltacol;
        yy = Row[{yy, PaddedForm[y[i], {10, 3}], " "}], {i, 1, ncol}];
    Print[yy];
    Do[ x = row1 + (j-1)*deltarow;
        yy = PaddedForm[x, {6, 3}];
        Do[yy = Row[{yy, ScientificForm[N[func[x, y[i]], 12], {8, 6},
            NumberPadding -> {" ", "0"}]}], {i, 1, ncol}];
        Print[yy], {j, 1, nrow}]
  ]


C

DATA STRUCTURES FOR SYMBOLIC COMPUTING

D

SYMBOLIC COMPUTING OF RECURRENCE FORMULAS

E

PARTIAL FRACTIONS


Exercises E.1.

Find partial fraction decompositions of the following fractions. Do so first by hand, and then verify your result using a symbolic computation system.

(a) 1/[ (x + 1)(x + 2)(x + 3) ] .

(b) 1/(x² − a²) , assuming x to be the variable.

(c) Repeat part (b), with a the variable.

(d) x²/(x + a)³ (the variable is x).

Solution: Check these partial-fraction decompositions with one of the following codes:

(a) > convert(1/(x+1)/(x+2)/(x+3), parfrac);
    Apart[1/(x+1)/(x+2)/(x+3)]

The result is equivalent to

1/[2(x + 1)] − 1/(x + 2) + 1/[2(x + 3)] .

(b) > convert(1/(x^2-a^2), parfrac, x);
    Apart[1/(x^2-a^2), x]

1/[2a(x − a)] − 1/[2a(x + a)] .

(c) > convert(1/(x^2-a^2), parfrac, a);
    Apart[1/(x^2-a^2), a]

1/[2x(x + a)] + 1/[2x(x − a)] .

(d) > convert(x^2/(x+a)^3, parfrac, x);
    Apart[x^2/(x+a)^3, x]

1/(x + a) − 2a/(x + a)² + a²/(x + a)³ .

E.2.

Prove the partial fraction expansion (with p a positive integer)

1/[ n(n + 1) ··· (n + p) ] = (1/p!) [ C(p,0)/n − C(p,1)/(n + 1) + C(p,2)/(n + 2) − ··· + (−1)ᵖ C(p,p)/(n + p) ] ,

where C(p, j) denotes the binomial coefficient.

Hint: Use mathematical induction. Two binomial coefficient formulas of use here are

C(p + 1, j) = [ (p + 1)/(p + 1 − j) ] C(p, j) ,    Σ_{j=1}^{p+1} (−1)^(j−1) C(p + 1, j) = 1 .

Solution: Assuming the formula to be valid for some integer p (with C(p, j) denoting the binomial coefficient), write

1/[ n(n + 1) ··· (n + p + 1) ] = (1/p!) Σ_{j=0}^{p} (−1)ʲ C(p, j) · 1/[ (n + j)(n + p + 1) ]

= (1/p!) Σ_{j=0}^{p} (−1)ʲ C(p, j) [ 1/(p − j + 1) ] [ 1/(n + j) − 1/(n + p + 1) ] .

The term of the sum containing 1/(n + j) is (in a more expanded form)

(1/p!) (−1)ʲ [ p!/(j!(p − j)!) ] [ 1/(p − j + 1) ] [ 1/(n + j) ] = [ 1/(p + 1)! ] (−1)ʲ [ (p + 1)!/(j!(p − j + 1)!) ] [ 1/(n + j) ] = [ 1/(p + 1)! ] (−1)ʲ C(p + 1, j) [ 1/(n + j) ] ,

corresponding to the first binomial identity in the Exercise formulation. The terms containing 1/(n + p + 1) combine in a similar way to yield

[ 1/(n + p + 1) ] [ 1/(p + 1)! ] Σ_{j=0}^{p} (−1)^(j+1) C(p + 1, j) .

If we make the substitution k = p + 1 − j in this sum, the summation will run from k = 1 to k = p + 1; we can replace j by k in the binomial coefficient (which is symmetric); and the factor (−1)^(j+1) can be written (−1)^(p+1) (−1)^(k−1). We can then use the second binomial identity in the Exercise to reduce these 1/(n + p + 1) terms to the simple form

(−1)^(p+1) [ 1/(p + 1)! ] C(p + 1, p + 1) [ 1/(n + p + 1) ] ,

and all the terms of the decomposition correspond to the formula to be proved. To complete the proof by mathematical induction, we must establish the formula for some starting value of p; a good choice is p = 0, for which we have the obvious result

1/n = (1/0!) [ C(0, 0)/n ] .
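The expansion is easy to confirm exactly in rational arithmetic, for instance with this Python sketch (function names are illustrative):

```python
from fractions import Fraction
from math import comb, factorial

def product_side(n, p):
    # 1 / (n (n+1) ... (n+p)) as an exact rational
    value = Fraction(1)
    for k in range(p + 1):
        value /= (n + k)
    return value

def sum_side(n, p):
    # (1/p!) * sum over j of (-1)^j C(p, j) / (n + j)
    return sum(Fraction((-1) ** j * comb(p, j), n + j)
               for j in range(p + 1)) / factorial(p)
```

The two sides agree exactly for every tested n and p.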

F

MATHEMATICAL INDUCTION

Exercises F.1.

Show that

Σ_{j=1}^{n} j⁴ = (n/30)(2n + 1)(n + 1)(3n² + 3n − 1) .

Solution: Give this formula for n the name S(n). Then, assuming S(n) to be a valid formula, the sum through n + 1 must be given by

S(n) + (n + 1)⁴ = (n/30)(2n + 1)(n + 1)(3n² + 3n − 1) + (n + 1)⁴ .

Write this expression in the more expanded form

S(n) + (n + 1)⁴ = [(n + 1)/30] [ (2n + 1)n(3n² + 3n − 1) + 30(n + 1)³ ] = [(n + 1)/30] [ 6n⁴ + 39n³ + 91n² + 89n + 30 ] .

Compare with

S(n + 1) = [(n + 1)/30] (2n + 3)(n + 2)(3[n + 1]² + 3[n + 1] − 1) = [(n + 1)/30] [ 6n⁴ + 39n³ + 91n² + 89n + 30 ] .

To complete the proof we must check the formula for the starting value n = 1. We find

S(1) = (1/30)(3)(2)(5) = 1 ,

which is the correct value for 1⁴.

F.2.
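Beyond the induction proof, the closed form can be checked exactly for many n, for instance with this Python sketch:

```python
def sum_fourth_powers(n):
    # closed form for 1^4 + 2^4 + ... + n^4 (exact in integer arithmetic)
    return n * (2 * n + 1) * (n + 1) * (3 * n * n + 3 * n - 1) // 30
```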

Prove the Leibniz formula for the repeated differentiation of a product (C(n, j) denoting the binomial coefficient):

(d/dx)ⁿ [ f(x) g(x) ] = Σ_{j=0}^{n} C(n, j) [ (d/dx)ʲ f(x) ] [ (d/dx)^(n−j) g(x) ] .

Solution: Differentiate once the Leibniz formula for n-fold differentiation; before manipulation the result is

D^(n+1)(fg) = S₁ + S₂ = Σ_{j=0}^{n} C(n, j) [D^(j+1) f][D^(n−j) g] + Σ_{j=0}^{n} C(n, j) [Dʲ f][D^(n+1−j) g] .

Rewrite the second of the above summations with the index j replaced by k + 1, with k running from zero to n − 1. Since that causes the omission of the term with j = 0, this second summation is equivalent to

S₂ = f [D^(n+1) g] + Σ_{k=0}^{n−1} C(n, k + 1) [D^(k+1) f][D^(n−k) g] .

We can now combine S₁ and S₂, obtaining

S₁ + S₂ = f [D^(n+1) g] + Σ_{j=0}^{n−1} [ C(n, j) + C(n, j + 1) ] [D^(j+1) f][D^(n−j) g] + [D^(n+1) f] g .

We now observe that

C(n, j) + C(n, j + 1) = n!/[j!(n − j)!] + n!/[(j + 1)!(n − j − 1)!] = [ n!/(j!(n − j)!) ] [ 1 + (n − j)/(j + 1) ] = [ n!/(j!(n − j)!) ] [ (n + 1)/(j + 1) ] = (n + 1)!/[(n − j)!(j + 1)!] = C(n + 1, j + 1) .

We therefore have

S₁ + S₂ = f [D^(n+1) g] + Σ_{j=0}^{n−1} C(n + 1, j + 1) [D^(j+1) f][D^((n+1)−(j+1)) g] + [D^(n+1) f] g = Σ_{m=0}^{n+1} C(n + 1, m) [Dᵐ f][D^(n+1−m) g] .

This is the Leibniz formula for (n + 1)-fold differentiation of fg. To complete the derivation, we verify that the Leibniz formula is obviously valid for n = 0.

G

CONSTRAINED EXTREMA

Exercises G.1.

Find the radius and height that will minimize the surface area of a right circular cylinder of unit volume.

Solution: Let r and h be the radius and height of the cylinder. Then its surface area is A = 2πr² + 2πrh and its volume is V = πr²h. Treating the problem as a constrained minimization, consider the unconstrained minimum of

F = A − λV = 2πr² + 2πrh − λπr²h ,

with λ chosen later to yield a solution of unit volume. Setting to zero the partial derivatives of F and canceling common factors, we get

∂F/∂r = 0 −→ 2r + h − λrh = 0 ,
∂F/∂h = 0 −→ 2r − λr² = 0 .


From the second of these equations we get λ = 2/r; substituting this result into the ﬁrst equation we get 2r − h = 0, i.e. h = 2r. Inserting this value of h into the original expression for V and setting V = 1, we get 2πr3 = 1, i.e., r = 1/(2π)1/3 . Then h = 2/(2π)1/3 .
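A quick numerical confirmation that r = (2π)^(−1/3), h = 2r is indeed a minimum (a Python sketch):

```python
from math import pi

def area(r):
    # surface area of a unit-volume cylinder: h = 1/(pi r^2), so A = 2 pi r^2 + 2/r
    return 2 * pi * r * r + 2 / r

r_opt = (2 * pi) ** (-1 / 3)
h_opt = 2 * r_opt
```

The optimal radius satisfies the unit-volume constraint exactly, and nearby radii give a larger area.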

H

SYMBOLIC COMPUTING FOR VECTOR ANALYSIS

I

MAPLE TENSOR UTILITIES

Exercises I.1.

By explicit enumeration verify that the coding for Eps is correct.

Solution: A systematic way to check is to write a program that tests all combinations of the arguments 1, 2, and 3. In maple, after loading the code for Eps, > for i from 1 to 3 do for j from 1 to 3 do for k from 1 to 3 do > print(i,j,k, "Eps(i,j,k)= ", Eps(i, j, k)); > end do; end do; end do; There is no corresponding issue in mathematica, as Signature is part of its basic system. I.2.

Write symbolic code for Eps in dimension 4, i.e., Eps(i,j,m,n).

Solution: Give the three-dimensional Eps the new name Eps3. Then the following maple code answers this Exercise:

> Eps := proc(i,j,m,n);
>    if i=4 then RETURN(-Eps3(j,m,n)) end if;
>    if j=4 then RETURN(Eps3(i,m,n)) end if;
>    if m=4 then RETURN(Eps3(j,i,n)) end if;
>    if n=4 then Eps3(j,m,i) else 0 end if;
> end proc;

I.3.

Modify the code for KD to have it return zero if the two arguments are not integers 1, 2, or 3.

Solution: Here is maple code that meets the desired speciﬁcations: > KD := proc(i,j); > >

if (member(i,{1,2,3}) and member(j,{1,2,3}) and i=j) then 1 else 0 end if

> end proc;


J

WRONSKIANS IN ODE THEORY

K

MAPLE CODE FOR ASSOCIATED LEGENDRE FUNCTIONS AND SPHERICAL HARMONICS