
SPLASH 2011 – Superhuman Integration Techniques
Andre Kessler
November 19th, 2011

1 Series Expansions

Before we get to our integration techniques, we need to review the notion of a series expansion. The techniques and methods developed here are fundamental to the derivation of most of the results we will employ, so it is necessary to have a very solid foundation in this area. To begin, all of the ideas about series expansions start with the finite geometric series
$$1 + x + x^2 + \cdots + x^n = \sum_{k=0}^{n} x^k = \frac{1 - x^{n+1}}{1 - x}.$$

We can verify the closed form on the right-hand side of the above equation by multiplying each side by the quantity $1 - x$. This clearly telescopes the sum and gives us the desired result, as long as $x \neq 1$. In the case that $x = 1$, the sum is easily computable by just realizing it's now the sum of $n + 1$ ones, but it is important to note that our formula actually still holds – we just need to be more careful and take the following limit (using L'Hôpital's rule, since it has the form $0/0$):
$$\lim_{x \to 1} \frac{1 - x^{n+1}}{1 - x} = \lim_{x \to 1} \frac{-(n+1)x^n}{-1} = n + 1.$$
We can also consider infinite geometric series of the form $1 + x + x^2 + x^3 + \cdots$. This clearly converges as long as $|x| < 1$, because in those cases the term $x^{n+1}$ in our closed-form expression for the series goes to $0$ as $n$ goes to infinity, and the result we are left with is well-defined. Specifically, we have the result that
$$1 + x + x^2 + \cdots = \sum_{k=0}^{\infty} x^k = \frac{1}{1 - x}.$$
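The closed forms above are easy to sanity-check numerically. The short Python sketch below is an illustrative addition (not part of the original notes); it compares the partial sums against $(1 - x^{n+1})/(1-x)$ and, for $|x| < 1$, against $1/(1-x)$.

```python
# Sanity check of the geometric series formulas (illustrative sketch, not from
# the original notes): compare the partial sum 1 + x + ... + x^n with the closed
# form (1 - x^(n+1)) / (1 - x), and the infinite series with 1 / (1 - x).

def geometric_partial_sum(x: float, n: int) -> float:
    """Sum x^k for k = 0..n by direct addition."""
    return sum(x**k for k in range(n + 1))

x, n = 0.3, 10
direct = geometric_partial_sum(x, n)
closed = (1 - x**(n + 1)) / (1 - x)
print(direct, closed)                        # agree to floating-point precision

# For |x| < 1 the partial sums approach 1 / (1 - x) as n grows.
print(geometric_partial_sum(x, 200), 1 / (1 - x))
```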

Now, we can turn these ideas on their head and say instead that "$1 + x + x^2 + \cdots$ is the power series representation of the function $f(x) = 1/(1-x)$ for any $x$ with magnitude less than one." In other words, we are expanding a function which is not a polynomial as an infinite-degree polynomial. We can do this simply because the two sides are equal when the series converges. If the series diverges for a particular value of $x$, the two sides are no longer equal. However, the function $f(x)$ may still be well defined even when the power series does not converge. For example, consider $x = -1$ in the standard geometric series. In this case, we have the sequence $1 - 1 + 1 - 1 + - \cdots$. Does this converge? If we group the terms one way and add them, we get a sum of 0: $(1-1) + (1-1) + \cdots = 0$. On the other hand, we can group the terms differently and get a value of 1 as follows: $1 + (-1+1) + (-1+1) + \cdots = 1$. Clearly this is not well defined. Our function, on the other hand, is perfectly reasonable at $x = -1$: now $f(x) = 1/2$. This is a very important notion to remember when we define a function on some set of numbers using a power series. For example, later on we will define the zeta function by
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$$
The series on the right-hand side above converges for all $s$ with $\operatorname{Re}(s) > 1$. However, it turns out that we can consider $\zeta(s)$ as a function defined on all complex numbers $s \neq 1$. This is called analytic continuation, and while we will not discuss this in depth, it is important to realize that power series can be limited in their description of a function, and understanding where the series converges is necessary to determine where a power series expansion will be useful.

From our humble little expansion for $1/(1-x)$ we can pull an infinite variety of different expansions; some examples are shown below.
$$\frac{1}{1 - x^2} = 1 + x^2 + x^4 + x^6 + \cdots$$
$$\frac{1}{8 + x^3} = \frac{1}{8} \cdot \frac{1}{1 - \left(-\frac{x^3}{8}\right)} = \frac{1}{8}\left(1 - \frac{x^3}{8} + \frac{x^6}{64} - + \cdots\right)$$
$$\frac{e^x}{e^x - 1} = \frac{1}{1 - e^{-x}} = 1 + e^{-x} + e^{-2x} + e^{-3x} + \cdots$$
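These manipulations are also easy to verify with a computer algebra system. The following SymPy sketch is an addition for illustration (not from the original notes); it recomputes the first two expansions as Taylor series about $x = 0$ and spot-checks the third identity, which is a geometric series in $e^{-x}$ valid for $x > 0$.

```python
# Illustrative check (not part of the original notes) of the expansions above.
import sympy as sp

x = sp.symbols('x')

# Taylor series about x = 0 for the first two examples:
print(sp.series(1 / (1 - x**2), x, 0, 8))   # 1 + x**2 + x**4 + x**6 + O(x**8)
print(sp.series(1 / (8 + x**3), x, 0, 9))   # 1/8 - x**3/64 + x**6/512 + O(x**9)

# The third example is a geometric series in e^(-x), valid for x > 0:
x0 = 1.5
lhs = float(sp.exp(x0) / (sp.exp(x0) - 1))
rhs = sum(float(sp.exp(-k * x0)) for k in range(50))
print(lhs, rhs)                             # the two values agree closely
```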

Now, the question logically arises: how can we find these series expansions in general? Suppose we want to find a polynomial $T(x)$ that approximates a function $f(x)$ around $x = x_0$. We can start off by simply letting $T(x) = f(x_0)$. This is a good approximation directly at $x = x_0$, but it's probably not very good anywhere else. To make this a better approximation, we should make sure that the functions have the same slope at the point, which is equivalently the statement $T'(x_0) = f'(x_0)$. We can do this by saying
$$T'(x) = f'(x_0) \;\Rightarrow\; T(x) = f(x_0) + f'(x_0)(x - x_0).$$
Now let's make $T(x)$ an even better approximation by making the second derivatives agree:
$$T''(x) = f''(x_0) \;\Rightarrow\; T'(x) = f'(x_0) + f''(x_0)(x - x_0) \;\Rightarrow\; T(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2!} f''(x_0)(x - x_0)^2.$$
This process can clearly be continued indefinitely to obtain an arbitrarily accurate expansion, so if we continue it forever we can write out the Taylor expansion of $f(x)$. It is important to notice that this only works if $f(x)$ is infinitely differentiable at $x = x_0$.
$$f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k$$

Armed with the general Taylor expansion, we can arrive at expressions for some functions. When are these expressions valid?
$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$$
$$\sin x = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!}$$
$$\cos x = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k}}{(2k)!}$$
$$(1 + x)^n = \sum_{k=0}^{\infty} \binom{n}{k} x^k$$
$$\frac{1}{\sqrt{1 - 4x}} = \sum_{k=0}^{\infty} \binom{2k}{k} x^k$$
$$\arctan x = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{2k+1}$$
$$-\ln(1 - x) = \sum_{k=0}^{\infty} \frac{x^{k+1}}{k+1}$$

In particular, expanding $\frac{1}{1-x}$ gives us $1 + x + x^2 + \cdots$ as expected. Another slick method for finding the series for $\sin x$ and $\cos x$ involves separating the real and imaginary parts from the two sides of the expansion for the expression $e^{ix} = \cos x + i \sin x$. Importantly, we can differentiate series or integrate series along with their corresponding functions as much as we please; this is how we obtain the expansion for $\arctan x$ (integrating the expansion for $1/(1+x^2)$). As an example of this process, consider trying to find the expansion for $-\ln(1-x)$. We observe that this is an antiderivative of $\frac{1}{1-x}$, so
$$-\ln(1-x) = \int_0^x \frac{dt}{1-t} = \int_0^x \sum_{k=0}^{\infty} t^k \, dt = \sum_{k=0}^{\infty} \int_0^x t^k \, dt = \sum_{k=0}^{\infty} \frac{x^{k+1}}{k+1}.$$
In the second-to-last step, we interchanged the order of summation and integration. I will not spend a lot of time justifying this step, as it is rather tedious; suffice it to say that as long as the sequence or integral does not have some strange convergence issues, you are allowed to do this pretty much whenever the results make sense. Suppose we want to determine the first three terms in the series expansion for $\tan x$ about $x = 0$. The tangent function has derivatives of all orders at this point, so our series clearly exists. However, it would be quite annoying to compute term by term from the general Taylor series formula. Instead of doing so, we will divide the two series for sine and cosine as follows, and use a geometric series expansion.

$$\begin{aligned}
\tan x &= \frac{\sin x}{\cos x} = \frac{x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots}{1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots} \\
&= \left(x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\right) \cdot \frac{1}{1 - \left(\frac{x^2}{2} - \frac{x^4}{24} + \cdots\right)} \\
&= \left(x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\right)\left(1 + \left(\frac{x^2}{2} - \frac{x^4}{24}\right) + \left(\frac{x^2}{2} - \frac{x^4}{24}\right)^2 + \cdots\right) \\
&= \left(x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\right)\left(1 + \frac{x^2}{2} + \frac{5x^4}{24} + \cdots\right) \\
&= x + \frac{x^3}{3} + \frac{2x^5}{15} + \cdots
\end{aligned}$$

Notice several things about this computation. First, we only kept the terms that could contribute something to the first three terms – anything with a power of x larger than 5 can’t possibly contribute to the first three terms. Next, notice that we only have odd powers in our Taylor expansion; this is because tan x is an odd function. Finally, similar methods can be applied for the division of series in any number of cases like this one.
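For readers who want to double-check the long division, the SymPy sketch below is an illustrative addition to the notes; it reproduces the same three terms, both directly and by dividing the truncated sine and cosine series as in the computation above.

```python
# Illustrative check (not in the original notes): SymPy reproduces the first
# three nonzero terms of the tangent series obtained above by series division.
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.tan(x), x, 0, 7))        # x + x**3/3 + 2*x**5/15 + O(x**7)

# The same result the way the text does it: divide the truncated sine series by
# the truncated cosine series and re-expand, keeping terms through x**5.
sin_poly = sp.series(sp.sin(x), x, 0, 8).removeO()
cos_poly = sp.series(sp.cos(x), x, 0, 8).removeO()
print(sp.series(sin_poly / cos_poly, x, 0, 6))
```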

2 Generating Functions

Okay, now it's finally time to compute some integrals. Let's consider $\int_0^1 x^5 \ln^2 x \, dx$. We're going to use the fact that a power series expansion for a function about the same point is unique. Look closely at the following manipulation.
$$\int_0^1 x^{n+\varepsilon} \, dx = \int_0^1 x^n x^{\varepsilon} \, dx = \int_0^1 x^n e^{\varepsilon \ln x} \, dx = \int_0^1 x^n \sum_{k=0}^{\infty} \frac{\varepsilon^k}{k!} \ln^k x \, dx = \sum_{k=0}^{\infty} \frac{\varepsilon^k}{k!} \int_0^1 x^n \ln^k x \, dx$$

Thus, if we can find the terms of the power series expansion for $\int_0^1 x^{n+\varepsilon} \, dx$ in some other way, we can then compare coefficients to determine the value of any one of the integrals $\int_0^1 x^n \ln^k x \, dx$. But we already know how to evaluate the first integral and how to expand the resulting function, so we proceed to do so as below.
$$\int_0^1 x^{n+\varepsilon} \, dx = \left[\frac{x^{1+n+\varepsilon}}{1+n+\varepsilon}\right]_0^1 = \frac{1}{1+n+\varepsilon} = \frac{1}{1+n} \cdot \frac{1}{1 + \frac{\varepsilon}{n+1}} = \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+1)^{k+1}}\, \varepsilon^k$$

Comparing coefficients of $\varepsilon^k$, we immediately obtain that
$$\int_0^1 x^n \ln^k x \, dx = \frac{k!\,(-1)^k}{(n+1)^{k+1}}$$
and therefore the specific integral we wanted to compute at the beginning is simply
$$\int_0^1 x^5 \ln^2 x \, dx = \frac{2!\,(-1)^2}{6^3} = \frac{1}{108}.$$
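The boxed formula is easy to confirm with a computer algebra system; the following sketch is an addition (not part of the original notes) that checks the $n = 5$, $k = 2$ case symbolically.

```python
# Quick check (illustrative, not from the original notes) of
#   integral_0^1 x^n ln^k(x) dx = k! (-1)^k / (n+1)^(k+1)   for n = 5, k = 2.
import sympy as sp

x = sp.symbols('x', positive=True)
n, k = 5, 2
exact = sp.integrate(x**n * sp.log(x)**k, (x, 0, 1))
formula = sp.factorial(k) * (-1)**k / (n + 1)**(k + 1)
print(exact, formula)   # both print 1/108
```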

2.1 Exercises

1. Compute $\int_0^1 x \ln x \, dx$.

2. Determine the value of $\int_0^2 \ln^4 x \, dx$.

3. (HMMT 2011) Determine the value of $\int_1^\infty \left(\frac{\ln x}{x^4}\right)^{2011} dx$.

4. Make the substitution $x = -\ln u$ in the integral $\int_0^1 \ln^n u \, du$ to show that $\int_0^\infty x^n e^{-x} \, dx = n!$ and verify this result in some other way (for example, via integration by parts). This integral will be the basis of our attempt to extend the factorial function to non-integer arguments.

3 Product Expansions

Why did we even look at infinite sums in the first place? It was because we were interested in representing some function $f(x)$ which is not a polynomial as a sort of "infinite-degree polynomial." So far we have been looking at the coefficients of power series – in other words, the coefficients of these "infinite-degree polynomials." But coefficient form is not the only way to think of a polynomial – we can also consider its factored form. An ordinary polynomial has finitely many zeros. However, functions (for example, $\sin x$ and $\cos x$) can have infinitely many zeros. This begs the question as to whether it is possible to represent such functions in "factored form" – in other words, as a product of terms that describe the zeros of the function. Weierstrass's Factorization Theorem answers with a resounding YES for a particular class of functions in complex analysis known as entire functions¹, and we will shortly see how to work with these infinite product representations. First, however, we will discuss conditions for convergence of such infinite products.

3.1 Convergence of Infinite Products

By infinite product we mean something of the form
$$\prod_{n=1}^{\infty} u_n$$
where $u_n$ is some non-zero complex number. We will say that the above product converges if the sequence of partial products $u_1 u_2 \cdots u_n$ converges to a nonzero limit. Clearly, a necessary condition for an infinite product to converge is that $\lim_{n \to \infty} u_n = 1$ (why?). Thus, it makes sense to write $u_n = 1 + a_n$ for some sequence $a_n$ with $\lim_{n \to \infty} a_n = 0$ in our infinite product representations instead: $\prod u_n = \prod (1 + a_n)$. How can we now connect this to series? Consider writing a convergent infinite product as the exponential of a sum, which we are certainly allowed to do since we restricted convergence of infinite products to exclude zero.
$$\prod_{n=1}^{\infty} (1 + a_n) = \exp\left\{ \sum_{n=1}^{\infty} \ln(1 + a_n) \right\}$$
Thus, we can answer questions about the convergence of the infinite product by simply considering the convergence of $\sum \ln(1 + a_n)$. If $a_n$ consists entirely of positive terms, we can make an even stronger statement: if all of the terms $a_n$ have the same sign, then the convergence of the infinite product $\prod (1 + a_n)$ is implied by the convergence of the series $\sum a_n$ (to see this, simply consider expanding the logarithm in $\sum \ln(1 + a_n)$ using our techniques from before). For example, $\prod_{n=1}^{\infty} \left(1 + \frac{1}{n}\right)$ diverges to infinity, but $\prod_{n=1}^{\infty} \left(1 + \frac{1}{n^2}\right)$ converges to $\frac{e^{\pi} - e^{-\pi}}{2\pi} = \frac{\sinh \pi}{\pi}$.

¹ Basically, a function that is infinitely differentiable everywhere in the complex plane. Alternately, an entire function is a function whose power series expansion (say, about $z = 0$) converges everywhere in the complex plane.

3.2 Representation of Functions

Now, if we want to represent a polynomial $p(x)$ that has roots $r_1, r_2, \ldots, r_n$, then we can write it in factored form as
$$p(x) = \frac{p(0)}{(-1)^n r_1 r_2 \cdots r_n}\,(x - r_1)(x - r_2) \cdots (x - r_n) = p(0) \prod_{k=1}^{n} \left(1 - \frac{x}{r_k}\right).$$
If some entire function $f(x)$ has infinitely many roots, then by the Weierstrass Factorization Theorem, we can write it as an infinite product over its zeros. Take, for example, $f(x) = \frac{\sin x}{x}$, with $f(0) = 1$. Then we can write
$$\frac{\sin x}{x} = 1 \cdot \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right) \cdots = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{2^2\pi^2}\right)\left(1 - \frac{x^2}{3^2\pi^2}\right) \cdots = \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{n^2\pi^2}\right)$$
Notice that the order in which we multiplied out the terms was important. Multiplying only the terms for positive roots first would have resulted in a divergent infinite product, so we need to be careful while doing these manipulations that we do not separate our product in ways that are "not allowed." We are now going to be able to use our infinite product expansion to obtain a very interesting result. Consider "expanding" the infinite product by multiplying out the terms. This will be an infinite sum; in fact, it will be a power series for $f(x)$. Just from reading off the terms in the infinite sum, we can see that the expansion will be
$$\frac{\sin x}{x} = 1 - \left(\frac{1}{\pi^2} + \frac{1}{2^2\pi^2} + \frac{1}{3^2\pi^2} + \cdots\right) x^2 + O(x^4).$$
However, the power series expansion for any function $f(x)$ is unique, and we know the expansion for $\frac{\sin x}{x}$ is $1 - x^2/6 + O(x^4)$. The two coefficients must be the same, so we know that
$$\frac{1}{\pi^2} + \frac{1}{2^2\pi^2} + \frac{1}{3^2\pi^2} + \cdots = \frac{1}{6} \implies 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.$$

This important result was first discovered by Euler and brought him a great deal of fame.
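A quick numerical experiment makes both halves of this argument concrete; the sketch below is an illustrative addition (not from the original notes) showing that the partial sums of $\sum 1/n^2$ approach $\pi^2/6$ and that a truncated version of the infinite product really does reproduce $\sin x / x$.

```python
# Numerical sanity check of Euler's result (illustrative, not from the notes).
import math

partial = sum(1.0 / n**2 for n in range(1, 100001))
print(partial, math.pi**2 / 6)          # 1.64492... vs 1.64493...

x = 0.7
product = 1.0
for n in range(1, 10001):
    product *= 1.0 - x**2 / (n**2 * math.pi**2)
print(product, math.sin(x) / x)         # both approximately 0.92031
```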

4 The Gamma Function

Earlier, in one of the exercises, we derived that $\int_0^\infty x^n e^{-x} \, dx = n!$. This is all right and good for nonnegative integers $n$, but what if we want to know $\int_0^\infty \sqrt{x}\, e^{-x} \, dx$? Even if we assumed that our expression is true for all values of $n$ that cause the integral to converge, we still wouldn't know how to determine the value of $(1/2)!$. Thus, we'd like to determine an extension of the factorial function to complex arguments. Well, what are the defining properties of the factorial function in the first place, anyways? The most fundamental requirements are the recursion relation $n! = n \cdot (n-1)!$ and $0! = 1$. Thus we'll define the function $\Gamma(z)$ which satisfies the following relations: $\Gamma(z+1) = z\Gamma(z)$, $\Gamma(1) = 1$, and $\Gamma$ is logarithmically convex. It turns out that this function is unique; additionally, $\Gamma(z) = (z-1)!$. Now, we'll take $n$ to be a very large positive integer and write a limit definition of the gamma function:

$$\Gamma(z) = \frac{(n+z)!}{(n+z)(n+z-1)\cdots(z+1)z} = \frac{n!\,\overbrace{(n+1)(n+2)\cdots(n+z)}^{z\text{ terms}}}{(n+z)(n+z-1)\cdots(z+1)z} \approx \frac{n!\,n^z}{(n+z)(n+z-1)\cdots(z+1)z}$$
$$\Gamma(z) := \lim_{n \to \infty} \frac{n!\,n^z}{(n+z)(n+z-1)\cdots(z+1)z}$$

This will be our way of defining the Gamma function. It is easy to check that it satisfies all of our desired properties in this form; moreover, this form is valid for all complex $z$, excluding the non-positive integers (why?). Okay, hold on to your chairs, because we're about to do something pretty crazy.
$$\begin{aligned}
\frac{1}{\Gamma(z)} &= \lim_{n \to \infty} \frac{(n+z)(n+z-1)\cdots(z+1)z}{n!\,n^z} = z \lim_{n \to \infty} e^{-z \ln n} \cdot \frac{n+z}{n} \cdot \frac{n-1+z}{n-1} \cdots \frac{z+1}{1} \\
&= z \lim_{n \to \infty} e^{-z \ln n} \left(1 + \frac{z}{n}\right)\left(1 + \frac{z}{n-1}\right) \cdots (1 + z) = z \lim_{n \to \infty} e^{-z \ln n} \prod_{k=1}^{n} \left(1 + \frac{z}{k}\right) \\
&= z \lim_{n \to \infty} e^{-z \ln n + z \sum_{j=1}^{n} \frac{1}{j}} \prod_{k=1}^{n} e^{-z/k} \left(1 + \frac{z}{k}\right) = z\, e^{\gamma z} \prod_{k=1}^{\infty} e^{-z/k} \left(1 + \frac{z}{k}\right)
\end{aligned}$$

This infinite product reminds us in several ways of the infinite product for $\sin \pi z$; in fact, we can pull apart the expansion for that exact function to obtain the Gamma function reflection identity.
$$\sin \pi z = \pi z \prod_{k=1}^{\infty} \left(1 - \frac{z^2}{k^2}\right) = \pi z \prod_{k=1}^{\infty} e^{-z/k}\left(1 + \frac{z}{k}\right) \prod_{k=1}^{\infty} e^{z/k}\left(1 - \frac{z}{k}\right) = \pi z \cdot \frac{1}{z\Gamma(z)} \cdot \frac{1}{(-z)\Gamma(-z)} = \frac{\pi}{\Gamma(z)\Gamma(1-z)}$$
The above manipulations give us the famous result that
$$\Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z}.$$
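Both the limit definition and the reflection identity can be checked numerically against a library implementation of the Gamma function. The sketch below is an illustrative addition to the notes; it uses Python's `math.gamma` and `math.lgamma`, and evaluates the finite-$n$ product in logarithms to avoid overflow.

```python
# Illustrative check (not from the original notes) of the limit definition of
# Gamma and of the reflection identity, against math.gamma as the reference.
import math

def gamma_limit(z: float, n: int = 100000) -> float:
    """Approximate Gamma(z) by n! n^z / (z (z+1) ... (z+n)), computed in logs."""
    log_num = math.lgamma(n + 1) + z * math.log(n)
    log_den = sum(math.log(z + k) for k in range(0, n + 1))
    return math.exp(log_num - log_den)

z = 0.3
print(gamma_limit(z), math.gamma(z))    # close for large n
print(math.gamma(z) * math.gamma(1 - z), math.pi / math.sin(math.pi * z))
```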

4.1 Exercises



1. The integral

2

e −αx d x frequently appears in the study of probability distributions. Compute it.

0

2. Evaluate the following integrals in terms of the Gamma function, and simplify as much as possible. Z∞ Z∞ Z∞ 2p (c) x 3 e −x cos x d x (e) x2−x x d x (a) x 3 3−x d x 0 ∞

Z (b)

x 2 e −2

p 3

x

0 ∞

Z dx

(d) 0

0

3. Determine the value of f (ε) =

x3e

−x 3

Z



p

e −ε

0 ∞

Z dx

(f)

p x 2 x e −x sin x d x

0 x

cos

0

defined at ε = 0? What about the integral? 7

p

x d x for ε > 0. Is your simple expression f (ε)

4.2 Expansions of Γ(1 + ε)

So far we know how to integrate things such as $\int_0^\infty x^2 e^{-x} \, dx$, but we don't immediately have a way to determine $\int_0^\infty x^2 \ln x \, e^{-x} \, dx$. From the methods we used earlier, we know that a good way to proceed would be to consider the integral $\int_0^\infty x^{2+\varepsilon} e^{-x} \, dx = \Gamma(3 + \varepsilon) = (2 + \varepsilon)(1 + \varepsilon)\Gamma(1 + \varepsilon)$. It turns out that evaluating these integrals will come down to requiring an expansion for $\Gamma(1 + \varepsilon)$, so we will use the infinite product developed earlier to find one. We know that
$$\Gamma(1 + \varepsilon) = \varepsilon\,\Gamma(\varepsilon) = e^{-\gamma\varepsilon} \prod_{k=1}^{\infty} \left(1 + \frac{\varepsilon}{k}\right)^{-1} e^{\varepsilon/k}$$
so we can write an expansion for the natural logarithm of $\Gamma(1 + \varepsilon)$ as follows.
$$\begin{aligned}
\ln \Gamma(1 + \varepsilon) &= -\gamma\varepsilon + \sum_{k=1}^{\infty} \left[\frac{\varepsilon}{k} - \ln\left(1 + \frac{\varepsilon}{k}\right)\right] = -\gamma\varepsilon + \sum_{k=1}^{\infty} \left[\frac{\varepsilon}{k} - \sum_{j=1}^{\infty} \frac{(-1)^{j+1}\varepsilon^j}{j k^j}\right] \\
&= -\gamma\varepsilon + \sum_{k=1}^{\infty} \sum_{j=2}^{\infty} \frac{(-1)^j \varepsilon^j}{j k^j} = -\gamma\varepsilon + \sum_{j=2}^{\infty} \frac{(-1)^j \varepsilon^j}{j} \sum_{k=1}^{\infty} \frac{1}{k^j} \\
&= -\gamma\varepsilon + \sum_{n=2}^{\infty} \frac{(-1)^n \zeta(n)}{n}\, \varepsilon^n.
\end{aligned}$$
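This expansion is straightforward to test numerically. The mpmath sketch below is an illustrative addition (not part of the original notes); it sums the series at a sample value of ε and compares it with ln Γ(1 + ε).

```python
# Illustrative check (not from the notes) of
#   ln Gamma(1 + eps) = -gamma*eps + sum_{n>=2} (-1)^n zeta(n) eps^n / n
import mpmath as mp

eps = mp.mpf('0.2')
series = -mp.euler * eps + mp.nsum(
    lambda n: (-1)**int(n) * mp.zeta(n) * eps**n / n, [2, mp.inf])
print(series, mp.log(mp.gamma(1 + eps)))   # the two agree to high precision
```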

4.2.1 Exercises

1. Evaluate the following integrals in terms of the Gamma function, and simplify as much as possible.
   (a) $\int_0^\infty \ln x \, e^{-x} \, dx$
   (b) $\int_0^\infty \ln^2 x \, e^{-x} \, dx$
   (c) $\int_0^\infty x \ln x \, e^{-2x} \, dx$
   (d) $\int_0^\infty x^2 \ln(\pi x)\, \pi^{-\pi x} \, dx$
   (e) $\int_0^\infty \ln^2 x \, e^{-x^2} \, dx$
   (f) $\int_0^\infty \sqrt{x}\, \ln x \, e^{-3x} \, dx$

2. Take $\varepsilon = 1$ in our expansion for $\ln \Gamma(1 + \varepsilon)$ in order to obtain an infinite series representation for $\gamma$. Does this sum converge "quickly" or "slowly"? What can you do to make it converge faster?

3. Take $\varepsilon = -1/2$ in our expansion for $\ln \Gamma(1 + \varepsilon)$ in order to obtain an infinite series representation for $\gamma$. Does this sum converge "quickly" or "slowly"? What can you do to make it converge faster?

4. Evaluate the integral $\int_0^\infty \sqrt[3]{x}\, \ln x \, e^{-x} \, dx$. Your expression should contain terms of the form $\Gamma\!\left(\frac{1}{3} + \varepsilon\right)$. How might you deal with these terms? Consider the infinite product expansion of the gamma function, and see if you can derive the relation
$$\Gamma(1 + \varepsilon)\,\Gamma\!\left(\frac{1}{3} + \varepsilon\right)\Gamma\!\left(\frac{2}{3} + \varepsilon\right) = \frac{2\pi\sqrt{3}}{3^{3\varepsilon + 1}}\,\Gamma(1 + 3\varepsilon).$$

4.3 The Beta Function

A particularly lucrative result can be obtained by considering the product $\Gamma(\alpha)\Gamma(\beta)$. Making the substitutions $u = x + y$ and $t = x/u$ will allow us to obtain the celebrated Beta function integral.
$$\begin{aligned}
\Gamma(\alpha)\Gamma(\beta) &= \int_0^\infty x^{\alpha-1} e^{-x} \, dx \int_0^\infty y^{\beta-1} e^{-y} \, dy \\
&= \int_0^\infty dy \int_0^\infty dx \, \left[x^{\alpha-1} y^{\beta-1} e^{-x-y}\right] \\
&= \int_0^\infty du \int_0^u dx \, \left[x^{\alpha-1} (u - x)^{\beta-1} e^{-u}\right] \\
&= \int_0^\infty du \int_0^u dx \, \left[\left(\frac{x}{u}\right)^{\alpha-1} u^{\beta-1} \left(1 - \frac{x}{u}\right)^{\beta-1} e^{-u} \cdot u^{\alpha-1}\right] \\
&= \int_0^\infty du\, u \int_0^1 dt \, \left[u^{\alpha-1} t^{\alpha-1} u^{\beta-1} (1-t)^{\beta-1} e^{-u}\right] \\
&= \int_0^\infty u^{\alpha+\beta-1} e^{-u} \, du \int_0^1 t^{\alpha-1} (1-t)^{\beta-1} \, dt \\
&= \Gamma(\alpha+\beta) \int_0^1 t^{\alpha-1} (1-t)^{\beta-1} \, dt
\end{aligned}$$
Rearranging our expression, we can write
$$\int_0^1 t^{\alpha-1} (1-t)^{\beta-1} \, dt = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} = B(\alpha, \beta).$$
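Before using the Beta function in anger, it is worth confirming the identity numerically; the sketch below is an addition to the notes (not from the original) that compares the integral with the Gamma-function expression at non-integer arguments.

```python
# Quick numerical confirmation (illustrative, not from the notes) that the
# integral definition of B(alpha, beta) matches Gamma(a)Gamma(b)/Gamma(a+b).
import mpmath as mp

alpha, beta = mp.mpf('2.5'), mp.mpf('1.75')
integral = mp.quad(lambda t: t**(alpha - 1) * (1 - t)**(beta - 1), [0, 1])
via_gamma = mp.gamma(alpha) * mp.gamma(beta) / mp.gamma(alpha + beta)
print(integral, via_gamma)   # the two printed values agree
```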

For simple integer values of $\alpha$ and $\beta$, this expression is nothing all that special, but its real power comes when we consider integrals such as $\int_0^1 \sqrt{t(1-t)} \, dt$ or $\int_0^1 t\sqrt{1-t} \, dt$. Previously we would have needed to make somewhat annoying trig substitutions, but now we can simply "plug and chug" to get the answer quickly. From the Beta function we can also quickly derive the Legendre Duplication Formula, which allows us to express $\Gamma\!\left(\frac{1}{2} + z\right)$ in terms of known quantities. Consider $B(z, z)$, or
$$\begin{aligned}
\frac{\Gamma(z)\Gamma(z)}{\Gamma(2z)} &= \int_0^1 t^{z-1} (1-t)^{z-1} \, dt \\
&= \int_{-1}^{1} \left(\frac{u+1}{2}\right)^{z-1} \left[1 - \frac{u+1}{2}\right]^{z-1} \frac{du}{2} \\
&= \frac{1}{2^{2z-2}} \int_0^1 \left(1 - u^2\right)^{z-1} \, du \\
&= \frac{1}{2^{2z-1}} \int_0^1 x^{-1/2} (1-x)^{z-1} \, dx = \frac{1}{2^{2z-1}}\, \frac{\Gamma(1/2)\,\Gamma(z)}{\Gamma\!\left(\frac{1}{2} + z\right)}
\end{aligned}$$
Solving for $\Gamma\!\left(\frac{1}{2} + z\right)$, we see that
$$\Gamma\!\left(\frac{1}{2} + z\right) = 2^{1-2z} \sqrt{\pi}\, \frac{\Gamma(2z)}{\Gamma(z)}.$$

Finally, we can obtain an extremely useful expression involving the Beta function and powers of trigonometric functions. Consider substituting $x = u^2$ and $y = v^2$ in the product $\Gamma(\alpha)\Gamma(\beta)$ as follows.
$$\begin{aligned}
\Gamma(\alpha)\Gamma(\beta) &= \int_0^\infty x^{\alpha-1} e^{-x} \, dx \int_0^\infty y^{\beta-1} e^{-y} \, dy \\
&= \left(2\int_0^\infty u^{2\alpha-1} e^{-u^2} \, du\right) \left(2\int_0^\infty v^{2\beta-1} e^{-v^2} \, dv\right) \\
&= 4 \int_0^\infty r \, dr \int_0^{\pi/2} d\theta \, \left[r^{2\alpha+2\beta-2} e^{-r^2} \cos^{2\alpha-1}\theta \sin^{2\beta-1}\theta\right] \\
&= 4 \int_0^\infty r^{2\alpha+2\beta-1} e^{-r^2} \, dr \int_0^{\pi/2} \cos^{2\alpha-1}\theta \sin^{2\beta-1}\theta \, d\theta \\
&= 2 \int_0^\infty u^{\alpha+\beta-1} e^{-u} \, du \int_0^{\pi/2} \cos^{2\alpha-1}\theta \sin^{2\beta-1}\theta \, d\theta \\
&= 2\,\Gamma(\alpha+\beta) \int_0^{\pi/2} \cos^{2\alpha-1}\theta \sin^{2\beta-1}\theta \, d\theta
\end{aligned}$$
Solving for the integral, we obtain the result that
$$\int_0^{\pi/2} \cos^{2\alpha-1}\theta \sin^{2\beta-1}\theta \, d\theta = \frac{\Gamma(\alpha)\Gamma(\beta)}{2\,\Gamma(\alpha+\beta)}.$$

This allows us to obtain previously known results much more easily – such as
$$\int_0^{\pi/2} \sin^6\theta \cos^6\theta \, d\theta = \frac{5\pi}{2048}$$
– in addition to entirely new, previously unknown results such as the fact that
$$\int_0^{\pi/2} \sqrt{\sin\theta} \, d\theta = \sqrt{\frac{2}{\pi}}\,\Gamma^2(3/4).$$
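Both sample values can be confirmed by direct numerical integration, as in the following sketch, which is an illustrative addition and not part of the original notes.

```python
# Illustrative check (not from the notes) of the trigonometric Beta integral and
# the two sample values quoted above.
import mpmath as mp

def trig_beta(alpha, beta):
    """Integral of cos^(2a-1) * sin^(2b-1) over [0, pi/2]."""
    return mp.quad(lambda t: mp.cos(t)**(2*alpha - 1) * mp.sin(t)**(2*beta - 1),
                   [0, mp.pi/2])

# sin^6 cos^6: alpha = beta = 7/2
print(trig_beta(mp.mpf(7)/2, mp.mpf(7)/2), 5*mp.pi/2048)

# sqrt(sin): alpha = 1/2, beta = 3/4
print(trig_beta(mp.mpf('0.5'), mp.mpf('0.75')),
      mp.sqrt(2/mp.pi) * mp.gamma(mp.mpf('0.75'))**2)
```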

4.3.1 Logarithms and the Beta Function

We can apply regularization techniques similar to those developed earlier in order to compute integrals involving logarithms of $x$ or $1 - x$ over the interval 0 to 1 (or, similarly, integrals involving logarithms of sines and cosines over the interval 0 to $\pi/2$). What if we want to determine the value of $\int_0^1 \ln x \ln(1-x) \, dx$? Similar to our earlier expansion methods, we will consider the integral
$$\int_0^1 x^{\varepsilon} (1-x)^{\delta} \, dx = \frac{\Gamma(1+\varepsilon)\Gamma(1+\delta)}{\Gamma(2+\varepsilon+\delta)}.$$
The integral we are looking for corresponds to the $\varepsilon\delta$ term in the double Taylor expansion. Hence, we can manipulate this expression of Gamma functions to yield the desired term.
$$\begin{aligned}
\frac{\Gamma(1+\varepsilon)\Gamma(1+\delta)}{\Gamma(2+\varepsilon+\delta)} &= \frac{1}{1+\varepsilon+\delta} \cdot \frac{\Gamma(1+\varepsilon)\Gamma(1+\delta)}{\Gamma(1+\varepsilon+\delta)} \\
&= \left(1 - (\varepsilon+\delta) + (\varepsilon+\delta)^2 - \cdots\right) \exp\left\{-\gamma(\varepsilon+\delta) + \frac{\zeta(2)}{2}\left(\varepsilon^2 + \delta^2\right) + \gamma(\varepsilon+\delta) - \frac{\zeta(2)}{2}(\varepsilon+\delta)^2 + \cdots\right\} \\
&= \left(1 - (\varepsilon+\delta) + (\varepsilon+\delta)^2 - \cdots\right) e^{-\zeta(2)\varepsilon\delta + \cdots} \\
&= \left(1 - \varepsilon - \delta + \varepsilon^2 + 2\varepsilon\delta + \delta^2 + \cdots\right)\left(1 - \zeta(2)\varepsilon\delta + \cdots\right) \\
&\Rightarrow \text{our } \varepsilon\delta\text{-term is } 2 - \frac{\pi^2}{6}.
\end{aligned}$$

This technique takes some getting used to, but once you have it down you can obtain some very important results – all that is required is algebraic fortitude.
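The $\varepsilon\delta$ computation above can also be checked by brute force; the sketch below is an addition to the notes (not from the original) that integrates $\ln x \ln(1-x)$ numerically and compares against $2 - \pi^2/6$.

```python
# Numerical confirmation (illustrative sketch, not from the notes) that
# integral_0^1 ln(x) ln(1-x) dx = 2 - pi^2/6.
import mpmath as mp

value = mp.quad(lambda x: mp.log(x) * mp.log(1 - x), [0, 1])
print(value, 2 - mp.pi**2 / 6)   # both approximately 0.35507
```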

4.3.2 Exercises

1. Evaluate the following integrals in terms of the Gamma function, and simplify as much as possible.
   (a) $\int_0^1 \frac{\ln x}{\sqrt{1-x}} \, dx$
   (b) $\int_0^1 \frac{1}{x} \ln\left(\frac{1}{1-x}\right) dx$
   (c) $\int_0^1 x^2 (1-x)^3 \ln x \, dx$
   (d) $\int_0^1 x^3 \sqrt{1-x} \, dx$
   (e) $\int_0^{\pi/2} \sin x \ln \sin x \, dx$
   (f) $\int_0^1 \ln^2 x \ln(1-x) \, dx$
   (g) $\int_0^1 \frac{\ln x}{\sqrt{x(1-x)}} \, dx$
   (h) $\int_0^{\pi/2} \sec x \ln \sin x \, dx$
   (i) $\int_0^1 x \ln x \ln(1-x) \, dx$
   (j) $\int_0^{\pi/2} \left(\csc x - \frac{1}{x}\right) dx$
   (k) $\int_0^{\pi} \sin^2 x \cos^8 x \, dx$
   (l) $\int_0^{\pi} \sin^4 x \sqrt{\sin x} \, dx$

2. Make the substitution $u = -1 + 1/t$ in the Beta function integral to obtain
$$\int_0^\infty \frac{u^{\beta-1}}{(1+u)^{\alpha+\beta}} \, du = B(\alpha, \beta).$$
From this obtain
$$\int_0^\infty \frac{x^{m-1}}{x^n + 1} \, dx = \frac{\pi}{n} \csc\frac{m\pi}{n}$$
and then compute the following integrals.
   (a) $\int_0^\infty \frac{\ln x}{x^3 + 1} \, dx$
   (b) $\int_0^\infty \frac{\sqrt{x}\,\sqrt[3]{x}}{(x^2+1)^2} \, dx$
   (c) $\int_0^\infty \frac{x \ln^2 x}{x^4 + 1} \, dx$

4.4 n-Dimensional Hyperspheres

Consider an $n$-dimensional hypersphere. When we think of the word sphere, we are usually thinking of a 2-sphere (the surface itself is 2-dimensional and it is embedded in three-dimensional space). Thus, it makes sense to set $d := n + 1$ as the dimension of the space the sphere is "embedded" in. Additionally, to be more specific, we will define an $n$-sphere as the surface that satisfies the equation
$$x_1^2 + x_2^2 + \cdots + x_{n+1}^2 = r^2 \tag{1}$$
where the radius $r$ is some positive number. Clearly, we can parametrize this surface using $n$ angular variables $\theta_1, \theta_2, \ldots, \theta_{n-1}, \varphi$ (simply think of the case of the 2-sphere, where we get away with parametrizing the surface with just the variables $\theta, \varphi$) with $0 \le \theta_i \le \pi$ and $0 \le \varphi \le 2\pi$. The specific parametrization of $x_1, x_2, \ldots, x_{n+1}$ is shown below.
$$\begin{aligned}
x_1 &= r \cos\theta_1 \\
x_2 &= r \sin\theta_1 \cos\theta_2 \\
x_3 &= r \sin\theta_1 \sin\theta_2 \cos\theta_3 \\
&\;\;\vdots \\
x_n &= r \sin\theta_1 \sin\theta_2 \cdots \sin\theta_{n-1} \cos\varphi \\
x_{n+1} &= r \sin\theta_1 \sin\theta_2 \cdots \sin\theta_{n-1} \sin\varphi
\end{aligned}$$

To easily obtain the surface area, we need to determine what the area element on the sphere is in terms of these angular variables. For $1 \le k \le n-1$, the length element associated with $\theta_k$ is $r \sin\theta_1 \sin\theta_2 \cdots \sin\theta_{k-1}\, d\theta_k$ (and $r \sin\theta_1 \cdots \sin\theta_{n-1}\, d\varphi$ for $\varphi$), so we can write the surface area $S_n$ as
$$\begin{aligned}
S_n &= \int_0^\pi d\theta_1 \int_0^\pi d\theta_2 \cdots \int_0^\pi d\theta_{n-1} \int_0^{2\pi} d\varphi \, \left[r\sin\theta_1 \cdots \sin\theta_{n-1}\right]\cdot\left[r\sin\theta_1 \cdots \sin\theta_{n-2}\right] \cdots \left[r\sin\theta_1\right]\cdot r \\
&= \int_0^\pi d\theta_1 \int_0^\pi d\theta_2 \cdots \int_0^\pi d\theta_{n-1} \int_0^{2\pi} d\varphi \, \left[r^n \sin^{n-1}\theta_1 \sin^{n-2}\theta_2 \cdots \sin\theta_{n-1}\right] \\
&= r^n \left(\int_0^\pi \sin^{n-1}\theta_1 \, d\theta_1\right)\left(\int_0^\pi \sin^{n-2}\theta_2 \, d\theta_2\right) \cdots \left(\int_0^\pi \sin\theta_{n-1} \, d\theta_{n-1}\right)\left(\int_0^{2\pi} d\varphi\right) \\
&= r^n \, \frac{\Gamma\!\left(\frac{n}{2}\right)\Gamma\!\left(\frac{1}{2}\right)}{\Gamma\!\left(\frac{n+1}{2}\right)} \cdot \frac{\Gamma\!\left(\frac{n-1}{2}\right)\Gamma\!\left(\frac{1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)} \cdots \frac{\Gamma(1)\,\Gamma\!\left(\frac{1}{2}\right)}{\Gamma\!\left(\frac{3}{2}\right)} \cdot 2\pi \\
&= \frac{2\pi^{\frac{n+1}{2}}}{\Gamma\!\left(\frac{n+1}{2}\right)}\, r^n = \frac{2\pi^{d/2}}{\Gamma(d/2)}\, r^{d-1}.
\end{aligned}$$
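The final formula is easy to exercise numerically; the sketch below is an illustrative addition (not part of the original notes) that recovers the familiar circumference $2\pi r$ for $d = 2$ and surface area $4\pi r^2$ for $d = 3$.

```python
# Illustrative check (not from the notes) of S_n = 2 pi^(d/2) / Gamma(d/2) * r^(d-1):
# d = 2 gives a circle's circumference 2*pi*r, d = 3 gives a sphere's area 4*pi*r^2.
import math

def sphere_surface_area(d: int, r: float = 1.0) -> float:
    """Surface area of the (d-1)-sphere of radius r embedded in d dimensions."""
    return 2 * math.pi**(d / 2) / math.gamma(d / 2) * r**(d - 1)

for d in (2, 3, 4):
    print(d, sphere_surface_area(d))
# 2 -> 6.283..., 3 -> 12.566..., 4 -> 19.739... (= 2*pi^2)
```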

5 The Zeta Function

The Riemann Zeta function is defined for complex numbers $s$ with real part greater than 1 by the following series.
$$\zeta(s) = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \cdots = \sum_{k=1}^{\infty} \frac{1}{k^s}$$
The sum on the right converges absolutely whenever $\operatorname{Re}[s] > 1$. To see that we encounter "problems" for $s = 1$, suppose the sum converges to a particular limit $L = 1 + \frac{1}{2} + \frac{1}{3} + \cdots$. Then $L/2 = \frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots$, so $1 + \frac{1}{3} + \frac{1}{5} + \cdots = L/2$ as well; but every term of the odd series is strictly larger than the corresponding term of the even series, so the two cannot share the same sum, a contradiction. To see the Zeta function's connection to the prime numbers, observe that we can "factor" the series as an infinite product over all primes:
$$\zeta(s) = \sum_{k=1}^{\infty} \frac{1}{k^s} = \prod_{p} \frac{p^s}{p^s - 1}.$$

Additionally, there are some specific integrals in which the Zeta function makes a very important appearance. Consider the following manipulations.
$$\begin{aligned}
\int_0^\infty \frac{x^{n-1}}{e^x - 1} \, dx &= \int_0^\infty \frac{x^{n-1} e^{-x}}{1 - e^{-x}} \, dx = \int_0^\infty x^{n-1} e^{-x} \sum_{k=0}^{\infty} e^{-kx} \, dx \\
&= \sum_{k=0}^{\infty} \int_0^\infty x^{n-1} e^{-(k+1)x} \, dx = \sum_{k=0}^{\infty} \frac{1}{(k+1)^n} \int_0^\infty u^{n-1} e^{-u} \, du \\
&= \Gamma(n)\,\zeta(n)
\end{aligned}$$
A virtually identical sequence of manipulations can be used to show that $\int_0^\infty \frac{x^{n-1}}{e^x + 1} \, dx = \Gamma(n)\,\eta(n)$, where the Dirichlet eta function $\eta(n)$ is defined as a sort of "alternating" zeta function:
$$\eta(n) = 1 - \frac{1}{2^n} + \frac{1}{3^n} - + \cdots = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^n} = \left(1 - \frac{1}{2^{n-1}}\right) \zeta(n).$$
As usual, considering a more general form of the integral allows us to differentiate through the integral sign and produce many more highly useful results:
$$\int_0^\infty \frac{x^{n-1}}{e^{ax} - 1} \, dx = a^{-n}\,\Gamma(n)\,\zeta(n)$$
$$\int_0^\infty \frac{x^n e^x}{(e^x - 1)^2} \, dx = \Gamma(n+1)\,\zeta(n)$$
$$\int_0^\infty \frac{x^n}{(e^x - 1)^2} \, dx = \Gamma(n+1)\left[\zeta(n) - \zeta(n+1)\right]$$
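The underlying identity $\int_0^\infty x^{n-1}/(e^x - 1)\,dx = \Gamma(n)\zeta(n)$, as well as its rescaled form, can be confirmed numerically; the mpmath sketch below is an illustrative addition, not part of the original notes.

```python
# Numerical check (illustrative, not from the notes) of
#   integral_0^inf x^(n-1) / (e^x - 1) dx = Gamma(n) zeta(n)
# and of the rescaled version with e^(a x), here with n = 4 and a = 2.
import mpmath as mp

n, a = 4, 2
base = mp.quad(lambda x: x**(n - 1) / (mp.exp(x) - 1), [0, mp.inf])
print(base, mp.gamma(n) * mp.zeta(n))            # both equal pi^4/15 = 6.4939...

scaled = mp.quad(lambda x: x**(n - 1) / (mp.exp(a * x) - 1), [0, mp.inf])
print(scaled, mp.gamma(n) * mp.zeta(n) / a**n)   # a^(-n) Gamma(n) zeta(n)
```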

Although it is beyond the scope of this course to derive (it requires a good background in complex analysis), we will find the reflection identity for the zeta function useful. It states that
$$\zeta(2s) = 2^{2s} \pi^{2s-1} \sin(\pi s)\,\Gamma(1 - 2s)\,\zeta(1 - 2s).$$
This identity allows us to compute the value of the zeta function for various values that were previously inaccessible. For example, it is clear that $\zeta(-2n) = 0$ for positive integers $n$. The negative even integers are thus known as the trivial zeroes of the zeta function, since they trivially exist due to the sine function apparent in this reflection identity. Additionally, we can pull out an expression for the values of the Zeta function at the negative odd integers, as well as an expression for the derivative of the function at the negative evens.
$$\zeta(-2n-1) = (-1)^{n+1}\, \frac{2\,(2n+1)!}{(2\pi)^{2n+2}}\, \zeta(2n+2)$$
$$\zeta'(-2n) = (-1)^n\, \frac{(2n)!}{2\,(2\pi)^{2n}}\, \zeta(2n+1)$$
From these results, we can "assign" a value of $-1/12$ to the divergent sum
$$1 + 2 + 3 + 4 + \cdots = \zeta(-1) = -\frac{1}{12}.$$
It turns out that this can actually be a meaningful result in various physical applications.

6 Problems

This selection of problems is intended to both be a source of practice for material from the lecture, as well as an invitation to results that I haven't had time to discuss. Some of these problems will require you to develop your own results to solve; some of these problems are very hard. Still, if you persevere, you will undoubtedly gain a much greater understanding of the material and be able to apply it in a wider variety of situations.

1. Evaluate the following integrals. Make sure that all your answers are simplified as much as possible by applying the Gamma reflection identity if necessary and by reducing the argument of any Gamma functions to something less than one.
   (a) $\int_0^\infty x^2 \ln x \, e^{-x} \, dx$
   (b) $\int_0^\infty \frac{\sin x}{\sqrt{x}} \, dx$
   (c) $\int_0^\infty \frac{\cos x}{\sqrt{x}} \, dx$
   (d) $\int_0^\infty x \cos 2x \sin 3x \, e^{-5x} \, dx$
   (e) $\int_0^1 \left(\frac{\ln x}{x - 1}\right)^3 dx$
   (f) $\int_0^\infty x\left[\frac{1}{e^{2x} - 2e^x \cos x + 1} - \frac{\cosh 2x}{e^{4x} - 1}\right] dx$
   (g) $\int_0^\infty \frac{x \ln x}{x^3 + 1} \, dx$
   (h) $\int_0^1 x^2 \sqrt{x}\, \ln^2(1 - x) \, dx$

2. It is fairly simple to convince yourself that $\sum_{k=0}^{\infty} \frac{1}{(2k)!} = \frac{1}{2}\left(e + e^{-1}\right) = \cosh 1$. With the logic behind this sum in mind, see if you can figure out how to sum the following series.

   (a) $\sum_{k=0}^{\infty} \frac{1}{(2k+1)!}$
   (b) $\sum_{k=0}^{\infty} \frac{1}{(3k)!}$
   (c) $\sum_{k=0}^{\infty} \frac{1}{(3k+1)!}$
   (d) $\sum_{k=0}^{\infty} \frac{1}{(3k+2)!}$
   (e) $\sum_{k=0}^{\infty} \frac{1}{(4k)!}$
   (f) $\sum_{k=0}^{\infty} \frac{1}{(4k+1)!}$
From this you should be able to figure out how you would represent the series $\sum_{k=0}^{\infty} a_{qk+p}\, x^{qk+p}$ in terms of the function $f(x) = \sum_{n=0}^{\infty} a_n x^n$.

3. The Digamma function $\Psi(z)$ is defined by $\Psi(z) = \frac{d}{dz} \ln \Gamma(z) = \frac{\Gamma'(z)}{\Gamma(z)}$. Write a power series expansion for $\Psi(1 + z)$ and evaluate the integral $\int_0^\infty \sqrt[3]{x}\, \ln x \, e^{-2x} \, dx$ in terms of the Gamma and Digamma functions.

4. It turns out that the digamma function can be evaluated at all positive rational arguments. Determine $\Psi(1/3)$.

5. For each of the following integrals, determine if the integral converges and if so, evaluate it.
   (a) $\int_0^\infty \frac{dx}{\sqrt{x}\,(e^x - 1)}$
   (b) $\int_0^{\pi/2} \sec^2 x \, \ln \sin x \, dx$
   (c) $\int_0^{\pi/2} \left(\csc^2 x - \frac{1}{x^2}\right) dx$
   (d) $\int_0^{\pi/2} \left(\csc^4 x - \frac{1}{x^4}\right) dx$
   (e) $\int_0^{\pi/2} \left(\csc^3 x - \frac{1}{x^3}\right) dx$
   (f) $\int_0^1 \frac{\ln(1-x)}{x} \, dx$
   (g) $\int_0^\infty \sqrt{x}\, \ln x \, e^{-2x} \, dx$
   (h) $\int_0^\infty \frac{x^3 \ln x}{(x^4 + 1)^3} \, dx$
   (i) $\int_0^\infty \frac{\sin 8x^3}{x^3} \, dx$
   (j) $\int_0^\infty \frac{\cos 8x^3}{x^3} \, dx$
   (k) $\int_0^1 x \ln^2 x \, \ln^2(1 - x^2) \, dx$
   (l) $\int_0^\infty \frac{e^{-x}}{x\sqrt{x}} \, dx$

6. In what dimension $d$ does the hypersphere have the maximum volume? And what is that volume?

7. Consider $f(s; \xi) = \sum_{k=-\infty}^{\infty} \frac{1}{(k+\xi)^s}$ for some complex number $\xi$ that is not an integer. Re-express the sum $\sum_{s=1}^{\infty} \frac{f(s;\xi)}{s}\, x^s$ as a logarithm of an infinite product, and use your results to evaluate the following sums.
   (a) $\sum_{k=0}^{\infty} \left(\frac{1}{3k+1} - \frac{1}{3k+2}\right)$
   (b) $\sum_{k=0}^{\infty} \left(\frac{1}{6k+1} - \frac{1}{6k+5}\right)$
   (c) $\sum_{k=0}^{\infty} (-1)^k \left(\frac{1}{6k+1} - \frac{1}{6k+5}\right)$
   (d) $\sum_{k=0}^{\infty} (-1)^k \left[\frac{1}{(4k+1)^3} + \frac{1}{(4k+3)^3}\right]$
   (e) $\sum_{k=0}^{\infty} \left[\frac{1}{(12k+1)^2} + \frac{1}{(12k+11)^2}\right]$
   (f) $\sum_{k=0}^{\infty} \left[\frac{1}{(6k+1)^4} + \frac{1}{(6k+5)^4}\right]$

7 References

If you have found the concepts of series expansions and the Gamma function interesting, there are a number of books that you can check out to learn more about these fascinating subjects - and to get a more rigorous background in the material, since given the time constraints I had to skim through parts of the proofs. First and foremost, this lecture is very strongly based on the material taught by Jonathan A. Osborne in his Advanced Mathematical Techniques course at TJHSST, the material of which can also be found in the textbook he wrote for the course, Advanced Mathematical Techniques: for Scientists and Engineers, Second Edition. Without that course, I could not be teaching this to you today as I would simply not know these techniques in the first place! For a strong basis in complex analysis, there is no better place to turn than A. I. Markushevich's work Theory of Functions of a Complex Variable. For a (significantly!) cheaper, condensed version of Markushevich's text, I suggest Richard A. Silverman's text Introductory Complex Analysis. And finally, for a much more theoretical take on the subject, I suggest William A. Veech's A Second Course in Complex Analysis. In other areas, I suggest Julian Havil's book Gamma: Exploring Euler's Constant as a very readable exposition of the many different facets of Euler's γ. If the Zeta function and the Riemann hypothesis interest you, John Derbyshire's book Prime Obsession is a very well-written, fascinating discussion of the Riemann hypothesis – it reads like a gripping mystery novel but the mathematics remains present and clearly exposited. Finally, I suggest H. M. Edwards' Riemann's Zeta Function as a good advanced text for learning about the developments in theory caused by Riemann's landmark 1859 paper, ranging from the basic reflection identities to the prime number theorem, Fourier analysis, and other topics.
