
Mathematical economics
M. Bray, R. Razin, A. Sarychev
EC3120

2014

Undergraduate study in Economics, Management, Finance and the Social Sciences This subject guide is for a 300 course offered as part of the University of London International Programmes in Economics, Management, Finance and the Social Sciences. This is equivalent to Level 6 within the Framework for Higher Education Qualifications in England, Wales and Northern Ireland (FHEQ). For more information about the University of London International Programmes undergraduate study in Economics, Management, Finance and the Social Sciences, see: www.londoninternational.ac.uk

This guide was prepared for the University of London International Programmes by: Dr Margaret Bray, Dr Ronny Razin, Dr Andrei Sarychev, Department of Economics, The London School of Economics and Political Science. With typesetting and proof-reading provided by: James S. Abdey, BA (Hons), MSc, PGCertHE, PhD, Department of Statistics, London School of Economics and Political Science. This is one of a series of subject guides published by the University. We regret that due to pressure of work the authors are unable to enter into any correspondence relating to, or arising from, the guide. If you have any comments on this subject guide, favourable or unfavourable, please use the form at the back of this guide.

University of London International Programmes Publications Office Stewart House 32 Russell Square London WC1B 5DN United Kingdom www.londoninternational.ac.uk Published by: University of London © University of London 2007 Reprinted with minor revisions 2014 The University of London asserts copyright over all material in this subject guide except where otherwise indicated. All rights reserved. No part of this work may be reproduced in any form, or by any means, without permission in writing from the publisher. We make every effort to respect copyright. If you think we have inadvertently used your copyright material, please let us know.

Contents

1 Introduction
  1.1 The structure of the course
  1.2 Aims
  1.3 Learning outcomes
  1.4 Syllabus
  1.5 Reading advice
    1.5.1 Essential reading
    1.5.2 Further reading
  1.6 Online study resources
    1.6.1 The VLE
    1.6.2 Making use of the Online Library
  1.7 Using the subject guide
  1.8 Examination

2 Constrained optimisation: tools
  2.1 Aim of the chapter
  2.2 Learning outcomes
  2.3 Essential reading
  2.4 Further reading
  2.5 Introduction
  2.6 The constrained optimisation problem
  2.7 Maximum value functions
  2.8 The Lagrange sufficiency theorem
  2.9 Concavity and convexity and the Lagrange Necessity Theorem
  2.10 The Lagrangian necessity theorem
  2.11 First order conditions: when can we use them?
    2.11.1 Necessity of first order conditions
    2.11.2 Sufficiency of first order conditions
    2.11.3 Checking for concavity and convexity
    2.11.4 Definiteness of quadratic forms
    2.11.5 Testing the definiteness of a matrix
    2.11.6 Back to concavity and convexity
  2.12 The Kuhn-Tucker Theorem
  2.13 The Lagrange multipliers and the Envelope Theorem
    2.13.1 Maximum value functions
  2.14 The Envelope Theorem
  2.15 Solutions to activities
  2.16 Sample examination questions
  2.17 Comments on the sample examination questions

3 The consumer's utility maximisation problem
  3.1 Aim of the chapter
  3.2 Learning outcomes
  3.3 Essential reading
  3.4 Further reading
  3.5 Preferences
    3.5.1 Preferences
    3.5.2 Assumptions on preferences
  3.6 The consumer's budget
    3.6.1 Definitions
  3.7 Preferences and the utility function
    3.7.1 The consumer's problem in terms of preferences
    3.7.2 Preferences and utility
    3.7.3 Cardinal and ordinal utility
  3.8 The consumer's problem in terms of utility
    3.8.1 Uncompensated demand and the indirect utility function
    3.8.2 Nonsatiation and uncompensated demand
  3.9 Solution to activities
  3.10 Sample examination questions
  3.11 Comments on sample examination questions

4 Homogeneous and homothetic functions in consumer choice theory
  4.1 Aim of the chapter
  4.2 Learning outcomes
  4.3 Essential reading
  4.4 Further reading
  4.5 Homogeneous functions
    4.5.1 Definition
    4.5.2 Homogeneity of degrees zero and one
  4.6 Homogeneity, uncompensated demand and the indirect utility function
    4.6.1 Homogeneity
    4.6.2 Other properties of indirect utility functions
  4.7 Derivatives of homogeneous functions
  4.8 Homogeneous utility functions
    4.8.1 Homogeneous utility functions and the indifference curve diagram
    4.8.2 Homogeneous utility functions and the marginal rate of substitution
    4.8.3 Homogeneous utility functions and uncompensated demand
  4.9 Homothetic functions
    4.9.1 Homogeneity and homotheticity
    4.9.2 Indifference curves with homothetic utility functions
    4.9.3 Marginal rates of substitution with homothetic utility functions
    4.9.4 Homothetic utility and uncompensated demand
  4.10 Solutions to activities
  4.11 Sample examination questions

5 Quasiconcave and quasiconvex functions
  5.1 Aim of the chapter
  5.2 Learning outcomes
  5.3 Essential reading
  5.4 Further reading
  5.5 Definitions
    5.5.1 Concavity and convexity
    5.5.2 Quasiconcavity and quasiconvexity
  5.6 Quasiconcavity and concavity; quasiconvexity and convexity
    5.6.1 The relationship
    5.6.2 Quasiconcavity in producer theory
    5.6.3 Quasiconcavity in consumer theory
  5.7 Tangents and sets with quasiconcave and quasiconvex functions
    5.7.1 The result
    5.7.2 Interpreting Theorem 18
    5.7.3 Proof of Theorem 18
  5.8 The Kuhn-Tucker Theorem with quasiconcavity and quasiconvexity
  5.9 Solutions to activities
  5.10 Sample examination questions
  5.11 Comments on the sample examination questions

6 Expenditure and cost minimisation problems
  6.1 Aim of the chapter
  6.2 Learning outcomes
  6.3 Essential reading
  6.4 Further reading
  6.5 Compensated demand and expenditure minimisation
    6.5.1 Income and substitution effects
    6.5.2 Definition of compensated demand
    6.5.3 Properties of compensated demand
  6.6 The expenditure function
    6.6.1 Definition of the expenditure function
    6.6.2 Properties of the expenditure function
  6.7 The firm's cost minimisation problem
    6.7.1 Definitions
    6.7.2 The firm's problem and the consumer's problem
  6.8 Utility maximisation and expenditure minimisation
    6.8.1 The relationship
    6.8.2 Demand and utility relationships
  6.9 Roy's identity
    6.9.1 The statement of Roy's identity
    6.9.2 The derivation of Roy's identity
    6.9.3 The chain rule for Roy's identity
  6.10 The Slutsky equation
    6.10.1 Statement of the Slutsky equation
    6.10.2 Derivation of the Slutsky equation
  6.11 Solutions to activities
  6.12 Sample examination questions
  6.13 Comments on the sample examination questions

7 Dynamic programming
  7.1 Learning outcomes
  7.2 Essential reading
  7.3 Further reading
  7.4 Introduction
  7.5 The optimality principle
  7.6 A more general dynamic problem and the optimality principle
    7.6.1 Method 1. Factoring out V(·)
    7.6.2 Method 2. Finding V
  7.7 Solutions to activities
  7.8 Sample examination/practice questions
  7.9 Comments on the sample examination/practice questions

8 Ordinary differential equations
  8.1 Learning outcomes
  8.2 Essential reading
  8.3 Further reading
  8.4 Introduction
  8.5 The notion of a differential equation: first order differential equations
    8.5.1 Definitions
    8.5.2 Stationarity
    8.5.3 Non-homogeneous linear equations
    8.5.4 Separable equations
  8.6 Linear second order equations
    8.6.1 Homogeneous (stationary) equations
    8.6.2 Non-homogeneous second order linear equations
    8.6.3 Phase portraits
    8.6.4 The notion of stability
  8.7 Systems of equations
    8.7.1 Systems of linear ODEs: solutions by substitution
    8.7.2 Two-dimensional phase portraits
  8.8 Linearisation of non-linear systems of ODEs
  8.9 Qualitative analysis of the macroeconomic dynamics under rational expectations
    8.9.1 Fiscal expansion in a macro model with sticky prices
  8.10 Solutions to activities

9 Optimal control
  9.1 Learning outcomes
  9.2 Essential reading
  9.3 Further reading
  9.4 Introduction
  9.5 Continuous time optimisation: Hamiltonian and Pontryagin's maximum principle
    9.5.1 Finite horizon problem
    9.5.2 Infinite horizon problem
  9.6 Ramsey-Cass-Koopmans model
    9.6.1 The command optimum
    9.6.2 Steady state and dynamics
    9.6.3 Dynamics
    9.6.4 Local behaviour around the steady state
    9.6.5 Fiscal policy in the Ramsey model
    9.6.6 Current value Hamiltonians and the co-state equation
    9.6.7 Multiple state and control variables
  9.7 Optimal investment problem: Tobin's q
    9.7.1 Steady state
    9.7.2 Phase diagram
    9.7.3 Local behaviour around the steady state
  9.8 Continuous time optimisation: extensions
  9.9 Solutions to activities
  9.10 Sample examination/practice questions
  9.11 Comments on the sample examination/practice questions
  9.12 Appendix A: Interpretation of the co-state variable λ(t)
  9.13 Appendix B: Sufficient conditions for solutions to optimal control problems
  9.14 Appendix C: Transversality condition in the infinite horizon optimal control problems

A Sample examination paper
B Sample examination paper – Examiners' commentary

Chapter 1 Introduction

Welcome to EC3120 Mathematical economics which is a 300 course offered on the Economics, Management, Finance and Social Sciences (EMFSS) suite of programmes. In this brief introduction, we describe the nature of this course and advise on how best to approach it. Essential textbooks and Further reading resources for the entire course are listed in this introduction for easy reference. At the end, we present relevant examination advice.

1.1 The structure of the course

The course consists of two parts which are roughly equal in length but belong to two different realms of economics: the first deals with the mathematical apparatus needed to rigorously formulate the core of microeconomics, consumer choice theory; the second presents a host of techniques used to model intertemporal decision making in the macroeconomy.

The two parts are also different in style. In the first part it is important to lay down rigorous proofs of the main theorems while paying attention to the details of the assumptions. The second part often dispenses with rigour in favour of slightly informal derivations needed to grasp the essence of the methods. The formal treatment of the underlying mathematical foundations is too difficult to be within the scope of undergraduate study; still, the methods can be used fruitfully without the technicalities involved: most macroeconomists actively employing them have never taken a formal course in optimal control theory.

As you should already have an understanding of multivariate calculus and integration, we have striven to make the exposition in both parts of the subject completely self-contained. This means that beyond the basic prerequisites you do not need to have an extensive background in fields like functional analysis, topology, or differential equations. However, such a background may allow you to progress faster. For instance, if you have studied the concepts and methods of ordinary differential equations before you may find that you can skip parts of Chapter 8.

By design the course has a significant economic component. Therefore we apply the techniques of constrained optimisation to the problems of static consumer and firm choice; the dynamic programming methods are employed to analyse consumption smoothing, habit formation and allocation of spending on durables and non-durables; the phase plane tools are used to study dynamic fiscal policy analysis and foreign currency reserves dynamics; Pontryagin's maximum principle is utilised to examine a firm's investment behaviour and the aggregate saving behaviour in an economy.

1.2 Aims

The course is specifically designed to:

- demonstrate to you the importance of the use of mathematical techniques in theoretical economics
- enable you to develop skills in mathematical modelling.

1.3 Learning outcomes

At the end of this course, and having completed the Essential reading and activities, you should be able to:

- use and explain the underlying principles, terminology, methods, techniques and conventions used in the subject
- solve economic problems using the mathematical methods described in the subject.

1.4 Syllabus

Techniques of constrained optimisation. This is a rigorous treatment of the mathematical techniques used for solving constrained optimisation problems, which are basic tools of economic modelling. Topics include: definitions of a feasible set and of a solution, sufficient conditions for the existence of a solution, maximum value function, shadow prices, Lagrangian and Kuhn-Tucker necessity and sufficiency theorems with applications in economics, for example General Equilibrium theory, Arrow-Debreu securities and arbitrage.

Intertemporal optimisation. Bellman approach. Euler equations. Stationary infinite horizon problems. Continuous time dynamic optimisation (optimal control). Applications, such as habit formation, the Ramsey-Cass-Koopmans model, Tobin's q and capital taxation in an open economy, are considered.

Tools for optimal control: ordinary differential equations. These are studied in detail and include linear second order equations, phase portraits, solving linear systems, steady states and their stability.

1.5 Reading advice

While the topics covered in this subject are in every economist's essential toolbox, their textbook coverage varies. There are a lot of first-rate treatments of static optimisation methods; most textbooks that have 'mathematical economics' or 'mathematics for economists' in the title will have covered these at various levels of rigour. Therefore students with different backgrounds will be able to choose a book with the most suitable level of exposition.

1.5.1 Essential reading

Dixit, A.K. Optimization in economic theory. (Oxford: Oxford University Press, 1990) [ISBN 9780198772101].

The textbook by Dixit is perhaps in the felicitous middle. However, until recently there has been no textbook that covers all the aspects of the dynamic analysis and optimisation used in macroeconomic models.

Sydsæter, K., P. Hammond, A. Seierstad and A. Strøm Further mathematics for economic analysis. (Harlow: Pearson Prentice Hall, 2008) second edition [ISBN 9780273713289].

The book by Sydsæter et al. is an attempt to close that gap, and is therefore the Essential reading for the second part of the course, despite the fact that the exposition is slightly more formal than in this guide. This book covers almost all of the topics in the course, although the emphasis falls on the technique and not on the proof. It also provides a useful reference for linear algebra, calculus and basic topology. The style of the text is slightly more formal than the one adopted in this subject guide. For that reason we have included references to Further reading, especially from various macroeconomics textbooks, which may help develop a more intuitive (non-formal) understanding of the concepts from the application-centred perspective.

Note that whatever your choice of Further reading textbooks is, textbook reading is essential. As with lectures, this guide gives structure to your study, while the additional reading supplies a lot of detail to supplement this structure. There are also more learning activities and Sample examination questions, with solutions, to work through in each chapter.

Detailed reading references in this subject guide refer to the editions of the set textbooks listed above. New editions of one or more of these textbooks may have been published by the time you study this course. You can use a more recent edition of any of the books; use the detailed chapter and section headings and the index to identify relevant readings. Also check the virtual learning environment (VLE) regularly for updated guidance on readings.

1.5.2 Further reading

Please note that as long as you read the Essential reading you are then free to read around the subject area in any text, paper or online resource. You will need to support your learning by reading as widely as possible and by thinking about how these principles apply in the real world. To help you read extensively, you have free access to the VLE and University of London Online Library (see below). Other useful texts for this course include:

Barro, R. and X. Sala-i-Martin Economic growth. (New York: McGraw-Hill, 2003) second edition [ISBN 9780262025539]. The mathematical appendix contains a useful reference in condensed form for phase plane analysis and optimal control.

Kamien, M. and N.L. Schwartz Dynamic optimisation: the calculus of variations and optimal control in economics and management. (Amsterdam: Elsevier Science, 1991) [ISBN 9780444016096]. This book extensively covers optimal control methods.

Ljungqvist, L. and T.J. Sargent Recursive macroeconomic theory. (Cambridge, MA: MIT Press, 2001) [ISBN 9780262122740]. This book is a comprehensive (thus huge!) study of macroeconomic applications centred around the dynamic programming technique.

Rangarajan, S. A first course in optimization theory. (Cambridge: Cambridge University Press, 1996) [ISBN 9780521497701] Chapters 11 and 12. This book has a chapter on dynamic programming.

Sargent, T.J. Dynamic macroeconomic theory. (Cambridge, MA: Harvard University Press, 1987) [ISBN 9780674218772] Chapter 1. This book has a good introduction to dynamic programming.

Simon, C.P. and L. Blume Mathematics for economists. (New York: W.W. Norton, 1994) [ISBN 9780393957334]. This textbook deals with static optimisation topics in a comprehensive manner. It also covers substantial parts of differential equations theory.

Takayama, A. Analytical methods in economics. (Ann Arbor, MI: University of Michigan Press, 1999) [ISBN 9780472081356]. This book extensively covers optimal control methods.

Varian, H.R. Intermediate microeconomics: a modern approach. (New York: W.W. Norton & Co, 2009) eighth edition [ISBN 9780393934243] Chapters 2–6, or the relevant sections of any intermediate microeconomics textbook.

Varian, H.R. Microeconomic analysis. (New York: W.W. Norton & Co, 1992) third edition [ISBN 9780393957358] Chapter 7. For a more sophisticated treatment comparable to Chapter 2 of this guide.

1.6 Online study resources

In addition to the subject guide and the reading, it is crucial that you take advantage of the study resources that are available online for this course, including the VLE and the Online Library.

You can access the VLE, the Online Library and your University of London email account via the Student Portal at: http://my.londoninternational.ac.uk

You should have received your login details for the Student Portal with your official offer, which was emailed to the address that you gave on your application form. You have probably already logged in to the Student Portal in order to register! As soon as you registered, you were automatically granted access to the VLE, Online Library and your fully functional University of London email account. If you have forgotten these login details, please click on the 'Forgotten your password' link on the login page.

1.6.1 The VLE

The VLE, which complements this subject guide, has been designed to enhance your learning experience, providing additional support and a sense of community. It forms an important part of your study experience with the University of London and you should access it regularly.

The VLE provides a range of resources for EMFSS courses:

- Self-testing activities: Doing these allows you to test your own understanding of subject material.
- Electronic study materials: The printed materials that you receive from the University of London are available to download, including updated reading lists and references.
- Past examination papers and Examiners' commentaries: These provide advice on how each examination question might best be answered.
- A student discussion forum: This is an open space for you to discuss interests and experiences, seek support from your peers, work collaboratively to solve problems and discuss subject material.
- Videos: There are recorded academic introductions to the subject, interviews and debates and, for some courses, audio-visual tutorials and conclusions.
- Recorded lectures: For some courses, where appropriate, the sessions from previous years' Study Weekends have been recorded and made available.
- Study skills: Expert advice on preparing for examinations and developing your digital literacy skills.
- Feedback forms.

Some of these resources are available for certain courses only, but we are expanding our provision all the time and you should check the VLE regularly for updates.

1.6.2 Making use of the Online Library

The Online Library contains a huge array of journal articles and other resources to help you read widely and extensively. To access the majority of resources via the Online Library you will either need to use your University of London Student Portal login details, or you will be required to register and use an Athens login:


http://tinyurl.com/ollathens

The easiest way to locate relevant content and journal articles in the Online Library is to use the Summon search engine. If you are having trouble finding an article listed in a reading list, try removing any punctuation from the title, such as single quotation marks, question marks and colons. For further advice, please see the online help pages: www.external.shl.lon.ac.uk/summon/about.php

1.7 Using the subject guide

We have already mentioned that this guide is not a textbook. It is important that you read textbooks in conjunction with the subject guide and that you try problems from them. The Learning activities and the sample questions at the end of the chapters in this guide are a very useful resource. You should try them all once you think you have mastered a particular chapter. Do really try them: do not just simply read the solutions where provided. Make a serious attempt before consulting the solutions. Note that the solutions are often just sketch solutions, to indicate to you how to answer the questions but, in the examination, you must show all your calculations. It is vital that you develop and enhance your problem-solving skills and the only way to do this is to try lots of examples.

Finally, we often use the symbol □ to denote the end of a proof, where we have finished explaining why a particular result is true. This is just to make it clear where the proof ends and the following text begins.

1.8 Examination

Important: Please note that subject guides may be used for several years. Because of this we strongly advise you to always check both the current Regulations, for relevant information about the examination, and the VLE where you should be advised of any forthcoming changes. You should also carefully check the rubric/instructions on the paper you actually sit and follow those instructions.

A Sample examination paper is at the end of this guide. Notice that the actual questions may vary, covering the whole range of topics considered in the course syllabus. You are required to answer eight out of 10 questions: all five from Section A (8 marks each) and any three from Section B (20 marks each). Candidates are strongly advised to divide their time accordingly. Also note that in the examination you should submit all your derivations and rough work. If you cannot completely solve an examination question you should still submit partial answers as many marks are awarded for using the correct approach or method.

Remember, it is important to check the VLE for:

- up-to-date information on examination and assessment arrangements for this course
- where available, past examination papers and Examiners' commentaries for this course which give advice on how each question might best be answered.


Chapter 2 Constrained optimisation: tools

2.1 Aim of the chapter

The aim of this chapter is to introduce you to the topic of constrained optimisation in a static context. Special emphasis is given to both the theoretical underpinnings and the application of the tools used in economic literature.

2.2 Learning outcomes

By the end of this chapter, you should be able to:

- formulate a constrained optimisation problem
- discern whether you could use the Lagrange method to solve the problem
- use the Lagrange method to solve the problem, when this is possible
- discern whether you could use the Kuhn-Tucker Theorem to solve the problem
- use the Kuhn-Tucker Theorem to solve the problem, when this is possible
- discuss the economic interpretation of the Lagrange multipliers
- carry out simple comparative statics using the Envelope Theorem.

2.3 Essential reading

This chapter is self-contained and therefore there is no essential reading assigned.

2.4 Further reading

Sydsæter, K., P. Hammond, A. Seierstad and A. Strøm Further mathematics for economic analysis. Chapters 1 and 3.

Dixit, A.K. Optimization in economic theory. Chapters 1–8 and the appendix.


2.5 Introduction

The role of optimisation in economic theory is important because we assume that individuals are rational. Why do we look at constrained optimisation? Because of the problem of scarcity: agents must choose the best option from a limited set of alternatives. In this chapter we study the methods used to solve and to evaluate the constrained optimisation problem. We develop and discuss the intuition behind the Lagrange method and the Kuhn-Tucker theorem. We define and analyse the maximum value function, a construct used to evaluate the solutions to optimisation problems.

2.6 The constrained optimisation problem

Consider the following example of an economics application.

Example 2.1 ABC is a perfectly competitive, profit-maximising firm, producing y from input x according to the production function y = x^0.5. The price of output is 2, and the price of input is 1. Negative levels of x are impossible. Also, the firm cannot buy more than k units of input.

The firm is interested in two problems. First, the firm would like to know what to do in the short run: given that its capacity, k, (e.g. the size of its manufacturing facility) is fixed, it has to decide how much to produce today, or equivalently, how many units of input to employ today to maximise its profits. To answer this question, the firm needs to have a method to solve for the optimal level of inputs under the above constraints.

When thinking about the long-run operation of the firm, the firm will consider a second problem. Suppose the firm could, at some cost, invest in increasing its capacity, k. Is this worthwhile for the firm? By how much should it increase/decrease k? To answer this the firm will need to be able to evaluate the benefits (in terms of added profits) that would result from an increase in k.

Let us now write the problem of the firm formally:

(The firm's problem)    max g(x) = 2x^0.5 − x    s.t. h(x) = x ≤ k,  x ≥ 0.
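Before developing the general theory, it may help to see the short-run problem numerically. The following minimal sketch (not part of the original guide; the capacity value k = 0.6 and the grid size are arbitrary choices for illustration) simply evaluates the profit 2x^0.5 − x over a grid on the feasible set [0, k]:

```python
import numpy as np

# Firm ABC: profit g(x) = 2*sqrt(x) - x on the feasible set 0 <= x <= k.
k = 0.6                              # hypothetical capacity level, chosen only for illustration

x = np.linspace(0.0, k, 10_001)      # fine grid over the feasible set
profit = 2 * np.sqrt(x) - x

best = np.argmax(profit)
print(f"approximate maximiser x* = {x[best]:.4f}, profit = {profit[best]:.4f}")
# With k = 0.6 < 1 the capacity constraint binds and the search returns x* close to k;
# rerunning with any k >= 1 returns x* close to 1, the unconstrained profit maximum.
```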

More generally, we will be interested in problems similar to that outlined above. Another important example of such a problem is that of a consumer maximising his utility, choosing what to consume subject to his budget. We can formulate the general problem, denoted by 'COP', as:

(COP)    max g(x)    s.t. h(x) ≤ k,  x ∈ X.

Note that in the general formulation we can accommodate multidimensional variables. In particular g : X → R, X is a subset of R^n, h : X → R^m and k is a fixed vector in R^m.

Definition 1 z* solves the COP if z* ∈ Z, h(z*) ≤ k, and for any other z ∈ Z satisfying h(z) ≤ k, we have that g(z*) ≥ g(z).

Definition 2 The feasible set is the set of vectors in R^n satisfying z ∈ Z and h(z) ≤ k.

Activity 2.1 Show that the following constrained maximisation problems have no solution. For each example write what you think is the problem for the existence of a solution.

(a) Maximise ln x subject to x > 1.
(b) Maximise ln x subject to x < 1.
(c) Maximise ln x subject to x < 1 and x > 2.

Solutions to activities are found at the end of the chapters.

To steer away from the above complications we can use the following theorem:

Theorem 1 If the feasible set is non-empty, closed and bounded (compact), and the objective function g is continuous on the feasible set, then the COP has a solution.

Note that for the feasible set to be compact it is enough to assume that X is closed and the constraint function h(x) is continuous. Why? The conditions are sufficient and not necessary. For example, x^2 attains a minimum on R even though R is not bounded.

Activity 2.2 For each of the following sets of constraints either say, without proof, what is the maximum of x^2 subject to the constraints, or explain why there is no maximum:

(a) x ≤ 0, x ≥ 1
(b) x ≤ 2
(c) 0 ≤ x ≤ 1
(d) x ≥ 2
(e) −1 ≤ x ≤ 1
(f) 1 ≤ x < 2
(g) 1 ≤ x ≤ 1
(h) 1 < x ≤ 2.

In what follows we will devote attention to two questions (similar to the short-run and long-run questions that the firm was interested in in the example). First, we will look for methods to solve the COP. Second, having found the optimal solution we would like to understand the relation between the constraint k and the optimal value of the COP given k. To do this we will study the concept of the maximum value function.

2.7 Maximum value functions

To understand the answer to both questions above it is useful to proceed as follows. Consider the example of firm ABC. A first approach to understanding the maximum profit that the firm might gain is to consider the situations in which the firm is indeed constrained by k. Plot the profit of the firm as a function of x: profits are increasing up to x = 1 and then decrease, crossing zero when x = 4. This implies that the firm is indeed constrained by k when k ≤ 1; in this case it is optimal for the firm to choose x* = k. But when k > 1 the firm is not constrained by k; choosing x* = 1 < k is optimal for the firm.

Since we are interested in the relation between the level of the constraint k and the maximum profit that the firm could guarantee, v, it is useful to look at the plane spanned by taking k on the horizontal axis and v on the vertical axis. For firm ABC it is easy to see that maximal profit follows exactly the increasing part of the graph of profits we had before (as x* = k). But when k > 1, as we saw above, the firm will always choose x* = 1 < k and so the maximum attainable profit will stay flat at a level of 1.

More generally, how can we think about the maximum attainable value for the COP? Formally, and without knowing whether a solution exists or not, we can write this function as

s(k) = sup {g(x) : x ∈ X, h(x) ≤ k}.

We would like to get some insight as to what this function looks like. One way to proceed is to look, in the (k, v) space, at all the possible points (k, v) that are attainable by the function g(x) and the constraints. Formally, consider the set

B = {(k, v) : k ≥ h(x), v ≤ g(x) for some x ∈ X}.

The set B defines the 'possibility set' of all the values that are feasible, given a constraint k. To find the maximum attainable value given a particular k, we look at the upper boundary of this set. Formally, one can show that the values v on the upper boundary of B correspond exactly with the function s(k), which brings us closer to the notion of the maximum value function.

It is intuitive that the function s(k) will be monotone in k; after all, when k is increased, this cannot lower the maximal attainable value as we have just relaxed the constraints.


Figure 2.1: The set B and S(k).

In the example of firm ABC, abstracting away from the cost of the facility, a larger facility can never mean that the firm makes lower profits! The following lemma formalises this.

Lemma 1 If k1 ∈ K and k1 ≤ k2 then k2 ∈ K and s(k1) ≤ s(k2).

Proof. If k1 ∈ K there exists a z ∈ Z such that h(z) ≤ k1 ≤ k2, so k2 ∈ K. Now consider any v < s(k1). From the definition there exists a z ∈ Z such that v < g(z) and h(z) ≤ k1 ≤ k2, which implies that s(k2) = sup {g(z) : z ∈ Z, h(z) ≤ k2} > v. Since this is true for all v < s(k1), it implies that s(k2) ≥ s(k1). □

Therefore, the boundary of B defines a non-decreasing function. If the set B is closed, that is, it includes its boundary, this is the maximum value function we are looking for. For each value of k it shows the maximum available value of the objective function. We can confine ourselves to a maximum rather than a supremum. This is possible if the COP has a solution. To ensure a solution, we can either find it, or show that the objective function is continuous and the feasible set is compact.

Definition 3 (The maximum value function) If z(k) solves the COP with the constraint parameter k, the maximum value function is v(k) = g(z(k)).

The maximum value function, if it exists, has all the properties of s(k). In particular, it is non-decreasing. So we have reached the conclusion that x* is a solution to the COP if and only if (k, g(x*)) lies on the upper boundary of B.

Activity 2.3 XYZ is a profit-maximising firm selling a good in a perfectly competitive market at price 4. It can produce any non-negative quantity of such good y at cost c(y) = y^2. However, there is a transport bottleneck which makes it impossible for the firm to sell more than k units of y, where k ≥ 0. Write down XYZ's profit maximisation problem. Show on a graph the set B for this problem. Using the graph write down the solution to the problem for all non-negative values of k.
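To make the construction concrete, here is a small sketch (my own illustration for firm ABC, not part of the guide and not an answer to Activity 2.3) that traces the upper boundary s(k) by brute force and checks that it is non-decreasing, as Lemma 1 asserts:

```python
import numpy as np

def s(k, grid_points=20_001):
    """Brute-force value of sup{2*sqrt(x) - x : 0 <= x <= k} for firm ABC."""
    x = np.linspace(0.0, k, grid_points)
    return np.max(2 * np.sqrt(x) - x)

ks = np.linspace(0.01, 4.0, 400)
values = np.array([s(k) for k in ks])

# Lemma 1: the maximum value is non-decreasing in k (up to tiny grid error).
print("non-decreasing:", np.all(np.diff(values) >= -1e-6))

# The boundary rises like 2*sqrt(k) - k while the constraint binds (k <= 1)
# and then stays flat at 1, exactly as described in the text.
print("value near k = 1:", values[np.argmin(abs(ks - 1.0))])
print("value at k = 4:", values[-1])
```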

2.8 The Lagrange sufficiency theorem

In the last section we defined what we mean by the maximum value function given that we have a solution. We also introduced the set, B, of all feasible outcomes in the (k, v) space. We concluded that x* is a solution to the COP if and only if (k, g(x*)) lies on the upper boundary of B. In this section we proceed to use this result to find a method to solve the COP.

Which values of x would give rise to boundary points of B? Suppose we can draw a line with a slope λ through a point (k, v) = (h(x*), g(x*)) which lies entirely on or above the set B. The equation for this line is:

v − λk = g(x*) − λh(x*).

Example 2.2 (Example 2.1 revisited) We can draw such a line through (0.25, 0.75) with slope λ = 1 and a line through (1, 1) with slope 0.

If the slope λ is non-negative then, recalling the definition of B as the set of all possible outcomes, the fact that the line lies entirely on or above the set B can be restated as

g(x*) − λh(x*) ≥ g(x) − λh(x) for all x ∈ X.

But note that this implies that (h(x*), g(x*)) lies on the upper boundary of the set B, implying that if x* is feasible it is a solution to the COP.

Example 2.3 (Example 2.1 revisited) It is crucial for this argument that the slope of the line is non-negative. For example, for the point (4, 0) there does not exist a line passing through it that is on or above B. This corresponds to x = 4 and indeed it is not a solution to our example. Sometimes the line has a slope of λ = 0. Suppose for example that k = 4. If we take the line with λ = 0 through (4, 1), it indeed lies above the set B. The point x* = 1 satisfies:

g(x*) − 0·h(x*) ≥ g(x) − 0·h(x) for all x ∈ X,

and as h(x*) < k, x* solves the optimisation problem for k = k* ≥ 1.

Summarising the argument so far, suppose k*, λ and x* satisfy the following conditions:

g(x*) − λh(x*) ≥ g(x) − λh(x) for all x ∈ X
λ ≥ 0
x* ∈ X
either k* = h(x*), or k* > h(x*) and λ = 0

then x* solves the COP for k = k*. This is the Lagrange sufficiency theorem. It is convenient to write it slightly differently: adding λk* to both sides of the first condition we have

g(x*) + λ(k* − h(x*)) ≥ g(x) + λ(k* − h(x)) for all x ∈ X
λ ≥ 0
x* ∈ X and k* ≥ h(x*)
λ[k* − h(x*)] = 0.

We refer to λ as the Lagrange multiplier. We refer to the expression g(x) + λ(k* − h(x)) ≡ L(x, k*, λ) as the Lagrangian. The conditions above state that x* maximises the Lagrangian, together with a non-negativity restriction, feasibility, and a complementary slackness (CS) condition respectively. Formally:

Theorem 2 If for some q ≥ 0, z* maximises L(z, k*, q) subject to the three conditions above, it also solves the COP.

Proof. From the complementary slackness condition, q[k* − h(z*)] = 0. Thus, g(z*) = g(z*) + q(k* − h(z*)). Since q ≥ 0 and k* − h(z) ≥ 0 for all feasible z, we have g(z) + q(k* − h(z)) ≥ g(z). By maximisation of L we get g(z*) ≥ g(z) for all feasible z, and since z* itself is feasible, it solves the COP. □

Example 2.4 (Example 2.1 revisited) We now solve the example of the firm ABC. The Lagrangian is given by

L(x, k, λ) = 2x^0.5 − x + λ(k − x).

Let us use first order conditions, although we have to prove that we can use them and we will do so later in the chapter. Given λ, the first order condition of L(x, k*, λ) with respect to x is

x^−0.5 − 1 − λ = 0. (FOC)

We need to consider two cases (using the CS condition): λ = 0 and λ > 0.

Case 1: λ = 0. CS implies that the constraint is not binding and (FOC) implies that x* = 1 is a candidate solution. By the Lagrange sufficiency theorem, when x* is feasible, i.e. k ≥ x* = 1, this will indeed be a solution to the COP.

Case 2: λ > 0. In this case the constraint is binding; by CS we have x* = k as a candidate solution. To check that it is a solution we need to check that it is feasible, i.e. that k > 0, that it satisfies the FOC and that it is consistent with a non-negative λ. From the FOC we have k^−0.5 − 1 = λ and for this to be non-negative requires:

k^−0.5 − 1 ≥ 0 ⇔ k ≤ 1.

Therefore when k ≤ 1, x* = k solves the COP.
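As a cross-check of Example 2.4, the following sketch (an illustration only; it relies on scipy's numerical optimiser, which is outside the guide) solves the firm's problem for several capacity levels and compares the result with the analytic solution x* = min(k, 1) and the multiplier implied by the FOC:

```python
import numpy as np
from scipy.optimize import minimize

def solve_abc(k):
    """Numerically solve max 2*sqrt(x) - x  s.t. 0 <= x <= k (firm ABC)."""
    res = minimize(lambda v: -(2 * np.sqrt(v[0]) - v[0]),
                   x0=[0.5 * min(k, 1.0)],
                   bounds=[(1e-9, k)], method="L-BFGS-B")
    return res.x[0]

for k in (0.25, 0.5, 1.0, 4.0):
    x_num = solve_abc(k)
    x_analytic = min(k, 1.0)
    lam = max(k ** -0.5 - 1.0, 0.0)      # multiplier from the FOC in Example 2.4
    slack = lam * (k - x_analytic)       # complementary slackness, should be zero
    print(f"k={k}: numeric x*={x_num:.4f}, analytic x*={x_analytic}, "
          f"lambda={lam:.4f}, CS residual={slack:.1e}")
```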


Figure 2.2: Note that the point C represents a solution to the COP when k = k*, but this point will not be characterised by the Lagrange method as the set B is not convex.

Remark 1 Some further notes about what is to come:

- The conditions that we have stated are sufficient conditions. This means that some solutions of the COP cannot be characterised by the Lagrangian. For example, if the set B is not convex, then maximising the Lagrangian is not necessary for a solution.
- As we will show later, if the objective function is concave and the constraint is convex, then B is convex. Then the Lagrange conditions are also necessary. That is, if we find all the points that maximise the Lagrangian, these are all the points that solve the COP. With differentiability, we can also solve for these points using first order conditions.
- Remember that we are also interested in the maximum value function. What does it mean to relax the constraint? The Lagrange multipliers are going to play a central role in understanding how relaxing the constraints affects the maximum value.

2.9 Concavity and convexity and the Lagrange Necessity Theorem

In the last section we found sufficient conditions for a solution to the COP. This means that some solutions of the COP may fail to be characterised by the Lagrangian method. In this section we investigate the assumptions that would guarantee that the conditions of the Lagrangian are also necessary. To this end, we will need to introduce the notions of convexity and concavity. From the example above it is already clear why convexity should play a role: if the set B is not convex, there will be points on the boundary of B (which, as we know, are solutions to the COP) through which no line can pass that lies entirely on or above the set B, as we required in the last section. In turn, this means that using the Lagrange method will not lead us to these points.


We start with some formal definitions:

Definition 4 A set U is a convex set if for all x ∈ U and y ∈ U, and for all t ∈ [0, 1]:

tx + (1 − t)y ∈ U.

Definition 5 A real-valued function f defined on a convex subset U of R^n is concave if, for all x, y ∈ U and for all t ∈ [0, 1]:

f(tx + (1 − t)y) ≥ tf(x) + (1 − t)f(y).

A real-valued function g defined on a convex subset U of R^n is convex if, for all x, y ∈ U and for all t ∈ [0, 1]:

g(tx + (1 − t)y) ≤ tg(x) + (1 − t)g(y).

Remark 2 Some simple implications of the above definition that will be useful later:

f is concave if and only if −f is convex.

Linear functions are convex and concave.

Concave and convex functions need to have convex sets as their domain. Otherwise, we cannot use the conditions above.

Activity 2.4 A and B are two convex subsets of R^n. Which of the following sets are always convex, sometimes convex, or never convex? Provide proofs for the sets which are always convex; draw examples to show why the others are sometimes or never convex.

(a) A ∪ B
(b) A + B ≡ {x | x ∈ R^n, x = a + b, a ∈ A, b ∈ B}.

In what follows we will need to have a method of determining whether a function is convex, concave or neither. To this end the following characterisation of concave functions is useful:


Lemma 2 Let f be a continuous and differentiable function on a convex subset U of R^n. Then f is concave on U if and only if for all x, y ∈ U:

f(y) − f(x) ≤ Df(x)(y − x) = (∂f(x)/∂x_1)(y_1 − x_1) + · · · + (∂f(x)/∂x_n)(y_n − x_n).

Proof. Here we prove the result on R^1. Since f is concave:

tf(y) + (1 − t)f(x) ≤ f(ty + (1 − t)x)
⇔ t(f(y) − f(x)) + f(x) ≤ f(x + t(y − x))
⇔ f(y) − f(x) ≤ [f(x + t(y − x)) − f(x)] / t
⇔ f(y) − f(x) ≤ [(f(x + h) − f(x)) / h] (y − x)

for h = t(y − x). Taking limits as h → 0 this becomes:

f(y) − f(x) ≤ f′(x)(y − x). □

Remember that we introduced the concepts of concavity and convexity because we were interested in finding out under what conditions the Lagrange method is also necessary for solutions of the COP. Consider the following assumptions, denoted by CC:

1. The set X is convex.
2. The function g is concave.
3. The function h is convex.

To see the importance of these assumptions, recall the definition of the set B:

B = {(k, v) : k ≥ h(x), v ≤ g(x) for some x ∈ X}.

Proposition 1 Under assumptions CC, the set B is convex.

Proof. Suppose that (k1, v1) and (k2, v2) are in B, so there exist z1 and z2 such that:

k1 ≥ h(z1), k2 ≥ h(z2)
v1 ≤ g(z1), v2 ≤ g(z2).

By convexity of h:

θk1 + (1 − θ)k2 ≥ θh(z1) + (1 − θ)h(z2) ≥ h(θz1 + (1 − θ)z2)


and by concavity of g:

θv1 + (1 − θ)v2 ≤ θg(z1) + (1 − θ)g(z2) ≤ g(θz1 + (1 − θ)z2).

Thus (θk1 + (1 − θ)k2, θv1 + (1 − θ)v2) ∈ B for all θ ∈ [0, 1], implying that B is convex. □

Remember that the maximum value is the upper boundary of the set B. When B is convex, we can say something about the shape of the maximum value function:

Proposition 2 Assume that the maximum value exists. Then under CC, the maximum value is a non-decreasing, concave and continuous function of k.

Proof. We have already shown, without assuming convexity or concavity, that the maximum value is non-decreasing. We have also shown that if the maximum value function v(k) exists, it is the upper boundary of the set B. Above we proved that under CC the set B is convex. The set B can be re-written as:

B = {(k, v) : v ∈ R, k ∈ K, v ≤ v(k)}.

But such a set B is convex if and only if the function v is concave. Thus v is concave, and concave functions are continuous, so v is continuous. □

2.10 The Lagrangian necessity theorem

We are now ready to formalise under what conditions the Lagrange method is necessary for a solution to the COP.

Theorem 3 Assume CC. Assume that the constraint qualification holds, that is, there is a vector z0 ∈ Z such that h(z0) ≪ k* (the inequality is strict in every component). Finally suppose that z* solves the COP. Then:

i. there is a vector q ∈ R^m such that z* maximises the Lagrangian L(q, k*, z) = g(z) + q[k* − h(z)].
ii. the Lagrange multiplier q is non-negative in all components, q ≥ 0.
iii. the vector z* is feasible, that is z* ∈ Z and h(z*) ≤ k*.
iv. the complementary slackness conditions are satisfied, that is, q[k* − h(z*)] = 0.

2.11 First order conditions: when can we use them?

So far we have found a method that allows us to find all the solutions to the COP by solving a modified maximisation problem (i.e. maximising the Lagrangian). As you recall, we have used this method to solve our example of firm ABC by looking at first order conditions. In this section we ask under what assumptions we can do this and be sure that we have found all the solutions to the problem. For this we need to introduce the notions of continuity and differentiability.

2.11.1 Necessity of first order conditions

We start with some general definitions.

Definition 6 A function g : Z → R^m, Z ⊂ R^n, is differentiable at a point z0 in the interior of Z if there exists a unique m × n matrix, Dg(z0), such that given any ε > 0, there exists a δ > 0, such that if |z − z0| < δ, then |g(z) − g(z0) − Dg(z0)(z − z0)| < ε|z − z0|.

There are a few things to note about the above definition. First, when m = n = 1, Dg(x0) is a scalar, that is, the derivative of the function at x0 that we sometimes denote by g′(x0). Second, one interpretation of the derivative is that it helps approximate the function g(x) for x's that are close to x0 by looking at the line, g(x0) + Dg(x0)(x − x0), that passes through (x0, g(x0)) with slope Dg(x0). Indeed this is an implication of Taylor's theorem which states that for any n > 1:

g(x) − g(x0) = Dg(x0)(x − x0) + [D^2 g(x0)/2!](x − x0)^2 + · · · + [D^n g(x0)/n!](x − x0)^n + R_n   (2.1)

where R_n is a term of order of magnitude (x − x0)^(n+1). The implication is that as we get closer to x0 we can more or less ignore the terms in (x − x0)^k with the highest powers and use just the first term to approximate the change in the function. We will later return to this when we ask whether first order conditions are sufficient, but for now we focus on whether they are necessary.

But (2.1) implies that if x0 is a point which maximises g(x) and is interior to the set we are maximising over, then if Dg(x0) exists it must equal zero. If this is not the case, then there will be a direction along which the function will increase (i.e. the left-hand side of (2.1) will be positive); this is easily seen if we consider a function of one variable, but the same intuition generalises. This leads us to the following result:

Theorem 4 (Necessity) Suppose that g : Z → R, Z ⊂ R^n, is differentiable in z0 and that z0 ∈ Z maximises g on Z. Then Dg(z0) = 0.

Finally, we introduce the notion of continuity:

Definition 7 f : R^k → R^m is continuous at x0 ∈ R^k if for any sequence {x_n}, n = 1, 2, . . . , in R^k which converges to x0, {f(x_n)} converges to f(x0). The function f is continuous if it is continuous at any point in R^k.


Note that all differentiable functions are continuous but the converse is not true.
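A tiny numerical illustration of this remark (my own, using |x| as the standard counterexample): the difference quotients of |x| at 0 do not settle on a single limit, while those of x^2 do, even though both functions are continuous there.

```python
# |x| is continuous at 0 but not differentiable there: its difference quotients
# (f(h) - f(0))/h jump between +1 and -1, whereas those of x**2 tend to 0.
for h in (0.1, -0.1, 0.01, -0.01, 0.001, -0.001):
    q_abs = (abs(h) - abs(0.0)) / h      # alternates between +1 and -1
    q_sq = (h ** 2 - 0.0) / h            # shrinks towards 0 from both sides
    print(f"h={h:+.3f}:  |x| quotient = {q_abs:+.0f},  x^2 quotient = {q_sq:+.4f}")
```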

2.11.2 Sufficiency of first order conditions

Recall that if f is a continuous and differentiable concave function on a convex set U then f(y) − f(x) ≤ Df(x)(y − x). Therefore, if we know that for some x0, y ∈ U, Df(x0)(y − x0) ≤ 0, we have f(y) − f(x0) ≤ Df(x0)(y − x0) ≤ 0, implying that f(y) ≤ f(x0). If this holds for all y ∈ U, then x0 is a global maximiser of f. This leads us to the following result:

Proposition 3 (Sufficiency) Let f be a continuous twice differentiable function whose domain is a convex open subset U of R^n. If f is a concave function on U and Df(x0) = 0 for some x0, then x0 is a global maximum of f on U.

Another way to see this result is to reconsider the Taylor expansion outlined above. Assume for the moment that a function g is defined on one variable x. Remember that for n = 2, we have

g(x) − g(x0) = Dg(x0)(x − x0) + [D^2 g(x0)/2!](x − x0)^2 + R_3.   (2.2)

If the first order conditions hold at x0 this implies that Dg(x0) = 0 and the above expression can be rewritten as

g(x) − g(x0) = [D^2 g(x0)/2!](x − x0)^2 + R_3.   (2.3)

But now we can see that when D^2 g(x0) < 0 and when we are close to x0, the left-hand side will be negative and so x0 is a local maximum of g(x); and when D^2 g(x0) > 0, similarly x0 will constitute a local minimum of g(x). As concave functions have D^2 g(x) ≤ 0 for any x and convex functions have D^2 g(x) ≥ 0 for any x, this shows why the above result holds. The above intuition was provided for the case of a function of one variable x. In the next section we extend this intuition to functions on R^n to discuss how to characterise convexity and concavity in general.


2.11.3 Checking for concavity and convexity

In the last few sections we have introduced necessary and sufficient conditions for using first order conditions to solve maximisation problems. We now ask a more practical question. Confronted with a particular function, how can we verify whether it is concave or not? If it is concave, we know from the above results that we can use the first order conditions to characterise all the solutions. If it is not concave, we will have to use other means if we want to characterise all the solutions.

It will again be instructive to look at the Taylor expansion for n = 2. Let us now consider a general function defined on Rn and look at the Taylor expansion around a vector x0 ∈ Rn. As g is a function defined over Rn, Dg(x0) is now an n-dimensional vector and D²g(x0) is an n × n matrix. Let x be an n-dimensional vector in Rn. The Taylor expansion in this case becomes:

g(x) − g(x0) = Dg(x0)(x − x0) + (x − x0)ᵀ (D²g(x0)/2!) (x − x0) + R3.

If the first order condition is satisfied, then Dg(x0) = 0, where 0 is the n-dimensional vector of zeros. This implies that we can write the above as:

g(x) − g(x0) = (x − x0)ᵀ (D²g(x0)/2!) (x − x0) + R3.

But now our problem is a bit more complicated. We need to determine the sign of (x − x0)ᵀ (D²g(x0)/2!) (x − x0) for a whole neighbourhood of xs around x0! We need to find what properties of the matrix D²g(x0) would guarantee this. For this we analyse the properties of expressions of the form (x − x0)ᵀ (D²g(x0)/2!) (x − x0), i.e. quadratic forms.

Consider functions of the form Q(x) = xᵀAx where x is an n-dimensional vector and A a symmetric n × n matrix. If n = 2, writing the matrix row by row, this becomes:

Q(x1, x2) = (x1 x2) [a11 a12/2; a12/2 a22] (x1 x2)ᵀ

and can be rewritten as a11 x1² + a12 x1 x2 + a22 x2².

Definition 8
i. A quadratic form on Rn is a real-valued function:

Q(x1, x2, . . . , xn) = Σ_{i≤j} aij xi xj

or equivalently:

ii. A quadratic form on Rn is a real-valued function Q(x) = xᵀAx where A is a symmetric n × n matrix.

Below we would like to understand which properties of A determine whether the quadratic form it generates takes on only positive values or only negative values.


2.11.4 Definiteness of quadratic forms

We now examine quadratic forms, Q(x) = xᵀAx. It is apparent that whenever x = 0 this expression is equal to zero. In this section we ask under what conditions Q(x) takes on a particular sign for any x ≠ 0 (strictly negative or positive, non-negative or non-positive). For example, in one dimension, when y = ax², then if a > 0, ax² is non-negative and equals 0 only when x = 0. This is positive definite. If a < 0, then the function is negative definite. In two dimensions, x1² + x2² is positive definite, −x1² − x2² is negative definite, whereas x1² − x2² is indefinite, since it can take both positive and negative values, depending on x.

There could be two intermediate cases: if the quadratic form is always non-negative but also equals 0 for some non-zero xs, then we say it is positive semi-definite. This is the case, for example, for (x1 + x2)², which can be 0 at points such that x1 = −x2. A quadratic form which is never positive but can be zero at points other than the origin is called negative semi-definite. We apply the same terminology to the symmetric matrix A, that is, the matrix A is positive semi-definite if Q(x) = xᵀAx is positive semi-definite, and so on.

Definition 9 Let A be an n × n symmetric matrix. Then A is:

positive definite if xᵀAx > 0 for all x ≠ 0 in Rn

positive semi-definite if xᵀAx ≥ 0 for all x ≠ 0 in Rn

negative definite if xᵀAx < 0 for all x ≠ 0 in Rn

negative semi-definite if xᵀAx ≤ 0 for all x ≠ 0 in Rn

indefinite if xᵀAx > 0 for some x ≠ 0 in Rn and xᵀAx < 0 for some other x ≠ 0 in Rn.

2.11.5 Testing the definiteness of a matrix

In this section, we try to examine what properties of the matrix, A, of the quadratic form Q(x) will determine its definiteness.


We start by introducing the notion of a determinant of a matrix. The determinant of a matrix is a unique scalar associated with the matrix. Computing the determinant of a matrix proceeds recursively.

For a 2 × 2 matrix A = [a11 a12; a21 a22] (rows separated by semicolons), the determinant, det(A) or |A|, is:

a11 a22 − a12 a21.

For a 3 × 3 matrix A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] the determinant is:

a11 det[a22 a23; a32 a33] − a12 det[a21 a23; a31 a33] + a13 det[a21 a22; a31 a32].

Generally, for an n × n matrix A = (aij), the determinant is given by:

det(A) = Σ_{i=1}^{n} (−1)^(i−1) a1i det(A1i)

where A1i is the matrix that is left when we delete the first row and the ith column of the matrix A.

Definition 10 Let A be an n × n matrix. A k × k submatrix of A formed by deleting n − k columns, say columns i1, i2, . . . , i(n−k), and the same n − k rows, i1, i2, . . . , i(n−k), from A, is called a kth order principal submatrix of A. The determinant of a k × k principal submatrix is called a kth order principal minor of A.
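The recursive expansion defined above translates directly into code. The following is a minimal editorial sketch (not part of the original guide; the function name is illustrative) that expands along the first row exactly as in the formula and compares the result with numpy's built-in determinant.

    import numpy as np

    def det_recursive(A):
        """Determinant by expansion along the first row, as in the text."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for i in range(n):
            # A1i: delete the first row and the i-th column
            minor = np.delete(A[1:, :], i, axis=1)
            total += (-1) ** i * A[0, i] * det_recursive(minor)
        return total

    A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
    print(det_recursive(A), np.linalg.det(A))   # both give -3 (up to rounding)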

Example 2.5 For a general 3 × 3 matrix A, there is one third order principal minor, which is det(A). There are three second order principal minors and three first order principal minors. What are they? Definition 11 Let A be an n × n matrix. The k-th order principal submatrix of A obtained by deleting the last n − k rows and columns from A is called the k-th order leading principal submatrix of A, denoted by Ak . Its determinant is called the k-th order leading principal minor of A, denoted by |Ak |. We are now ready to relate the above elements of the matrix A to the definiteness of the matrix:


Proposition 4 Let A be an n × n symmetric matrix. Then: (a) A is positive definite if and only if all its n leading principal minors are strictly positive. (b) A is negative definite if and only if all its n leading principal minors alternate in sign as follows: |A1 | < 0, |A2 | > 0, |A3 | < 0, etc. The k-th order leading principal minor should have the same sign as (−1)k . (c) A is positive semi-definite if and only if every principal minor of A is non-negative. (d) A is negative semi-definite if and only if every principal minor of odd order is non-positive and every principal minor of even order is non-negative.

Example 2.6

Consider diagonal matrices of the form A = [a1 0 0; 0 a2 0; 0 0 a3]. These correspond to the simplest quadratic forms: a1 x1² + a2 x2² + a3 x3². This quadratic form will be positive (negative) definite if and only if all the ai's are positive (negative). It will be positive semi-definite if and only if all the ai's are non-negative, and negative semi-definite if and only if all the ai's are non-positive. If there are two ai's of opposite signs, it will be indefinite. How do these conditions relate to what you get from the proposition above?

Example 2.7 To see how the conditions of Proposition 4 relate to the definiteness of a matrix, consider a 2 × 2 matrix, and in particular its quadratic form:

Q(x1, x2) = (x1 x2) [a b; b c] (x1 x2)ᵀ = a x1² + 2b x1 x2 + c x2².

If a = 0, then Q cannot be negative or positive definite since Q(1, 0) = 0. So assume that a ≠ 0 and add and subtract b²x2²/a to get:

Q(x1, x2) = a x1² + 2b x1 x2 + c x2² + (b²/a) x2² − (b²/a) x2²
= a (x1² + (2b/a) x1 x2 + (b²/a²) x2²) − (b²/a) x2² + c x2²
= a (x1 + (b/a) x2)² + ((ac − b²)/a) x2².


If both coefficients above, a and (ac − b²)/a, are positive, then Q will never be negative. It will equal 0 only when x1 + (b/a)x2 = 0 and x2 = 0, in other words, when x1 = 0 and x2 = 0. Therefore, if:

a > 0 and det A = det[a b; b c] = ac − b² > 0

then Q is positive definite. Conversely, in order for Q to be positive definite, we need both a and det A = ac − b² to be positive. Similarly, Q will be negative definite if and only if both coefficients are negative, which occurs if and only if a < 0 and ac − b² > 0, that is, when the leading principal minors alternate in sign. If ac − b² < 0, then the two coefficients will have opposite signs and Q will be indefinite.

Example 2.8 Numerical examples. Consider A = [2 3; 3 7]. Since |A1| = 2 > 0 and |A2| = 2·7 − 3·3 = 5 > 0, A is positive definite. Consider B = [2 4; 4 7]. Since |B1| = 2 > 0 and |B2| = 2·7 − 4·4 = −2 < 0, B is indefinite.
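Proposition 4 is also easy to check numerically. The sketch below is an editorial addition (the function names are illustrative): it computes the leading principal minors |A1|, . . . , |An| with numpy and reproduces the classification of the two matrices in Example 2.8.

    import numpy as np

    def leading_principal_minors(A):
        A = np.asarray(A, dtype=float)
        return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

    def classify(A):
        minors = leading_principal_minors(A)
        if all(m > 0 for m in minors):
            return "positive definite"
        if all((-1) ** k * m > 0 for k, m in enumerate(minors, start=1)):
            return "negative definite"
        return "not definite (check all principal minors for semi-definiteness)"

    A = [[2, 3], [3, 7]]
    B = [[2, 4], [4, 7]]
    print(leading_principal_minors(A), classify(A))   # [2.0, 5.0] positive definite
    print(leading_principal_minors(B), classify(B))   # [2.0, -2.0] not definite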

2.11.6 Back to concavity and convexity

Finally we can put all the ingredients together. A continuous twice differentiable function f on an open convex subset U of Rn is concave on U if and only if the Hessian D²f(x) is negative semi-definite for all x in U. The function f is a convex function if and only if D²f(x) is positive semi-definite for all x in U. Therefore, we have the following result:

Proposition 5 (Second order sufficient conditions for a global maximum (minimum) in Rn) Suppose that x∗ is a critical point of a function f(x) with continuous first and second order partial derivatives on Rn. Then x∗ is:

a global maximiser (minimiser) for f(x) if D²f(x) is negative (positive) semi-definite on Rn

a strict global maximiser (minimiser) for f(x) if D²f(x) is negative (positive) definite on Rn.

The property that critical points of concave functions are global maximisers is an important one in economic theory. For example, many economic principles, such as the marginal rate of substitution equals the price ratio, or marginal revenue equals marginal cost, are simply the first order necessary conditions of the corresponding maximisation problem, as we will see. Ideally, an economist would like such a rule also to be a sufficient condition guaranteeing that utility or profit is being maximised, so it can provide a guideline for economic behaviour. This situation does indeed occur when the objective function is concave.
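In practice, checking concavity of a given function means checking the definiteness of its Hessian at (many) points. The following minimal numerical sketch is an editorial addition (the test function, step size and sample points are illustrative): it approximates the Hessian by finite differences and inspects its eigenvalues, which are all non-positive exactly when the Hessian is negative semi-definite.

    import numpy as np

    def hessian(f, x, h=1e-5):
        """Finite-difference approximation of the Hessian of f at x."""
        x = np.asarray(x, dtype=float)
        n = x.size
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei, ej = np.zeros(n), np.zeros(n)
                ei[i], ej[j] = h, h
                H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                           - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
        return H

    # f(x1, x2) = -(x1^2 + x1*x2 + x2^2) is concave on R^2.
    f = lambda x: -(x[0] ** 2 + x[0] * x[1] + x[1] ** 2)

    for point in [(0.0, 0.0), (1.0, -2.0), (3.0, 5.0)]:
        print(point, np.linalg.eigvalsh(hessian(f, np.array(point))))
        # eigenvalues are approximately -3 and -1 at every point: negative definite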


2.12 The Kuhn-Tucker Theorem

We are now in a position to formalise the necessary and sufficient first order conditions for solutions to the COP. Consider once again the COP:

max g(x) s.t. h(x) ≤ k∗, x ∈ Z

where g : Z → R, Z is a subset of Rn, h : Z → Rm, and k∗ is a fixed m-dimensional vector. We impose the following set of assumptions, CC′:

1. The set Z is convex.
2. The function g is concave.
3. The function h is convex (these are assumptions CC from before), and
4. The functions g and h are differentiable.

Consider now the following conditions, which we term the Kuhn-Tucker conditions:

1. There is a vector λ ∈ Rm such that the partial derivative of the Lagrangian:

L(k∗, λ, x) = g(x) + λ[k∗ − h(x)]

with respect to x, evaluated at x∗, is zero; in other words:

∂L(k∗, λ, x∗)/∂x = Dg(x∗) − λDh(x∗) = 0.

2. The Lagrange multiplier vector is non-negative: λ ≥ 0.

3. The vector x∗ is feasible, that is, x∗ ∈ Z and h(x∗) ≤ k∗.

4. The complementary slackness conditions are satisfied, that is, λ[k∗ − h(x∗)] = 0.

The following theorem is known as the Kuhn-Tucker (K-T) Theorem.

Theorem 5 Assume CC′.

i. If z∗ is in the interior of Z and satisfies the K-T conditions, then z∗ solves the COP.

ii. If the constraint qualification holds (there exists a vector z0 ∈ Z such that h(z0) ≪ k∗), z∗ is in the interior of Z and solves the COP, then there is a vector of Lagrange multipliers q such that z∗ and q satisfy the K-T conditions.


Proof. We first demonstrate that under CC′ and for non-negative values of the Lagrange multipliers, the Lagrangian is concave. The Lagrangian is:

L(k∗, q, z) = g(z) + q[k∗ − h(z)].

Take z and z′ in Z and any t ∈ [0, 1]. Then:

tg(z) + (1 − t)g(z′) ≤ g(tz + (1 − t)z′)
th(z) + (1 − t)h(z′) ≥ h(tz + (1 − t)z′)

and thus with q ≥ 0, we have:

g(tz + (1 − t)z′) + qk∗ − qh(tz + (1 − t)z′) ≥ tg(z) + (1 − t)g(z′) + qk∗ − q(th(z) + (1 − t)h(z′)).

It follows that:

L(k∗, q, tz + (1 − t)z′) = g(tz + (1 − t)z′) + q[k∗ − h(tz + (1 − t)z′)]
≥ t[g(z) + q(k∗ − h(z))] + (1 − t)[g(z′) + q(k∗ − h(z′))]
= tL(k∗, q, z) + (1 − t)L(k∗, q, z′).

This proves that the Lagrangian is concave in z. In addition, we know that g and h are differentiable, therefore L is also a differentiable function of z. Thus, we know that if the partial derivative of L with respect to z is zero at z∗, then z∗ maximises L on Z. Indeed the partial derivative of L at z∗ is zero, and hence we know that if g is concave and differentiable, h is convex and differentiable, the Lagrange multipliers q are non-negative, and Dg(z∗) − qDh(z∗) = 0, then z∗ maximises the Lagrangian on Z. But then the conditions of the Lagrange sufficiency theorem are satisfied, so that z∗ indeed solves the COP.

We now have to prove the converse result. Suppose that the constraint qualification is satisfied. The COP then satisfies all the conditions of the Lagrange necessity theorem. This theorem says that if z∗ solves the COP, then it also maximises L on Z, and satisfies the complementary slackness conditions, with non-negative Lagrange multipliers, as well as being feasible. But since the partial derivatives of a differentiable function are zero at an interior maximum, the partial derivatives of L with respect to z at z∗ are zero and therefore all the Kuhn-Tucker conditions are satisfied.

Remark 3 (Geometrical intuition) Think of the following example in R². Suppose that the constraints are:

1. h1(z) = −z1 ≤ 0.
2. h2(z) = −z2 ≤ 0.
3. h3(z) ≤ k∗.

Consider first the case in which the point z0 solves the problem at a point of tangency between the constraint curve h3(z) = k∗ and the level curve of the objective function g(z) = g(z0). The constraint set is convex, and by the concavity of g it is also the case that:

{z : z ∈ R², g(z) ≥ g(z0)}


is a convex set. The Lagrangian for the problem is: L(k ∗ , q, z) = g(z) + q[k ∗ − h(z)] = g(z) + q1 z1 + q2 z2 + q3 (k ∗ − h3 (z)). In the first case of z0 , the non-negativity constraints do not bind. By the complementary slackness then, it is the case that q1 = q2 = 0 so that the first order condition is simply: Dg(z0 ) = q3 Dh3 (z0 ). Recall that q3 ≥ 0. If q3 = 0, then it implies that Dg(z0 ) = 0 so z0 is the unconstrained maximiser but this is not the case here. Then q3 > 0, which implies that the vectors Dg(z0 ) and Dh3 (z0 ) point in the same direction. These are the gradients: they describe the direction in which the function increases most rapidly. In fact, they must point in the same direction, otherwise, this is not a solution to the optimisation problem.
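Once a candidate solution and its multipliers are in hand, the Kuhn-Tucker conditions can be verified mechanically. The following is a minimal editorial sketch (the example problem and numbers are illustrative, not from the guide): maximise g(x) = −(x1 − 2)² − (x2 − 1)² subject to x1 + x2 ≤ 2, with candidate x∗ = (1.5, 0.5) and multiplier λ = 1. The printed checks mirror K-T conditions 1 to 4.

    import numpy as np

    # Illustrative COP: maximise g(x) = -(x1-2)^2 - (x2-1)^2  s.t.  h(x) = x1 + x2 <= 2.
    def Dg(x):
        return np.array([-2 * (x[0] - 2), -2 * (x[1] - 1)])

    def h(x):
        return x[0] + x[1]

    Dh = np.array([1.0, 1.0])
    k_star = 2.0

    x_star = np.array([1.5, 0.5])    # candidate solution
    lam = 1.0                        # candidate Lagrange multiplier

    print("stationarity:", Dg(x_star) - lam * Dh)                   # [0. 0.]
    print("multiplier non-negative:", lam >= 0)                     # True
    print("feasible:", h(x_star) <= k_star)                         # True
    print("complementary slackness:", lam * (k_star - h(x_star)))   # 0.0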

2.13 The Lagrange multipliers and the Envelope Theorem

2.13.1 Maximum value functions

In this section we return to our initial interest in maximum (minimum) value functions. Profit functions and indirect utility functions are notable examples of maximum value functions, whereas cost functions and expenditure functions are minimum value functions. Formally, a maximum value function is defined by:

Definition 12 If x(b) solves the problem of maximising f(x) subject to g(x) ≤ b, the maximum value function is v(b) = f(x(b)).

You will remember that such a maximum value function is non-decreasing. Let us now examine these functions more carefully. Consider the problem of maximising f(x1, x2, . . . , xn) subject to the k inequality constraints:

g1(x1, x2, . . . , xn) ≤ b∗1, . . . , gk(x1, x2, . . . , xn) ≤ b∗k

where b∗ = (b∗1, . . . , b∗k). Let x∗1(b∗), . . . , x∗n(b∗) denote the optimal solution and let λ1(b∗), . . . , λk(b∗) be the corresponding Lagrange multipliers. Suppose that as b varies near b∗, x∗1(b), . . . , x∗n(b) and λ1(b), . . . , λk(b) are differentiable functions and that x∗(b∗) satisfies the constraint qualification. Then for each j = 1, 2, . . . , k:

λj(b∗) = ∂ f(x∗(b∗))/∂bj.

Proof. (We consider the case of a binding constraint, and for simplicity, assume there is only one constraint, and that f and g are functions of two variables.) The Lagrangian is: L(x, y, λ; b) = f (x, y) − λ(g(x, y) − b).


The solution satisfies, for all b:

0 = ∂L/∂x (x∗(b), y∗(b), λ∗(b); b) = ∂f/∂x (x∗(b), y∗(b)) − λ∗(b) ∂g/∂x (x∗(b), y∗(b))

0 = ∂L/∂y (x∗(b), y∗(b), λ∗(b); b) = ∂f/∂y (x∗(b), y∗(b)) − λ∗(b) ∂g/∂y (x∗(b), y∗(b)).

Furthermore, since the constraint binds, g(x∗(b), y∗(b)) = b for all b, so that:

∂g/∂x (x∗, y∗) ∂x∗(b)/∂b + ∂g/∂y (x∗, y∗) ∂y∗(b)/∂b = 1

for every b. Therefore, using the chain rule, we have:

d f(x∗(b), y∗(b))/db = ∂f/∂x (x∗, y∗) ∂x∗(b)/∂b + ∂f/∂y (x∗, y∗) ∂y∗(b)/∂b
= λ∗(b) [∂g/∂x (x∗, y∗) ∂x∗(b)/∂b + ∂g/∂y (x∗, y∗) ∂y∗(b)/∂b]
= λ∗(b).

The economic interpretation of the multiplier is as a 'shadow' price. For example, in the application to a firm maximising profits, it tells us how valuable another unit of input would be to the firm's profits, or how much the maximum value changes for the firm when the constraint is relaxed. In other words, it is the maximum amount the firm would be willing to pay to acquire another unit of input.
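The shadow-price interpretation is easy to check numerically. The sketch below is an editorial addition (it reuses the illustrative problem from the earlier Kuhn-Tucker sketch, maximising f = −(x1 − 2)² − (x2 − 1)² subject to x1 + x2 ≤ b, for which the multiplier works out to λ(b) = 3 − b when b < 3): it recomputes the maximum value v(b) by a crude search along the binding constraint and compares the finite-difference slope of v with λ(b).

    import numpy as np

    def f(x1, x2):
        return -(x1 - 2) ** 2 - (x2 - 1) ** 2

    def v(b):
        # For b < 3 the constraint binds, so search along the line x1 + x2 = b.
        x1 = np.linspace(-5, 5, 200001)
        return np.max(f(x1, b - x1))

    b, h = 2.0, 1e-3
    slope = (v(b + h) - v(b - h)) / (2 * h)
    print(slope)     # approximately 1.0
    print(3 - b)     # the multiplier lambda(b) = 3 - b = 1.0 for this example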

2.14 The Envelope Theorem

Recall that:

L(x, y, λ; b) = f(x, y) − λ(g(x, y) − b)

so that:

d f(x(b), y(b); b)/db = λ(b) = ∂L(x(b), y(b), λ(b); b)/∂b.

Hence, what we have found above is simply a particular case of the Envelope Theorem, which says that:

d f(x(b), y(b); b)/db = λ(b) = ∂L(x(b), y(b), λ(b); b)/∂b.


Consider the problem of maximising f(x1, x2, . . . , xn) subject to the k equality constraints:

h1(x1, x2, . . . , xn, c) = 0, . . . , hk(x1, x2, . . . , xn, c) = 0.

Let x∗1(c), . . . , x∗n(c) denote the optimal solution and let µ1(c), . . . , µk(c) be the corresponding Lagrange multipliers. Suppose that x∗1(c), . . . , x∗n(c) and µ1(c), . . . , µk(c) are differentiable functions and that x∗(c) satisfies the constraint qualification. Then:

d f(x∗(c); c)/dc = ∂L(x∗(c), µ(c); c)/∂c.

Note: if hi(x1, x2, . . . , xn, c) = 0 can be expressed as h′i(x1, x2, . . . , xn) − c = 0, then we are back in the previous case, in which we found that:

d f(x∗(c); c)/dc = ∂L(x∗(c), µ(c); c)/∂c = µi(c).

But the statement is more general. We will prove it for the simple case of an unconstrained problem. Let f(x; a) be a continuous function of x ∈ Rn and the scalar a. For any a, consider the problem of finding max f(x; a). Let x∗(a) be the maximiser, which we assume is a differentiable function of a. We will show that:

d f(x∗(a); a)/da = ∂f(x∗(a); a)/∂a.

Apply the chain rule:

d f(x∗(a); a)/da = Σ_i ∂f/∂xi (x∗(a); a) ∂x∗i(a)/∂a + ∂f/∂a (x∗(a); a) = ∂f/∂a (x∗(a); a)

since ∂f/∂xi (x∗(a); a) = 0 for all i by the first order conditions. Intuitively, when we are already at a maximum, the small adjustment of the optimal solution induced by a slight change in the parameters has no first-order effect on the value; only the direct effect of the parameter change on the objective matters.

2.15 Solutions to activities

Solution to Activity 2.1

(a) For any possible solution, x∗ > 1, we have that x∗ + 1 > 1 and ln(x∗ + 1) > ln(x∗), and therefore x∗ cannot be a solution. As ln x is a strictly increasing function and the feasible set is unbounded, there is no solution to this problem.

(b) For any possible solution, x∗ < 1, we can choose an ε with 0 < ε < 1 − x∗, so that x∗ + ε < 1. But then we have ln(x∗ + ε) > ln(x∗) and therefore x∗ cannot be a solution. As ln x is a strictly increasing function and the feasible set is not closed, there is no solution to this problem.

(c) As the feasible set is empty there is no solution to this problem.


Figure 2.3: Concave profit function in Activity 2.3.

Solution to Activity 2.2

(a) In this case there is no maximum as the feasible set is empty.

(b) In this case there is no maximum as the feasible set is unbounded.

(c) The maximum is achieved at x = 1.

(d) In this case there is no maximum as the feasible set is unbounded.

(e) The maximum is achieved at either x = 1 or at x = −1.

(f) In this case there is no maximum as the feasible set is not closed around 2.

(g) The only feasible x is x = 1 and therefore x = 1 yields the maximum.

(h) The maximum is achieved at x = 2. Note that this is true even though the feasible set is not closed.

Solution to Activity 2.3 In what follows refer to Figure 2.3. The profit function for this problem is 4y − c(y) and the problem calls for maximising this function with respect to y subject to the constraint that y ≤ k. To graph the set B and to find the solution we look at the (k, v) space. We first plot the function v(k) = 4k − c(k). As can be seen in Figure 2.3 this function has an inverted U shape which peaks at k = 2 with a maximum value of 4. Remember that the constraint is y ≤ k and therefore, whenever k > 2 we can always achieve the value 4 by choosing y = 2 < k. This is why the boundary of set B does not follow the function v(k) = 4k − c(k) as k is larger than 2 but rather flattens out above k = 2. From the graph it is clear that for all k ≥ 2 the solution is y = 2 and for all 0 ≤ k < 2 the solution is y = k.


Solution to Activity 2.4

(a) A ∪ B is sometimes convex and sometimes not convex. For an example in which it is not convex consider the following sets in R: A = [0, 1] and B = [2, 3]. Consider x = 1 ∈ A ⊂ A ∪ B and y = 2 ∈ B ⊂ A ∪ B and t = 0.5. We now have tx + (1 − t)y = 1.5 ∉ [0, 1] ∪ [2, 3].

(b) This set is always convex. To see this, take any two elements x, y ∈ A + B. By definition x = xA + xB where xA ∈ A and xB ∈ B. Similarly, y = yA + yB where yA ∈ A and yB ∈ B. Observe that for any t ∈ [0, 1]:

tx + (1 − t)y = txA + (1 − t)yA + txB + (1 − t)yB.

As A and B are convex, txA + (1 − t)yA ∈ A and txB + (1 − t)yB ∈ B. But now we are done, as by the definition of A + B, tx + (1 − t)y ∈ A + B.

A reminder of your learning outcomes

By the end of this chapter, you should be able to:

formulate a constrained optimisation problem

discern whether you could use the Lagrange method to solve the problem

use the Lagrange method to solve the problem, when this is possible

discern whether you could use the Kuhn-Tucker Theorem to solve the problem

use the Kuhn-Tucker Theorem to solve the problem, when this is possible

discuss the economic interpretation of the Lagrange multipliers

carry out simple comparative statics using the Envelope Theorem.

2.16 Sample examination questions

1. A household has a utility u(x1 , x2 ) = xa1 xb2 where a, b > 0 and a + b = 1. It cannot consume negative quantities of either good. It has strictly positive income y and faces prices p1 , p2 > 0. What is its optimal consumption bundle? 2. A household has a utility u(x1 , x2 ) = (x1 + 1)a (x2 + 1)b where a, b > 0 and a + b = 1. It cannot consume negative quantities of either good. It faces a budget with strictly positive income y and prices p1 , p2 > 0. What is its optimal consumption bundle?

2.17 Comments on the sample examination questions

1. The utility function u(x1, x2) is real-valued if x1, x2 ≥ 0. Therefore, we have:

Z = {(x1, x2) ∈ R2 : x1, x2 ≥ 0}.


Our problem is to maximise u(x1, x2) on Z subject to the budget constraint p1x1 + p2x2 ≤ y. The constraint function is differentiable and convex. The problem is that while the objective function, u(x1, x2), is concave (see below), it is not differentiable at (0, 0), as Du(x1, x2) = (a x1^(a−1) x2^b, b x1^a x2^(b−1)) is not well-defined at that point.

Note, however, that in a valid solution both x1 and x2 must be non-zero. For suppose, without loss of generality, that x1 = 0. The marginal utility of good 1 is then infinite, which means that trading a small amount of good 2 for good 1 would increase utility. As a result the constraint (x1, x2) ∈ Z is slack at the maximum.

We next verify that the objective function is concave. For this we look at the Hessian:

D²u(x1, x2) = [−ab x1^(a−2) x2^b,  ab x1^(a−1) x2^(b−1);  ab x1^(a−1) x2^(b−1),  −ab x1^a x2^(b−2)].

The first order principal minors are negative:

−ab x1^(a−2) x2^b < 0,   −ab x1^a x2^(b−2) < 0.

The second order minor is non-negative:

|A2| = (a²b² − a²b²) x1^(2(a−1)) x2^(2(b−1)) = 0.

Therefore, the Hessian is negative semi-definite and so u(x1, x2) is concave. As for the constraint, it is a linear function and hence convex. As prices and income are strictly positive, we can find strictly positive x1 and x2 that satisfy the budget constraint with strict inequality. Therefore, we can use the Kuhn-Tucker Theorem to solve this problem. The Lagrangian is:

L = x1^a x2^b + λ[y − p1x1 − p2x2].

The first order conditions are:

a x1^(a−1) x2^b = λp1
b x1^a x2^(b−1) = λp2.

If λ > 0, dividing the two equations we have:

(a x2)/(b x1) = p1/p2   (MRS = price ratio).

Plugging this into the budget constraint (which holds with equality when λ > 0):

x1 = ay/p1,   x2 = by/p2.

As all the conditions are satisfied, we are assured that this is a solution. To get the maximum value function, we plug the solution into the objective function to get:

v(p1, p2, y) = a^a b^b p1^(−a) p2^(−b) y.
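A quick numerical cross-check of this answer is easy to run. The sketch below is an editorial addition (the parameter values are arbitrary): it compares the utility of the claimed solution x1 = ay/p1, x2 = by/p2 with the best utility found on a fine grid over the budget line.

    import numpy as np

    a, b = 0.3, 0.7
    p1, p2, y = 2.0, 5.0, 100.0

    u = lambda x1, x2: x1 ** a * x2 ** b

    # Claimed solution from the first order conditions.
    x1_star, x2_star = a * y / p1, b * y / p2
    print(u(x1_star, x2_star))           # value at the claimed solution

    # Grid search along the budget line p1*x1 + p2*x2 = y (nonsatiation: spend everything).
    x1 = np.linspace(1e-6, y / p1 - 1e-6, 100000)
    x2 = (y - p1 * x1) / p2
    print(np.max(u(x1, x2)))             # essentially the same value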


2. The utility function u(x1, x2) is real-valued if x1, x2 ≥ −1, but the consumer cannot consume negative amounts. Therefore, we have Z = {(x1, x2) ∈ R2 : x1, x2 ≥ 0}. Our problem is to maximise u(x1, x2) on Z subject to the budget constraint p1x1 + p2x2 ≤ y. The constraint function is differentiable and convex. The objective function u(x1, x2) is concave and differentiable (see below).

We first make sure that the objective function is concave. For this we look at the Hessian:

D²u(x1, x2) = [−ab(x1+1)^(a−2)(x2+1)^b,  ab(x1+1)^(a−1)(x2+1)^(b−1);  ab(x1+1)^(a−1)(x2+1)^(b−1),  −ab(x1+1)^a(x2+1)^(b−2)].

The first order principal minors are negative:

−ab(x1+1)^(a−2)(x2+1)^b < 0,   −ab(x1+1)^a(x2+1)^(b−2) < 0

and the second order minor is non-negative:

|A2| = (a²b² − a²b²)(x1+1)^(2(a−1))(x2+1)^(2(b−1)) = 0.

Therefore, the Hessian is negative semi-definite and so u(x1, x2) is concave. As for the constraint, it is a linear function and hence convex. As prices and income are strictly positive, we can find strictly positive x1 and x2 that satisfy the budget constraint with strict inequality. Therefore, we can use the Kuhn-Tucker Theorem to solve this problem. As in this problem xi = 0 could be part of a solution, we add the non-negativity constraints explicitly. The Lagrangian is:

L = (x1+1)^a (x2+1)^b + λ0[y − p1x1 − p2x2] + λ1x1 + λ2x2.

The first order conditions are:

a(x1+1)^(a−1)(x2+1)^b = λ0p1 − λ1
b(x1+1)^a(x2+1)^(b−1) = λ0p2 − λ2.

There may be three types of solutions (depending on the multipliers), one interior and two corner solutions. Assume first that λ1 = λ2 = 0 but that λ0 > 0. Dividing the two equations we have:

a(x2+1)/(b(x1+1)) = p1/p2   (MRS = price ratio).

Plugging this into the budget constraint (which holds with equality when λ0 > 0):

x1 = (ay + ap2 − bp1)/p1,   x2 = (by + bp1 − ap2)/p2,   λ0 = a^a b^b p1^(−a) p2^(−b).

This is a solution only if both x1 and x2 are non-negative. This happens if and only if:

y > max{(bp1 − ap2)/a, (ap2 − bp1)/b}.


Consider now the case λ1 > 0, implying that x1 = 0. In this case we must have x2 = y/p2, which implies λ2 = 0, as the consumer might as well spend all his income on good 2. From the first order conditions we have:

λ0 = (b/p2)(y/p2 + 1)^(−a)
λ1 = ((bp1 − ap2 − ay)/p2)(y/p2 + 1)^(−a).

This is part of a solution only if λ1 ≥ 0, i.e. if and only if y ≤ (bp1 − ap2)/a, and in that case the solution is x1 = 0 and x2 = y/p2. Similarly, if y ≤ (ap2 − bp1)/b, the solution is x1 = y/p1 and x2 = 0.
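The case structure of this answer can also be checked numerically. The sketch below is an editorial addition (the parameter values are arbitrary and the routine simply encodes the case analysis above): it computes demand for u = (x1+1)^a (x2+1)^b and confirms it against a grid search over the budget line.

    import numpy as np

    def demand(a, b, p1, p2, y):
        """Demand for u = (x1+1)^a (x2+1)^b, following the case analysis in the text."""
        if y <= (b * p1 - a * p2) / a:          # corner: all income on good 2
            return 0.0, y / p2
        if y <= (a * p2 - b * p1) / b:          # corner: all income on good 1
            return y / p1, 0.0
        return (a * y + a * p2 - b * p1) / p1, (b * y + b * p1 - a * p2) / p2

    a, b = 0.5, 0.5
    p1, p2, y = 1.0, 10.0, 3.0                  # low income, good 2 expensive: corner at x2 = 0
    x1_star, x2_star = demand(a, b, p1, p2, y)
    print(x1_star, x2_star)                     # (3.0, 0.0)

    u = lambda x1, x2: (x1 + 1) ** a * (x2 + 1) ** b
    x1 = np.linspace(0, y / p1, 100001)
    x2 = (y - p1 * x1) / p2
    print(u(x1_star, x2_star), np.max(u(x1, x2)))   # both approximately 2.0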


Chapter 3 The consumer's utility maximisation problem

3.1 Aim of the chapter

The aim of this chapter is to review the standard assumptions of consumer theory on preferences, budget sets and utility functions, treating them in a more sophisticated way than in intermediate microeconomics courses, and to understand the consequences of these assumptions through the application of constrained optimisation techniques.

3.2 Learning outcomes

By the end of this chapter, you should be able to:

outline the assumptions made on consumer preferences

describe the relationship between preferences and the utility function

formulate the consumer's problem in terms of preferences and in terms of utility maximisation

define uncompensated demand, and to interpret it as the solution to a constrained optimisation problem

define the indirect utility function, and to interpret it as a maximum value function.

3.3 Essential reading

This chapter is self-contained and therefore there is no essential reading assigned. But you may find further reading very helpful.

3.4 Further reading

For a reminder of the intermediate microeconomics treatment of consumer theory read: Varian, H.R. Intermediate Microeconomics. Chapters 2–6, or the relevant section of any intermediate microeconomics textbook. For a more sophisticated treatment comparable to this chapter read:


Varian, H.R. Microeconomic Analysis. Chapter 7 or the relevant section of any mathematical treatment of consumer theory.

3.5 Preferences

3.5.1 Preferences

Consumer theory starts with the idea of preferences. Consumers are modelled as having preferences over all points in the consumption set, which is a subset of Rn. Points in Rn are vectors of the form x = (x1 x2 . . . xn) where xi is the consumption of good i. Most treatments of consumer theory, including this chapter, assume that the consumption set is Rn+, that is the subset of Rn with xi ≥ 0 for i = 1, . . . , n. If x and y are points in Rn+ that a consumer is choosing between:

x ≻ y means that the consumer strictly prefers x to y, so given a choice between x and y the consumer would definitely choose x

x ∼ y means that the consumer is indifferent between x and y, so the consumer would be equally satisfied by either x or y

x ≽ y means that the consumer weakly prefers x to y, that is either x ≻ y or x ∼ y.

3.5.2 Assumptions on preferences

The most important assumptions on preferences are:

Preferences are complete, that is for any x and y in Rn+ either x ≽ y or y ≽ x.

Preferences are reflexive, that is for any x in Rn+, x ≽ x.

Preferences are transitive, that is for x, y and z in Rn+, x ≽ y and y ≽ z implies that x ≽ z.

A technical assumption on preferences is:

Preferences are continuous if for all y in Rn+ the sets {x : x ∈ Rn+, x ≽ y} and {x : x ∈ Rn+, y ≽ x} are closed. Closed sets can be defined in a number of different ways.

In addition we often assume:

Preferences satisfy nonsatiation if for any x and y in Rn+, x ≫ y, that is xi > yi for i = 1, 2, . . . , n, implies that x ≻ y.

Note that nonsatiation is not always plausible. Varian argues that if the two goods are chocolate cake and ice cream you could very plausibly be satiated. Some books refer to nonsatiation as 'more is better', others use the mathematical term 'monotonicity'.


Preferences satisfy convexity, that is for any y in Rn+ the set {x : x ∈ Rn+, x ≽ y} of points preferred to y is convex.

Convex sets are defined in the previous chapter and on pages 70 and 71 of Chapter 6 of Dixit and Section 2.2 of Chapter 2 of Sydsæter et al. The convexity assumption is very important for the application of Lagrangian methods to consumer theory and we will come back to it. Some textbooks discuss convexity in terms of diminishing rates of marginal substitution, others describe indifference curves as being convex to the origin. Different textbooks work with slightly different forms of these assumptions; it does not matter, the resulting theory is the same.

Activity 3.1 Assume there are two goods and that all the assumptions given above hold. Pick a point y in R2+.

(a) Show the set {x : x ∈ Rn+, x ∼ y}. What is this set called?

(b) What is the marginal rate of substitution?

(c) Show in a diagram the set of points weakly preferred to y, that is {x : x ∈ Rn+, x ≽ y}. Make sure that it is convex. What does this imply for the marginal rate of substitution?

(d) Show the set of points {x : x ∈ Rn+, x ≥ y}, where x ≥ y means that xi ≥ yi for i = 1, 2, . . . , n. Explain why nonsatiation implies that this set is a subset of the set of points weakly preferred to y, that is {x : x ∈ Rn+, x ≽ y}.

(e) Explain why the nonsatiation assumption implies that the indifference curve cannot slope upwards.

3.6 The consumer's budget

3.6.1 Definitions

Assumption. The consumer’s choices are constrained by the conditions that: p 1 x1 + p 2 x2 + · · · + p n xn ≤ m x ∈ Rn+ where pi is the price of good i and m is income. We assume that prices and income are strictly positive, that is pi > 0 for i = 1, . . . , n and m > 0. Definition 13 The budget constraint is the inequality: p1 x1 + p2 x2 + · · · + pn xn ≤ m. The budget constraint is sometimes written as px ≤ m where p is the vector prices (p1 · · · pn ) and px is notation for the sum p1 x1 + p2 x2 + · · · + pn xn .


Definition 14 The budget line is the set of points with p1 x1 + p2 x2 + · · · + pn xn = m and xi ≥ 0 for i = 1, . . . , n.

Definition 15 The budget set is the subset of points in Rn+ satisfying the budget constraint. In formal notation it is: {x : x ∈ Rn+ , p1 x1 + p2 x2 + · · · + pn xn ≤ m}.

Activity 3.2 Assume that n = 2 and draw a diagram showing: (a) the budget line and its gradient (b) the points where the budget line meets the axes (c) the budget set. Activity 3.3 Draw diagrams showing the effects on the budget line and budget set of the following changes: (a) an increase in p1 (b) a decrease in p2 (c) an increase in m.

3.7 Preferences and the utility function

3.7.1 The consumer's problem in terms of preferences

Definition 16 The consumer's problem stated in terms of preferences is to find a point x∗ in Rn+ with the property that x∗ satisfies the budget constraint, so:

p1x∗1 + p2x∗2 + · · · + pnx∗n ≤ m

and:

x∗ ≽ x for all x in Rn+ that satisfy the budget constraint.


3.7.2 Preferences and utility

The difficulty with expressing the consumer's problem in terms of preferences is that we have no techniques for solving the problem. However preferences can be represented by utility functions, which makes it possible to define the consumer's problem in terms of constrained optimisation and solve it using the tools developed in the previous chapter.

Definition 17 A utility function u(x) which takes Rn+ into R represents preferences ≽ if:

u(x) > u(y) implies that x ≻ y

u(x) = u(y) implies that x ∼ y

u(x) ≥ u(y) implies that x ≽ y.

The following result and its proof are beyond the scope of this course, but the result is worth knowing.

Theorem 6 If preferences satisfy the completeness, transitivity and continuity assumptions they can be represented by a continuous utility function.

Varian's Microeconomic Analysis proves a weaker result on the existence of a utility function in Chapter 7.1. Again this is beyond the scope of this course.

3.7.3 Cardinal and ordinal utility

One of the standard points made about consumer theory is that the same set of consumer preferences can be represented by different utility functions so long as the order of the numbers on indifference curves is not changed. For example if three indifference curves have utility levels 1, 2 and 3, replacing 1, 2 and 3 by 2, 62 and 600 does not change the preferences being represented because 1 < 2 < 3 and 2 < 62 < 600. However replacing 1, 2 and 3 by 62, 2 and 600 does change the preferences being represented because it is not true that 62 < 2 < 600. The language used to describe this is that the utility function is ordinal rather than cardinal, so saying that one bundle of goods gives higher utility than another means something, but saying that one bundle gives twice as much utility as the other is meaningless. This can be stated more formally. Theorem 7 If the utility function u(x) and the function b(x) are related by u(x) = a(b(x)) where a is a strictly increasing function, then b(x) is also a utility function representing the same preferences as u(x).


As there are many different strictly increasing functions this result implies that any set of preferences can be represented by many different utility functions. Proof. As a is strictly increasing and u represents preferences:

b(x) > b(y) implies that u(x) > u(y), which implies x ≻ y

b(x) = b(y) implies that u(x) = u(y), which implies x ∼ y

b(x) ≥ b(y) implies that u(x) ≥ u(y), which implies x ≽ y

so b(x) is a utility function representing the same preferences as u(x).

Activity 3.4 Suppose that f is a function taking Rn+ into R, and the term ≥f is defined by f (x) ≥ f (y) implies that x≥f y. Explain why the relationship given by ≥f is complete, reflexive and transitive.

Activity 3.5

Suppose that the utility function is differentiable, and that ∂u(x)/∂xi > 0 for i = 1, . . . , n.

Explain why this implies that the preferences satisfy nonsatiation.

3.8 The consumer's problem in terms of utility

3.8.1 Uncompensated demand and the indirect utility function

Definition 18 The consumer’s utility maximisation problem is: max u(x) s.t. p1 x1 + p2 x2 + · · · + pn xn ≤ m x ∈ Rn+ . Note that this has the same form as the constrained optimisation problem in the previous chapter. The previous chapter showed you how to solve the consumer’s problem with the Cobb-Douglas utility function u(x1 , x2 ) = xa1 xb2 and the utility function u(x1 , x2 ) = (x1 + 1)a (x2 + 1)b .


Definition 19 The solution to the consumer's utility maximisation problem is uncompensated demand. It depends upon prices p1, p2, . . . , pn and income m. The uncompensated demand for good i is written as xi(p1, p2, . . . , pn, m), or using vector notation as xi(p, m) where p is the vector of prices (p1 p2 . . . pn). The notation x(p, m) is used for the vector of uncompensated demands (x1(p, m) x2(p, m) . . . xn(p, m)).

The notation x(p, m) suggests that uncompensated demand is a function of (p, m), which requires that for each value of (p, m) there is only one solution to the utility maximisation problem. This is usually so in the examples economists work with, but it does not have to be, and we will look at the case of a linear utility function where the consumer's problem has multiple solutions for some values of p.

Recall from Chapter 1 the definition of the maximum value function as the value of the function being maximised at the solution to the problem.

Definition 20 The indirect utility function is the maximum value function for the consumer's utility maximisation problem. It is written as v(p1, p2, . . . , pn, m), or in vector notation v(p, m), where:

v(p, m) = u(x(p, m)).

There are two important results linking preferences and utility functions.

Theorem 8 If preferences are represented by a utility function u(x) then the solutions to the consumer's problem in terms of preferences are the same as the solutions to the consumer's problem in terms of utility.

Theorem 9 If two utility functions represent the same preferences the solutions to the consumer’s utility maximisation problem are the same with the two utility functions but the indirect utility functions are different. These results are not difficult to prove, the next learning activity asks you to do this.


3.8.2 Nonsatiation and uncompensated demand

The nonsatiation assumption on preferences is that for any x and y in Rn+, if x > y, that is xi ≥ yi for all i and xi > yi for some i, then x ≻ y. If the preferences are represented by a utility function u, this requires that if x > y then u(x) > u(y), so the utility function is strictly increasing in the consumption of at least one good, and non-decreasing in consumption of any good. In intuitive terms, the assumption says that more is better, so a consumer will spend all available income.

Proposition 6 If the nonsatiation condition is satisfied any solution to the consumer's utility maximising problem satisfies the budget constraint as an equality.

Proof. To see this, suppose that the budget constraint is not satisfied as an equality, so:

p1x1 + p2x2 + · · · + pnxn < m

and increasing consumption of good i increases utility, so if ε > 0:

u(x1, x2, . . . , xi + ε, . . . , xn) > u(x1, x2, . . . , xi, . . . , xn).

As p1x1 + p2x2 + · · · + pnxn < m, if ε is small enough the point (x1, x2, . . . , xi + ε, . . . , xn) satisfies the budget constraint and gives higher utility than (x1, x2, . . . , xi, . . . , xn), so (x1, x2, . . . , xi, . . . , xn) cannot solve the consumer's problem.

Activity 3.6

Explain why Theorem 8 holds.

Activity 3.7

Explain why Theorem 9 holds.

Activity 3.8 Find the indirect utility function for the Cobb-Douglas utility function u(x1, x2) = x1^a x2^b.

Activity 3.9 The objective of this activity is to solve the consumer's utility maximisation problem with the linear utility function 2x1 + x2 subject to the constraints p1x1 + p2x2 ≤ m, x1 ≥ 0 and x2 ≥ 0. Assume that p1 > 0, p2 > 0 and m > 0.

(a) Draw indifference curves for the utility function u(x1, x2) = 2x1 + x2. What is the marginal rate of substitution?

(b) Assume that p1/p2 < 2. Use your graph to guess the values of x1 and x2 that solve the maximising problem. Which constraints bind, that is have hi(x1, x2) = ki where hi is constraint function i? Which constraints do not bind, that is have hi(x1, x2) < ki? Does the problem have more than one solution?

(c) Assume that p1/p2 = 2. Use your graph to guess the values of x1 and x2 that solve the maximising problem. Which constraints bind? Which constraints do not bind? Is there more than one solution?


(d) Assume that p1/p2 > 2. Use your graph to guess the values of x1 and x2 that solve the maximising problem. Which constraints bind? Which constraints do not bind? Does the problem have more than one solution?

(e) Explain why the Kuhn-Tucker conditions are necessary and sufficient for a solution to this problem.

(f) Write down the Lagrangian for the problem. Confirm that your guesses are correct by finding the Lagrange multipliers associated with the constraints for Activities 3.6–3.8. Does the problem have more than one solution?

Activity 3.10

Find the indirect utility function for the utility function: u(x1 , x2 ) = 2x1 + x2 .

3.9 Solution to activities

Solution to Activity 3.1 Assume that the consumer's preferences satisfy all the assumptions of Section 3.1. Refer to Figure 3.1.

(a) The set {x : x ∈ Rn+, x ∼ y} is the indifference curve.

(b) The marginal rate of substitution is the gradient of the indifference curve.

(c) The set {x : x ∈ Rn+, x ≽ y} is the entire shaded area in Figure 3.1. The marginal rate of substitution decreases as x1 increases because this set is convex.

(d) The set {x : x ∈ Rn+, x ≥ y}, where x ≥ y means that xi ≥ yi for i = 1, 2, . . . , n, is the lightly shaded area in Figure 3.1. The nonsatiation assumption states that all points in this set are weakly preferred to y, that is x ≽ y.

(e) If the indifference curve slopes upwards there are points x = (x1, x2) and y = (y1, y2) on the indifference curve with y1 > x1 and y2 > x2, so x ∼ y. Nonsatiation then implies that y ≻ x, which is impossible if x and y are on the same indifference curve.

Solution to Activity 3.2 The budget line in Figure 3.2 has gradient −p1/p2 and meets the axes at m/p1 and m/p2.

Solution to Activity 3.3 Refer to Figure 3.3:


Figure 3.1: Indifference curves and preference relationships.

Figure 3.2: The budget set and budget line.


Figure 3.3: The effect of changes in prices and income on the budget line.

An increase in p1 to p′1 shifts the budget line from CE to CD. The budget line becomes steeper.

A decrease in p2 to p′2 shifts the budget line from CE to BE. The budget line becomes steeper.

An increase in m to m′ shifts the budget line out from CE to AF. The gradient of the budget line does not change.

Solution to Activity 3.4 This activity tells you conditions on the utility function that ensure that preferences have various properties, and asks you to explain why. Suppose that f is a function taking Rn+ into R, and the term ≥f is defined by f (x) ≥ f (y) implies that x≥f y. The values of the functions f (x) are real numbers. The relationship ≥ for real numbers is complete, so for any x and y either f (x) ≥ f (y) or f (y) ≥ f (x). The relationship ≥ for real numbers is reflexive so for any x, f (x) ≥ f (x). The relationship ≥ is transitive, so for any x, y and z, f (x) ≥ f (y) and f (y) ≥ f (z) implies that f (x) ≥ f (z). Thus, as f (x) ≥ f (y) implies that x≥f y, the relationship ≥f is complete, reflexive and transitive.


Solution to Activity 3.5 As ∂u(x)/∂xi > 0 for i = 1, . . . , n, if yi > xi for i = 1, . . . , n then u(y) > u(x), so y ≻ x, and nonsatiation is satisfied.

Solution to Activity 3.6 Theorem 8 states that if preferences are represented by a utility function u(x) then the solutions to the consumer's problem in terms of preferences are the same as the solutions to the consumer's problem in terms of utility. To see why, suppose that a point x∗ solves the consumer's problem in terms of preferences, so x∗ ≽ x for all points in Rn+ that satisfy the budget constraint. A point x∗ solves the consumer's utility maximisation problem if u(x∗) ≥ u(x) for all points in Rn+ that satisfy the budget constraint. If the utility function represents the preferences then the set of points for which x∗ ≽ x is the same as the set of points for which u(x∗) ≥ u(x), so the two problems have the same solutions.

Solution to Activity 3.7 Theorem 9 states that if two utility functions represent the same preferences the solutions to the consumer's utility maximisation problem are the same with the two utility functions but the indirect utility functions are different. This follows directly from Theorem 8, because if two utility functions represent the same preferences the solutions to the consumer's utility maximisation problem with each of them are the same as the solutions to the consumer's problem in terms of the preferences, and so are the same as each other.

Solution to Activity 3.8 The Sample examination question for Chapter 2 gave the solution for the Cobb-Douglas utility function u(x1, x2) = x1^a x2^b where a > 0, b > 0 and a + b = 1. The solutions are x1 = a(m/p1) and x2 = b(m/p2) and so the uncompensated demands are:

x1(p1, p2, m) = a m/p1
x2(p1, p2, m) = b m/p2


Figure 3.4: Utility maximisation with u(x1 , x2 ) = 2x1 + x2 and p1 /p2 < 2.

so the indirect utility function is:

v(p1, p2, m) = (a m/p1)^a (b m/p2)^b = a^a b^b m^(a+b)/(p1^a p2^b) = (a^a b^b/(p1^a p2^b)) m.

The last equality follows because a + b = 1.

Solution to Activity 3.9 The objective of this activity is to solve the consumer's utility maximisation problem with the linear utility function 2x1 + x2 subject to the constraints p1x1 + p2x2 ≤ m, x1 ≥ 0 and x2 ≥ 0. Assume that p1 > 0, p2 > 0 and m > 0.

(a) The indifference curves for the utility function u(x1, x2) = 2x1 + x2 are parallel straight lines with gradient −2 as shown in Figure 3.4. The marginal rate of substitution is 2.

(b) From Figure 3.4, if p1/p2 < 2 the budget line is less steep than the indifference curves. The solution is at x1 = m/p1, x2 = 0. The constraints p1x1 + p2x2 ≤ m and x2 ≥ 0 bind. The constraint x1 ≥ 0 does not bind. There is only one solution.

(c) From Figure 3.5, if p1/p2 = 2 the budget line is parallel to all the indifference curves, and one indifference curve coincides with the budget line. Any (x1, x2) with x1 ≥ 0, x2 ≥ 0 and p1x1 + p2x2 = m solves the problem. If x1 = 0 the constraints x1 ≥ 0 and p1x1 + p2x2 ≤ m bind and the constraint x2 ≥ 0 does not bind. If x2 = 0 the constraints x2 ≥ 0 and p1x1 + p2x2 ≤ m bind and the constraint x1 ≥ 0 does not bind. If x1 > 0 and x2 > 0 the constraint p1x1 + p2x2 ≤ m binds and the constraints x1 ≥ 0 and x2 ≥ 0 do not bind. There are many solutions.

The third line here follows from the second line because a + b = 1. Solution to Activity 3.9 The objective of this activity is to solve the consumer’s utility maximising function with a linear utility function 2x1 + x2 subject to the constraints p1 x1 + p2 x2 ≤ m, x1 ≥ 0 and x2 ≥ 0. Assume that p1 > 0, p2 > 0 and m > 0. (a) The indifference curves for the utility function u(x1 , x2 ) = 2x1 + x2 are parallel straight lines with gradient −2 as shown in Figure 3.4. The marginal rate of substitution is 2. (b) From Figure 3.4 if p1 /p2 < 2 the budget line is less steep than the indifference curves. The solution is at x1 = m/p1 , x2 = 0. The constraints p1 x1 + p2 x2 ≤ m and x2 ≥ 0 bind. The constraint x1 ≥ 0 does not bind. There is only one solution. (c) From Figure 3.5 if p1 /p2 = 2 the budget line is parallel to all the indifference curves, and one indifference curve coincides with the budget line. Any (x1 , x2 ) with x1 ≥ 0, x2 ≥ 0 and p1 x2 + p2 x2 = m solves the problem. If x1 = 0 the constraints x1 ≥ 0 and p1 x1 + p2 x2 ≤ m bind and the constraint x2 ≥ 0 does not bind. If x2 = 0 the constraints x2 ≥ 0 and p1 x1 + p2 x2 = m bind and the constraint x1 ≥ 0 does not bind. If x1 > 0 and x2 > 0 the constraint p1 x1 + p2 x2 ≤ m binds and the constraints x1 ≥ 0 and x2 ≥ 0 do not bind. There are many solutions.


(d) From Figure 3.6 if p1 /p2 > 2 the budget line is steeper than the indifference curves. The solution is at x1 = 0, x2 = m/p2 . The constraints p1 x1 + p2 x2 ≤ m and x1 ≥ 0 bind. The constraint x2 ≥ 0 does not bind. There is only one solution. (e) The objective 2x1 + x2 is linear so concave, the constraint functions p1 x2 + p2 x2 , x1 and x2 are linear so convex, the constraint qualification is satisfied because there are points that satisfy the constraints as strict inequalities, for example x1 = m/2p1 , x2 = m/2p2 . Thus the Kuhn-Tucker conditions are necessary and sufficient for a solution to this problem. (f) The Lagrangian is: L = 2x1 + x2 + λ0 (m − p1 x1 − p2 x2 ) + λ1 x1 + λ2 x2 = x1 (2 − λ0 p1 + λ1 ) + x2 (1 − λ0 p2 + λ2 ). The first order conditions require that: 2 = λ0 p1 − λ1 1 = λ0 p2 − λ2 . Checking for solutions with x1 > 0 and x2 = 0, complementary slackness forces λ1 = 0, so λ0 = 2/p1 > 0 and λ2 = λ0 p2 − 1 = 2p2 /p1 − 1. Thus non-negativity of multipliers forces p1 /p2 ≤ 2. Complementary slackness forces x1 = m/p1 . The point x1 = m/p1 , x2 = 0 is feasible. Thus if p1 /p2 ≤ 2 the point x1 = m/p1 , x2 = 0 solves the problem. If p1 /p2 < 0 this is the unique solution. Checking for solutions with x1 = 0 and x2 > 0, complementary slackness forces λ2 = 0, so λ0 = 1/p2 and λ1 = λ0 p1 − 2 = p1 /p2 − 2. Thus non-negativity of multipliers forces p1 /p2 ≥ 2. Complementary slackness forces x2 = m/p2 . The point x1 = 0, x2 = m/p2 is feasible. Thus if p1 /p2 ≥ 2 the point x1 = 0, x2 = m/p2 solves the problem. If p2 < p1 /2 this is the only solution. If p1 /p2 = 2, and λ0 = 2/p2 = 1/p2 , then λ1 = λ2 = 0, so solutions with x1 > 0 and x2 > 0 are possible. As λ0 > 0 complementary slackness forces p1 x1 + p2 x2 = m. In this case there are multiple solutions. Solution to Activity 3.10 Finding the indirect utility function for the utility function u(x1 , x2 ) = 2x1 + x2 , if p1 /p2 < 2 then x1 (p1 , p2 , m) = m/p1 and x2 (p1 , p2 , m) = 0 so v(p1 , p2 , m) = 2x1 + x2 = 2m/p1 . If p1 /p2 = 2, any x1 , x2 with x1 ≥ 0, x2 ≥ 0 and x2 = (m − p1 x1 )/p2 solves the problem, so: m 2m (m − p1 x1 ) = = . v(p1 , p2 , m) = 2x1 + p2 p2 p1 If p1 /p2 > 2, then x1 (p1 , p2 , m) = 0 and x2 (p1 , p2 , m) = m/p2 so v(p1 , p2 , m) = 2x1 + x2 = m/p2 . This is a perfectly acceptable answer to the question, but the indirect utility function can also be written more concisely as:   2m m . , v(p1 , p2 , m) = max p1 p2


Figure 3.5: Utility maximisation with u(x1 , x2 ) = 2x1 + x2 and p1 /p2 = 2.

Figure 3.6: Utility maximisation with u(x1 , x2 ) = 2x1 + x2 and p1 /p2 > 2.


Learning outcomes

By the end of this chapter, you should be able to:

outline the assumptions made on consumer preferences

describe the relationship between preferences and the utility function

formulate the consumer's problem in terms of preferences and in terms of utility maximisation

define uncompensated demand, and to interpret it as the solution to a constrained optimisation problem

define the indirect utility function, and to interpret it as a maximum value function.

3.10 Sample examination questions

When answering these and any other examination questions be sure to explain why your answers are true, as well as giving the answer. For example, when you use Lagrangian techniques in Question 2, check that the problem you are solving satisfies the conditions of the theorem you are using, and explain your reasoning.

1. (a) Explain the meaning of the nonsatiation and transitivity assumptions in consumer theory.

(b) Suppose that preferences can be represented by a utility function u(x1, x2). Do these preferences satisfy the transitivity assumption?

(c) Assume that the derivatives of the utility function u(x1, x2) are strictly positive. Do the preferences represented by this utility function satisfy the nonsatiation assumption?

(d) Continue to assume that the derivatives of the utility function u(x1, x2) are strictly positive. Is it possible that the solution to the consumer's utility maximisation problem does not satisfy the budget constraint as an equality?

2. (a) Define uncompensated demand.

(b) Define the indirect utility function.

(c) A consumer has a utility function u(x1, x2) = x1^ρ + x2^ρ where 0 < ρ < 1. Solve the consumer's utility maximisation problem.

(d) What is the consumer's uncompensated demand function for this utility function?

(e) What is the consumer's indirect utility function for this utility function?


3.11 Comments on sample examination questions

As with any examination question you need to explain what you are doing and why.

2. (d) The uncompensated demand functions for the consumer's utility maximisation problem with u(x1, x2) = x1^ρ + x2^ρ are:

x1(p1, p2, m) = [p1^(−1/(1−ρ)) / (p1^(−ρ/(1−ρ)) + p2^(−ρ/(1−ρ)))] m

x2(p1, p2, m) = [p2^(−1/(1−ρ)) / (p1^(−ρ/(1−ρ)) + p2^(−ρ/(1−ρ)))] m.

(e) The indirect utility function is:

v(p1, p2, m) = (p1^(−ρ/(1−ρ)) + p2^(−ρ/(1−ρ)))^(1−ρ) m^ρ.
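These closed forms are easy to sanity-check numerically. The sketch below is an editorial addition (the parameter values are arbitrary): it evaluates the formulas and compares the implied utility with a grid search over the budget line.

    import numpy as np

    rho = 0.5
    p1, p2, m = 1.0, 2.0, 10.0

    r = rho / (1 - rho)
    denom = p1 ** (-r) + p2 ** (-r)
    x1 = p1 ** (-1 / (1 - rho)) / denom * m
    x2 = p2 ** (-1 / (1 - rho)) / denom * m
    v = denom ** (1 - rho) * m ** rho

    u = lambda a, b: a ** rho + b ** rho
    print(u(x1, x2), v)                             # the two agree

    grid = np.linspace(1e-9, m / p1, 200001)
    print(np.max(u(grid, (m - p1 * grid) / p2)))    # approximately the same value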


Chapter 4 Homogeneous and homothetic functions in consumer choice theory

4.1 Aim of the chapter

The aim of this chapter is to review the important concept of a homogeneous function and introduce the idea of a homothetic function. Homogeneity is important for consumer theory because for any utility function the uncompensated demand and indirect utility functions are homogeneous of degree zero in prices and income. Homotheticity is a weaker concept than homogeneity. Assuming that the utility function is either homogeneous or homothetic has very strong implications for indifference curves, marginal rates of substitution and uncompensated demand, which are explored in this chapter.

4.2 Learning outcomes

By the end of this chapter, you should be able to:

state the definitions of homogeneous and homothetic functions

explain why the uncompensated demand and indirect utility functions are homogeneous of degree zero in prices and income

determine when the indirect utility function is increasing or non-decreasing in income and decreasing or non-increasing in prices

explain the relationship between the homogeneity of a function and its derivative

describe the implications of homogeneous and homothetic utility functions for the shape of indifference curves, the marginal rate of substitution and uncompensated demand.

4.3 Essential reading

This chapter is self-contained and therefore there is no essential reading assigned.


4.4 Further reading

Varian, H.R. Intermediate Microeconomics. Chapter 6 contains a brief discussion of homothetic utility functions. Varian, H.R. Microeconomic Analysis. Chapter 7 covers the indirect utility function. There is a brief discussion of homogeneous and homothetic production functions in Chapter 1. Any textbook covering multivariate calculus will have some discussion of homogeneous functions.

4.5 Homogeneous functions

4.5.1 Definition

Definition 21 A function f(x1, x2, . . . , xn) taking Rn+ into R is homogeneous of degree d in (x1, x2, . . . , xn) if for all (x1, x2, . . . , xn) in Rn+ and all numbers s > 0:

f(sx1, sx2, . . . , sxn) = s^d f(x1, x2, . . . , xn).

Using vector notation x for (x1, x2, . . . , xn), so that sx = (sx1, sx2, . . . , sxn), this equation can be written as:

f(sx) = s^d f(x).

4.5.2 Homogeneity of degrees zero and one

The most important cases are homogeneity of degree zero and homogeneity of degree one. As s^0 = 1, f(x1, x2, . . . , xn) is homogeneous of degree zero if for all s > 0:

f(sx1, sx2, . . . , sxn) = f(x1, x2, . . . , xn)

so multiplying all components of (x1, x2, . . . , xn) by the same number s > 0 has no effect on the value of the function. Using vector notation this says that:

f(sx) = f(x).

As s^1 = s, a function is homogeneous of degree one if for all s > 0:

f(sx1, sx2, . . . , sxn) = s f(x1, x2, . . . , xn)

so multiplying all components of (x1, x2, . . . , xn) by s has the effect of multiplying the value of the function by s. Using vector notation this equation says that:

f(sx) = s f(x).
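Homogeneity is easy to probe numerically: if f is homogeneous of degree d and f(x) > 0, then log(f(sx)/f(x)) = d log s for every s > 0. A minimal editorial sketch (the test functions are illustrative and not taken from the activities below):

    import numpy as np

    def estimated_degree(f, x, s=2.0):
        """Estimate d from f(s*x) = s**d * f(x), assuming f(x) > 0."""
        x = np.asarray(x, dtype=float)
        return np.log(f(s * x) / f(x)) / np.log(s)

    f = lambda x: (x[0] ** 2 + x[1] ** 2) / (x[0] + x[1])    # homogeneous of degree 1
    g = lambda x: x[0] ** 2 + x[1]                           # not homogeneous

    print(estimated_degree(f, [3.0, 4.0]))                   # 1.0
    print(estimated_degree(g, [3.0, 4.0], s=2.0),
          estimated_degree(g, [3.0, 4.0], s=4.0))            # different values: no single degree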


Activity 4.1 For each of the following functions state whether or not it is homogeneous in (x1, x2). If the function is homogeneous say of what degree.

(a) ax1 + bx2
(b) (ax1 + bx2)^2
(c) ax1^2 + bx2
(d) cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3
(e) (cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3)/(ax1 + bx2)^3
(f) x1^α x2^β. Assume that α > 0 and β > 0, do not assume that α + β = 1.
(g) c ln x1 + d ln x2.

4.6 Homogeneity, uncompensated demand and the indirect utility function

4.6.1 Homogeneity

Theorem 10 Uncompensated demand and the indirect utility function are both homogeneous of degree zero in prices and income, that is, for uncompensated demand, for any number s > 0:

xi(sp1, sp2, ..., spn, sm) = xi(p1, p2, ..., pn, m) for i = 1, ..., n

or in vector notation:

x(sp, sm) = x(p, m)

and for the indirect utility function:

v(sp1, sp2, ..., spn, sm) = v(p1, p2, ..., pn, m)

or in vector notation:

v(sp, sm) = v(p, m).

In words this says that when all prices and income are multiplied by the same positive number s the solution to the consumer's utility maximisation problem does not change and neither does the level of utility. The next learning activity asks you to explain why this is so.
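A hedged numerical illustration of Theorem 10 is sketched below. It uses the Cobb-Douglas uncompensated demands x1 = am/p1 and x2 = bm/p2 (with a + b = 1, as derived earlier in the guide); the particular prices, income and exponents are illustrative choices.

```python
a, b = 0.3, 0.7            # Cobb-Douglas exponents, a + b = 1
p1, p2, m = 2.0, 5.0, 60.0

def demand(p1, p2, m):
    # Uncompensated demand for u = x1^a x2^b
    return a * m / p1, b * m / p2

def v(p1, p2, m):
    x1, x2 = demand(p1, p2, m)
    return x1 ** a * x2 ** b   # indirect utility = utility at the optimum

for s in [0.5, 3.0]:
    print(demand(p1, p2, m), demand(s * p1, s * p2, s * m))  # identical pairs
    print(v(p1, p2, m), v(s * p1, s * p2, s * m))            # identical values
```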


4.6.2

Other properties of indirect utility functions

Monotonicity in income Proposition 7 For any utility function the indirect utility function is non-decreasing in income m. With nonsatiation the indirect utility function is strictly increasing in m. Proof. From Chapter 1 a maximum value function is non-decreasing in the level of the constraint k. For the consumer’s utility maximisation problem the maximum value function is the indirect utility function v(p1 , p2 , m), and the level of the constraint is m, so the general result implies at once that indirect utility is non-decreasing in m. Now assume that the consumer is not satiated and income rises from m to m0 . Nonsatiation implies that: p1 x1 (p1 , p2 , m) + p2 x2 (p1 , p2 , m) = m so: p1 x1 (p1 , p2 , m) + p2 x2 (p1 , p2 , m) < m0 making it possible to increase both x1 and x2 by a small amount to x01 and x02 without violating the budget constraint so: p1 x01 + p2 x02 ≤ m0 and because of nonsatiation: u(x01 , x02 ) > u(x1 (p1 , p2 , m), x2 (p1 , p2 , m)) = v(p1 , p2 , m). As (x01 , x02 ) is feasible with income m0 , the solution to the consumer’s problem with income m must give utility v(p1 , p2 , m) which cannot be smaller than u(x01 , x02 ), so: v(p1 , p2 , m0 ) ≥ u(x01 , x02 ). Taken together the last two inequalities imply that: v(p1 , p2 , m0 ) > v(p1 , p2 , m).  Monotonicity in prices Proposition 8 With any utility function the indirect utility function is non-increasing in prices. With nonsatiation the indirect utility function is strictly decreasing in the prices of good i if the consumer purchases a strictly positive amount of the good, that is xi (p1 , p2 , m) > 0.


Proof. Suppose that the price p1 of good 1 falls from p1 to p01 . As x1 (p1 , p2 , m) ≥ 0 and p01 < p1 this implies that: p01 x1 (p1 , p2 , m) + p2 x2 (p1 , p2 , m) ≤ p1 x1 (p1 , p2 , m) + p2 x2 (p1 , p2 , m). Thus (x1 (p1 , p2 , m), x2 (p1 , p2 , m)) is feasible with prices p01 and p2 and income m so gives utility no greater than the utility from the solution to the consumer’s problem with prices p01 and p2 and income m which is v(p01 , p2 , m) so: v(p01 , p2 , m) ≥ u(x1 (p1 , p2 , m), x2 (p1 , p2 , m)) = v(p1 , p2 , m). Thus if p01 < p1 then v(p01 , p2 , m) ≥ v(p1 , p2 , m). Now suppose that x1 (p1 , p2 , m) > 0 and the consumer is not satiated which implies that: p01 x1 (p1 , p2 , m) + p2 x2 (p1 , p2 , m) < p1 x1 (p1 , p2 , m) + p2 x2 (p1 , p2 , m) = m. Then at prices p01 and p2 and income m there is a feasible point (x01 , x02 ) with x01 > x1 (p1 , p2 , m) and x02 > x2 (p1 , p2 , m) so: v(p01 , p2 , m) ≥ u(x01 , x02 ). Nonsatiation implies that: u(x01 , x02 ) > u(x1 (p1 , p2 , m), x2 (p1 , p2 , m)) = v(p1 , p2 , m). From the last two inequalities v(p01 , p2 , m) > v(p1 , p2 , m). A similar argument applies to good 2.  Activity 4.2

Consider the two goods case.

(a) What happens to the budget set and the budget line when p1, p2 and m are all multiplied by 2?
(b) What happens to the budget set and the budget line when p1, p2 and m are all multiplied by s > 0?
(c) What happens to uncompensated demand when p1, p2 and m are all multiplied by s > 0?
(d) How does utility change when p1, p2 and m are all multiplied by s > 0?
(e) Suppose that the annual inflation rate is 50%, the prices of all goods increase by 50% and your income increases by 50%. Does the inflation make you better off or worse off? Is real world inflation like this?

Activity 4.3 The indirect utility function from the Cobb-Douglas utility function u(x1, x2) = x1^a x2^b with a + b = 1 is:

v(p1, p2, m) = (a^a b^b / (p1^a p2^b)) m.

Confirm that this function is homogeneous of degree zero in (p1, p2, m), increasing in m and decreasing in p1.


4.7

Derivatives of homogeneous functions

Given the definition of a homogeneous function it is important to have notation for the derivatives of a function f at the point (sx1, sx2, ..., sxn). This is done by defining fi(z1, z2, ..., zn) by:

fi(z1, z2, ..., zn) = ∂f(z1, z2, ..., zn)/∂zi

which is the derivative of f(z1, z2, ..., zn) with respect to variable i. This notation makes it possible to state and prove the following theorem.

Theorem 11 If the function f taking R^n_+ into R is homogeneous of degree d and differentiable then for any (x1, x2, ..., xn) in R^n_+ and any s > 0:

fi(sx1, sx2, ..., sxn) = s^(d-1) fi(x1, x2, ..., xn).

(4.1)

Thus the partial derivative fi is homogeneous of degree d − 1. Proof. If you are very comfortable with partial derivatives you can derive this result immediately by differentiating both sides of the equation: f (sx1 , sx2 , . . . , sxn ) = sd f (x1 , x2 , . . . , xn )

(4.2)

with respect to xi getting: sfi (sx1 , sx2 , . . . , sxn ) = sd fi (x1 , x2 , . . . , xn ) and dividing by s. If you need to take things more slowly define a function φ by: φ(s, x1 , x2 , . . . , xn ) = f (sx1 , sx2 , . . . , sxn ) so φ(1, x1 , x2 , . . . , xn ) = f (x1 , x2 , . . . , xn ) and the condition: f (sx1 , sx2 , . . . , sxn ) = sd f (x1 , x2 , . . . , xn ) that f is homogeneous of degree d becomes: φ(s, x1 , x2 , . . . , xn ) = sd φ(1, x1 , x2 , . . . , xn ). Differentiating both sides of this equation with respect to xi gives: ∂φ(1, x1 , x2 , . . . , xn ) ∂φ(s, x1 , x2 , . . . , xn ) = sd . ∂xi ∂xi Now φ(s, x1 , x2 , . . . , xn ) = f (z1 , z2 , . . . , zn ) where zi = sxi for i = 1, . . . , n, so using the chain rule: ∂zi ∂f (z1 , z2 , . . . , zn ) ∂f (z1 , z2 , . . . , zn ) ∂φ(s, x1 , x2 , . . . , xn ) = =s . ∂xi ∂xi ∂zi ∂zi


If s = 1 so zi = xi this equation becomes: ∂φ(1, x1 , x2 , . . . , xn ) ∂f (1, x1 , x2 , . . . , xn ) = . ∂xi ∂xi The last three equations imply that: s

∂f (z1 , z2 , . . . , zn ) ∂f (1, x1 , x2 , . . . , xn ) = sd ∂zi ∂xi

so dividing by s: ∂f (z1 , z2 , . . . , zn ) ∂f (1, x1 , x2 , . . . , xn ) = sd−1 ∂zi ∂xi where zi = sxi for i = 1, . . . , n. Using notation fi (z1 , z2 , . . . , zn ) for (∂f (z1 , z2 , . . . , zn ))/(∂zi ) this becomes: fi (sx1 , sx2 , . . . , sxn ) = sd−1 fi (x1 , x2 , . . . , xn ).  Activity 4.4 The Cobb-Douglas utility function u(x1 , x2 ) = xa1 xb2 with a + b = 1 is homogeneous of degree one. Find the partial derivatives of the Cobb-Douglas utility function u(x1 , x2 ) = xa1 xb2 where a + b = 1 and confirm that the function and its derivatives satisfy Theorem 11.
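Theorem 11 can also be checked numerically. The sketch below (illustrative, not part of the guide) approximates the partial derivatives of a degree-2 homogeneous function by central differences and confirms that fi(sx) = s^(d-1) fi(x) with d = 2.

```python
def f(x1, x2):
    return (2 * x1 + 3 * x2) ** 2      # homogeneous of degree d = 2

def grad(g, x1, x2, h=1e-6):
    # central finite-difference approximation to (f1, f2)
    g1 = (g(x1 + h, x2) - g(x1 - h, x2)) / (2 * h)
    g2 = (g(x1, x2 + h) - g(x1, x2 - h)) / (2 * h)
    return g1, g2

x1, x2, s, d = 2.0, 3.0, 4.0, 2.0
f_x = grad(f, x1, x2)
f_sx = grad(f, s * x1, s * x2)
# Theorem 11: each partial derivative at sx equals s^(d-1) times the derivative at x.
print(f_sx, tuple(s ** (d - 1) * v for v in f_x))
```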

4.8 Homogeneous utility functions

4.8.1 Homogeneous utility functions and the indifference curve diagram

If the utility function is homogeneous of degree d and u(x) = u0, so x lies on the indifference curve with utility u0, homogeneity implies that u(sx) = s^d u(x) = s^d u0, so sx lies on the indifference curve with utility s^d u0. Thus given an indifference curve with utility u0 the points with utility u1 can be found by multiplying the points on the u0 indifference curve by s where s^d = u1/u0. In particular when d = 1, so the utility function is homogeneous of degree one, the indifference curve with utility u1 can be found by treating the points on the u0 indifference curve as vectors and multiplying them by u1/u0. This is illustrated in Figure 4.1 which shows indifference curves for the utility function u(x1, x2) = x1^(2/3) x2^(1/3), which is homogeneous of degree one in x1 and x2. Given a point on the indifference curve x1^(2/3) x2^(1/3) = 1 it is possible to find the corresponding point on the indifference curve x1^(2/3) x2^(1/3) = 3 by multiplying the vector (x1, x2) by 3. This point can be found geometrically by drawing a ray through the original point with utility 1, and finding the point on the ray that is three times as far from the origin.
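The construction can be reproduced numerically. The sketch below takes points on the u = 1 indifference curve of u(x1, x2) = x1^(2/3) x2^(1/3), multiplies them by 3, and confirms that the scaled points have utility 3; the particular points are illustrative.

```python
def u(x1, x2):
    return x1 ** (2 / 3) * x2 ** (1 / 3)   # homogeneous of degree one

# Points on the indifference curve u = 1: pick x1 and solve x2 = x1^(-2).
for x1 in [0.5, 1.0, 2.0]:
    x2 = x1 ** (-2.0)
    assert abs(u(x1, x2) - 1.0) < 1e-12
    # Multiplying the point by 3 moves it to the u = 3 indifference curve.
    print(u(3 * x1, 3 * x2))   # prints 3.0 (up to rounding) each time
```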


Figure 4.1: Indifference curves with a homogeneous utility function u(x1, x2) = x1^(2/3) x2^(1/3).

4.8.2

Homogeneous utility functions and the marginal rate of substitution

Figure 4.1 shows the lines that are tangent to the indifference curves at points on the same ray. In the figure it looks as if the tangent lines at points on the same ray have the same gradient. This is indeed the case. The gradient of the tangent line is -MRS where MRS is the marginal rate of substitution, given by:

MRS = u1(x1, x2)/u2(x1, x2)

where ui(x1, x2) is marginal utility, that is the derivative ∂u(x1, x2)/∂xi. Using the notation ui(sx1, sx2) for ∂u(z1, z2)/∂zi where zi = sxi gives the result:

Theorem 12 If u is a homogeneous function from R^2_+ into R, so u(sx1, sx2) = s^d u(x1, x2), and u2(x1, x2) ≠ 0, then u2(sx1, sx2) ≠ 0 and:

u1(sx1, sx2)/u2(sx1, sx2) = u1(x1, x2)/u2(x1, x2)

that is, the marginal rate of substitution is homogeneous of degree zero in (x1, x2).


Proof. From Theorem 11, if u(x1, x2) is homogeneous of degree d in (x1, x2) with continuous partial derivatives then its partial derivatives are homogeneous of degree d - 1, so:

u1(sx1, sx2) = s^(d-1) u1(x1, x2)
u2(sx1, sx2) = s^(d-1) u2(x1, x2)

so as u2(x1, x2) ≠ 0, u2(sx1, sx2) ≠ 0 and:

u1(sx1, sx2)/u2(sx1, sx2) = [s^(d-1) u1(x1, x2)]/[s^(d-1) u2(x1, x2)] = u1(x1, x2)/u2(x1, x2).

The term on the left is the MRS at the point (sx1, sx2), the term on the right is the MRS at the point (x1, x2). 
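A quick numerical check of Theorem 12 is sketched below: for a homogeneous function of degree 2 (chosen here only as an illustration), the MRS computed from finite-difference derivatives takes the same value at (x1, x2) and at (sx1, sx2).

```python
def u(x1, x2):
    return x1 * x2            # homogeneous of degree 2

def mrs(f, x1, x2, h=1e-6):
    u1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    u2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    return u1 / u2

x1, x2 = 1.2, 0.8
for s in [1.0, 2.0, 7.0]:
    print(mrs(u, s * x1, s * x2))   # the same value for every s
```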

4.8.3

Homogeneous utility functions and uncompensated demand

Intermediate microeconomics uses the indifference curve diagram to show that the solution to the consumer’s problem is at a point where the budget line is tangent to the indifference curve, so the marginal rate of substitution is equal to the price ratio. As Figure 4.1 shows with a homogeneous utility function the marginal rate of substitution is the same at all points along a ray, so if income changes but prices do not change uncompensated demand moves along the ray. Thus if income doubles the demand for each good doubles but the ratio of consumption of good 2 to consumption of good 1 does not change. More generally any change in income affects the quantity of goods being consumed. This is a very strong condition; it says for example that if income doubles then both consumption of salt, and consumption of housing space double; whereas in fact as people get richer their pattern of consumption changes. In particular homogeneous utility functions imply that there are no inferior goods. In fact this result on the pattern of consumption does not rely on the tangency condition, as the next result shows. Theorem 13 If the utility function is homogeneous then for all prices (p1 , p2 , . . . , pn ) and income m > 0: xi (p1 , p2 , . . . , pn , m) = mxi (p1 , p2 , . . . , pn , 1) for i = 1, 2, . . . , n or in vector notation: x(p, m) = mx(p, 1). In words this says that to find the consumption of goods at any level of income m, find the consumption when m = 1 and then multiply by m. Proof. Suppose the result is not true, that is there are prices p and income m, for which x(p, m) 6= mx(p, 1), that is mx(p, 1) does not solve the utility maximising problem with income m. As x(p, 1) solves the problem with income 1, it must satisfy


the budget constraint with income 1, so: p1 x1 (p, 1) + p2 x2 (p, 1) + · · · + pn xn (p, 1) ≤ 1 and be in R2+ , so xi (p, 1) ≥ 0 for i = 1, . . . , n. This implies that if m > 0: p1 (mx1 (p, 1)) + p2 (mx2 (p, 1)) + · · · + pn (mxn (p, 1)) ≤ m and mxi (p, 1) ≥ 0. Thus mx(p, 1) lies in the budget set with prices p and income m. As mx(p, 1) does not solve the utility maximising problem but x(p, m) does: u(x(p, m)) > u(mx(p, 1)).

(4.3)

Because u is homogeneous of degree d: u(x(p, m)) = u(mm−1 x(p, m)) = md u(m−1 x(p, m)) and: u(mx(p, 1)) = md u(x(p, 1)) so from (4.3): md u(m−1 x(p, m)) = u(x(p, m)) > u(mx(p, 1)) = md u(x(p, 1)) so dividing by md : u(m−1 x(p, m)) > u(x(p, 1)). But this is impossible because as x(p, m) solves the problem with income m, x(p, m) is feasible with income m implying that m−1 x(p, m) is feasible with income 1, so as x(p, 1) solves the problem with income 1: u(x(p, 1)) ≥ u(m−1 x(p, m)). Thus the assumption that the result does not hold leads to contradiction, so the result must hold.  Activity 4.5 Consider the utility function xa1 xb2 where a > 0, b > 0 and a + b = 1. From Activity 4.4 this function is homogeneous of degree one. (a) Find the MRS and show that it is homogeneous of degree zero in (x1 , x2 ). (b) Show that: x1 (p1 , p2 , m) = mx1 (p1 , p2 , 1) x2 (p1 , p2 , m) = mx2 (p1 , p2 , 1).
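Theorem 13 can be illustrated by solving the utility maximisation problem numerically. The sketch below uses scipy.optimize.minimize (SLSQP) with the homogeneous utility function u = x1^0.5 x2^0.5; the prices, income levels and starting point are illustrative choices, not part of the guide.

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([2.0, 5.0])

def solve_consumer(m):
    # maximise u = x1^0.5 x2^0.5 subject to p.x <= m, x >= 0
    obj = lambda x: -np.sqrt(x[0] * x[1])
    cons = [{"type": "ineq", "fun": lambda x: m - p @ x}]   # m - p.x >= 0
    res = minimize(obj, x0=[m / (2 * p[0]), m / (2 * p[1])],
                   bounds=[(0, None), (0, None)], constraints=cons,
                   method="SLSQP")
    return res.x

x_at_1 = solve_consumer(1.0)
x_at_10 = solve_consumer(10.0)
print(x_at_10, 10 * x_at_1)   # the two vectors should (approximately) coincide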

4.9 Homothetic functions

4.9.1 Homogeneity and homotheticity

Homotheticity is a weaker condition than homogeneity.


Definition 22 A function f from Rn+ into R is homothetic if it has the form: f (x) = q(r(x)) where r is a function that is homogeneous of degree one and q is a strictly increasing function.

Proposition 9 Any homogeneous function of degree d > 0 is homothetic.

Proof. If f(x1, x2, ..., xn) is homogeneous of degree d > 0, so for any s > 0:

f(sx1, sx2, ..., sxn) = s^d f(x1, x2, ..., xn)

let:

r(x1, x2, ..., xn) = (f(x1, x2, ..., xn))^(1/d)

which implies that:

r(sx1, sx2, ..., sxn) = (f(sx1, sx2, ..., sxn))^(1/d) = (s^d f(x1, x2, ..., xn))^(1/d) = s (f(x1, x2, ..., xn))^(1/d) = s r(x1, x2, ..., xn)

so r(x1, x2, ..., xn) is homogeneous of degree one and:

f(x1, x2, ..., xn) = q(r(x1, x2, ..., xn))

where q(r) = r^d, so q is a strictly increasing function of r. 

4.9.2

Indifference curves with homothetic utility functions

Indifference curves are the level sets of the utility function, so an indifference curve is a set of the form:  (x1 , x2 ) : (x1 , x2 ) ∈ R2+ , u(x1 , x2 ) = u0 . Now suppose that the utility function u(x1 , x2 ) is homothetic, so: u(x1 , x2 ) = q(r(x1 , x2 )) where q is a strictly increasing function and r is a function that is homogeneous of degree one. Thus u(x1 , x2 ) = u(x01 , x02 ) implies that q(r(x1 , x2 )) = q(r(x01 , x02 )) which as q is strictly increasing requires that r(x1 , x2 ) = r(x01 , x02 ), so if u0 = q(r0 ) the sets:  (x1 , x2 ) : (x1 , x2 ) ∈ R2+ , u(x1 , x2 ) = u0  (x1 , x2 ) : (x1 , x2 ) ∈ R2+ , r(x1 , x2 ) = r0


are the same. Thus any indifference curves associated with the homothetic utility function u is also a level set of the homogeneous function r(x1 , x2 ). Diagrams of the indifference curves and level sets for homothetic and homogeneous utility functions look the same, although the number associated with each indifference curve is different for the two functions u and r.

4.9.3

Marginal rates of substitution with homothetic utility functions

The fact that the indifference curves for homothetic and homogeneous utility functions are the same implies that the geometric properties of the two diagrams are the same. In particular homothetic utility functions should share with homogeneous utility functions the property that moving along a ray the marginal rate of substitution does not change. This is indeed so. The formal result is:

Theorem 14 Suppose u is a homothetic function from R^2_+ into R, so u(x1, x2) = q(r(x1, x2)) where q is strictly increasing and r is homogeneous of degree one. Suppose also that the functions q and r are differentiable, and that for all (x1, x2) in R^2_+, q′(r(x1, x2)) > 0 and r2(x1, x2) ≠ 0. Then u2(x1, x2) ≠ 0, u2(sx1, sx2) ≠ 0 and:

u1(sx1, sx2)/u2(sx1, sx2) = u1(x1, x2)/u2(x1, x2)

that is the marginal rate of substitution is homogeneous of degree zero in (x1, x2).

Proof. As u(x1, x2) = q(r(x1, x2)) the chain rule implies that:

u1(x1, x2) = q′(r(x1, x2)) r1(x1, x2)   (4.4)
u2(x1, x2) = q′(r(x1, x2)) r2(x1, x2).   (4.5)

From the assumptions on q′ and r2 the terms in the second equation are not zero and division gives:

u1(x1, x2)/u2(x1, x2) = [q′(r(x1, x2)) r1(x1, x2)]/[q′(r(x1, x2)) r2(x1, x2)] = r1(x1, x2)/r2(x1, x2).   (4.6)

This argument applies to all points in R^2_+. In particular it applies to (sx1, sx2) so:

u1(sx1, sx2)/u2(sx1, sx2) = r1(sx1, sx2)/r2(sx1, sx2).   (4.7)

As r(x1, x2) is homogeneous of degree one, Theorem 11 implies that the derivatives of r are homogeneous of degree zero, so:

r1(x1, x2) = r1(sx1, sx2)
r2(x1, x2) = r2(sx1, sx2)

so from (4.6) and (4.7):

u1(x1, x2)/u2(x1, x2) = u1(sx1, sx2)/u2(sx1, sx2). 

4.9.4

Homothetic utility and uncompensated demand

Theorem 13 says that if the utility function is homogeneous then for all prices (p1 , p2 , . . . , pn ) and income m > 0: xi (p1 , p2 , . . . , pn , m) = mxi (p1 , p2 , . . . , pn , 1) for i = 1, 2, . . . , n or in vector notation: x(p, m) = mx(p, 1). A similar result applies to homothetic functions. Theorem 15 If the utility function is homothetic then for all prices (p1 , p2 , . . . , pn ) and income m > 0: xi (p1 , p2 , . . . , pn , m) = mxi (p1 , p2 , . . . , pn , 1) for i = 1, 2, . . . , n or in vector notation: x(p, m) = mx(p, 1). Proof. Given the definition of a homethetic function if the utility function u(x) is homothetic u(x) = q(r(x)) where q is a strictly increasing function and r is homogeneous of degree one. Then from Theorem 7 r(x) is also a utility function representing the same preferences as u(x). From Theorem 9 the solutions to the utility maximising problem with utility function u(x) are the same as the solutions to the utility maximising problem with utility function r(x). As r(x) is homogeneous Theorem 13 implies that the solutions satisfy: x(p, m) = mx(p, 1). Thus uncompensated demand with both homogeneous and homothetic utility functions has the property that the proportions in which goods are consumed do not change as income increases with prices remaining the same.  Activity 4.6 For each of the following functions state whether or not it is homothetic in (x1 , x2 ). (a) cx31 + dx21 x2 + ex1 x22 + f x31 (b) (cx31 + dx21 x2 + ex1 x22 + f x31 )/(ax1 + bx2 )3 (c) xα1 xβ2 where α > 0 and β > 0, do not assume that α + β = 1 (d) α ln x1 + β ln x2 where α > 0, β > 0.


Activity 4.7 Find the marginal rate of substitution for the utility function u(x1 , x2 ) = (x1 + 1)a (x2 + 1)b . Is this utility function homothetic?

4.10

Solutions to activities

Solution to Activity 4.1

(a) a(sx1) + b(sx2) = s(ax1 + bx2) so the function ax1 + bx2 is homogeneous of degree one.

(b) (a(sx1) + b(sx2))^2 = s^2 (ax1 + bx2)^2 so the function (ax1 + bx2)^2 is homogeneous of degree 2.

(c) a(sx1)^2 + b(sx2) = s^2 ax1^2 + s bx2, so the function ax1^2 + bx2 is not homogeneous unless a = 0 or b = 0. To see why, consider the case where a = b = 1, so f(x1, x2) = x1^2 + x2, f(1, 1) = 2, f(2, 2) = 6 and f(4, 4) = 20. If the function were homogeneous of degree d then:

6 = f(2, 2) = 2^d f(1, 1) = 2^d (2)
20 = f(4, 4) = 4^d f(1, 1) = (4^d)(2).

The first of these equations implies that 4^d = 2^(2d) = 9, the second that 4^d = 10, so there is no d that satisfies both of these equations.

(d) c(sx1)^3 + d(sx1)^2(sx2) + e(sx1)(sx2)^2 + f(sx2)^3 = s^3 (cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3) so the function cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3 is homogeneous of degree 3.

(e) [c(sx1)^3 + d(sx1)^2(sx2) + e(sx1)(sx2)^2 + f(sx2)^3] / (a(sx1) + b(sx2))^3 = s^3 (cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3) / [s^3 (ax1 + bx2)^3] = (cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3)/(ax1 + bx2)^3

so the function (cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3)/(ax1 + bx2)^3 is homogeneous of degree zero.

(f) (sx1)^α (sx2)^β = s^(α+β) (x1^α x2^β) so the function x1^α x2^β is homogeneous of degree α + β.

(g) c ln(sx1) + d ln(sx2) = c ln x1 + d ln x2 + (c + d) ln s so the function c ln x1 + d ln x2 is not homogeneous.


Solution to Activity 4.2 For any s > 0 the values of (x1 , x2 ) that satisfy the budget constraint p1 x1 + p2 x2 ≤ m are the same as the values that satisfy that constraint (sp1 )x1 + (sp2 )x2 ≤ (sm). In particular this applies when s = 2. The non-negativity constraints x1 ≥ 0 and x2 ≥ 0 are unchanged by changes in prices and income. Thus the budget line and budget set do not change when (p1 , p2 , m) is replaced by (sp1 , sp2 , sm), so the point (x1 , x2 ) that maximises u(x1 , x2 ) on the budget set does not change, implying that: x1 (p1 , p2 , m) = x1 (sp1 , sp2 , sm) x2 (p1 , p2 , m) = x2 (sp1 , sp2 , sm) and: v(p1 , p2 , m) = u(x1 (p1 , p2 , m), x2 (p1 , p2 , m)) = u(x1 (sp1 , sp2 , sm), x2 (sp1 , sp2 , sm)) = v(sp1 , sp2 , sm) so v(p1 , p2 , m) = v(sp1 , sp2 , sm). Thus uncompensated demand and the indirect utility function are homogeneous of degree zero in (p1 , p2 , m). Under the assumptions of this model if all prices and income increase by 50% it has no effect on utility. Real world inflation is not like this, the prices of different goods and the incomes of different people change by different percentages so some people will be better off and others worse off. In addition high rates of inflation are associated with considerable uncertainty which causes additional problems. Solution to Activity 4.3 For the Cobb-Douglas utility function:  v(p1 , p2 , m) =

(a^a b^b / (p1^a p2^b)) m

so:

v(sp1, sp2, sm) = (a^a b^b / ((sp1)^a (sp2)^b)) (sm) = (s/s^(a+b)) (a^a b^b / (p1^a p2^b)) m = (a^a b^b / (p1^a p2^b)) m = v(p1, p2, m)

using a + b = 1, so v(p1, p2, m) is homogeneous of degree zero in (p1, p2, m). Also:

∂v(p1, p2, m)/∂m = a^a b^b / (p1^a p2^b) > 0

∂v(p1, p2, m)/∂p1 = -a p1^(-a-1) p2^(-b) a^a b^b m < 0

so v(p1, p2, m) is increasing in m and decreasing in p1.

Solution to Activity 4.6

(a) The function cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3 is homogeneous of degree 3 > 0 so from Proposition 9 it is homothetic.

(b) The function:

(cx1^3 + dx1^2 x2 + ex1 x2^2 + fx2^3)/(ax1 + bx2)^3

is homogeneous of degree zero. It is not homothetic because no function that is homogeneous of degree zero is homothetic. To see why this is, note that homotheticity implies that if s > 1:

f(sx1, sx2, ..., sxn) = q(r(sx1, sx2, ..., sxn)) = q(s r(x1, x2, ..., xn)) > q(r(x1, x2, ..., xn)) = f(x1, x2, ..., xn)

because the function q is strictly increasing, whereas homogeneity of degree zero implies that:

f(sx1, sx2, ..., sxn) = f(x1, x2, ..., xn).


(c) The function α ln x1 + β ln x2 where α > 0, β > 0 is homothetic because:

α ln x1 + β ln x2 = ln x1^α + ln x2^β = ln(x1^α x2^β) = ln[(x1^(α/(α+β)) x2^(β/(α+β)))^(α+β)] = (α + β) ln(x1^(α/(α+β)) x2^(β/(α+β))).

The function x1^(α/(α+β)) x2^(β/(α+β)) is homogeneous of degree one, and the function (α + β) ln r is strictly increasing in r, so α ln x1 + β ln x2 is homothetic.

Solution to Activity 4.7

With the utility function u(x1, x2) = (x1 + 1)^a (x2 + 1)^b:

MRS = (∂u/∂x1)/(∂u/∂x2) = [a(x1 + 1)^(a-1)(x2 + 1)^b]/[b(x1 + 1)^a(x2 + 1)^(b-1)] = a(x2 + 1)/(b(x1 + 1)).

From Theorem 14, if the function u(x1, x2) = (x1 + 1)^a (x2 + 1)^b were homothetic the MRS would be homogeneous of degree zero in (x1, x2), which it is not. For example at (1, 2) the MRS is 3a/2b, whereas at (2, 4) = 2(1, 2) the MRS is 5a/3b. So this utility function is not homothetic.

A reminder of your learning outcomes

By the end of this chapter, you should be able to:
- state the definitions of homogeneous and homothetic functions
- explain why the uncompensated demand and indirect utility functions are homogeneous of degree zero in prices and income
- determine when the indirect utility function is increasing or non-decreasing in income and decreasing or non-increasing in prices
- explain the relationship between the homogeneity of a function and its derivative
- describe the implications of homogeneous and homothetic utility functions for the shape of indifference curves, the marginal rate of substitution and uncompensated demand.


4.11

Sample examination questions

Note that many of the activities of this chapter are also examples of examination questions. 1. (a) Define the terms homogeneous and homothetic. (b) Are all homogeneous functions homothetic? (c) Are all homothetic functions homogeneous? (d) Suppose a consumer has a homogeneous utility function, income doubles, but prices do not change. What happens to uncompensated demand? (e) Suppose a consumer has a homothetic utility function, income does not change, but prices fall by 50%. What happens to uncompensated demand? 2. Which of the following statements are true? Explain your answers. (a) Uncompensated demand is homogeneous of degree zero in prices and income. (b) The indirect utility function is homogeneous of degree one in prices and income. (c) All utility functions are homogeneous. (d) All homogeneous utility functions are homothetic.


Chapter 5 Quasiconcave and quasiconvex functions

5.1 Aim of the chapter

The aim of this chapter is to introduce the ideas of quasiconcavity and quasiconvexity, show why they are important for consumer and producer theory, and to show that the Kuhn-Tucker Theorem applies to maximisation problems with a quasiconcave objective function and quasiconvex constraint functions.

5.2

Learning outcomes

By the end of this chapter, you should be able to:
- state the formal definitions and explain the implications of quasiconcave and quasiconvex functions
- explain the relationship between quasiconcave and concave functions
- explain the relationship between quasiconvex and convex functions
- explain the relationship between concavity and returns to scale for production functions
- explain the relationship between the convexity assumption in consumer theory and quasiconcavity of the utility function
- explain the relationship between the set on which a quasiconcave function satisfies f(x) ≥ f(y) and the tangent line
- prove the sufficiency of the Kuhn-Tucker conditions in the case of a quasiconcave objective function and quasiconvex constraints
- use the Kuhn-Tucker conditions to solve optimisation problems with a quasiconcave objective function and quasiconvex constraints.

5.3

Essential reading

This chapter is self-contained and therefore there is no essential reading assigned. But you may find further reading very helpful.


5.4

Further reading

Dixit, A.K. Optimization in Economic Theory. Chapter 6. Sydsæter, K., P. Hammond, A. Seierstad and A. Strøm Further mathematics for economic analysis. Chapter 2.2–2.5. Any textbook covering multivariate calculus will have some discussion of homogeneous functions.

5.5 Definitions

5.5.1 Concavity and convexity

Convex sets, concave functions and convex functions were defined in Chapter 1. The definitions are given again here as foundations for the new ideas of quasiconcavity and quasiconvexity. Informally a set is convex if any straight line joining two points in the set lies entirely within the set. The formal definition is:

Definition 23 A subset U of R^n is convex if for all x and y in U and for all t ∈ [0, 1]: tx + (1 - t)y ∈ U.

A function is concave if a straight line joining any two points on its graph lies entirely on or below the graph. Formally:

Definition 24 A real-valued function f defined on a convex subset U of R^n is concave if for all x and y in U and for all t ∈ [0, 1]: f(tx + (1 - t)y) ≥ tf(x) + (1 - t)f(y).

A function is convex if a straight line joining any two points on its graph lies entirely on or above the graph. Formally:

Definition 25 A real-valued function f defined on a convex subset U of R^n is convex if for all x and y in U and for all t ∈ [0, 1]: f(tx + (1 - t)y) ≤ tf(x) + (1 - t)f(y).


Figure 5.1: A quasiconcave function that is not concave.

5.5.2

Quasiconcavity and quasiconvexity

These are new ideas. Definition 26 A real-valued function f defined on a convex subset U of Rn is quasiconcave if for all y in U the set: {x : x ∈ U, f (x) ≥ f (y)} is convex. In words this says that the function f (x) is quasiconcave if the set of points with f (x) ≥ f (y) is convex. Figure 5.1 shows the graph of a quasiconcave function from R into R. In this figure the set of points with f (x) ≥ f (y1 ) is the set of points with y1 ≤ x ≤ y2 , which is as required convex. Quasiconcavity is a weaker concept than concavity as illustrated by Figure 5.1 where the function is quasiconcave but not concave. Quasiconvexity can be defined in a similar way to quasiconcavity. Definition 27 A real-valued function f defined on a convex subset U of Rn is quasiconvex if for all y in U the set: {x : x ∈ U, f (x) ≤ f (y)} is convex. Figure 5.2 shows the sets with u(x) ≥ u(y) and f (x) ≤ f (z) for the quasiconcave function u and the quasiconvex function f from R2+ into R. As required by the definitions these sets are convex.
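Quasiconcavity can be probed numerically using the standard equivalent characterisation that f(tx + (1 - t)y) ≥ min(f(x), f(y)) for all t ∈ [0, 1]; this is equivalent to the level-set definition above. The sketch below tests random segments for two illustrative functions on R^2_+; the functions and sample ranges are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def seems_quasiconcave(f, trials=2000):
    # Sample pairs of points and check f(tx + (1-t)y) >= min(f(x), f(y)).
    for _ in range(trials):
        x, y = rng.uniform(0.1, 5.0, 2), rng.uniform(0.1, 5.0, 2)
        t = rng.uniform()
        if f(t * x + (1 - t) * y) < min(f(x), f(y)) - 1e-9:
            return False
    return True

cobb_douglas = lambda z: z[0] ** 0.7 * z[1] ** 0.8   # quasiconcave
sum_of_squares = lambda z: z[0] ** 2 + z[1] ** 2      # convex, so quasiconvex, not quasiconcave
print(seems_quasiconcave(cobb_douglas))    # expected: True
print(seems_quasiconcave(sum_of_squares))  # expected: False (a violating segment is found)
```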


Figure 5.2: Level sets for a quasiconcave function u(x) and a quasiconvex function f (x).

Activity 5.1 Prove that for any quasiconvex function f (x) the function −f (x) is quasiconcave. Activity 5.2

Could a quasiconvex function defined on R be increasing?

Activity 5.3

Could a quasiconvex function increase and then decrease?

5.6 Quasiconcavity and concavity; quasiconvexity and convexity

5.6.1 The relationship

Figure 5.1 shows a function that is quasiconcave but not concave, so quasiconcavity and concavity are not equivalent properties. The following theorem shows that concavity and convexity are respectively stronger than quasiconcavity and quasiconvexity.

Theorem 16 All concave functions are quasiconcave and all convex functions are quasiconvex.

Proof. If f is a concave function, f(x1) ≥ f(y) and f(x2) ≥ f(y), then if t ∈ [0, 1]:

f(tx1 + (1 - t)x2) ≥ tf(x1) + (1 - t)f(x2) ≥ tf(y) + (1 - t)f(y) = f(y)

so:

f(tx1 + (1 - t)x2) ≥ f(y)

which is what is needed to imply that f is quasiconcave. If f is convex then -f is concave, so -f is quasiconcave, which implies that f is quasiconvex. 


In fact a stronger result holds. Theorem 17 If r is a concave function, q a strictly increasing function, and f (x) = q(r(x)) then f (x) is a quasiconcave function. Similarly if r is a convex function, q a strictly increasing function, and f (x) = q(r(x)) then f (x) is a quasiconvex function. Proof. If f (x1 ) ≥ f (y) and f (x2 ) ≥ f (y) then: f (x1 ) = q(r(x1 )) ≥ q(r(y)) = f (y) f (x2 ) = q(r(x2 )) ≥ q(r(y)) = f (y). As q is a strictly increasing function this implies that r(x1 ) ≥ r(y) and r(x2 ) ≥ r(y). As r is a concave function, if t ∈ [0, 1]: r(tx1 + (1 − t)x2 ) ≥ tr(x1 ) + (1 − t)r(x2 ) ≥ tr(y) + (1 − t)r(y) = r(y) so: r(tx1 ) + (1 − t)x2 ) ≥ r(y). As q is a strictly increasing function and f (x) = q(r(x)) this implies that: f (tx1 + (1 − t)x2 ≥ f (y). This argument establishes that if f (x1 ) ≥ f (y), f (x2 ) ≥ f (y) and t ∈ [0, 1] then f (tx1 + (1 − t)x2 ) ≥ f (y) which is what is needed to establish that the set: {x : x ∈ U, f (x) ≥ f (y)} is convex so the function f is quasiconcave. Similarly if r is convex, and q is strictly increasing f (x1 ≤ f (y)) and f (x2 ) ≤ f (y) implies that r(x1 ) ≤ r(y) and r(x2 ) ≤ r(y). As r is a convex function, if t ∈ [0, 1]: r(tx1 ) + (1 − t)x2 ) ≤ tr(x1 ) + (1 − t)r(x2 ) ≤ tr(y) + (1 − t)r(y) = r(y) so as q is strictly increasing f (x1 ) ≤ f (y) and f (x2 ≤ f (y)) implying that: f (tx1 ) + (1 − t)x2 ) = q(r(tx1 ) + (1 − t)x2 )) ≤ q(r(y)) = f (y) so f is quasiconvex. 

5.6.2

Quasiconcavity in producer theory

Producer theory works with production functions f (K, L) which give output as a function of inputs K (capital) and L (labour). For example the Cobb-Douglas production function is f (K, L) = K α Lβ where α > 0 and β > 0. One important aspect of a production function is returns to scale.


Definition 28 A production function f(K, L) has increasing returns to scale if for any s > 1, f(sK, sL) > sf(K, L). A production function f(K, L) has constant returns to scale if for any s > 1, f(sK, sL) = sf(K, L). A production function f(K, L) has decreasing returns to scale if for any s > 1, f(sK, sL) < sf(K, L).

As an example of returns to scale consider the Cobb-Douglas production function f(K, L) = K^α L^β defined on R^2_+ where α > 0 and β > 0. Do not assume that α + β = 1. Then:

f(sK, sL) = (sK)^α (sL)^β = s^(α+β) K^α L^β = s^(α+β) f(K, L).

If s > 1, s^(α+β) > s if α + β > 1, s^(α+β) = s if α + β = 1, and s^(α+β) < s if α + β < 1, so:

The Cobb-Douglas production function K^α L^β has increasing returns to scale if α + β > 1.
The Cobb-Douglas production function K^α L^β has constant returns to scale if α + β = 1.
The Cobb-Douglas production function K^α L^β has decreasing returns to scale if α + β < 1.

There is an important link between returns to scale and concavity.

Proposition 10 If the production function f(K, L) has f(0, 0) = 0 and is concave then it has decreasing or constant returns to scale.

Proof. Suppose that s > 1 so 0 < 1/s < 1. Then concavity of f(K, L) implies that:

f(K, L) = f((1/s)(sK, sL) + (1 - 1/s)(0, 0)) ≥ (1/s) f(sK, sL) + (1 - 1/s) f(0, 0) = (1/s) f(sK, sL)

so f(sK, sL) ≤ sf(K, L).

This result means that limiting attention to concave production functions rules out increasing returns to scale, which are an important economic phenomenon. However production functions can be quasiconcave without being concave. If α + β > 1 the Cobb-Douglas production function has increasing returns to scale, so is not concave. However it is quasiconcave because:

K^α L^β = q(r(K, L))


where r(K, L) = K^a L^b with:

a = α/(α + β)
b = β/(α + β)

so a > 0, b > 0 and a + b = 1, and:

q(r) = r^(α+β).

From Chapter 1 r(K, L) is concave in (K, L). Because α + β > 0, q(r) is a strictly increasing function. Thus from Theorem 17 K^α L^β is a quasiconcave function. 
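The returns-to-scale classification of the Cobb-Douglas production function is easy to verify numerically. The sketch below compares f(sK, sL) with s·f(K, L) for α + β greater than, equal to and less than one; the input values are illustrative.

```python
def f(K, L, alpha, beta):
    return K ** alpha * L ** beta

K, L, s = 4.0, 9.0, 2.0
for alpha, beta in [(0.8, 0.5), (0.4, 0.6), (0.3, 0.5)]:
    scaled = f(s * K, s * L, alpha, beta)
    proportional = s * f(K, L, alpha, beta)
    # increasing returns:  scaled > proportional  (alpha + beta > 1)
    # constant returns:    scaled = proportional  (alpha + beta = 1)
    # decreasing returns:  scaled < proportional  (alpha + beta < 1)
    print(alpha + beta, scaled, proportional)
```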

5.6.3

Quasiconcavity in consumer theory

The convexity assumption for consumer theory, stated in terms of preferences, is that the set of points preferred to y, that is:

{x : x ∈ R^n_+, x ≽ y}

is convex. The convexity assumption on preferences is equivalent to a requirement that for any utility function u(x) representing the preferences the set:

{x : x ∈ R^n_+, u(x) ≥ u(y)}

is convex. But this is the condition that the utility function be quasiconcave. Thus quasiconcavity of the utility function is the natural concavity requirement for consumer theory.

Activity 5.4 Find the first and second derivatives and draw a graph of the function:

f(x) = exp(-x^2/2)

where exp is the exponential function.

Activity 5.5 Consider the function f(x) = exp(-x^2/2) defined on the entire real line.

(a) Is the function concave or convex?
(b) Is the function quasiconcave?
(c) Is the function quasiconvex?

Activity 5.6 Consider the function f(x) = exp(-x^2/2) defined on the interval [0, ∞), that is the set of real numbers with x ≥ 0.

(a) Is the function concave or convex?
(b) Is the function quasiconcave?
(c) Is the function quasiconvex?

5.7 Tangents and sets with quasiconcave and quasiconvex functions

5.7.1 The result

Theorem 18 proved in this chapter says that, as shown in Figure 5.3, the line which is the tangent at x∗ to the set on which the quasiconcave function g(x) ≥ g(x∗ ) does not cross the boundary of the set. Similar conditions apply to quasiconvex functions. The results are important because they imply that the Kuhn-Tucker Theorem applies to optimisation problems with a quasiconcave objective function and quasiconvex constraint functions provided the derivatives of the objective function are not all zero.

Figure 5.3: Quasiconcave functions and tangents.

The formal statement of the theorem comes next, followed by a discussion of the geometric interpretation of the theorem, and then the proof.


Theorem 18

i. If g is a differentiable quasiconcave function defined on a convex subset of R^n, g(x) ≥ g(x∗) and at least one of the partial derivatives of g(x) at x∗ is not zero then:

Σ_{i=1}^{n} (xi - xi∗) ∂g(x∗)/∂xi ≥ 0

or in vector notation: (x - x∗)Dg(x∗) ≥ 0.

ii. If h is a differentiable quasiconvex function defined on a convex subset of R^n, h(x) ≤ h(x∗) and at least one of the partial derivatives of h(x) at x∗ is not zero then:

Σ_{i=1}^{n} (xi - xi∗) ∂h(x∗)/∂xi ≤ 0

or in vector notation: (x - x∗)Dh(x∗) ≤ 0.

iii. If g is a differentiable quasiconcave function defined on a convex subset of R^n, g(x) > g(x∗) and at least one of the partial derivatives of g(x) at x∗ is not zero then:

Σ_{i=1}^{n} (xi - xi∗) ∂g(x∗)/∂xi > 0

or in vector notation: (x - x∗)Dg(x∗) > 0.

5.7.2

Interpreting Theorem 18

In R2 Theorem 18 says that if g is quasiconcave, not all the derivatives of g(x) at x∗ are zero, and g(x) ≥ g(x∗ ) then:

x1 ∂g(x∗)/∂x1 + x2 ∂g(x∗)/∂x2 ≥ x1∗ ∂g(x∗)/∂x1 + x2∗ ∂g(x∗)/∂x2.   (5.1)

This means that all the points with g(x) ≥ g(x∗) lie on the same side of (or on) the line that is tangent to the set at x∗, as shown in Figure 5.3 where the tangent is the solid straight line and the set is the shaded area. The set is convex because g(x) is quasiconcave. Another way of looking at the result is that if a line is tangent at x∗ to the set on which g(x) ≥ g(x∗) the line may touch the boundary of the set at other points, but it cannot cross the boundary. Part 2 of the theorem gives the corresponding result for quasiconvex functions, and part 3 states that if g(x) > g(x∗) the inequality in (5.1) is strict. The difference between the first and third parts of the theorem is that the inequalities are weak in the first part and strict in the third part.
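The inequality (5.1) can be checked numerically for a particular quasiconcave function. The sketch below uses g(x1, x2) = x1^0.7 x2^0.8 (quasiconcave but not concave), fixes x∗, samples points with g(x) ≥ g(x∗) and verifies that (x - x∗)·Dg(x∗) ≥ 0; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    return x[0] ** 0.7 * x[1] ** 0.8

x_star = np.array([2.0, 3.0])
# Gradient of g at x_star (computed analytically for this example).
Dg = np.array([0.7 * x_star[0] ** -0.3 * x_star[1] ** 0.8,
               0.8 * x_star[0] ** 0.7 * x_star[1] ** -0.2])

violations = 0
for _ in range(5000):
    x = rng.uniform(0.1, 6.0, 2)
    if g(x) >= g(x_star) and (x - x_star) @ Dg < -1e-9:
        violations += 1
print(violations)   # expected: 0, consistent with part i of Theorem 18
```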


5.7.3

Proof of Theorem 18

The first part of the theorem is proved as Theorem 2.5.4 of Sydsæter et al. If you do not have Sydsæter, here is a proof for R2 that works with Figure 5.3.

Proof. Consider a point y with g(y) ≥ g(x∗) as shown in Figure 5.3. Because g is quasiconcave and g(y) ≥ g(x∗), all points of the form ty + (1 - t)x∗ with 0 < t < 1 lying on the dashed line joining y and x∗ satisfy:

g(ty + (1 - t)x∗) ≥ g(x∗).   (5.2)

As (tyi + (1 - t)xi∗) - xi∗ = t(yi - xi∗), if:

Σ_{i=1}^{n} (yi - xi∗) ∂g(x∗)/∂xi < 0   (5.3)

then for any t with 0 < t < 1:

Σ_{i=1}^{n} ((tyi + (1 - t)xi∗) - xi∗) ∂g(x∗)/∂xi = t Σ_{i=1}^{n} (yi - xi∗) ∂g(x∗)/∂xi < 0.   (5.4)

Because g is differentiable, close to x∗:

g(x) ≈ g(x∗) + Σ_{i=1}^{n} (xi - xi∗) ∂g(x∗)/∂xi.

Recall that by assumption at least one of the partial derivatives of g at x∗ is not zero, in which case this approximation implies that if x is close enough to x∗ and:

Σ_{i=1}^{n} (xi - xi∗) ∂g(x∗)/∂xi < 0   (5.5)

then g(x) < g(x∗). Taking x = ty + (1 - t)x∗ with t close enough to zero, (5.4) shows that this point satisfies (5.5), so g(ty + (1 - t)x∗) < g(x∗), which contradicts (5.2). Thus (5.3) cannot hold, so:

Σ_{i=1}^{n} (yi - xi∗) ∂g(x∗)/∂xi ≥ 0

which establishes part 1 of the theorem. Part 2 follows by applying part 1 to the quasiconcave function -h. For part 3, part 1 already implies that g(x) > g(x∗) implies that:

Σ_{i=1}^{n} (xi - xi∗) ∂g(x∗)/∂xi ≥ 0

so what is needed is to rule out the possibility that there is a point z with g(z) > g(x∗) and:

Σ_{i=1}^{n} (zi - xi∗) ∂g(x∗)/∂xi = 0.

Figure 5.3 illustrates a point z satisfying this equation. Because g is differentiable small changes in x result in small changes in g(x), so as g(z) > g(x∗), all points x close enough to z, that is inside the dotted circle around z in Figure 5.3, have g(x) > g(x∗). But as Figure 5.3 shows this includes points on the opposite side of the tangent line, which satisfy (5.5) but have g(x) > g(x∗). This is impossible, given part 1 of the theorem. 

Activity 5.7 (a) Show on a graph the set on which x21 + x22 ≤ 25. Is this set convex? Is the function f (x1 , x2 ) = x21 + x22 quasiconcave or quasiconvex? (b) Find the equation of the line that is tangent to the set at the point (3, 4). Show the line on your graph. (c) Is there any point on this tangent line with x21 + x22 < 25? (d) The point (3, 4) solves the problem of maximising the function a1 x1 + a2 x2 subject to x21 + x22 ≤ 25 for some values of a1 and a2 . Find these values.


5.8

The Kuhn-Tucker Theorem with quasiconcavity and quasiconvexity

Theorem 19 Suppose that the function g defined on the convex subset U of R^n is quasiconcave and differentiable, and that the derivatives of g at x∗ are not all zero. Suppose also that the functions h1, ..., hm defined on U are quasiconvex and differentiable. Then the Kuhn-Tucker conditions are sufficient for x∗ to solve the problem of maximising g(x) on U subject to hj(x) ≤ kj for j = 1, ..., m. The conditions are:

i. The first order conditions are satisfied at x∗, that is there is a vector λ ∈ R^m such that the partial derivatives of the Lagrangian:

L(k∗, λ, x) = g(x) + λ[k∗ - h(x)] = g(x) + Σ_{j=1}^{m} λj [kj∗ - hj(x)]

at x∗ are zero, so:

∂g(x∗)/∂xi - Σ_{j=1}^{m} λj ∂hj(x∗)/∂xi = 0 for i = 1, 2, ..., n

or in vector notation:

∂L(k∗, λ, x∗)/∂x = Dg(x∗) - λDh(x∗) = 0.

ii. The multipliers are non-negative, that is λj ≥ 0 for j = 1, ..., m, or in vector notation λ ≥ 0.

iii. The vector x∗ is feasible, that is x∗ ∈ U and hj(x∗) ≤ kj∗ for j = 1, ..., m or in vector notation h(x∗) ≤ k∗.

iv. The complementary slackness conditions are satisfied, that is λj[kj∗ - hj(x∗)] = 0 for j = 1, ..., m or in vector notation λ[k∗ - h(x∗)] = 0.

Proof. Suppose that the theorem does not hold, that is there is a point x in U with g(x) > g(x∗) and hj(x) ≤ kj for j = 1, 2, ..., m. The derivatives of g at x∗ are not all zero; thus part 3 of Theorem 18 implies that as g(x) > g(x∗):

Σ_{i=1}^{n} (xi - xi∗) ∂g(x∗)/∂xi > 0.

If hj(x∗) < kj complementary slackness implies that λj = 0 so:

Σ_{i=1}^{n} (xi - xi∗) [λj ∂hj(x∗)/∂xi] = 0.   (5.6)

This equality also holds if all the derivatives of hj at x∗ are zero. If hj(x∗) = kj, then hj(x) ≤ kj = hj(x∗), so if the derivatives of hj at x∗ are not all zero the non-negativity of the multipliers and part 2 of Theorem 18 imply that:

Σ_{i=1}^{n} (xi - xi∗) [λj ∂hj(x∗)/∂xi] ≤ 0.

From the last two expressions:

Σ_{i=1}^{n} (xi - xi∗) [Σ_{j=1}^{m} λj ∂hj(x∗)/∂xi] ≤ 0.   (5.7)

From the Kuhn-Tucker first order conditions:

∂g(x∗)/∂xi = Σ_{j=1}^{m} λj ∂hj(x∗)/∂xi

so:

Σ_{i=1}^{n} (xi - xi∗) ∂g(x∗)/∂xi = Σ_{i=1}^{n} (xi - xi∗) [Σ_{j=1}^{m} λj ∂hj(x∗)/∂xi].

But the strict inequality derived from part 3 of Theorem 18 states that the left-hand side of this equation is strictly positive, whereas (5.7) states that the right-hand side is not positive, which is impossible. As this contradiction follows from the assumption that x∗ does not solve the problem, the result must hold. 
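The sketch below illustrates Theorem 19 on a small example: maximising the quasiconcave (but not concave) function g(x) = x1^0.8 x2^0.8 subject to a linear budget constraint, using scipy.optimize.minimize, and then reading off the multiplier to check the Kuhn-Tucker conditions. The prices, income and exponents are illustrative choices, not taken from the guide.

```python
import numpy as np
from scipy.optimize import minimize

p, m = np.array([1.0, 4.0]), 12.0

g = lambda x: x[0] ** 0.8 * x[1] ** 0.8                 # quasiconcave objective
res = minimize(lambda x: -g(x), x0=[3.0, 1.0],
               constraints=[{"type": "ineq", "fun": lambda x: m - p @ x}],
               bounds=[(1e-6, None), (1e-6, None)], method="SLSQP")
x = res.x
print(x)                         # expect roughly (6, 1.5): income split equally
# Recover the multiplier from the first order condition dg/dx1 = lam * p1.
dg_dx1 = 0.8 * x[0] ** -0.2 * x[1] ** 0.8
lam = dg_dx1 / p[0]
# Check the other first order condition and complementary slackness.
dg_dx2 = 0.8 * x[0] ** 0.8 * x[1] ** -0.2
print(dg_dx2 - lam * p[1])       # approximately 0
print(m - p @ x)                 # approximately 0: the constraint binds
```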

5.9

Solutions to activities

Solution to Activity 5.1

From the definition, if f is a quasiconvex function defined on the convex set U, then for all y in U the set on which f(x) ≤ f(y) is convex. But this is exactly the same as the set on which -f(x) ≥ -f(y), so this set is also convex for all y in U, which is what is required to make -f(x) a quasiconcave function.

Solution to Activity 5.2

A quasiconvex function defined on R could be increasing. In fact any increasing function defined on R is quasiconvex (it is also quasiconcave). This is because if f(x1) ≤ f(y), f(x2) ≤ f(y) and t ∈ [0, 1], then either x1 ≤ x2, in which case tx1 + (1 - t)x2 ≤ x2, so as f is increasing f(tx1 + (1 - t)x2) ≤ f(x2) ≤ f(y), or x1 > x2, in which case tx1 + (1 - t)x2 ≤ x1, so as f is increasing f(tx1 + (1 - t)x2) ≤ f(x1) ≤ f(y). In either case f(x1) ≤ f(y) and f(x2) ≤ f(y) imply that f(tx1 + (1 - t)x2) ≤ f(y), which is what is needed to show that f is quasiconvex.

Solution to Activity 5.3

A quasiconvex function cannot increase and then decrease as shown in Figure 5.1, because the set on which f(x) ≤ f(y1) includes y1 and y2 but not the points between y1 and y2, which have the form ty1 + (1 - t)y2 with t ∈ [0, 1]. Hence the set on which f(x) ≤ f(y1) is not convex, so the function f is not quasiconvex.


Figure 5.4: The graph of f(x) = exp(-x^2/2).



Solution to Activity 5.4

For the function f(x) = exp(-x^2/2) the first derivative is -x exp(-x^2/2) and the second derivative is (x^2 - 1) exp(-x^2/2). The graph is shown in Figure 5.4.

Solution to Activity 5.5

(a) The second derivative of exp(-x^2/2) is positive when x < -1 or x > 1 and negative when -1 < x < 1, so the function is neither concave nor convex on R, that is the entire real line.
(b) The function is quasiconcave because the set on which f(x) ≥ f(y) is the set of points with -y ≤ x ≤ y if y > 0, and the set of points with y ≤ x ≤ -y if y < 0, and these sets are convex.
(c) The function is not quasiconvex, because for example the set on which f(x) ≤ f(2) is the set of points with either x ≤ -2 or x ≥ 2, which is not convex.

Solution to Activity 5.6

(a) The second derivative of exp(-x^2/2) is positive when x > 1 and negative when 0 < x < 1, so the function defined on [0, ∞) is neither concave nor convex.
(b) The function is quasiconcave on [0, ∞) because the subset of [0, ∞) on which f(x) ≥ f(y) is the set of points with 0 ≤ x ≤ y, which is convex.
(c) The function is quasiconvex on [0, ∞) because the subset of [0, ∞) on which f(x) ≤ f(y) is the set of points with x ≥ y, which is convex.


Figure 5.5: The circle on which x1^2 + x2^2 ≤ 25 and its tangent at (3, 4).

Solution to Activity 5.7

(a) The set on which f(x1, x2) = x1^2 + x2^2 ≤ 25 is the set of points on or within the circle with radius 5 centred on the origin in Figure 5.5. The set is convex, and the function f(x1, x2) is quasiconvex.

(b) The tangent line at the point (3, 4) has equation:

(x1 - 3) ∂f(3, 4)/∂x1 + (x2 - 4) ∂f(3, 4)/∂x2 = 0.

As the partial derivatives are:

∂f(3, 4)/∂x1 = 6,  ∂f(3, 4)/∂x2 = 8

this line has equation:

6(x1 - 3) + 8(x2 - 4) = 0 or equivalently 6x1 + 8x2 = 50.

The line is shown in Figure 5.5.

(c) There are no points on this line with x1^2 + x2^2 < 25, but the point (3, 4) with x1^2 + x2^2 = 25 lies on this line.

(d) The point (3, 4) solves the problem of maximising 6x1 + 8x2 subject to x1^2 + x2^2 ≤ 25, so a1 = 6 and a2 = 8 (or any positive multiple of these values).
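As a quick numerical cross-check of part (d), the sketch below maximises 6x1 + 8x2 subject to x1^2 + x2^2 ≤ 25 with scipy and confirms that the maximiser is (approximately) the point (3, 4).

```python
from scipy.optimize import minimize

res = minimize(lambda x: -(6 * x[0] + 8 * x[1]), x0=[1.0, 1.0],
               constraints=[{"type": "ineq",
                             "fun": lambda x: 25 - x[0] ** 2 - x[1] ** 2}],
               method="SLSQP")
print(res.x)        # approximately [3, 4]
print(-res.fun)     # approximately 50, the value of 6x1 + 8x2 at the optimum
```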


A reminder of your learning outcomes

By the end of this chapter, you should be able to:
- state the formal definitions and explain the implications of quasiconcave and quasiconvex functions
- explain the relationship between quasiconcave and concave functions
- explain the relationship between quasiconvex and convex functions
- explain the relationship between concavity and returns to scale for production functions
- explain the relationship between the convexity assumption in consumer theory and quasiconcavity of the utility function
- explain the relationship between the set on which a quasiconcave function satisfies f(x) ≥ f(y) and the tangent line
- prove the sufficiency of the Kuhn-Tucker conditions in the case of a quasiconcave objective function and quasiconvex constraints
- use the Kuhn-Tucker conditions to solve optimisation problems with a quasiconcave objective function and quasiconvex constraints.

5.10

Sample examination questions

1. Find a solution to the consumer’s problem of maximising u(x) = xα1 xβ2 on R2+ subject to p1 x1 + p2 x2 ≤ m. Assume that α > 0 and β > 0 but do not assume that α + β = 1. 2. (a) A firm wants to find the combination of inputs K (capital) and L (labour) that minimise the cost rK + wL of producing output q subject to the production constraint K 3 L2 ≥ q. Assume that w = r = 1 and solve the problem. Find also the minimum cost of producing q. (b) A firm wants to find the combination of inputs K (capital) and L (labour) that minimise the cost rK + wL of producing output q subject to the production constraint K 3 L2 ≥ q and a constraint on the supply of capital ¯ As before assume that w = r = 1 and solve the problem. Find also the K ≤ K. minimum cost of producing q.

5.11

Comments on the sample examination questions

1. The solution is:

x1 = (α/(α + β)) (m/p1)
x2 = (β/(α + β)) (m/p2).


You solve the problem using exactly the same Lagrangian techniques given in Chapter 2 for the solution of the consumer’s problem with the utility function u(x1 , x2 ) = xa1 xb2 where a > 0, b > 0 and a + b = 1. However the theorems on optimisation in Chapter 2 work with a concave objective function and convex constraint functions. If α + β > 1 the objective function is quasiconcave but not concave. So you need to appeal to theorems on optimisation with quasiconcave and quasiconvex functions. You need to state that the objective function is quasiconcave and explain why. (This is covered in Example 5.2). The constraint function is linear so concave and b quasiconcave. The partial derivative ∂u/∂x1 = axa−1 1 x2 > 0. Thus the Kuhn-Tucker conditions are sufficient for a solution. You then find values of x1 , x2 and the multiplier that satisfy the Kuhn-Tucker conditions. 2. (a) As in Question 1 you need to name the theorem you are using, and check that the conditions hold. Here you are solving a constrained minimisation problem of minimising K + L with a ≥ constraint K 3 L2 ≥ q. You can turn this into a constrained maximisation problem with a ≤ constraint by multiplying by −1 so the objective is −K − L and the constraint is −K 3 L2 ≤ −q. The objective is linear so concave and quasiconcave. From the previous question the function K 3 L2 is quasiconcave, so −K 3 L2 is quasiconvex. Hence this is a problem of maximising a quasiconcave function subject to a quasiconvex constraint. The partial derivatives of the objective are not zero. Thus the Kuhn-Tucker conditions are sufficient for a solution. The Lagrangian is: L = −K − L + λ(−1 + K 3 L2 ). The first order conditions are: ∂L = −1 + 3λK 2 L2 = 0 ∂K ∂L = −1 + 2λK 3 L = 0 ∂L so: 3λK 2 L2 = 1 2λK 3 L = 1 which implies that: 2K =1 3L so K = (3/2)L. The first order conditions cannot be satisfied if λ = 0, so λ > 0 and complementary slackness implies that the constraint K 3 L2 ≥ q must be satisfied as an equality so q = K 3 L2 . As K = (3/2)L this implies that:  3/5 2 L= q 1/5 3 and as K = (3/2)L:  2/5 3 K= q 1/5 . 2


These values of K and L are feasible and satisfy the first order conditions. From the first order conditions the multiplier λ = (1/2)K −3 L−1 > 0 so the multiplier is nonnegative. Complementary slackness is satisfied because the constraint is satisfied as an equality. Thus the Kuhn-Tucker conditions are satisfied so this is a solution to the problem. The minimum cost of production is:  3/5  2/5 ! 2 3 q 1/5 . K +L= + 3 2 (b) This is the same problem as before except for the additional constraint that ¯ This constraint is linear so quasiconvex. From the argument in the K ≤ K. previous question all the conditions of the Kuhn-Tucker theorem with quasiconcave objective and quasiconvex constraints are satisfied. The Lagrangian is: ¯ − K). L = −K − L + λ(−q + K 3 L2 ) + µ(K If µ = 0 this is the same as the Lagrangian for the previous question, and :  2/5 3 q 1/5 K= 2  3/5 2 L= q 1/5 3 solve the Kuhn-Tucker conditions for the new problem provided K is feasible, ¯ so: which requires that K ≤ K,  2 2 ¯ 5. K q≤ 3 ¯ is satisfied The complementary slackness condition on the constraint K ≤ K because µ = 0. ¯ 5 this solution is not feasible. Looking for a solution However if q > (2/3)2 K ¯ the constraint K 3 L2 ≥ q is satisfied as an equality if: with K = K, ¯ −3/2 . L = q 1/2 K The first order condition for L is: −1 + 2λK 3 L = 0 which implies that: 1 1 ¯ −3/2 −1/2 K q = 2K 3 L 2 so the multiplier λ is non-negative. The first order condition for K is: λ=

−1 + 3λK 2 L2 − µ = 0 so:

90

3 ¯ −5/2 1/2 µ = 3λK 2 L2 − 1 = K q − 1. 2

5.11. Comments on the sample examination questions

The multiplier µ is thus non-negative if:

q ≥ (2/3)^2 K̄^5.

Complementary slackness is satisfied because the constraints K^3 L^2 ≥ q and K ≤ K̄ are satisfied as equalities. The minimum cost of production is:

K + L = K̄ + q^(1/2) K̄^(-3/2).

Summing up:

◦ If q ≤ (2/3)^2 K̄^5 the solution is K = (3/2)^(2/5) q^(1/5), L = (2/3)^(3/5) q^(1/5). The minimum cost of production is ((3/2)^(2/5) + (2/3)^(3/5)) q^(1/5).
◦ If q > (2/3)^2 K̄^5 the solution is K = K̄, L = q^(1/2) K̄^(-3/2). The minimum cost of production is K̄ + q^(1/2) K̄^(-3/2).

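A numerical cross-check of the cost-minimisation answers above is sketched below, assuming w = r = 1 as in the question; scipy is used to minimise K + L subject to K^3 L^2 ≥ q, with and without an illustrative capital limit K̄. The specific output level and capital limit are assumptions made for the example.

```python
from scipy.optimize import minimize

def min_cost(q, K_bar=None):
    cons = [{"type": "ineq", "fun": lambda x: x[0] ** 3 * x[1] ** 2 - q}]
    if K_bar is not None:
        cons.append({"type": "ineq", "fun": lambda x: K_bar - x[0]})
    res = minimize(lambda x: x[0] + x[1], x0=[1.4, 3.5],   # illustrative feasible start
                   bounds=[(1e-6, None), (1e-6, None)],
                   constraints=cons, method="SLSQP")
    return res.x, res.fun

q = 32.0
print(min_cost(q))                          # unconstrained capital
print((3 / 2) ** 0.4 * q ** 0.2, (2 / 3) ** 0.6 * q ** 0.2)   # analytic K and L
print(min_cost(q, K_bar=1.5))               # capital constraint binds
print(1.5, q ** 0.5 * 1.5 ** -1.5)          # analytic K = K_bar, L = q^(1/2) K_bar^(-3/2)
```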

Chapter 6 Expenditure and cost minimisation problems

6.1 Aim of the chapter

This chapter introduces the consumer’s expenditure minimisation problem, and derives the properties of its solution (compensated demand) and minimum value function (the expenditure function). It shows that the firm’s cost minimisaton problem is mathematically the same as the consumer’s expenditure minimisation problem. The solution to the firm’s cost minimisation problem (conditional factor demand) and the minimum cost function (the cost function), have the same mathematical properties as compensated demand and the expenditure function. Finally, the chapter explores the close relationship between the solutions to the consumer’s utility maximisation and expenditure minimisation problems, and shows that Roy’s identity and the Slutsky equation derive from this relationship.

6.2

Learning outcomes

By the end of this chapter, you should be able to:
- define compensated demand
- describe the relationship between compensated demand, uncompensated demand, income and substitution effects
- outline the properties of compensated demand
- state the definition and properties of the expenditure function
- explain the relationship between the firm's cost minimisation problem and the consumer's expenditure minimisation problem
- explain the relationship between the solutions to the consumer's utility maximisation problem and expenditure minimisation problem
- describe the relationship between uncompensated demand, the indirect utility function, compensated demand and the expenditure function
- state and derive Roy's identity
- state and derive the Slutsky equation.


Figure 6.1: Income and substitution effects.

6.3

Essential reading

This chapter is self-contained and therefore there is no essential reading assigned. But you may find further reading very helpful.

6.4

Further reading

Varian, H.R. Intermediate Microeconomics. For a reminder of the intermediate microeconomics treatment of consumer theory read Chapter 8 on the Slutsky equation, and Chapter 20 on cost minimisation. Varian, H.R. Microeconomic Analysis. For a more sophisticated treatment comparable to this chapter read Chapters 5, 6 and 9 or the relevant section of any mathematical treatment of the theory of the consumer and the firm.

6.5 Compensated demand and expenditure minimisation

6.5.1 Income and substitution effects

Any intermediate microeconomics textbook will show you how to decompose the effects of a change in the price of good 1 into income and substitution effects, as shown in Figure 6.1. The substitution effect takes the consumer from A to B. A is the original point where the original budget line with gradient −p1 /p2 is tangent to the indifference curve with utility x¯ and B is a point on the same indifference curve as A, which is tangent to a budget line with gradient −p01 /p2 where p01 is the new price of good 1. The income effect is the move from B to C, which results from a downward shift of the budget line with gradient −p01 /p2 .


Figure 6.2: The consumer’s expenditure minimisation problem.

6.5.2

Definition of compensated demand

Diagrams are helpful in understanding income and substitution effects, but it is also useful to have a mathematical formulation. This is done by considering the consumer’s expenditure minimisation problem where the objective is to minimise the consumer’s total expenditure p1 x1 + p2 x2 on R2+ subject to the constraint that the consumer has utility of at least u¯ so u(x1 , x2 ) ≥ u¯. Throughout this chapter the notation u(x1 , x2 ) indicates a function whereas u¯ is a number, the level of the constraint. We also assume that x1 ≥ 0 and x2 ≥ 0. This is a minimisation rather than a maximisation problem, with a ≥ rather than ≤ in the constraint. However it is straightforward to turn this into the standard form for constrained optimisation problems, maximising a function, with a ≤ constraint by multiplying the objective and constraint by −1, so the problem is to maximise −p1 x1 − p2 x2 subject to −u(x1 , x2 ) ≤ −¯ u, x1 ≥ 0 and x2 ≥ 0. Figure 6.2 illustrates the problem. The constraint requires that utility is u¯ or more. The diagram shows the indifference curve with utility u(x1 , x2 ) = u¯. Points on the indifference curve have utility u¯. Under the nonsatiation assumption, points above the indifference curve provide greater utility than u¯ while points below the indifference curve provide lower utility than u¯. The parallel straight lines L1 , L2 , L3 and L4 all have gradient −p1 /p2 . The solution to the expenditure minimisation problem is at the point A on the lowest budget line that meets the indifference curve. This is on the budget line L2 . The dotted budget line L1 gives a higher level of expenditure than L2 , the dashed budget lines L3 and L4 do not meet the indifference curve so have utility less than u¯. The minimum amount of money e that the consumer has to spend to get utility u¯ can be seen from the points where the budget line L2 meets the x1 -axis, which is e/p1 , or the point where L2 meets the x2 -axis, which is e/p2 .


Definition 29 Compensated demand is the solution to the problem of minimising expenditure p1 x1 + p2 x2 on R2+ subject to the utility constraint u(x1 , x2 ) ≥ u¯. It is a function of prices p1 and p2 and the utility level u¯. Compensated demand for goods 1 and 2 is written as h1 (p1 , p2 , u¯) and h2 (p1 , p2 , u¯). Compare this with uncompensated demand which is the solution to the problem of maximising utility u(x1 , x2 ) on R2+ subject to the budget constraint p1 x1 + p2 x2 ≤ m. It is a function of prices p1 and p2 and income m. Uncompensated demand for goods 1 and 2 is written as x1 (p1 , p2 , m) and x2 (p1 , p2 , m). When prices change but u¯ does not the consumer stays on the same indifference curve, so the change in compensated demand is the substitution effect. The change in uncompensated demand reflects both the income effect and the substitution effect.

6.5.3

Properties of compensated demand

Homogeneity of degree zero in prices  Compensated demand is homogeneous of degree zero in prices, so if s > 0 then: h1(p1, p2, u¯) = h1(sp1, sp2, u¯) and h2(p1, p2, u¯) = h2(sp1, sp2, u¯). The intuitive explanation for this is that compensated demand is found by finding the lowest budget line with gradient −p1/p2 that meets the indifference curve with utility u¯. As the gradient −p1/p2 of the budget line is not changed by multiplying both prices by s > 0, this budget line and the point at which it meets the indifference curve, that is compensated demand, do not change when both prices are multiplied by s > 0. More formally, if (x∗1, x∗2) is compensated demand, so solves the problem of minimising expenditure p1x1 + p2x2 on R2+ subject to u(x1, x2) ≥ u¯, then (x∗1, x∗2) ∈ R2+, u(x∗1, x∗2) ≥ u¯ and p1x∗1 + p2x∗2 ≤ p1x1 + p2x2 for all (x1, x2) with (x1, x2) ∈ R2+ and u(x1, x2) ≥ u¯. But as s > 0, p1x∗1 + p2x∗2 ≤ p1x1 + p2x2 implies that sp1x∗1 + sp2x∗2 ≤ sp1x1 + sp2x2, so (x∗1, x∗2) also solves the problem of minimising sp1x1 + sp2x2 on R2+ subject to u(x1, x2) ≥ u¯.

Downward sloping compensated demand curve  The single most important property of compensated demand is that if the price of a good increases then compensated demand for the good cannot increase, so if p′1 > p1 then h1(p′1, p2, u¯) ≤ h1(p1, p2, u¯); the compensated demand curve cannot slope upwards. It is quite easy to prove this result. Suppose that compensated demand with utility u¯ is (x1, x2) at prices (p1, p2) and (x′1, x′2) at prices (p′1, p2). These points are solutions to the expenditure minimisation problem with utility level u¯ but different prices. Thus both (x1, x2) and (x′1, x′2) must give utility at least u¯. The point (x1, x2) is the solution to the expenditure minimisation problem with prices (p1, p2) and utility u¯. This implies that it cannot be more expensive at these prices than any other bundle that gives utility


u¯ or more, and in particular it cannot be more expensive than (x′1, x′2). Thus: p1x1 + p2x2 ≤ p1x′1 + p2x′2. For the same reason: p′1x′1 + p2x′2 ≤ p′1x1 + p2x2. Adding these inequalities gives: p1x1 + p2x2 + p′1x′1 + p2x′2 ≤ p1x′1 + p2x′2 + p′1x1 + p2x2 which after rearrangement gives: p′1(x′1 − x1) ≤ p1(x′1 − x1) or: (p′1 − p1)(x′1 − x1) ≤ 0. This inequality cannot hold if both p′1 > p1 and x′1 > x1, that is if an increase in the price of good 1 from p1 to p′1 increases compensated demand for good 1. Thus an increase in the price of good 1 must either decrease or leave unchanged compensated demand for good 1.

Activity 6.1  Expenditure minimisation with the Cobb-Douglas utility function

Solve the consumer's expenditure minimisation problem with the Cobb-Douglas utility function u(x1, x2) = x1^a x2^b. Assume that a > 0, b > 0, a + b = 1 and u¯ > 0. Write down the compensated demand functions for this utility function. Confirm that these functions are homogeneous of degree zero in prices, and that compensated demand for a good is a decreasing function of its price.

Activity 6.2  Expenditure minimisation with a linear utility function

The objective of this activity is to solve the consumer’s problem of minimising expenditure p1 x1 + p2 x2 subject to the constraints u(x1 , x2 ) ≥ u¯, x1 ≥ 0, x2 ≥ 0 with a linear utility function u(x1 , x2 ) = 2x1 + x2 . Assume that p1 > 0 and p2 > 0. (a) Draw indifference curves for the utility function u(x1 , x2 ) = 2x1 + x2 . What is the marginal rate of substitution? (b) Assume that p1 /p2 < 2. Use your graph to guess the values of x1 and x2 that solve the minimisation problem. (c) Assume that p1 /p2 = 2. Use your graph to guess the values of x1 and x2 that solve the minimisation problem. (d) Assume that p1 /p2 > 2. Use your graph to guess the values of x1 and x2 that solve the minimisation problem. (e) Explain why the Kuhn-Tucker conditions are necessary and sufficient for a solution to this problem.


(f) Write down the Lagrangian for the problem. Confirm that your guesses are correct by finding the Lagrange multipliers associated with the constraints in (b), (c) and (d). Does the problem have more than one solution? (g) Write down the compensated demand functions for this utility function. Confirm that these functions are homogeneous of degree zero in prices, and that compensated demand for a good is a decreasing function of its price.

6.6 The expenditure function

6.6.1 Definition of the expenditure function

The consumer's expenditure minimisation problem involves finding a minimum rather than a maximum value function. This minimum value function is called the expenditure function. It is the minimum expenditure needed to obtain utility u¯ or more at prices p1 and p2. The compensated demand functions h1(p1, p2, u¯) and h2(p1, p2, u¯) are the solutions to the problem of minimising expenditure p1x1 + p2x2 subject to the constraint u(x1, x2) ≥ u¯. Thus the expenditure function e(p1, p2, u¯) is the value of the objective p1x1 + p2x2 when x1 = h1(p1, p2, u¯) and x2 = h2(p1, p2, u¯), so: e(p1, p2, u¯) = p1h1(p1, p2, u¯) + p2h2(p1, p2, u¯). Because compensated demand is a function of (p1, p2, u¯) the expenditure function is also a function of p1, p2 and u¯.

6.6.2

Properties of the expenditure function

Monotonicity in utility  The expenditure function is non-decreasing in u¯. This follows from the general result that a maximum value function is non-decreasing in the level of the constraint k; here the level of the constraint is u¯. If (x1, x2) solves the consumer's expenditure minimisation problem for u¯ and (x′1, x′2) solves the problem for u¯′ > u¯, then (x′1, x′2) satisfies the constraint u(x′1, x′2) ≥ u¯′ > u¯. Thus (x′1, x′2) satisfies the constraint for the problem of minimising the cost of getting utility u¯, which implies that e(p1, p2, u¯) ≤ p1x′1 + p2x′2 = e(p1, p2, u¯′). The stronger result that the expenditure function is increasing in u¯, so that if u¯ < u¯′ then e(p1, p2, u¯) < e(p1, p2, u¯′), can be established provided compensated demand (x′1, x′2) at utility u¯′ has one or both of x′1 > 0 or x′2 > 0 and utility is a continuous function of (x1, x2). In this case u(x′1, x′2) ≥ u¯′ > u¯, so starting with consumption (x′1, x′2) it is possible to reduce expenditure whilst still having utility above u¯. This implies that the cheapest way of getting utility u¯ must cost less than the cheapest way of getting utility u¯′, so e(p1, p2, u¯) < e(p1, p2, u¯′).

Monotonicity in prices  Intuitively, when the price of a good increases it costs more to get the same level of utility. However a consumer who does not buy the good can maintain the same level of


utility at the same expense, so the result is that the expenditure function is non-decreasing in prices. To see why this is so, think about a fall in the price of good 1 from p1 to p′1 whilst the price of good 2 does not change. The expenditure function is the minimum cost of getting utility u¯ at prices (p1, p2). At prices (p1, p2) this is done by buying h1(p1, p2, u¯) and h2(p1, p2, u¯). When prices change to (p′1, p2) it is still possible to obtain utility u¯ by buying h1(p1, p2, u¯) and h2(p1, p2, u¯), at a cost of: p′1h1(p1, p2, u¯) + p2h2(p1, p2, u¯), which implies that e(p′1, p2, u¯), the minimum cost of getting utility u¯ at these prices, cannot be more than p′1h1(p1, p2, u¯) + p2h2(p1, p2, u¯), so: e(p′1, p2, u¯) ≤ p′1h1(p1, p2, u¯) + p2h2(p1, p2, u¯). As we require that h1(p1, p2, u¯) ≥ 0 and p′1 < p1, it follows that p′1h1(p1, p2, u¯) ≤ p1h1(p1, p2, u¯), so: p′1h1(p1, p2, u¯) + p2h2(p1, p2, u¯) ≤ p1h1(p1, p2, u¯) + p2h2(p1, p2, u¯) = e(p1, p2, u¯). Putting these inequalities together gives: e(p′1, p2, u¯) ≤ e(p1, p2, u¯), so when the price of good 1 falls the expenditure function cannot rise, or put another way the expenditure function is non-decreasing in p1. A similar argument establishes that the expenditure function is non-decreasing in p2.

Homogeneity  We have already seen that the compensated demand function is homogeneous of degree zero in prices, meaning that if s > 0: h1(sp1, sp2, u¯) = h1(p1, p2, u¯) and h2(sp1, sp2, u¯) = h2(p1, p2, u¯). This implies that the expenditure function is homogeneous of degree one in prices, meaning that if s > 0, e(sp1, sp2, u¯) = se(p1, p2, u¯). This means that if all prices are multiplied by s, for example s = 2, so prices double, then the cost of maintaining a particular level of utility also doubles. To see why this is, note that as compensated demand is homogeneous of degree zero in prices:
e(sp1, sp2, u¯) = sp1h1(sp1, sp2, u¯) + sp2h2(sp1, sp2, u¯) = sp1h1(p1, p2, u¯) + sp2h2(p1, p2, u¯) = s[p1h1(p1, p2, u¯) + p2h2(p1, p2, u¯)] = se(p1, p2, u¯)

which establishes the result.


Figure 6.3: Concavity of the expenditure function.

Concavity Theorem 20 The expenditure function is concave in prices. This means that for any (p01 , p02 ) and (p001 , p002 ) if: (p∗1 , p∗2 ) = t(p01 , p02 ) + (1 − t)(p001 , p002 ) where 0 ≤ t ≤ 1, then: te(p01 , p02 , u¯) + (1 − t)e(p001 , p002 , u¯) ≤ e(p∗1 , p∗2 , u¯). This is illustrated in Figure 6.3 which shows the expenditure function as a function of p1 with p2 being held constant at p∗2 . Proof. Suppose that (x01 , x02 ), (x001 , x002 ) and (x∗1 , x∗2 ) are the cheapest ways of getting utility u¯ at prices (p01 , p02 ), (p001 , p002 ) and (p∗1 , p∗2 ). Thus: p∗1 x∗1 + p∗2 x∗2 = e(p∗1 , p∗2 , u¯). As (x01 , x02 ) is the cheapest way of getting utility u¯ at prices (p01 , p02 ) and (x∗1 , x∗2 ) is a way of getting utility u¯: p01 x01 + p02 x02 = e(p01 , p02 , u¯) ≤ p01 x∗1 + p02 x∗2 . Similarly: p001 x001 + p002 x002 = e(p001 , p002 , u¯) ≤ p001 x∗1 + p002 x∗2 . Multiplying the first of these inequalities by t and the second by 1 − t, recalling that t


and 1 − t cannot be negative because 0 ≤ t ≤ 1, and then adding gives:
te(p′1, p′2, u¯) + (1 − t)e(p″1, p″2, u¯) ≤ t(p′1x∗1 + p′2x∗2) + (1 − t)(p″1x∗1 + p″2x∗2) = (tp′1 + (1 − t)p″1)x∗1 + (tp′2 + (1 − t)p″2)x∗2 = p∗1x∗1 + p∗2x∗2 = e(p∗1, p∗2, u¯)

because (p∗1, p∗2) = t(p′1, p′2) + (1 − t)(p″1, p″2) = (tp′1 + (1 − t)p″1, tp′2 + (1 − t)p″2).

Shephard's Lemma  Another important property of the expenditure function, called Shephard's Lemma, is that if the expenditure function has partial derivatives with respect to prices they are equal to compensated demand, that is:
∂e(p1, p2, u¯)/∂p1 = h1(p1, p2, u¯).    (6.1)

Proof. Suppose that the price of good 2 is fixed at p∗2 and that the price of good 1 changes from p∗1 to p1. Suppose that (x∗1, x∗2) is the cheapest way of getting utility u¯ at prices (p∗1, p∗2), and that (x1, x2) is the cheapest way of getting utility u¯ at prices (p1, p∗2). Because (x∗1, x∗2) gives utility u¯ it is also a way of getting utility u¯ at prices (p1, p∗2), so: p1x1 + p∗2x2 = e(p1, p∗2, u¯) ≤ p1x∗1 + p∗2x∗2 = p∗1x∗1 + p∗2x∗2 + (p1 − p∗1)x∗1 = e(p∗1, p∗2, u¯) + (p1 − p∗1)x∗1. Thus: e(p1, p∗2, u¯) − e(p∗1, p∗2, u¯) ≤ (p1 − p∗1)x∗1. If p1 > p∗1 this implies that: [e(p1, p∗2, u¯) − e(p∗1, p∗2, u¯)]/(p1 − p∗1) ≤ x∗1, and if p1 < p∗1 dividing by the negative number p1 − p∗1 reverses the inequality. If the expenditure function has a derivative with respect to p1 at the point (p∗1, p∗2, u¯) the left-hand sides of these inequalities tend to the derivative as p1 tends to p∗1. The only way this can happen is if the derivative is x∗1.

Downward sloping compensated demand again  We have already seen in Section 6.5.3 of this chapter that the compensated demand curve cannot slope upwards, that is compensated demand is a non-increasing function of price. We established this with a simple direct argument. There is another way of showing the same thing with the result we have just obtained. As: ∂e(p1, p2, u¯)/∂p1 = h1(p1, p2, u¯),


differentiating again with respect to p1 gives: ∂²e(p1, p2, u¯)/∂p1² = ∂h1(p1, p2, u¯)/∂p1. As the expenditure function is concave in prices: ∂²e(p1, p2, u¯)/∂p1² ≤ 0, so from the above equation: ∂h1(p1, p2, u¯)/∂p1 ≤ 0, that is compensated demand is a non-increasing function of price.
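Shephard's Lemma and the sign result above are easy to check numerically. The sketch below is an illustration only (not part of the guide): it uses the Cobb-Douglas closed forms derived in the solutions to Activities 6.1 and 6.3 later in this chapter, with arbitrary example values of a, b, prices and u¯, and compares a finite-difference derivative of e with h1.

# Illustrative numerical check of Shephard's Lemma and of the
# non-increasing compensated demand curve (Cobb-Douglas example).
a, b = 0.3, 0.7          # a + b = 1
p1, p2, u_bar = 2.0, 5.0, 4.0

def e(p1, p2):           # expenditure function (Activity 6.3 solution)
    return a ** (-a) * b ** (-b) * p1 ** a * p2 ** b * u_bar

def h1(p1, p2):          # compensated demand for good 1 (Activity 6.1 solution)
    return (a * p2 / (b * p1)) ** b * u_bar

eps = 1e-6
de_dp1 = (e(p1 + eps, p2) - e(p1 - eps, p2)) / (2 * eps)
print("de/dp1 :", de_dp1)                       # should equal h1(p1, p2)
print("h1     :", h1(p1, p2))
print("h1 falls as p1 rises:", h1(p1 + 1.0, p2) < h1(p1, p2))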

Activity 6.3  The expenditure function with a Cobb-Douglas utility function

Find the expenditure function for the Cobb-Douglas utility function u(x1, x2) = x1^a x2^b where a > 0, b > 0 and a + b = 1. Make sure that the function you write down depends only on the parameters of the utility function a and b, the level of utility u¯ and prices p1 and p2. Confirm that the expenditure function has the properties listed in the previous section.

Activity 6.4  The expenditure function with a linear utility function

Find the expenditure function for the linear utility function u(x1 , x2 ) = 2x1 + x2 . Make sure that the function you write down depends only on the level of utility u¯ and prices p1 and p2 . Confirm that the expenditure function has the properties listed in the previous section.

6.7 The firm's cost minimisation problem

6.7.1 Definitions

The problem The firm’s cost minimisation problem is to minimise rK + wL on R2+ subject to f (K, L) ≥ q where K is the input of capital with price r, L is the input of labour with price w, f (K, L) is the production function, giving output as a function of inputs, and q is the required level of output. The sample examination questions for Chapter 5 ask you to solve cost minimisation problems. Conditional factor demand The solution to the firm’s cost minimisation problem gives the levels of inputs which minimise the cost of producing q when input prices are r and w. The solution is called conditional factor demand, written as K(r, w, q) and L(r, w, q).


The cost function  The cost function c(r, w, q) is the minimum cost of producing output q, so: c(r, w, q) = rK(r, w, q) + wL(r, w, q).

6.7.2

The firm’s problem and the consumer’s problem

The firm's cost minimisation problem has exactly the same mathematical form as the consumer's expenditure minimisation problem, which is to minimise p1x1 + p2x2 on R2+ subject to u(x1, x2) ≥ u¯, so all the properties of the expenditure function carry over to the cost function, and the mathematics used to solve the two problems is the same. For example, Activities 6.1 and 6.3 ask you to find compensated demand and the expenditure function for the Cobb-Douglas utility function u(x1, x2) = x1^a x2^b. The conditional factor demand functions and cost function for the Cobb-Douglas production function f(K, L) = K^a L^b are exactly the same as the compensated demand functions and the expenditure function, except that the expenditure function notation (p1, p2, u¯) is replaced by the cost function notation (r, w, q). The cost function has exactly the same mathematical properties as the expenditure function, so:
The cost function is non-decreasing in output and input prices.
The cost function is homogeneous of degree one in input prices.
The cost function is concave in input prices.
Shephard's Lemma for the cost function states that: ∂c(r, w, q)/∂r = K(r, w, q) and ∂c(r, w, q)/∂w = L(r, w, q).
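The relabelling described above can be made concrete with a short numerical sketch. This is an illustration only (not part of the guide): it assumes SciPy, uses an arbitrary Cobb-Douglas production function f(K, L) = K^a L^b with a + b = 1 and example input prices, and cross-checks the relabelled closed forms against a direct numerical cost minimisation.

# Sketch: conditional factor demand and the cost function for
# f(K, L) = K**a * L**b, obtained by relabelling the Cobb-Douglas
# expenditure-minimisation formulas (illustrative only).
from scipy.optimize import minimize

a, b = 0.4, 0.6
r, w, q = 3.0, 2.0, 10.0

K = (a * w / (b * r)) ** b * q          # analogue of h1(p1, p2, u_bar)
L = (b * r / (a * w)) ** a * q          # analogue of h2(p1, p2, u_bar)
cost = a ** (-a) * b ** (-b) * r ** a * w ** b * q

res = minimize(lambda x: r * x[0] + w * x[1],
               x0=[q, q],
               bounds=[(1e-9, None), (1e-9, None)],
               constraints={"type": "ineq",
                            "fun": lambda x: x[0] ** a * x[1] ** b - q})
print("closed form:", K, L, cost)
print("numerical  :", res.x, res.fun)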

6.8 Utility maximisation and expenditure minimisation

6.8.1 The relationship

Figure 6.4 shows a point x∗ lying on the indifference curve with utility u¯ that is also on the tangent budget line p1x1 + p2x2 = m, so p1x∗1 + p2x∗2 = m. In this diagram the point x∗ is the solution both to the problem of minimising expenditure p1x1 + p2x2 on R2+ subject to the utility constraint u(x1, x2) ≥ u¯ and to the problem of maximising utility u(x1, x2) on R2+ subject to the budget constraint p1x1 + p2x2 ≤ m. The diagram suggests that at appropriate levels of income and utility the two problems have the same solution. This holds provided that the utility function satisfies two of the standard assumptions, one of which is continuity. Continuity has a formal definition, but for the purposes of this proof it is enough to know that a utility function is continuous if small changes in x result in small changes in u(x).


Figure 6.4: Duality of the utility maximisation and expenditure minimisation problems.

Figure 6.5: Proving Theorem 21.

Theorem 21 Suppose that p1 x∗1 + p2 x∗2 = m and u(x∗1 , x∗2 ) = u¯. Suppose also that the utility function u(x) is continuous and the consumer is not satiated. Then: If x∗ solves the problem of maximising utility u(x) on R2+ subject to p1 x1 + p2 x2 ≤ m then x∗ also solves the problem of minimising expenditure p1 x1 + p2 x2 subject to u(x) ≥ u¯. If x∗ solves the problem of minimising expenditure p1 x1 +p2 x2 subject to u(x) ≥ u¯ then x∗ also solves the problem of maximising utility u(x) subject to p1 x1 + p2 x2 ≤ m. Proof. Suppose that x0 solves the problem of maximising utility subject to the constraint p1 x1 + p2 x2 ≤ m and has utility u(x0 ) = u¯. Nonsatiation then implies that p1 x01 + p2 x02 = m. Suppose also that x0 does not solve the problem of minimising p1 x1 + p2 x2 subject to u(x1 , x2 ) ≥ u¯, which implies that there is a point y as shown in


Figure 6.5 with: p1y1 + p2y2 < p1x01 + p2x02 = m and u(y) ≥ u(x0) = u¯. As p1y1 + p2y2 < p1x01 + p2x02, there is a point y′ with y′1 > y1 and y′2 > y2 but p1y′1 + p2y′2 < m. Nonsatiation implies that u(y′) > u(y) ≥ u(x0). As p1y′1 + p2y′2 < m, this implies that x0 cannot solve the utility maximisation problem, contradicting the original supposition. Thus points that solve the utility maximisation problem must also solve the expenditure minimisation problem. Now suppose that x0 solves the problem of minimising expenditure subject to u(x1, x2) ≥ u¯ but does not solve the utility maximisation problem. This implies that there is a point z with u(z) > u(x0) = u¯ and p1z1 + p2z2 ≤ p1x01 + p2x02 = m. If p1z1 + p2z2 < p1x01 + p2x02 this implies directly that x0 does not solve the expenditure minimisation problem. If instead p1z1 + p2z2 = p1x01 + p2x02, consider a point z′ with z′1 < z1 and z′2 < z2, as shown in Figure 6.5, so that p1z′1 + p2z′2 < p1x01 + p2x02. Since the utility function u(x) is continuous, small changes in x result in small changes in u(x), so as u(z) > u(x0), if z′ is close enough to z then u(z′) > u(x0) and p1z′1 + p2z′2 < p1x01 + p2x02, implying that x0 does not solve the expenditure minimisation problem. Thus points that solve the expenditure minimisation problem also solve the utility maximisation problem.

6.8.2

Demand and utility relationships

Theorem 21 has some immediate consequences. Recall that (x1(p1, p2, m), x2(p1, p2, m)) is uncompensated demand, that is the solution to the consumer's utility maximising problem. The indirect utility function v(p1, p2, m) is the maximum value function for this problem, so: v(p1, p2, m) = u(x1(p1, p2, m), x2(p1, p2, m)). Also (h1(p1, p2, u¯), h2(p1, p2, u¯)) is compensated demand, that is the solution to the expenditure minimisation problem. The expenditure function e(p1, p2, u¯) is the minimum value function for this problem, so: e(p1, p2, u¯) = p1h1(p1, p2, u¯) + p2h2(p1, p2, u¯). The result is:

Theorem 22 If the utility function is continuous and the consumer is not satiated:
x1(p1, p2, e(p1, p2, u¯)) = h1(p1, p2, u¯)    (6.2)
x2(p1, p2, e(p1, p2, u¯)) = h2(p1, p2, u¯)    (6.3)
and:
v(p1, p2, e(p1, p2, u¯)) = u¯.    (6.4)

Proof. From Theorem 21 the solution (x1 (p1 , p2 , m), x2 (p1 , p2 , m)) to the utility maximising problem also solves the expenditure minimising problem with utility: u¯ = u(x1 (p1 , p2 , m), x2 (p1 , p2 , m)) = v(p1 , p2 , m)

(6.5)


so:
x1(p1, p2, m) = h1(p1, p2, u¯)    (6.6)
x2(p1, p2, m) = h2(p1, p2, u¯).   (6.7)

Thus the expenditure function satisfies: e(p1, p2, u¯) = p1h1(p1, p2, u¯) + p2h2(p1, p2, u¯) = p1x1(p1, p2, m) + p2x2(p1, p2, m) = m because, given nonsatiation, uncompensated demand satisfies the budget constraint as an equality. Thus: m = e(p1, p2, u¯). Substituting e(p1, p2, u¯) for m in (6.6) and (6.7) gives (6.2) and (6.3). Similarly substituting in (6.5) gives (6.4).

Activity 6.5  Confirm that uncompensated demand, compensated demand, the expenditure function and the indirect utility function associated with the Cobb-Douglas utility function satisfy Theorem 22.
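Alongside the analytic verification asked for in Activity 6.5, the duality in Theorem 22 can be checked numerically. The sketch below is an illustration only (not from the guide): it uses the Cobb-Douglas closed forms with arbitrary example parameters and confirms that, when m = e(p1, p2, u¯), uncompensated demand equals compensated demand and indirect utility equals u¯.

# Illustrative numerical check of Theorem 22 (Cobb-Douglas example).
a, b = 0.25, 0.75
p1, p2, u_bar = 1.5, 4.0, 6.0

e = a ** (-a) * b ** (-b) * p1 ** a * p2 ** b * u_bar   # expenditure function
m = e

x1, x2 = a * m / p1, b * m / p2                         # uncompensated demand
h1 = (a * p2 / (b * p1)) ** b * u_bar                   # compensated demand
h2 = (b * p1 / (a * p2)) ** a * u_bar
v = x1 ** a * x2 ** b                                   # indirect utility

print(abs(x1 - h1) < 1e-9, abs(x2 - h2) < 1e-9, abs(v - u_bar) < 1e-9)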

6.9 Roy's identity

6.9.1 The statement of Roy's identity

Roy's identity gives a relationship between the uncompensated demand functions x1(p1, p2, m) and x2(p1, p2, m) and the indirect utility function v(p1, p2, m). It states that:
x1(p1, p2, m) = −[∂v(p1, p2, m)/∂p1] / [∂v(p1, p2, m)/∂m]
x2(p1, p2, m) = −[∂v(p1, p2, m)/∂p2] / [∂v(p1, p2, m)/∂m].

6.9.2 The derivation of Roy's identity

Roy’s identity is derived from Theorem 22 which states that: v(p1 , p2 , e(p1 , p2 , u¯)) = u¯

(6.8)

where v(p1, p2, m) is the indirect utility function and e(p1, p2, u¯) is the expenditure function. Another way of writing this is to define a new function g(p1, p2, u¯) by: g(p1, p2, u¯) = v(p1, p2, e(p1, p2, u¯)), so (6.8) becomes:
g(p1, p2, u¯) = u¯.    (6.9)


This equation holds for all values of (p1, p2, u¯), so if both sides of this equation are differentiated with respect to p1, holding p2 and u¯ constant, the derivatives are equal. The derivative of the right-hand side is zero, so this implies that: ∂g(p1, p2, u¯)/∂p1 = 0. The next step is to find this derivative. You may be able to see at once that, using the chain rule and (6.9), this derivative is:
∂g(p1, p2, u¯)/∂p1 = ∂v(p1, p2, m)/∂p1 + [∂v(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p1].    (6.10)

If this step is not clear to you, take it on trust at this stage; we will come back to it. Putting the last two equations together gives:
∂v(p1, p2, m)/∂p1 + [∂v(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p1] = 0.    (6.11)

The derivation of Roy's identity is now easy. From Shephard's Lemma: ∂e(p1, p2, u¯)/∂p1 = h1(p1, p2, u¯). But if m = e(p1, p2, u¯) then h1(p1, p2, u¯) = x1(p1, p2, m), so: ∂e(p1, p2, u¯)/∂p1 = x1(p1, p2, m). Substituting this expression into (6.11) and rearranging gives Roy's identity.
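Roy's identity can also be verified with a few lines of numerical code. The sketch below is an illustration only (not from the guide): it assumes the Cobb-Douglas indirect utility function derived later in the solution to Activity 6.6, with arbitrary example numbers, and compares the ratio of finite-difference derivatives with uncompensated demand.

# Illustrative numerical check of Roy's identity (Cobb-Douglas example).
a, b = 0.3, 0.7
p1, p2, m = 2.0, 3.0, 50.0

def v(p1, p2, m):
    return a ** a * b ** b * p1 ** (-a) * p2 ** (-b) * m

eps = 1e-6
dv_dp1 = (v(p1 + eps, p2, m) - v(p1 - eps, p2, m)) / (2 * eps)
dv_dm = (v(p1, p2, m + eps) - v(p1, p2, m - eps)) / (2 * eps)

x1_roy = -dv_dp1 / dv_dm        # Roy's identity
x1 = a * m / p1                 # uncompensated demand
print(x1_roy, x1)               # the two numbers should agree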

6.9.3

The chain rule for Roy’s identity

The objective here is to provide an explanation, if you need it, of why (6.10) follows from (6.9) using the chain rule. Any calculus textbook will tell you what the chain rule is. For functions of a single variable the chain rule says that if v is a function of y and y is a function of z, then thinking of v as a function of z: dv/dz = (dv/dy)(dy/dz). This notation gets confusing when working with functions of many variables. It is better to define a new function g(z) by g(z) = v(y(z)) and write this as: dg/dz = (dv/dy)(dy/dz). The chain rule for many variables says that if v is a function of the vector y = (y1, . . . , yn), y is a function of the vector z = (z1, . . . , zn), and g(z) = v(y(z)), then: ∂g/∂zi = (∂v/∂y1)(∂y1/∂zi) + (∂v/∂y2)(∂y2/∂zi) + · · · + (∂v/∂yn)(∂yn/∂zi).


The objective now is to find the derivative of the function: g(p1 , p2 , u¯) = v(p1 , p2 , e(p1 , p2 , u¯))

(6.12)

with respect to p1. Although this is in principle a straightforward application of the chain rule, in practice many students find it difficult. The problem is that p1 and p2 appear on both sides of the equation. To make things easier for a beginner, write this equation as: g(p1, p2, u¯) = v(q1, q2, m) where: q1 = p1, q2 = p2 and m = e(p1, p2, u¯). Applying the chain rule with z = (p1, p2, u¯) and y = (q1, q2, m) gives: ∂g/∂p1 = (∂v/∂q1)(∂q1/∂p1) + (∂v/∂q2)(∂q2/∂p1) + (∂v/∂m)(∂m/∂p1). As: ∂q1/∂p1 = 1, ∂q2/∂p1 = 0 and ∂m/∂p1 = ∂e(p1, p2, u¯)/∂p1, this implies that: ∂g/∂p1 = ∂v/∂q1 + (∂v/∂m)[∂e(p1, p2, u¯)/∂p1]. Writing in all the arguments of the functions this becomes: ∂g(p1, p2, u¯)/∂p1 = ∂v(q1, q2, m)/∂q1 + [∂v(q1, q2, m)/∂m][∂e(p1, p2, u¯)/∂p1]. As q1 = p1 and q2 = p2 it is possible to replace q1 and q2 by p1 and p2 to get: ∂g(p1, p2, u¯)/∂p1 = ∂v(p1, p2, m)/∂p1 + [∂v(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p1], which is the result required.

Activity 6.6  Confirm that Roy's identity holds for the Cobb-Douglas utility function x1^a x2^b where a + b = 1.


6.10

The Slutsky equation

6.10.1

Statement of the Slutsky equation

The Slutsky equation relates the partial derivatives of uncompensated demand x1(p1, p2, m), x2(p1, p2, m) and compensated demand h1(p1, p2, u¯), h2(p1, p2, u¯) to each other. In its simplest form it refers to the effect of a change in the price of good 1 on demand for good 1, as below:
∂x1(p1, p2, m)/∂p1 = ∂h1(p1, p2, u¯)/∂p1 − [∂x1(p1, p2, m)/∂m] x1(p1, p2, m).

6.10.2

Derivation of the Slutsky equation

From Theorem 22: x1(p1, p2, e(p1, p2, u¯)) = h1(p1, p2, u¯)

(6.13)

where x1(p1, p2, m) is uncompensated demand, h1(p1, p2, u¯) is compensated demand, and e(p1, p2, u¯) is the expenditure function. This equation holds for all values of p1, p2 and u¯, so the partial derivatives of the two sides with respect to p1, holding p2 and u¯ constant, are equal. The derivative of the right-hand side is: ∂h1(p1, p2, u¯)/∂p1.

(6.14)

From the discussion of the chain rule in Section 6.9.3, treating v(p1, p2, e(p1, p2, u¯)) as a function of p1, p2 and u¯ and differentiating with respect to p1 gives: ∂v(p1, p2, m)/∂p1 + [∂v(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p1]. Exactly the same argument establishes that treating x1(p1, p2, e(p1, p2, u¯)) as a function of p1, p2 and u¯ and differentiating with respect to p1 gives: ∂x1(p1, p2, m)/∂p1 + [∂x1(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p1].

(6.15)

The derivatives in (6.14) and (6.15) must be the same, because they are the derivatives of the two sides of (6.13). Thus: ∂x1(p1, p2, m)/∂p1 + [∂x1(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p1] = ∂h1(p1, p2, u¯)/∂p1. As argued in the course of deriving Roy's identity, Shephard's Lemma implies that when m = e(p1, p2, u¯): ∂e(p1, p2, u¯)/∂p1 = h1(p1, p2, u¯) = x1(p1, p2, m). Substituting this in the previous equation and rearranging gives the Slutsky equation.
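The Slutsky decomposition can be illustrated numerically as well. The sketch below is an illustration only (not from the guide): it uses the Cobb-Douglas uncompensated and compensated demand functions, arbitrary example numbers, and finite differences to confirm that the own-price derivative of x1 equals the substitution effect minus the income effect.

# Illustrative numerical check of the Slutsky equation (own-price effect,
# Cobb-Douglas example with x1 = a*m/p1 and h1 = (a*p2/(b*p1))**b * u_bar).
a, b = 0.4, 0.6
p1, p2, m = 2.0, 3.0, 60.0
eps = 1e-6

def x1(p1, p2, m):
    return a * m / p1

u_bar = (a * m / p1) ** a * (b * m / p2) ** b      # utility at the optimum

def h1(p1, p2):
    return (a * p2 / (b * p1)) ** b * u_bar

dx1_dp1 = (x1(p1 + eps, p2, m) - x1(p1 - eps, p2, m)) / (2 * eps)
dh1_dp1 = (h1(p1 + eps, p2) - h1(p1 - eps, p2)) / (2 * eps)
dx1_dm = (x1(p1, p2, m + eps) - x1(p1, p2, m - eps)) / (2 * eps)

lhs = dx1_dp1
rhs = dh1_dp1 - dx1_dm * x1(p1, p2, m)
print(lhs, rhs)                                    # the two should agree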


Activity 6.7 We have just proved the Slutsky equation for the effect of a change in the price of good 1 on demand for good 1. Use a similar argument to find the effect of a change in the price of good 2 on demand for good 1.

6.11

Solutions to activities

Solution to Activity 6.1  [Expenditure minimisation with a Cobb-Douglas utility function]
The problem here is to minimise expenditure p1x1 + p2x2 subject to the utility constraint u(x1, x2) ≥ u¯, where the utility function is Cobb-Douglas so u(x1, x2) = x1^a x2^b. By assumption a > 0, b > 0, a + b = 1 and u¯ > 0. Points with either x1 = 0 or x2 = 0 give utility 0 so are not feasible because u¯ > 0. Hence the solution must have x1 > 0 and x2 > 0, so the problem is that of minimising p1x1 + p2x2 subject to x1^a x2^b ≥ u¯ on R2++. (R2++ is the set on which x1 > 0 and x2 > 0.) The general problem we have studied is the maximisation of an objective function g(x1, x2) subject to a constraint h(x1, x2) ≤ k. Here we have the problem of minimising p1x1 + p2x2 subject to the constraint u(x1, x2) ≥ u¯. In order to apply the general theory it is necessary to turn the minimisation problem into a maximisation problem by multiplying the objective by −1 to get −p1x1 − p2x2. It is also necessary to turn the ≥ constraint into a ≤ constraint by multiplying the constraint by −1 to get −u(x1, x2) ≤ −u¯. The problem is solved using Lagrangians. To use Lagrangian results you need to establish concavity and convexity, or quasiconcavity and quasiconvexity. This is straightforward here; any linear function is concave and convex so the objective −p1x1 − p2x2 is concave in (x1, x2). The constraint is −x1^a x2^b ≤ −u¯. We have already seen in Chapter 1 that the Cobb-Douglas utility function u(x1, x2) = x1^a x2^b defined on the set R2++ is concave, so all the concavity and convexity conditions for this problem are satisfied. Note that if you were answering this as an examination question you could discuss concavity and convexity more briefly, but it is essential that you do discuss them. The Lagrangian is: L = −p1x1 − p2x2 + λ(−u¯ + x1^a x2^b). The first order conditions are: −p1 + λa x1^(a−1) x2^b = 0 and −p2 + λb x1^a x2^(b−1) = 0.

As p1 , p2 , a, b, x1 and x2 are all strictly positive any value of λ that satisfies these equations must also be strictly positive, so the constraint xa1 xb2 ≥ u¯ must bind. This is intuitive, if xa1 xb2 > u¯ it is possible to reduce x1 and x2 a little while still satisfying the utility constraint which reduces expenditure, so x1 and x2 cannot solve the problem.


The first order conditions imply that:
p1/p2 = [λa x1^(a−1) x2^b] / [λb x1^a x2^(b−1)] = ax2/(bx1)    (6.16)
which is the tangency condition that the marginal rate of substitution is equal to the price ratio. This implies that x2 = (bp1/ap2) x1. As the constraint binds:
x1^a x2^b = u¯.    (6.17)
Substituting x2 = (bp1/ap2) x1 in this equation gives:
x1^a [(bp1/ap2) x1]^b = u¯.    (6.18)
Now: x1^a [(bp1/ap2) x1]^b = (bp1/ap2)^b x1^(a+b) = (bp1/ap2)^b x1 because a + b = 1, so (6.18) implies:
x1 = (ap2/bp1)^b u¯
and using x2 = (bp1/ap2) x1 gives:
x2 = (bp1/ap2)^a u¯.
As the solution to this minimisation problem is compensated demand x1 = h1(p1, p2, u¯) and x2 = h2(p1, p2, u¯), this gives compensated demand with a Cobb-Douglas utility function:
h1(p1, p2, u¯) = (ap2/bp1)^b u¯    (6.19)
h2(p1, p2, u¯) = (bp1/ap2)^a u¯.   (6.20)
The problem has only one solution. Note that multiplying p1 and p2 by s > 0 does not change the ratio p1/p2 so does not change compensated demand. As:
h1(p1, p2, u¯) = (ap2/bp1)^b u¯ = p1^(−b) (ap2/b)^b u¯
we have
∂h1(p1, p2, u¯)/∂p1 = −b p1^(−b−1) (ap2/b)^b u¯ < 0
so compensated demand for good 1 is decreasing in its price p1. A similar argument applies to good 2.


Figure 6.6: Expenditure minimisation with u(x1 , x2 ) = 2x1 + x2 and p1 /p2 < 2.

Solution to Activity 6.2  [Expenditure minimisation with a linear utility function]
(a) The indifference curves for the utility function u(x1, x2) = 2x1 + x2 are parallel straight lines with gradient −2 because the marginal rate of substitution is 2.
(b) When p1/p2 < 2 the dotted line showing points with equal expenditure in Figure 6.6 is less steep than the indifference curves. The solution is at the corner where x1 = u¯/2, x2 = 0.
(c) When p1/p2 = 2 the line showing points with equal expenditure coincides with the indifference curve, as shown in Figure 6.7. Any point on the indifference curve 2x1 + x2 = u¯ with x1 ≥ 0 and x2 ≥ 0 solves the problem.
(d) When p1/p2 > 2 the dotted line showing points with equal expenditure in Figure 6.8 is steeper than the indifference curves. The solution is at the corner where x1 = 0, x2 = u¯.
(e) The problem can be written as: maximise −p1x1 − p2x2 subject to −2x1 − x2 ≤ −u¯, x1 ≥ 0, x2 ≥ 0. The objective is linear so concave, the constraint functions are linear so convex, and the constraint qualification is satisfied because there are points that satisfy the constraints as strict inequalities, for example x1 = x2 = u¯. Thus the Kuhn-Tucker conditions are necessary and sufficient for a solution to this problem.
(f) The Lagrangian is: L = −p1x1 − p2x2 + λ0(−u¯ + 2x1 + x2) + λ1x1 + λ2x2 = x1(−p1 + 2λ0 + λ1) + x2(−p2 + λ0 + λ2) − λ0u¯. The first order conditions require that: p1 = 2λ0 + λ1 and p2 = λ0 + λ2.


Figure 6.7: Expenditure minimisation with u(x1 , x2 ) = 2x1 + x2 and p1 /p2 = 2.

Figure 6.8: Expenditure minimisation with u(x1 , x2 ) = 2x1 + x2 and p1 /p2 > 2.


Looking first for conditions under which the point x1 = u¯/2, x2 = 0 is a solution: the point is feasible and satisfies the constraints 2x1 + x2 ≥ u¯ and x2 ≥ 0 as equalities, so complementary slackness is satisfied for these constraints. The constraint x1 ≥ 0 is satisfied as a strict inequality, so complementary slackness requires λ1 = 0. The first order conditions are then satisfied if λ0 = p1/2 and λ2 = p2 − λ0 = p2 − p1/2, so non-negativity of the multiplier λ2 is satisfied if p1/p2 ≤ 2. Thus if p1/p2 ≤ 2 the point x1 = u¯/2, x2 = 0 with multipliers λ0 = p1/2, λ1 = 0 and λ2 = p2 − p1/2 satisfies the Kuhn-Tucker conditions so solves the problem.
Now look for solutions with x1 > 0, x2 > 0 and 2x1 + x2 = u¯. Such a point is feasible, and complementary slackness requires λ1 = λ2 = 0. The first order conditions are then satisfied only if λ0 = p1/2 = p2, so this cannot be a solution unless p1/p2 = 2. The constraint 2x1 + x2 ≥ u¯ is satisfied as an equality, so complementary slackness is satisfied for this constraint, and the multiplier λ0 > 0. Thus if p1/p2 = 2 any point with x1 > 0, x2 > 0 and 2x1 + x2 = u¯, with multipliers λ0 = p1/2 = p2, λ1 = 0 and λ2 = 0, satisfies the Kuhn-Tucker conditions so solves the problem.
Finally look for conditions under which x1 = 0, x2 = u¯ is a solution: the point is feasible and satisfies the constraints 2x1 + x2 ≥ u¯ and x1 ≥ 0 as equalities, so complementary slackness is satisfied for these constraints. The constraint x2 ≥ 0 is satisfied as a strict inequality, so complementary slackness requires λ2 = 0. The first order conditions are then satisfied if λ0 = p2 and λ1 = p1 − 2λ0 = p1 − 2p2, so non-negativity of the multiplier λ1 is satisfied if p1/p2 ≥ 2. Thus if p1/p2 ≥ 2 the point x1 = 0, x2 = u¯ with multipliers λ0 = p2, λ1 = p1 − 2p2 and λ2 = 0 satisfies the Kuhn-Tucker conditions so solves the problem.
The problem has more than one solution if p1/p2 = 2.
(g) The compensated demand functions are: if p1/p2 < 2, h1(p1, p2, u¯) = u¯/2 and h2(p1, p2, u¯) = 0; if p1/p2 > 2, h1(p1, p2, u¯) = 0 and h2(p1, p2, u¯) = u¯. If p1/p2 = 2 there are many values of x1 and x2 that solve the problem; as a function can take only one value for each value of its arguments, the compensated demand function does not exist for these prices. As demand depends only on the price ratio p1/p2 it is homogeneous of degree zero in p1 and p2 because (sp1)/(sp2) = p1/p2 for any s > 0. An increase in the price of good 1 from p1 to p′1 does not affect compensated demand for good 1 if either p1/p2 < p′1/p2 < 2 or 2 < p1/p2 < p′1/p2. If p1/p2 ≤ 2 < p′1/p2 demand for good 1 is either u¯/2 or somewhere between 0 and u¯/2 before the price change and 0 after the price change. In all cases demand for good 1 does not increase when its price increases. A similar argument applies to good 2 and its price p2.
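Because the objective and constraints are linear, this problem is a linear programme, which gives a quick numerical cross-check. The sketch below is an illustration only (not from the guide): it assumes SciPy and uses arbitrary example prices on either side of the critical ratio p1/p2 = 2.

# Illustrative LP check of the Activity 6.2 corner solutions (assumes SciPy):
# minimise p1*x1 + p2*x2 subject to 2*x1 + x2 >= u_bar, x1, x2 >= 0.
from scipy.optimize import linprog

u_bar = 8.0
for p1, p2 in [(1.0, 1.0), (3.0, 1.0)]:     # p1/p2 < 2 and p1/p2 > 2
    # linprog uses A_ub @ x <= b_ub, so write -(2*x1 + x2) <= -u_bar.
    res = linprog(c=[p1, p2], A_ub=[[-2.0, -1.0]], b_ub=[-u_bar],
                  bounds=[(0, None), (0, None)])
    print(p1 / p2, res.x, res.fun)
# Expected: (u_bar/2, 0) with cost p1*u_bar/2 when p1/p2 < 2,
#           (0, u_bar) with cost p2*u_bar when p1/p2 > 2.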


Solution to Activity 6.3  [The expenditure function with a Cobb-Douglas utility function]
With a Cobb-Douglas utility function u(x1, x2) = x1^a x2^b compensated demand is:
h1(p1, p2, u¯) = (ap2/bp1)^b u¯
h2(p1, p2, u¯) = (bp1/ap2)^a u¯
so the expenditure function is:
e(p1, p2, u¯) = p1 (ap2/bp1)^b u¯ + p2 (bp1/ap2)^a u¯.
This is a perfectly good answer to the problem of finding the expenditure function. However, the formula can be made to look much simpler by doing some algebra as follows:
e(p1, p2, u¯) = [p1 (ap2/bp1)^b + p2 (bp1/ap2)^a] u¯ = [(a/b)^b p1^(1−b) p2^b + (b/a)^a p1^a p2^(1−a)] u¯.
Because a + b = 1, p1^(1−b) = p1^a and p2^(1−a) = p2^b, so:
e(p1, p2, u¯) = [(a/b)^b + (b/a)^a] p1^a p2^b u¯.
This can be simplified further because, as a + b = 1:
(a/b)^b + (b/a)^a = a^b b^(−b) + a^(−a) b^a = a^(−a) b^(−b) (a^(a+b) + b^(a+b)) = a^(−a) b^(−b),
where the last step uses a^(a+b) + b^(a+b) = a + b = 1. This gives a neat expression for the expenditure function:
e(p1, p2, u¯) = a^(−a) b^(−b) p1^a p2^b u¯.
The expenditure function is increasing in utility because:
∂e(p1, p2, u¯)/∂u¯ = a^(−a) b^(−b) p1^a p2^b > 0.


The expenditure function is increasing in prices because:
∂e(p1, p2, u¯)/∂p1 = a a^(−a) b^(−b) p1^(a−1) p2^b u¯ = a^b b^(−b) p1^(−b) p2^b u¯ > 0
∂e(p1, p2, u¯)/∂p2 = b a^(−a) b^(−b) p1^a p2^(b−1) u¯ = a^(−a) b^a p1^a p2^(−a) u¯ > 0.
The expenditure function is homogeneous of degree one in prices because:
e(sp1, sp2, u¯) = a^(−a) b^(−b) (sp1)^a (sp2)^b u¯ = s a^(−a) b^(−b) p1^a p2^b u¯ = s e(p1, p2, u¯).
The second derivative matrix of the expenditure function has entries ∂²e/∂p1², ∂²e/∂p1∂p2, ∂²e/∂p2∂p1 and ∂²e/∂p2², which are:
[ −ab a^(−a) b^(−b) p1^(a−2) p2^b u¯        ab a^(−a) b^(−b) p1^(a−1) p2^(b−1) u¯ ]
[ ab a^(−a) b^(−b) p1^(a−1) p2^(b−1) u¯     −ab a^(−a) b^(−b) p1^a p2^(b−2) u¯    ]
= a^b b^a p1^(−b−1) p2^(−a−1) u¯ [ −p2²   p1p2 ]
                                  [ p1p2   −p1² ].

The term a^b b^a p1^(−b−1) p2^(−a−1) u¯ > 0, so the second derivative matrix is negative semidefinite if the matrix:
A = [ −p2²   p1p2 ]
    [ p1p2   −p1² ]
is negative semidefinite. This requires that A11 ≤ 0 and det A ≥ 0. Now A11 = −p2² < 0 and: det A = p1²p2² − (p1p2)(p1p2) = 0, so A is negative semidefinite, which implies that the expenditure function is concave as required. Finally:
∂e(p1, p2, u¯)/∂p1 = a^b b^(−b) p1^(−b) p2^b u¯ = (ap2/bp1)^b u¯ = h1(p1, p2, u¯)
∂e(p1, p2, u¯)/∂p2 = a^(−a) b^a p1^a p2^(−a) u¯ = (bp1/ap2)^a u¯ = h2(p1, p2, u¯)
so Shephard's Lemma is satisfied.

Solution to Activity 6.4  [The expenditure function with a linear utility function]
With the utility function u(x1, x2) = 2x1 + x2:


If p2 > p1/2, or equivalently p1/p2 < 2, the solution is x1 = u¯/2, x2 = 0, so h1(p1, p2, u¯) = u¯/2 and h2(p1, p2, u¯) = 0. Expenditure is:
e(p1, p2, u¯) = p1h1(p1, p2, u¯) + p2h2(p1, p2, u¯) = p1u¯/2.
If p2 < p1/2, or equivalently p1/p2 > 2, the solution is x1 = 0, x2 = u¯, so h1(p1, p2, u¯) = 0 and h2(p1, p2, u¯) = u¯. Expenditure is:
e(p1, p2, u¯) = p1h1(p1, p2, u¯) + p2h2(p1, p2, u¯) = p2u¯.
If p2 = p1/2, or equivalently p1/p2 = 2, any x1 and x2 with x1 ≥ 0, x2 ≥ 0 and 2x1 + x2 = u¯ solve the problem, so x2 = u¯ − 2x1 and expenditure is:
e(p1, p2, u¯) = p1x1 + p2(u¯ − 2x1) = p1x1 + (p1/2)(u¯ − 2x1) = p1u¯/2 = p2u¯.
This is a perfectly acceptable answer. However the answer can be written more concisely as:
e(p1, p2, u¯) = min(p1/2, p2) u¯.
The expenditure function is increasing in utility because:
∂e(p1, p2, u¯)/∂u¯ = min(p1/2, p2) > 0.
The expenditure function is non-decreasing in prices. Consider an increase in the price of good 1 from p1 to p′1 > p1:
If p1 < p′1 ≤ 2p2 then: e(p′1, p2, u¯) − e(p1, p2, u¯) = [(p′1 − p1)/2] u¯ > 0.
If p1 ≤ 2p2 ≤ p′1 then: e(p′1, p2, u¯) − e(p1, p2, u¯) = (p2 − p1/2) u¯ ≥ 0.
If 2p2 ≤ p1 < p′1 then: e(p′1, p2, u¯) − e(p1, p2, u¯) = (p2 − p2) u¯ = 0.
The expenditure function is homogeneous of degree one in prices because:
e(sp1, sp2, u¯) = min(sp1/2, sp2) u¯ = s min(p1/2, p2) u¯ = s e(p1, p2, u¯).
If p1 < 2p2, e(p1, p2, u¯) = p1u¯/2, so ∂e(p1, p2, u¯)/∂p1 = u¯/2 = h1(p1, p2, u¯) and ∂e(p1, p2, u¯)/∂p2 = 0 = h2(p1, p2, u¯).
If p1 > 2p2, e(p1, p2, u¯) = p2u¯, so ∂e(p1, p2, u¯)/∂p1 = 0 = h1(p1, p2, u¯) and ∂e(p1, p2, u¯)/∂p2 = u¯ = h2(p1, p2, u¯).

If p1 = 2p2 the expenditure function is not differentiable. Thus Shephard’s Lemma holds at points where the expenditure function is differentiable.


Solution to Activity 6.5
With a Cobb-Douglas utility function u(x1, x2) = x1^a x2^b where a > 0, b > 0 and a + b = 1, uncompensated demand is:
x1(p1, p2, m) = am/p1
x2(p1, p2, m) = bm/p2.
The indirect utility function is:
v(p1, p2, m) = u(x1(p1, p2, m), x2(p1, p2, m)) = (am/p1)^a (bm/p2)^b = a^a b^b p1^(−a) p2^(−b) m.
Compensated demand is:
h1(p1, p2, u¯) = (ap2/bp1)^b u¯
h2(p1, p2, u¯) = (bp1/ap2)^a u¯
and the expenditure function is:
e(p1, p2, u¯) = a^(−a) b^(−b) p1^a p2^b u¯.
Thus when m = e(p1, p2, u¯) = a^(−a) b^(−b) p1^a p2^b u¯, uncompensated demand for good 1 is:
x1(p1, p2, m) = am/p1 = a^(1−a) b^(−b) p1^(a−1) p2^b u¯ = a^b b^(−b) p1^(−b) p2^b u¯ = (ap2/bp1)^b u¯
which is indeed compensated demand for good 1. (Note that this argument uses the fact that 1 − a = b.) Similarly:
x2(p1, p2, m) = bm/p2 = a^(−a) b^(1−b) p1^a p2^(b−1) u¯ = a^(−a) b^a p1^a p2^(−a) u¯ = (bp1/ap2)^a u¯.
Also when m = e(p1, p2, u¯) = a^(−a) b^(−b) p1^a p2^b u¯ the indirect utility function is:
v(p1, p2, m) = a^a b^b p1^(−a) p2^(−b) m = a^a b^b p1^(−a) p2^(−b) a^(−a) b^(−b) p1^a p2^b u¯ = u¯
as implied by the general result that if m = e(p1, p2, u¯) then v(p1, p2, m) = u¯.


Solution to Activity 6.6
The indirect utility function for the Cobb-Douglas utility function is:
v(p1, p2, m) = a^a b^b p1^(−a) p2^(−b) m.
Thus:
∂v(p1, p2, m)/∂p1 = −a a^a b^b p1^(−a−1) p2^(−b) m
∂v(p1, p2, m)/∂m = a^a b^b p1^(−a) p2^(−b)
so:
−[∂v(p1, p2, m)/∂p1] / [∂v(p1, p2, m)/∂m] = am/p1 = x1(p1, p2, m)
as required.

Solution to Activity 6.7
From Theorem 22:
x1(p1, p2, e(p1, p2, u¯)) = h1(p1, p2, u¯)    (6.21)

where x1(p1, p2, m) is uncompensated demand, h1(p1, p2, u¯) is compensated demand, and e(p1, p2, u¯) is the expenditure function. This equation holds for all values of p1, p2 and u¯, so the partial derivatives of the two sides with respect to p2, holding p1 and u¯ constant, are equal. The derivative of the right-hand side with respect to the price of good 2 is:
∂h1(p1, p2, u¯)/∂p2.    (6.22)
The next step is to find the derivative of x1(p1, p2, e(p1, p2, u¯)) with respect to the price of good 2. Using the chain rule in the same way as in the original derivation of the Slutsky equation, this is:
∂x1(p1, p2, m)/∂p2 + [∂x1(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p2].    (6.23)
The derivatives in (6.22) and (6.23) must be the same, because they are the derivatives of the two sides of (6.21). Thus:
∂x1(p1, p2, m)/∂p2 + [∂x1(p1, p2, m)/∂m][∂e(p1, p2, u¯)/∂p2] = ∂h1(p1, p2, u¯)/∂p2.
Shephard's Lemma implies that when m = e(p1, p2, u¯):
∂e(p1, p2, u¯)/∂p2 = h2(p1, p2, u¯) = x2(p1, p2, m).
Substituting this in the previous equation and rearranging gives the Slutsky equation:
∂x1(p1, p2, m)/∂p2 = ∂h1(p1, p2, u¯)/∂p2 − [∂x1(p1, p2, m)/∂m] x2(p1, p2, m).


A reminder of your learning outcomes
By the end of this chapter, you should be able to:
define compensated demand
describe the relationship between compensated demand, uncompensated demand, income and substitution effects
outline the properties of compensated demand
state the definition and properties of the expenditure function
explain the relationship between the firm's cost minimisation problem and the consumer's expenditure minimisation problem
explain the relationship between the solutions to the consumer's utility maximisation problem and expenditure minimisation problem
describe the relationship between uncompensated demand, the indirect utility function, compensated demand and the expenditure function
state and derive Roy's identity
state and derive the Slutsky equation.

6.12

Sample examination questions

Note that many of the activities for this chapter could also be examination questions.
1. (a) Define the firm's conditional factor demand and the cost function and explain their relationship.
   (b) Prove that conditional factor demand is homogeneous of degree zero in input prices.
   (c) Is the cost function homogeneous in prices? If so, of what degree?
   (d) Find the conditional factor demand and cost function for the production function: f(K, L) = (K^ρ + L^ρ)^(1/ρ). (Assume that 0 < ρ < 1.)
2. (a) Define the indirect utility function v(p1, p2, m).
   (b) Define the expenditure function e(p1, p2, u¯).
   (c) Define the compensated demand function.
   (d) What is the relationship between the derivative ∂e(p1, p2, u¯)/∂p1 and the compensated demand function?
   (e) Write down and derive Roy's identity.


6.13

Comments on the sample examination questions

1. (d) The conditional factor demand functions for the production function f(K, L) = (K^ρ + L^ρ)^(1/ρ) are:
K(r, w, q) = r^(−1/(1−ρ)) q / (r^(−ρ/(1−ρ)) + w^(−ρ/(1−ρ)))^(1/ρ)
L(r, w, q) = w^(−1/(1−ρ)) q / (r^(−ρ/(1−ρ)) + w^(−ρ/(1−ρ)))^(1/ρ).
The cost function is:
c(r, w, q) = (r^(−ρ/(1−ρ)) + w^(−ρ/(1−ρ)))^(−(1−ρ)/ρ) q.
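These CES formulas are easy to sanity-check numerically. The sketch below is an illustration only (not part of the guide): it assumes SciPy, fixes ρ = 0.5 and arbitrary example input prices and output, and compares the closed forms with a direct numerical cost minimisation.

# Illustrative numerical check of the CES conditional factor demands and
# cost function quoted above (assumes SciPy; example parameter values).
from scipy.optimize import minimize

rho, r, w, q = 0.5, 2.0, 3.0, 10.0
s = rho / (1 - rho)

denom = (r ** (-s) + w ** (-s)) ** (1 / rho)
K = r ** (-1 / (1 - rho)) * q / denom
L = w ** (-1 / (1 - rho)) * q / denom
cost = (r ** (-s) + w ** (-s)) ** (-(1 - rho) / rho) * q

res = minimize(lambda x: r * x[0] + w * x[1], x0=[q, q],
               bounds=[(1e-9, None), (1e-9, None)],
               constraints={"type": "ineq",
                            "fun": lambda x: (x[0] ** rho + x[1] ** rho) ** (1 / rho) - q})
print("closed form:", K, L, cost)
print("numerical  :", res.x, res.fun)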


Chapter 7 Dynamic programming

7.1 Learning outcomes

By the end of this chapter and the relevant reading, you should be able to:
formulate a Bellman equation for the general dynamic maximisation problem
solve general dynamic problems by using the envelope theorem and factoring out the Bellman value function to obtain the intertemporal optimality conditions (Euler equations)
apply backward induction to explicitly solve for the sequence of Bellman value functions in a finite horizon maximisation problem
explain the logic of the 'guess and verify' approach to the solution of infinite horizon maximisation problems
use the above methods in the case of multiple state and control variables.

7.2

Essential reading

Sydsæter, Knut, Peter Hammond, Atle Seierstad, Arne Strøm Further Mathematics for Economic Analysis. Chapter 12.

7.3

Further reading

Rangarajan, S. A first course in optimization theory. Chapters 11 and 12. Sargent, T.J. Dynamic macroeconomic theory. Chapter 1. Ljungqvist, L. and T.J. Sargent Recursive macroeconomic theory.

7.4

Introduction

In this chapter we study the solution techniques for the dynamic optimisation problems in discrete time, traditionally known as dynamic programming problems. We use these techniques to analyse the central problem of macroeconomic dynamics: the optimal consumption/saving decision, complicated by habit formation, durable goods, and variable labour supply.


7.5

The optimality principle

Suppose you are stranded on an uninhabited island with nothing but gingerbreads to eat. You have to survive for T < ∞ periods (say, days), denoted t = 0, 1, 2, . . . , T. At the beginning of your stay you are endowed with k¯ > 0 gingerbreads. Gingerbreads are storable (do not perish easily), so in any given day a gingerbread that is not eaten can be saved for later. Your period t utility is given by ut(ct) and is potentially time-varying (this may take the form of exponential discounting considered in the next section, or other forms). Your intertemporal utility for the duration of your stay on the island is Σ_{t=1}^{T} ut(ct). In mathematical terms the objective you face on the island is to maximise the intertemporal utility as follows:
max_{ct} Σ_{t=1}^{T} ut(ct)
s.t. Σ_t ct = k¯ and

ct ≥ 0 ∀ t ≤ T.
Let us introduce a new variable called capital, with the interpretation of the amount of gingerbreads left on a given date. This capital evolves according to a simple equation: if at the end of period t − 1 you have kt gingerbreads left, then in period t you eat ct of those and store kt+1 = kt − ct. In the terminology of dynamic programming, kt is the state variable and ct is the control variable for this problem. With the new set of state variables the problem can be equivalently rewritten as maximising the intertemporal utility subject to the conditions kt+1 = kt − ct for all t = 0, 1, 2, . . . , T (the evolution equations for 'capital'), k1 = k¯ (the initial condition), and non-negativity constraints for all ct and kt:
max_{ct} Σ_{t=1}^{T} ut(ct)    (7.1)
s.t. kt+1 = kt − ct ∀ t ≤ T, kt, ct ≥ 0 ∀ t ≤ T, k1 = k¯, kT+1 ≥ 0.
In what follows we will often omit the non-negativity constraints for ct and kt as they are straightforward. We will, however, retain the last of these constraints, kT+1 ≥ 0, as it prevents consuming non-existent gingerbreads and in general ensures that the rest of the non-negativity constraints are actually satisfied. If T is finite, then one way to solve the problem is to use the Lagrange method you learned in Chapter 2 of this guide, setting up the Lagrangian:
L = Σ_{t=1}^{T} ut(ct) + Σ_{t=1}^{T} λt(kt − ct − kt+1) + λT+1 kT+1

and using k1 = k¯. This is quite unwieldy due to the many constraints and may be further exacerbated by T = ∞. We will consider instead the dynamic programming


approach to the solution of this simple problem. To grasp the main idea behind the approach imagine that the optimal consumption plan denoted by {c∗1 , c∗2 , . . . , c∗T } has been already computed. Consider what happens if at the end of some intermediate period τ < T you have forgotten the rest of the original optimal plan {c∗τ +1 , . . . , c∗T }. You would have to reoptimise the revised objective:

max {ct }

T X

ut (ct )

t=τ +1

s.t. kt+1 = kt − ct ∀ t = τ + 1, . . . , T kτ +1 given kτ +1 ≥ 0. Notice that kτ +1 is uniquely determined by your original plan {c∗1 , c∗2 , . . . , c∗τ } prior to the period τ + 1. The question is, would you be able to uncover the remainder of this original consumption plan {c∗τ +1 , c∗τ +2 , . . . , c∗T }? To pose the question even more starkly, suppose you are still in possession of the original plan {c∗τ +1 , c∗τ +2 , . . . , c∗T } but decide that you might improve on it (after all your past decisions are not binding for the present). Would you be able to do so? Suppose you alter yourP plan by choosing a sequence {˜ cP ˜T } distinct from τ +1 , . . . , c T T ∗ ∗ ∗ ct ) were bigger than t=τ +1 ut (c∗t ), we would have: {cτ +1 , cτ +2 , . . . , cT }. If t=τ +1 ut (˜ τ X

ut (c∗t ) +

t=1

T X

ut (˜ ct ) >

t=τ +1

T X

ut (c∗t )

t=1

which contradicts the assumption that {c∗1 , c∗2 , . . . , c∗T } was a maximising sequence for (7.1). Thus the important realisation emerges: Proposition 11 Holding {c∗1 , c∗2 , . . . , c∗τ } fixed, the sequence {c∗τ +1 , . . . , c∗T } maximises subject to the capital constraints.

PT

t=τ +1

ut (ct )

Definition 30 Define the value function V (·) as: V (kτ , τ ) = max {ct }

T X

ut (ct )

t=τ

s.t. kt+1 = kt − ct kt , ct ≥ 0 ∀ t ≤ T kT +1 ≥ 0, kτ given. Then, using the asterisk notation for the optimal consumption sequence and omitting


the constraints: V (k1 , 1) = max {ct }

=

τ X

T X

ut (ct ) =

t=1

=

ut (c∗t )

t=1 T X

ut (c∗t ) +

ut (c∗t ) =

τ X

T X

ut (c∗t ) + max

ut (ct )

t=τ +1

t=1

t=τ +1

t=1 τ X

T X

ut (c∗t ) + V (kτ +1 , τ + 1)

t=1

=

( T X

max

kτ +1=k− ¯ Pτ

t=1 ct

) ut (ct ) + V (kτ +1 , τ + 1) .

t=1

The last step involves the realisation that by altering the first few elements of the sequence c1 , . . . , cτ you change the amount of capital left for subsequent bins (or periods in the intertemporal interpretation). The Bellman value function summarises the information about the future utility of consumption given the level of the state variable and the notion that the future consumption choices will be made optimally even from the present point of view. Proposition 12 Optimality principle for the Bellman value function: ( τ ) X V (k1 , 1) = max ut (ct ) + V (kτ +1 , τ + 1)

(7.2)

t=1

s.t. kτ +1 = kτ − cτ . Notice that the Bellman value function takes two arguments: the state variable and the time period. V (k, 1) and V (k, T − 1) are admittedly vastly different things for any T > 2, since to survive on any given amount of food for a day or for a year is very different. The special case of (7.2) for two consecutive periods is known as the Bellman equation. Definition 31 The Bellman equation for problem (7.1), for any period t: V (k, t) = max {u(ct ) + V (k − ct , t + 1)} . ct

Example 7.1 V (k1 , 1) = max c1 {u(c1 ) + V (k2 , 2)} k2 = k1 − c1 .

(7.3)

Let us apply the envelope theorem to this equation: denoting by λ2 the Lagrange

126

7.6. A more general dynamic problem and the optimality principle

multiplier at the single constraint: ∂V (k2 , 2) = λ2 . ∂k2 Recall that the interpretation given to the Lagrange multiplier in Chapter 2 was the marginal value of relaxing the associated constraint. Here it is shown that this value is equated to the marginal value of an extra unit of capital left for future consumption, subject to the optimising behaviour in the future. Notice that (7.3) would be a univariate maximisation problem if V (k2 , 2) were a known function. It captures the trade-off between the marginal utility of consumption today (c1 ) and the marginal value of a saved gingerbread.

7.6

A more general dynamic problem and the optimality principle

In most macroeconomic applications studying consumption and saving choice, the time-varying nature of the utility function takes the concrete form of exponential discounting: ut (ct ) = δ t u(ct ), for some non-decreasing utility function u(·) and 0 < δ < 1. The production technology, on the other hand, may be more sophisticated than simple storage, with the evolution equation for capital kt+1 = f (kt ) − ct for a general regular production function f .1 The intertemporal maximisation problem then becomes: max {ct }

T X

δ t u(ct )

t=1

s.t. kt+1 = f (kt ) − ct ¯ kT +1 ≥ 0. k1 = k, As before, we omit the non-negativity constraints when no confusion arises. In this instance we define the Bellman value function as: V (k, τ ) = max {ct }

T X

δ t−τ u(ct )

t=τ

s.t. kt+1 = f (kt ) − ct ¯ kT +1 ≥ 0. k1 = k, Notice that the Bellman value function involves discounting (the term δ t−τ ) from the perspective of period τ (for example in period 10 you do not discount your utility in period 10, you discount the utility in period 11 at rate δ, etc.). Now confirm for yourself that the Bellman equation is written as: V (kt , t) = max{u(ct ) + δV (kt+1 , t + 1)} ct

(7.4)

s.t. kt+1 = f (kt ) − ct . 1

Here the production uses only one factor, capital, but it is easy to incorporate more factors (for example labour), although at the expense of having more than one control variable. You will see examples of this in the exercises for this chapter.

127

7. Dynamic programming

What can we do with this equation? Taking the first order conditions for maximum yields: u0 (ct ) = δV 0 (kt+1 , t + 1). (7.5) This seems to be of limited use, however, since the functional form of the sequence of functions V (k, 1), V (k, 2), . . . , V (k, T ) is not known prior to solving the problem. To overcome this difficulty, a few different approaches exist. One can either factor out (get rid of) V (·) from (7.5) or attempt to find V (·) explicitly.

7.6.1

Method 1. Factoring out V (·)

Finite horizon (T < ∞). Differentiate the Bellman equation (7.4) with respect to kt using the envelope theorem: V 0 (kt , t) = δV 0 (kt+1 , t + 1) · f 0 (kt )

(7.6)

where V 0 (k, t) denotes the derivative of the value function for time period t with respect to the capital argument k. Notice that since V (k, t) and V (k, t + 1) are in general distinct functions, their derivatives are likewise different. Recall the FOC from (7.5) and the fact that they hold for all t, therefore can be shifted back one period: u0 (ct ) = δV 0 (kt+1 , t + 1) u0 (ct−1 ) = δV 0 (kt , t).

(7.7)

Combining the three equations (7.5), (7.6) and (7.7) allows us to factor out both occurrences of V 0 : u0 (ct−1 ) = δu0 (ct ) · f 0 (kt ). This is known as the Euler equation. One can get a nice economic interpretation of the equation by rewriting it as: u0 (ct−1 ) = f 0 (kt ). δu0 (ct ) This way the marginal rate of substitution in consumption on the left is equated to the marginal rate of transformation of the consumption good on the right. Infinite horizon (T = ∞) Now consider solving: max {ct }

∞ X

δ t u(ct )

(7.8)

t=1

s.t. kt+1 = f (kt ) − ct . The infinite horizon case turns out to be easier to deal with. Observe that quintessential stationarity of the problem setup and therefore, of the value function: whether one has k¯ units of capital in period 1 or in period 1001 makes no difference when one faces the

Formally, V(k̄, 1) = V(k̄, 1000) = V(k̄, t) for all k̄, t. This implies that instead of a sequence of different value functions as in the finite horizon case, there is only one value function with the single argument being capital: V(k). The Bellman equation for (7.8) is written in exactly the same way as before:
\[
V(k_t) = \max_{c_t}\{u(c_t) + \delta V(f(k_t) - c_t)\}
\]
and the factoring-out procedure may be likewise replicated to yield the Euler equation:
\[
\frac{u'(c_{t-1})}{\delta u'(c_t)} = f'(k_t).
\]
Notice that the Euler equations in both the finite and infinite horizon cases are identical. Moreover, they are independent of the initial level of capital in t = 1, k̄. This is because the Euler equations are inherently about the differences in consumption across adjacent periods, and they do not give any information about the level of consumption (this point will be more clearly seen in the exercises).

Activity 7.1   The point of this problem is to think about the correct set of state variables.
\[
\max \sum_{t=1}^{T} \beta^t u(c_t - \gamma c_{t-1})
\quad \text{s.t. } w_{t+1} = (1+r)(w_t + y_t - c_t).
\]
Here c_t is consumption at time t, w_t are financial assets yielding constant net interest r, and y_t is labour income.

(a) Set up the Bellman equation.

(b) Applying the envelope theorem (twice), derive the Euler equation.

7.6.2 Method 2. Finding V

Finite horizon case and backward induction

The advantage of having a finite horizon is actually knowing what the value function for the last period looks like. Since there is going to be no more consumption after time T, it makes sense to consume in period T all the output produced from the capital that is left: c*_T = f(k_T), so that V(k_T, T) = u(f(k_T)). Applying the Bellman equation for period T − 1 allows us to recover the functional form of the Bellman function in the preceding period (see Figure 7.1):
\[
V(k_{T-1}, T-1) = \max\{u(c_{T-1}) + \delta V(k_T, T)\} = \max\{u(c_{T-1}) + \delta u(f(k_T))\}.
\]
This can be solved for any given u(·) and f(·). Once V(·, T−1) is known, it is easy to recover V(·, T−2) from the Bellman equation for period T − 2, etc.

Figure 7.1: Recovering value functions by backward induction.

Example 7.2

Suppose u(c) = log c, f(k) = A k^α. Then:
\[
V(k_T, T) = \log(A k_T^\alpha) = \log A + \alpha \log k_T
\]
\[
V(k_{T-1}, T-1) = \max_{c_{T-1}}\{\log c_{T-1} + \delta[\log A + \alpha \log(A k_{T-1}^\alpha - c_{T-1})]\}.
\tag{7.9}
\]
The first order condition is:
\[
\frac{1}{c_{T-1}} - \frac{\alpha\delta}{A k_{T-1}^\alpha - c_{T-1}} = 0.
\]
Substitute c*_{T-1} = \frac{A k_{T-1}^\alpha}{1+\alpha\delta} back into the Bellman equation (7.9):
\[
V(k_{T-1}, T-1) = \alpha(1 + \alpha\delta)\log k_{T-1} + D
\]
where the term D is a constant:
\[
D = (1 + \delta + \alpha\delta)\log A + \alpha\delta\log(\alpha\delta) - (1 + \alpha\delta)\log(1 + \alpha\delta).
\]
This two-step procedure can now be iteratively repeated to find V(k, T−2), V(k, T−3), etc.
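The same two-step recursion can also be carried out numerically. The sketch below is not part of the guide: the parameter values, horizon and capital grid are arbitrary illustrative assumptions. It discretises the capital stock and iterates the Bellman equation backwards for the log-utility, Cobb–Douglas technology of Example 7.2.

```python
import numpy as np

# Backward induction on a grid for u(c) = log(c), f(k) = A * k**alpha.
# All numbers below are illustrative assumptions, not values from the guide.
A, alpha, delta, T = 1.2, 0.3, 0.9, 5
k_grid = np.linspace(0.05, 2.0, 200)

V = np.log(A * k_grid**alpha)                    # V(k, T): consume all output
for t in range(T - 1, 0, -1):                    # recover V(., T-1), ..., V(., 1)
    V_new = np.empty_like(V)
    for i, k in enumerate(k_grid):
        c = np.linspace(1e-6, A * k**alpha - 1e-6, 400)   # feasible consumption
        k_next = A * k**alpha - c
        V_next = np.interp(k_next, k_grid, V)    # interpolate V(., t+1) off-grid
        V_new[i] = np.max(np.log(c) + delta * V_next)
    V = V_new
# V now approximates V(k, 1); plotted against log(k_grid) it is (approximately)
# a straight line, as the analytical solution predicts.
```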

Activity 7.2

Consider the problem:
\[
\max \sum_{t=1}^{T} \log a_t
\quad \text{s.t. } \sum_{t=1}^{T} a_t = \bar{k}, \quad a_t \geq 0 \ \forall t.
\]
(a) Set up the Bellman equation.

(b) Solve by backward induction.

Activity 7.3

Consider the problem:
\[
\max \sum_{t=1}^{T} \frac{1}{1 + a_t}
\quad \text{s.t. } \sum_{t=1}^{T} a_t = \bar{k}, \quad a_t \geq 0 \ \forall t.
\]
Propose the quickest method of solving this problem. Solve it. Be very careful! (Hint: check the SOC.)

Activity 7.4   Consider a general intertemporal problem of an agent endowed with wealth W_0 in period 0. This wealth grows at the constant net interest rate r and the agent has no other sources of income such as labour income. His maximisation problem therefore is:
\[
\max \sum_{t=0}^{T} \delta^t u(c_t)
\quad \text{s.t. } \sum_{t=0}^{T} \frac{c_t}{(1+r)^t} = W_0, \quad c_t \geq 0.
\]
Assume that the utility function is twice differentiable and strictly concave (u'' < 0).

(a) In light of what you learned solving Activity 7.3 above, what is your intuition about the solution?

(b) Derive the Euler equations. Was your intuition correct?

(c) For what values of r and δ can you obtain an exact answer without knowing the functional form of u?

(d) Now assume that u(c_t) = c_t. Provide the complete solution to the problem for any δ and r.

Guess and verify approach for the infinite horizon case

When T = ∞ one cannot start in the last period and work backwards, since there is no last period. Another possibility is to just guess a solution, and then verify that it is correct. In practice, this method works only for a limited class of problems, where utility/production functions are logarithmic, exponential or polynomial, but this is enough for our purposes. The main idea is best grasped by means of an example.

Example 7.3

Let us again suppose that the utility is logarithmic and consider:
\[
\max \sum_{t=0}^{\infty} \delta^t \log c_t
\quad \text{s.t. } k_{t+1} = A k_t^\alpha - c_t, \quad k_0 = \bar{k}_0.
\]
The Bellman equation for this problem is:
\[
V(k_t) = \max_{c_t}\{\log c_t + \delta V(A k_t^\alpha - c_t)\}.
\tag{7.10}
\]
Step 1. Guess the functional form of V as V(k) = E + F log k for some as yet undetermined coefficients E, F. Why a logarithm? There seems to be no other reason than experience. Why have an extra constant term E? Again, from experience, but also one would like to allow sufficient flexibility for a guessed functional form. More remarks on this later. If the guess is correct, then the Bellman equation (7.10) should hold for V(k) evaluated at k_{t+1} = A k_t^\alpha − c_t:
\[
V(k_t) = E + F\log k_t = \max_{c_t}\{\log c_t + \delta E + \delta F \log(A k_t^\alpha - c_t)\}.
\]
The necessary condition for the maximum implies:
\[
\frac{1}{c_t} = \frac{\delta F}{A k_t^\alpha - c_t}
\quad\Rightarrow\quad
c_t = \frac{A}{1 + \delta F}\, k_t^\alpha.
\]
Notice that there are still unknown bits (F) in this equation; however, we can still substitute it into the evolution equation:
\[
k_{t+1} = A k_t^\alpha - c_t = \frac{\delta A F}{1 + \delta F}\, k_t^\alpha.
\]
Step 2. Here we verify our guess. If it was correct, then (7.10) should hold for V(k) = E + F log k on both the left- and right-hand sides:
\[
E + F\log k_t = \log c_t + \delta E + \delta F \log\!\left(\frac{\delta A F}{1 + \delta F}\, k_t^\alpha\right)
= \log\frac{A}{1 + \delta F} + \delta E + \delta F \log\frac{\delta A F}{1 + \delta F} + (1 + \delta F)\alpha\log k_t.
\]
This should hold as an identity (for all k_t). Hence, gather the similar terms on the RHS and the LHS (constants apart from logs) and solve for E, F:
\[
F = (1 + \delta F)\alpha, \qquad
E = \log\frac{A}{1 + \delta F} + \delta E + \delta F \log\frac{\delta A F}{1 + \delta F}.
\]

This system of two equations is solved by:
\[
F = \frac{\alpha}{1 - \delta\alpha}, \qquad
E = \frac{1}{1 - \delta}\left[\log\bigl(A(1 - \delta\alpha)\bigr) + \frac{\delta\alpha}{1 - \delta\alpha}\log(\delta\alpha A)\right].
\]
The mere fact that we are able to find parameter values that make the Bellman equation an identity confirms that our initial guess was correct. The consumption on the optimal path is given by:
\[
c = \frac{A}{1 + \delta F}\, k_t^\alpha = (1 - \delta\alpha) A k_t^\alpha,
\]
a function of the current level of the state variable. This is called a policy function.
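As a quick numerical sanity check of the guess-and-verify result (not part of the guide; the parameter values are arbitrary assumptions), one can confirm that V(k) = E + F log k reproduces the maximised right-hand side of the Bellman equation:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the guide)
A, alpha, delta = 1.5, 0.36, 0.95

F = alpha / (1 - delta * alpha)
E = (np.log(A * (1 - delta * alpha))
     + delta * alpha / (1 - delta * alpha) * np.log(delta * alpha * A)) / (1 - delta)

def V(k):
    return E + F * np.log(k)

for k in (0.5, 1.0, 2.0):
    c = np.linspace(1e-6, A * k**alpha - 1e-6, 100_000)    # candidate consumption
    rhs = np.max(np.log(c) + delta * V(A * k**alpha - c))  # maximised Bellman RHS
    print(f"k = {k}: V(k) = {V(k):.6f},  Bellman RHS = {rhs:.6f}")
# The two columns agree (up to the grid approximation), and the maximiser is close
# to the policy function c = (1 - delta*alpha) * A * k**alpha.
```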

How does one hazard a guess for a value function? There are some empirical regularities, in that power utility functions generally result in power value functions, polynomials in polynomials, logs in logs, exponentials and the like. However, beyond these simple classes of functional forms your luck is likely to be very thin. Fortunately, the examination questions won't make you hazard guesses that are impossible to crack!

Another observation concerns over- and underparameterisation. To overparameterise the objective in the preceding example would mean, for instance, to add an extra term G k^α to the guess and work through the algebra as above. If we did so we would necessarily have found that G = 0 (check that you understand why). On the other hand, underparameterising the guess by omitting a term that belongs there, say E, would result in a failed identity in the last step of the calculation: there would exist no value of F that would make the Bellman equation hold for all k.

Activity 7.5   Consider the intertemporal optimisation problem of an individual endowed with wealth W_0 in period 0 and no labour income. The individual gets coupon payments on the bank balance, at a gross interest rate R:
\[
\max \sum_{t=0}^{\infty} \delta^t \frac{c_t^{1-\sigma}}{1-\sigma}
\quad \text{s.t. } W_{t+1} = R(W_t - c_t).
\]
(a) Write the Bellman equation.

(b) Notice that since this is an infinite-horizon problem, one cannot do backward induction. Guess a value function isomorphic to the utility function: V(W) = \frac{A W^{1-\sigma}}{1-\sigma}. Enter this into the Bellman equation and compute the policy function.

(c) Now go back to the Bellman equation and verify your guess. Was it correct? Provide the final expression for consumption as a function of the current state variable, and the value function itself.

Example 7.4   Consider an intertemporal consumption choice problem in which utility is derived from consumption of two (sets of) goods, non-durable and durable:
\[
\max_{\{c_t\}} \sum_{t=1}^{T} \beta^t u(c_t, s_{t+1})
\quad \text{s.t. } b_{t+1} = (1+r)b_t + y - c_t - d_t p_t, \quad s_{t+1} = s_t(1-\delta) + d_t.
\]
Here the non-durable consumption is still denoted by c_t; b_t stands for bonds (earning constant interest rate r); s_t is the stock of durable goods, which depreciates at constant rate δ and gets supplemented by purchases d_t. The sequence of prices of durables p_t (denominated in units of the non-durable good) is assumed to be known in advance.

The value function V(·, t) should be a function of all endogenous variables that influence the outcome of optimisation at date t – including b_t and s_t: V(b_t, s_t, t). One could include the expected sequence of prices of durables, {p_τ}_t^T, as well, as in V(b_t, s_t, {p_τ}_t^T, t); however, notice that the price sequence is exogenous for the consumer and hence not a true state variable. The Bellman equation is written as follows, taking care to include the evolution equations for both state variables:
\[
V(b_t, s_t, t) = \max_{c_t, d_t}\{u(c_t, s_{t+1}) + \beta V(b_{t+1}, s_{t+1}, t+1)\}
\quad \text{s.t. } b_{t+1} = (1+r)b_t + y - c_t - d_t p_t, \quad s_{t+1} = s_t(1-\delta) + d_t.
\]
An easy way to deal with the equality constraints is to substitute them into the objective function:
\[
V(b_t, s_t, t) = \max_{c_t, d_t}\{u(c_t, s_t(1-\delta) + d_t) + \beta V((1+r)b_t + y - c_t - d_t p_t,\; s_t(1-\delta) + d_t,\; t+1)\}.
\]
The FOCs with respect to the new purchases of non-durables and durables are:
\[
u_c(c_t, s_{t+1}) = \beta V_b(b_{t+1}, s_{t+1})
\tag{7.11}
\]
\[
u_s(c_t, s_{t+1}) - \beta p_t V_b(b_{t+1}, s_{t+1}) + \beta V_s(b_{t+1}, s_{t+1}) = 0.
\tag{7.12}
\]
Meanwhile, two applications of the envelope theorem generate (here V_b and V_s stand for partial derivatives of the value function):
\[
V_b(b_t, s_t) = \beta V_b(b_{t+1}, s_{t+1})(1+r)
\tag{7.13}
\]
\[
V_s(b_t, s_t) = (1-\delta)[u_s(c_t, s_{t+1}) + \beta V_s(b_{t+1}, s_{t+1})].
\tag{7.14}
\]
Shift (7.11) back one period and combine it with (7.13):
\[
u_c(c_{t-1}, s_t) = \beta V_b(b_t, s_t), \qquad u_c(c_t, s_{t+1}) = \beta V_b(b_{t+1}, s_{t+1})
\]
so that:
\[
\frac{u_c(c_{t-1}, s_t)}{\beta u_c(c_t, s_{t+1})} = \frac{V_b(b_t, s_t)}{\beta V_b(b_{t+1}, s_{t+1})} = (1+r).
\]
This is the usual Euler equation, which says that the marginal rate of intertemporal substitution should equal the relative price of consuming across time. (7.12) implies:
\[
u_s(c_t, s_{t+1}) + \beta V_s(b_{t+1}, s_{t+1}) = \beta p_t V_b(b_{t+1}, s_{t+1}) = p_t u_c(c_t, s_{t+1})
\]
while substituting this into (7.14) gives:
\[
V_s(b_t, s_t) = (1-\delta) p_t u_c(c_t, s_{t+1})
\]
and back into (7.12):
\[
u_s(c_t, s_{t+1}) - \beta p_t V_b(b_{t+1}, s_{t+1}) + \beta(1-\delta) p_{t+1} u_c(c_{t+1}, s_{t+2}) = 0.
\]
Substitute the \beta V_b(b_{t+1}, s_{t+1}) term from (7.11) and u_c(c_{t+1}, s_{t+2}) from the Euler equation above:
\[
u_s(c_t, s_{t+1}) - p_t u_c(c_t, s_{t+1}) + \beta(1-\delta) p_{t+1}\,\frac{u_c(c_t, s_{t+1})}{\beta(1+r)} = 0
\]
or, rearranging, we obtain an intratemporal optimality condition centred on the MRS between the durables and non-durables:
\[
u_s(c_t, s_{t+1}) + u_c(c_t, s_{t+1})\left[\frac{(1-\delta)p_{t+1}}{1+r} - p_t\right] = 0
\quad\Longleftrightarrow\quad
\frac{u_s(c_t, s_{t+1})}{u_c(c_t, s_{t+1})} = p_t - \frac{(1-\delta)p_{t+1}}{1+r}.
\]
This shows that the true shadow price of consuming the services of durable goods is not p_t; it is q_t = p_t − (1−δ)p_{t+1}/(1+r), which can also be thought of as an implicit rental rate of durables. Think of it this way: the opportunity cost of consuming a unit of durables today is p_t minus the discounted (1/(1+r)), depreciated (1−δ) resale price tomorrow (p_{t+1}).
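To get a feel for the magnitudes involved, here is a tiny numerical illustration of the implicit rental rate; the numbers are invented for illustration, not taken from the guide.

```python
# Implicit rental rate of durables: q_t = p_t - (1 - delta) * p_{t+1} / (1 + r)
p_t, p_next = 100.0, 100.0    # hypothetical durable prices today and tomorrow
delta, r = 0.10, 0.05         # hypothetical depreciation and interest rates

q_t = p_t - (1 - delta) * p_next / (1 + r)
print(round(q_t, 2))          # about 14.29, far below the purchase price p_t
```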

7.7 Solutions to activities

Solution to Activity 7.1

(a) This is the habit-formation model of consumption in which people suffer excessively from reductions in consumption levels (as if, having developed a habit for high levels of consumption, they suffer from withdrawal). Any variable affecting the value of your problem is a state variable. Here there are two state variables that affect current behaviour, c_{t−1} and w_t:
\[
V(c_{t-1}, w_t, t) = \max\{u(c_t - \gamma c_{t-1}) + \beta V(c_t, w_{t+1}, t+1)\}
\quad \text{s.t. } w_{t+1} = (1+r)(w_t + y_t - c_t).
\]

(b) Now plug the constraint into the objective function:
\[
V(c_{t-1}, w_t, t) = \max\{u(c_t - \gamma c_{t-1}) + \beta V(c_t, (1+r)(w_t + y_t - c_t), t+1)\}
\]
and take the FOC:
\[
u'(c_t - \gamma c_{t-1}) = \beta(1+r)\frac{\partial V(\cdot, t+1)}{\partial w_{t+1}} - \beta\frac{\partial V(\cdot, t+1)}{\partial c_t}.
\]
Shift this back one period:
\[
u'(c_{t-1} - \gamma c_{t-2}) = \beta(1+r)\frac{\partial V(\cdot, t)}{\partial w_t} - \beta\frac{\partial V(\cdot, t)}{\partial c_{t-1}}.
\]
Now apply the envelope theorem twice:
\[
\frac{\partial V(c_{t-1}, w_t, t)}{\partial w_t} = \beta(1+r)\frac{\partial V(c_t, w_{t+1}, t+1)}{\partial w_{t+1}}, \qquad
\frac{\partial V(c_{t-1}, w_t, t)}{\partial c_{t-1}} = -\gamma u'(c_t - \gamma c_{t-1})
\]
and combine the preceding four equations:
\[
\frac{u'(c_t - \gamma c_{t-1}) - \beta\gamma u'(c_{t+1} - \gamma c_t)}{u'(c_{t-1} - \gamma c_{t-2}) - \beta\gamma u'(c_t - \gamma c_{t-1})}
= \frac{\partial V(\cdot, t+1)/\partial w_{t+1}}{\partial V(\cdot, t)/\partial w_t}
= \frac{1}{\beta(1+r)}.
\]
Solution to Activity 7.2

\[
\max \sum_{t=1}^{T} \log a_t
\quad \text{s.t. } \sum_{t=1}^{T} a_t = \bar{k}, \quad a_t \geq 0 \ \forall t.
\]
(a) Rewrite in the dynamic programming form:
\[
V(k_t, t) = \max_{a_t, k_{t+1}}\{\log a_t + V(k_{t+1}, t+1)\}
\quad \text{s.t. } k_{t+1} = k_t - a_t, \quad a_t \geq 0, \quad k_t \geq 0.
\]
(b) Solve by backward induction. For t = T, a*_T = k_T and V(k_T, T) = log k_T. For t = T − 1:



\[
V(k_{T-1}, T-1) = \max\{\log a_{T-1} + \log(k_{T-1} - a_{T-1})\}
\quad\Rightarrow\quad
a^*_{T-1} = \frac{k_{T-1}}{2}, \qquad V(k_{T-1}, T-1) = 2\log\frac{k_{T-1}}{2}
\]
etc. In the end:
\[
a^*_t \equiv \frac{\bar{k}}{T} \ \forall t
\qquad\text{and}\qquad
V(\bar{k}, 1) = T\log\frac{\bar{k}}{T}.
\]
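A quick numerical illustration (with invented numbers, not from the guide) that the equal split beats other feasible allocations of k̄ across the T periods:

```python
import numpy as np

k_bar, T = 10.0, 4
equal = np.full(T, k_bar / T)            # the backward-induction solution
other = np.array([4.0, 3.0, 2.0, 1.0])   # another feasible split summing to k_bar
print(np.log(equal).sum(), np.log(other).sum())   # 3.665... > 3.178...
```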

Solution to Activity 7.3

Suppose one tries to use the Euler equation as in Activity 7.2:
\[
\frac{1}{(1+a_t)^2} = \frac{1}{(1+a_{t+1})^2}.
\]
This gives the impression that a_t = k̄/T is the optimum. Check the second order conditions in either any backward induction step, or the Euler equation:
\[
\frac{2}{(1+a_t)^3} + \frac{2}{(1+a_{t-1})^3} > 0.
\]
The second derivative of the objective function is positive, hence a_t = k̄/T is in fact the arg min, with the intertemporal utility of:
\[
T\left(\frac{1}{1 + \bar{k}/T}\right) = \frac{T^2}{T + \bar{k}}.
\]
The maximum is achieved by setting any one a_τ = k̄ and a_t = 0 for t ≠ τ, yielding:
\[
\frac{1}{1+\bar{k}} + (T-1).
\]
Solution to Activity 7.4
\[
\max \sum_{t=0}^{T} \delta^t u(c_t)
\quad \text{s.t. } \sum_{t=0}^{T} \frac{c_t}{(1+r)^t} = W_0, \quad c_t \geq 0.
\]
(a) One might be tempted to guess that the optimal solution involves perfect smoothing: c_t = constant.

(b) The Euler equation:
\[
\frac{u'(c_{t-1})}{\delta u'(c_t)} = (1+r).
\]
Consumption need not be constant in general.

(c) Only for δ = 1/(1+r) can one obtain an exact answer: u'(c_{t−1}) = u'(c_t) implies c_{t−1} = c_t for any strictly concave u(·).

(d) u(c_t) = c_t. For δ < 1/(1+r) the solution is c*_0 = W_0 with the rest zero; for δ > 1/(1+r) it is c*_T = W_0(1+r)^T with the rest zero. For δ = 1/(1+r) any feasible sequence of c_t ≥ 0 is a maximising sequence.

Solution to Activity 7.5

\[
\max \sum_{t=0}^{\infty} \delta^t \frac{c_t^{1-\sigma}}{1-\sigma}
\quad \text{s.t. } W_{t+1} = R(W_t - c_t).
\]
(a)
\[
V(W) = \max_c\left\{\frac{c^{1-\sigma}}{1-\sigma} + \delta V(R[W-c])\right\}.
\]
(b) Guess:
\[
V(W) = \frac{A W^{1-\sigma}}{1-\sigma}.
\]
Then:
\[
V(W) = \max_c\left\{\frac{c^{1-\sigma}}{1-\sigma} + A\delta\,\frac{(R[W-c])^{1-\sigma}}{1-\sigma}\right\}.
\]
FOC:
\[
c^{-\sigma} = A\delta R^{1-\sigma}(W-c)^{-\sigma}
\quad\Rightarrow\quad
c = \frac{W}{1 + (\delta A R^{1-\sigma})^{1/\sigma}}.
\]
(c) Verify. Denote B = (\delta A R^{1-\sigma})^{1/\sigma}, so that c = W/(1+B) and W − c = WB/(1+B). The guess must satisfy:

\[
\frac{A W^{1-\sigma}}{1-\sigma}
= \frac{1}{1-\sigma}\left(\frac{W}{1+B}\right)^{1-\sigma}
+ \frac{A\delta R^{1-\sigma}}{1-\sigma}\left(\frac{WB}{1+B}\right)^{1-\sigma}.
\]
Since A\delta R^{1-\sigma} = B^{\sigma}, one can solve for A:
\[
A(1+B)^{1-\sigma} = 1 + B
\quad\Rightarrow\quad
A^{1/\sigma} = 1 + (\delta A R^{1-\sigma})^{1/\sigma}
\quad\Rightarrow\quad
A^{-1/\sigma} = 1 - \delta^{1/\sigma} R^{(1-\sigma)/\sigma}.
\]

138

W = W [1 − δ 1/σ R1/σ−1 ]. 1 + (δAR1−σ )1/σ

.

7.8. Sample examination/practice questions

A reminder of your learning outcomes By the end of this chapter and the relevant reading, you should be able to: formulate a Bellman equation for the general dynamic maximisation problem solve general dynamic problems by using the envelope theorem and factoring out the Bellman value function to obtain the intertemporal optimality conditions (Euler equations) apply backward induction to explicitly solve for the sequence of Bellman value functions in a finite horizon maximisation problem explain the logic of the ‘guess and verify’ approach to the solution of infinite horizon maximisation problems use the above methods in case of multiple state and control variables.

7.8

Sample examination/practice questions

1. Solve: max s.t.

4 X

(4dt − td2t )

t=1 4 X

dt ≤ k.

t=1

A complete solution consists of V (kt , t) and policy functions for each time period. Take special care when dealing with corner solutions. 2. Bob has to read M pages of Simon and Blume in T days (1 < T < ∞). If on day t he reads pt pages, then his disutility on that day is 2pt + p2t . There is no discounting. Thus, Bob solves the following dynamic optimisation problem: choose {p1 , p2 , . . . , pT } to minimise subject to: T X

(2pt + p2t )

t=1

s.t.

T X

pt = M

t=1

pt ≥ 0, t = 1, 2, . . . , T.

(a) Formulate this problem as a dynamic programming problem. Be sure to write down the Bellman equation. (b) Solve the problem by backward induction. You may guess the form of the value function, but you must verify your guess by showing that the Bellman equation is satisfied. Will pt be increasing, decreasing or constant over time?



(c) What does the Bellman function look like? 3. Consider the problem of choosing a consumption sequence ct to maximise: max

∞ X

β t (ln ct + γ ln ct−1 )

t=0

s.t. ct + kt+1 ≤ Aktα k0 > 0 and c−1 given γ > 0, A > 0, 0 < α < 1. Here ct is consumption at time t, kt is capital stock at the beginning of period t. The current utility function ln ct + γ ln ct−1 is designed to represent habit persistence in consumption. P t (a) Let V (·) be the maximum value of ∞ t=0 β (ln ct + γ ln ct−1 ). Set up the Bellman equation (be especially careful about the appropriate set of arguments of the function V (·)). (b) Make a guess about the functional form of V (·) (hint: selecting the appropriate guess, notice the log in the per-period utility and disregard the power function in the constraint). Write first order conditions for optimisation and obtain the policy function (up to undetermined coefficients). (c) Substitute the policy function back into the Bellman equation and verify your guess. Solve for the values of undetermined coefficients in terms of A, β, α and γ. Write down the resulting value function V (·) and the policy function. 4. Consider an individual who uses Cobb-Douglas technology to produce output from capital and labour: Yt = Ktα lt1−α . The output can be in turn split between new capital Kt+1 and consumption Ct . Let V (Kt ) be the present value of lifetime utility of the individual. Consider the Bellman equation: V (Kt ) = max{ln Ct + b ln(1 − lt ) + δV (Kt+1 )} s.t. Kt+1 =

ct ,lt Ktα lt1−α

− Ct .

(a) Convert the problem intoPan intertemporal optimisation problem with infinite horizon (of the like max ∞ t=1 u(Xt ) etc.). (b) Make a guess about the functional form of V (Kt ) (Hint: notice the two logs in the per-period utility). Write down the FOC w.r.t. Ct , simplify, and show that the optimally chosen consumption-output ratio Ct∗ /Yt is independent of Kt . (c) Write down the FOC w.r.t lt and simplify. (d) Substitute the optimal solutions Ct∗ , lt∗ found under the assumption that your guess was correct into the Bellman equation. Verify your guess. Find parameters (if any) that make your guess consistent. (e) Complete the solution by providing experssions for the policy functions. 5. Consider the following model: an immortal gardener Firth characterised by CARA preferences and living in a cherry orchard. Firth obtains utility from admiring



cheery blossoms bt and from consuming cherry jam, ct ; he discounts the future with a constant factor β ∈ (0, 1). By preparing jam, Firth has to sacrifice cherries that otherwise would have given rise to more trees and more blossoms next year. If no jam is made this year, cherry trees multiply by two. Firth likes the jam twice as much as observing the blossoms. Thus the optimisation problem amounts to choosing a jam-preparation sequence ct to maximise:   ∞ X 1 −bt −ct t max β −e − e {ct } 2 t=0 s.t. bt+1 = 2bt − ct b0 given.

(a) Let V (·) be the maximum value of: max

∞ X t=0

β

t



−ct

−e

1 − e−bt 2



subject to the above constraints. Define control and state variables. Set up the Bellman equation (be especially careful about the appropriate set of arguments of the function V (·)). (b) Make a guess about the functional form of V (·). Write the first order conditions for the optimisation problem and obtain the policy function (up to undetermined coefficients). (c) Substitute the policy function back into the Bellman equation and verify your guess. Find parameter values that are consistent with the guessed functional form. Write down the resulting value function V (·) and the policy function.

7.9

Comments on the sample examination/practice questions

1. Starting in period 4: max{4d4 − 4d24 } s.t. d4 ≤ k4 yields d4 = min{k4 , 1/2}. Hence: V (k4 , 4) = max{1, 4k4 (1 − k4 )}. To avoid dragging the max with us into preceding periods, the fact that it is never optimal to set d4 > 1/2 can be grasped by the constraint k4 ≤ 1/2. Period 3: max{4d3 − 3d23 + 4(k3 − d3 ) − 4(k3 − d3 )2 } s.t. d3 + k4 ≤ k3 1 k4 ≤ . 2



One can again conjecture that there will be a maximum level of d3 , which is obtained by: 2 arg max(4d3 − 3d23 ) = . 3 The maximum values of d3 and d4 sum up to the maximum useful level of k3 : 2/3 + 1/2 = 7/6. One proceeds to find an interior and a corner solution: 4 3 7 k3 , k4 = k3 k3 < 7 7 6 1 7 2 , k4 = k3 ≥ . = 3 2 6

d3 = d3

The value function V (k3 , 3) = max{− 12 k 2 + 4k3 , 37 }. 7 3 Period 2:   12 2 2 max 4d2 − 2d2 − (k2 − d2 ) + 4(k2 − d2 ) 7 s.t. d2 + k2 ≤ k2 7 k3 ≤ . 6 Maximum value of d2 = arg max{4d2 − 2d22 } = 1. a corner solution: 6 7 d2 = k2 , k3 = k2 13 13 7 d2 = 1, k3 = k2 ≥ 6

As before, there is an interior and k2 <

13 6

13 . 6

The value function, V (k2 , 2) = max{− 12 k 2 + 4k2 , 13 }. 13 2 3 Period 1:   12 2 2 max 4d1 − d2 − (k1 − d1 ) + 4(k1 − d1 ) 13 s.t. d1 + k2 ≤ k2 13 k2 ≤ . 6 Maximum value of d2 = arg max{4d1 − d21 } = 2. As before, there is an interior and a corner solution: 12 13 25 d1 = k1 , k2 = k1 k1 < 25 25 6 13 25 d1 = 2, k2 = k1 ≥ . 6 6 12 2 The value function, V (k2 , 2) = max{− 25 k1 + 4k1 , 25 }. 3

Summing up: d1 = if k1 <

25 , 6

12 k1 , 25

d2 =

d3 =

4 k1 , 25

d4 =

otherwise: d1 = 2,


6 k1 , 25

d2 = 1,

2 d3 = , 3

1 d4 = . 2

3 k1 25


2. Consider: min

{pt }T 1

s.t.

T X

(2pt + p2t )

t=1 T X

pt = M

t=1

pt ≥ 0, t = 1, 2, . . . , T. The minimisation problem is equivalent to maximising: max

{pt }T 1

T X

−(2pt + p2t ).

t=1

Define mt as the number of pages left to read at the beginning of period T and define the Bellman value function as: V (m, τ ) = max {pt }T 1

s.t.

X

T X

[−(2pt + p2t )]

t=τ

T

t = τ pt = m.

The Bellman equation is written as: V (mτ , τ ) = max{−(2pτ + p2τ ) + V (mτ +1 , τ + 1)} pτ

s.t. mτ +1 = mτ − pτ . (a) V (m, T ) = −(2m + m2 ) V (m, T − 1) = max{−(2pT −1 + p2T −1 ) + V (m − pT −1 , T )} pT −1

= max{−(2pT −1 + p2T −1 ) − 2(m − pT −1 ) − (m − pT −1 )2 }. pT −1

The FOC is: −2 − 2pT −1 + 2 − (−1) · 2(m − pT −1 ) = 0 −pT −1 + (m − pT −1 ) = 0 m pT −1 = . 2     m  m 2 m2 m  m 2 V (m, T − 1) = − 2 + −m− = −2 2 + . 2 2 4 2 2 We conjecture that Bob will read the same number of pages each day, that is, m m/(T − τ ) pages for T − τ days. This implies the policy function pt = T −τ +1 for all t ≥ τ and the associated value function is:  2 ! m m V (m, τ ) = −(T − τ + 1) 2 + . T −τ +1 T −τ +1



Verifying: m V (m, τ ) = −(T − τ + 1) 2 + T −τ +1



m T −τ +1

2 !

= max{−(2pτ + p2τ ) + V (m − pτ , τ + 1)} pτ (  2 !) m − p m − p τ τ + . = max −(2pτ + p2τ ) − (T − τ ) 2 pτ T −τ T −τ pτ =

m . T −τ +1

Using this value, the RHS becomes: m −(T − τ + 1) 2 + T −τ +1



m T −τ +1

2 !

which is the same as the LHS. Hence, the Bellman equation is satisfied. 3. (a) V (kt , ct−1 ) = max{ln ct + γ ln ct−1 + βV (kt+1 , ct )} s.t. ct + kt+1 ≤ Aktα . (b) V (kt , ct−1 ) = E + F ln kt + G ln ct−1 max{ln ct + γ ln ct−1 + βE + βF ln(Aktα − ct ) + βG ln ct }. The FOC is:

βF βG 1 − + = 0. α ct Akt − ct ct

The policy function is: ct =

1 + βG Ak α . 1 + βG + βF t

(c) E + F ln kt + G ln ct−1 = ln ct + γ ln ct−1 + βE + βF ln(Aktα − ct ) + βG ln ct     βF 1 + βG = (1 + βG + βF ) ln A + βF ln + (1 + βG) ln 1 + βG + βF 1 + βG + βF +(1 + βG + βF )α ln kt + γ ln ct−1 . G = γ 1 + βγ α. 1 − βα     βF 1 + βG E = (1 + βG + βF ) ln A + βF ln + (1 + βG) ln 1 + βG + βF 1 + βG + βF   1 + βγ α = ln A + β(1 + βγ) ln(βα) + (1 + βγ) ln(1 − βα). 1 − βα 1 − βα F = (1 + βG + βF )α =



4. (a) max

∞ X

δ t [ln Ct + b ln(1 − lt )]

t=1

s.t. Ct + Kt+1 = Ktα lt1−α . (b) Guess V (Kt ) = β + βk ln K and substitute into the Bellman equation: V (Kt ) = max{ln Ct + b ln(1 − lt ) + δ(β + βk ln Kt+1 )}. ct ,lt

Substitute in the budget constraint denoting Ktα lt1−α by Yt : V (Kt ) = max{ln Ct + b(1 − lt ) + δ[β + βk ln(Yt − Ct )]}. ct ,lt

The FOC w.r.t. Ct is: 1 βk −δ =0 Ct Yt − C t



Ct 1 = Yt δβk + 1

so it does not depend on Kt . (c) The FOC w.r.t. lt is:   βk ∂Y b =0 +δ · − 1 − lt Yt − Ct ∂l  α ∂Y Yt k = (1 − α) = (1 − α) ∂l l lt   b βk Yt − +δ · (1 − α) = 0 1 − lt Yt − Ct lt lt δβk (1 − α) Yt = 1 − lt b Yt − C t   1 b Ct = 1− +1 lt δβk (1 − α) Yt   δβk b +1 δβk (1 − α) δβk + 1

⇒ ⇒ = =

b + 1. (δβk + 1)(1 − α)

(d) Substituting the values from (b) and (c) into the Bellman equation: V (Kt ) = ln Ct + b ln(1 − lt ) + δ(β + βk ln Kt+1 ) Yt K − tα lt1−α Ct = = δβk + 1 δβk + 1   1 α 1−α Kt+1 = Kt lt 1− δβk + 1  −1 b b 1 − lt = 1 − +1 = . (δβk + 1)(1 − α) b + (1 − α)(δβk + 1)



V (Kt ) = − ln(δβk + 1) + α ln Kt + (1 − α) ln lt +b{ln b − ln[b + (1 − α)(δβk + 1)]} + δβ δβk [ln(δβk ) − ln(δβk + 1) + α ln Kt + (1 − α) ln lt ] = β + β + K ln Kt . Consistency requires: α(1 + δβK ) = βK βK =

α 1 − αδ

as well as:   1 αδ αδ ln 1−αδ + 1−αδ ln 1−αδ β =  1 − δ 1−α αδ b ln b − ln b + 1−αδ + ln(1 − αδ) + 1−αδ ln(αδ) = . 1−δ b ln b − ln b +

1−α 1−αδ





1 1−αδ

(e) Substituting the found coefficients into the FOC: !−1 b 1−α  lt = +1 = α b(1 − αδ) + 1 − α δ 1−αδ + 1 (1 − α) Ct 1 = = 1 − αδ α Yt δ 1−αδ + 1 Ct = (1 −

αδ)Ktα lt1−α

= Ct = (1 −

αδ)Ktα



1−α b(1 − αδ) + 1 − α

These are the policy functions. 5. (a)   1 −bt −ct V (bt ) = max −e − e + βV (2bt − ct ) . c 2 (b) Suppose V (b) = −αe−b .  V (bt ) = max −e c

−ct

 1 −bt ct −2bt − e − αβe . 2

The FOC is: e−ct − αβect −2bt = 0 c∗t = bt −

ln(αβ) . 2

(c) 1 ∗ ∗ −αe−bt = −e−ct − e−bt − αβect −2bt 2 1 1 −b2 + 12 ln(αβ) = −e − e−bt − αβcbt − 2 ln(αβ)−2bt 2 p 1 −bt αβ −bt = − αβe − e − √ e−bt 2 αβ p 1 = −2 αβe−bt − e−bt . 2


1−α .


1 αβ + 2 p 1 α = 2β + 4β 2 + 2β + . 2 α = 2



p




Chapter 8 Ordinary differential equations 8.1

Learning outcomes

By the end of this chapter and the relevant reading, you should be able to: explain the notions of an ordinary differential equation and of the family of solutions solve first order separable differential equations in closed form solve homogeneous and non-homogeneous second order differential equations in closed form conduct the stability analysis of a steady state linearise non-linear system of differential equations around a steady state draw phase portraits for one- and two-dimensional differential equations use the logic of rational expectations to analyse dynamical systems in macroeconomics.

8.2

Essential reading

Sydsæter, K., P. Hammond, A. Seierstad and A. Strøm Further mathematics for economic analysis. Chapters 5 and 6.

8.3

Further reading

Simon and Blume Mathematics for economists. Chapters 24 and 25. Barro, R. and X. Sala-i-Martin Economic growth. Mathematical Appendix.

8.4

Introduction

In this chapter we provide a concise and self-contained introduction into ordinary differential equations (ODEs). This serves as a foundation for Chapter 9, since solving ODEs is an integral part of the optimal control problems. We also apply the qualitative



phase diagram analysis methods to a range of non-optimisation-based log-linear macroeconomic models. Note that there are no sample examination questions for this chapter. This is because the methods studied here are needed to solve optimal control problems and as such make part of the sample examination questions for Chapter 9.

8.5 The notion of a differential equation: first order differential equations

8.5.1 Definitions

Definition 32 An ordinary differential equation (ODE) is given by: y˙ = F (y, t).

Example 8.1

Consider the equation:
\[
\frac{dx}{dt} = \alpha x
\tag{8.1}
\]
where α is a real number. It is easy to check by substitution that a solution to this equation is given by:
\[
x(t) = c e^{\alpha t}
\tag{8.2}
\]
for an arbitrary real value c:
\[
\frac{dx}{dt} = c\alpha e^{\alpha t} = \alpha x(t).
\]
Notice that (8.2) solves (8.1) for any constant c. As we will see later, ODEs have families of solutions rather than unique solutions. The solution family (8.2) is one-dimensional and parameterised by a single parameter c.

ODEs may involve higher order derivatives of the unknown functions, for example:
\[
\frac{d^2 y}{dt^2} = 3\frac{dy}{dt} - 2y + 2.
\]
Definition 33   An equation involving higher order derivatives of the unknown function of a single variable t:
\[
y^{[j]} = F\bigl(t, y, \dot{y}, \ldots, y^{[j-1]}\bigr)
\]
is called a jth order ODE.
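For readers who want to experiment, a computer algebra system reproduces such solution families. A minimal sketch (not part of the guide) using SymPy for equation (8.1):

```python
import sympy as sp

t = sp.symbols('t')
a = sp.symbols('alpha', real=True)
x = sp.Function('x')

# dx/dt = alpha * x  --  SymPy returns the one-parameter family x(t) = C1*exp(alpha*t)
ode = sp.Eq(x(t).diff(t), a * x(t))
print(sp.dsolve(ode, x(t)))
```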



In order to reduce clutter in the formulae, it is customary to use the following notation for first, second, and higher order derivatives of a function of a single argument: y˙ +

dy , dt

y¨ +

d2 y , dt2

3 ... d y y + 3. dt

Using this notation the equation above would be rewritten as: y¨ = 3y˙ − 2y + 2. Remark Solutions to higher order ODEs are given by families of functions. These families have as many dimensions (degrees of freedom) as the order of the ODE. When a jth order ODE admits a closed-form solution, the solution can be written as a parametric family containing j parameters k1 , . . . , kj . In most economic applications one needs to pick up a unique solution out of a family to satisfy certain conditions. For instance, suppose (8.1) describes the evolution of a growing account balance at a bank when the accruing interest rate is α. The initial balance in the bank account may be 1, which defines the initial condition y(0) = 1. This is also called a boundary condition. ODEs with boundary conditions are called Cauchy problems. Since only c = 1 satisfies the condition, y = ert is the unique solution to the Cauchy problem with the boundary condition y(0) = 1. Consider c = 0 in the above example. Clearly, x(t) ≡ 0 is a solution to (8.1). It is special in that x(t) is not varying at all. We call this special solution a steady state. Definition 34 A steady state (stationary solution, stationary point) of an ODE is a constant solution to the ODE.

8.5.2

Stationarity

Definition 35 Consider a first order ODE: y˙ = F (y, t). The equation is called autonomous or stationary if t is not an explicit argument of the right-hand side: y˙ = F (y). Otherwise, the equation is time-dependent, for example: y˙ = t2 y. We have already seen the stationary equation x˙ = αx with the parameterised solution x = ceαt . The coefficient at x can be time-varying: x˙ = α(t)x, in whichcase the  Rt equation is non-stationary, but the solution can be found as x = c exp 0 α(s) ds as long as α(s) can be integrated in closed form (verify this).



8.5.3

Non-homogeneous linear equations

Staying within the class of linear equations, a harder problem is posed by equations with extra additive terms: x˙ = a(t)x + b(t). Equations with a term b(t) are often called non-homogeneous (even for b(t) ≡ constant). Whenever a(t) and b(t) explicitly depend on t, the equation is non-stationary. A solution to a non-homogeneous equation is found as a sum of a general solution, which is the solution to the homogenised equation x˙ = a(t)x and a particular solution, which is an arbitrary solution to the original non-homogeneous equation, and is most of the time guessed. Here is an example of solving a non-homogeneous but stationary equation in the context of a bank balance problem. Example 8.2 You are planning your finances for the next sixty years. For the first thirty years, interest rates are 6% per annum. For the next thirty years, they are 8% per annum. Assume that you save continuously at a rate of £600 per year and the interest is compounded continuously. How much will accumulate at the end of 60 years? Denote current account balance at t by y(t). The equations for the evolution of the bank balance can be written for the two separate time periods as follows: y˙ = 0.06y + 600 for t ∈ [0, 30] y˙ = 0.08y + 600 for t ∈ [30, 60]. The general solution for the homogeneous part of the first equation is y(t) = c0 e0.06t . Guess a particular solution as y(t) = b = constant. y˙ = 0 = 0.06b + 600 ⇒ b = −10000. So: y(t) = c0 e0.06t − 10000 for t ∈ [0, 30]. To find the constant c0 , impose: y(0) = c0 − 10000 = y0 ⇒ c0 = y0 + 10000. Hence: y(30) = (y0 + 10000)e1.8 − 10000. For t ∈ [30, 60], a particular solution is y(t) = b2 = constant. Substituting into the ODE, 0 = 0.08b2 + 600 ⇒ b2 = −7500. The general solution of the homogeneous equation is c1 e0.08t . Thus: yt = c1 e0.08t − 7500 for t ∈ [30, 60]. Impose the initial condition: y(30) = c1 e2.4 − 7500 = (y0 + 10000)e1.8 − 10000 ⇒ c1 = (y0 + 10000)e−0.6 − 2500e−2.4 . This yields the expression for y(t) for t ∈ [30, 60], y(t) = c1 e0.08t − 7500. Evaluate at 60: y(60) = (y0 + 10000)e4.2 − 2500e2.4 − 7500 ≈ 31628.39 + y0 · 66.6863.

152

8.5. The notion of a differential equation: first order differential equations

8.5.4

Separable equations

Definition 36 A differential equation y˙ = F (y, t) is called separable if the RHS can be written as a product: y˙ = F (y, t) = g(y)h(t). Examples of separable equations: y˙ = y 2 t1/2 ,

y˙ =

ln t , y

y˙ = y 2 + 1.

The following, on the other hand, are non-separable: y˙ = a(t)y + b(t),

y˙ = y 3 +

√ t.

The following heuristic derivation demonstrates a solution method for separable equations. Write down the equation y˙ = f (t)g(y) using the differential notation: dy = f (t)g(y). dt Perform a symbolic flip of the differentials (which is rigorously not well-defined, but leads to correct results): dy = f (t) dt g(y) and integrate both sides: Z Z dy = f (t) dt. g(y) Suppose functions 1/g(y) and f (t) can be integrated in closed form. As an example consider the separable equation: y˙ = y 2 cos t. (8.3) Splitting the y and t terms: Z

dy = y2 which can be integrated in closed form as:

Z cos t dt.

1 − = sin t + c y where c is an arbitrary constant of integration. Simplifying this: y=

1 . c − sin t

Notice that integrating and inverting negative powers of y assumes that y ≠ 0, but additionally it is easy to check that y ≡ 0 also solves the original equation (8.3).

Remark   See Example 24.8 in Simon and Blume, Mathematics for Economists (W.W. Norton, 1994).
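A short symbolic check (not part of the guide) that the family y = 1/(c − sin t) indeed satisfies ẏ = y² cos t:

```python
import sympy as sp

t, c = sp.symbols('t c')
y = 1 / (c - sp.sin(t))

# dy/dt - y**2 * cos(t) should simplify to zero for every value of c
print(sp.simplify(sp.diff(y, t) - y**2 * sp.cos(t)))   # prints 0
```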



Activity 8.1

Solve the following initial value problems:

(a) y˙ = t, y(0) = 2 (b) y˙ = −3y, y(0) = 10 (c) y˙ = y(2 − 3y), y(0) = 4 (d) ty(t) ˙ = y(1 − t). Find the solution passing through the point (t, y) = (1, 1/e).

8.6 8.6.1

Linear second order equations Homogeneous (stationary) equations

Consider the equation: α¨ x + β x˙ + γx = 0.

(8.4)

Recall x˙ = αx ⇒ x = ceαt . Differentiating an exponential function yields an exponential, but differentiating the exponential twice still yields the exponential! So, conjecture a solution x = ert . The would-be identity is: αr2 ert + βrert + γert = 0 ert (αr2 + βr + γ) = 0. Since ert > 0, for the identity to hold, r must be a root of the polynomial L(r) = αr2 + βr + γ. We call this the characteristic polynomial. Its roots are called characteristic values. For a characteristic value r, exp(rt) is indeed a solution to (8.4). Notice that roots may be complex numbers if the determinant of the polynomial is negative. We will not discuss this case (save for a mention at the end of the section) as it is relatively rare and exotic in economic applications, and focus instead on the real roots, which may be different or equal. Real unequal roots For r1 6= r2 , the solution is given by: x(t) = k1 er1 t + k2 er2 t for arbitrary k1 , k2 . Perform a quick verification: x˙ = k1 r1 er1 t + k2 r2 er2 t x˙ = k1 r12 er1 t + k2 r22 er2 t . α¨ x + β x˙ + γx = αk1 r12 er1 t + αk2 r22 er2 t + βk1 r12 er1 t + βk2 r22 er2 t +γk1 er1 t + γk2 er2 t = (αr12 + βr1 + γ)k1 er1 t + (αr22 + βr2 + γ)k2 er2 t .


(8.5)


The family of solutions has two degrees of freedom. In many applications one has to find a unique solution to satisfy certain conditions, for example, initial conditions for the function itself and its derivative: x(t0 ) = x0 , x0 (t0 ) = z0 . These conditions exhaust the two degrees of freedom and lead to a unique solution of k1 , k2 . Alternatively, one may have to deal with a Cauchy (boundary values) problem: x(t0 ) = x) , x(t1 ) = x1 . Remark See also example 24.13 in Simon and Blume, Mathematics for Economists (W.W. Norton, 1994). Real equal roots Suppose r1 = r2 , then the family of solutions is given by: x(t) = k1 er1 t + k2 ter1 t .

(8.6)

Notice the term t premultiplying the second term. We know k1 er1 t is a valid solution to the ODE, so it suffices to verify k2 ter1 t : x˙ = k2 r1 ter1 t + k2 er1 t x¨ = k2 r12 er1 t + k2 r1 er1 t + k2 r1 er1 t = k2 r12 ter1 t + 2k2 r1 er1 t . α¨ x + β x˙ + γx = = = =

αk2 r12 ter1 t + 2αk2 r1 er1 t + βk2 r1 ter1 t + βk2 er1 t + γk2 ter1 t αk2 r12 ter1 t + βk2 r1 ter1 t + γk2 ter1 t + 2αk2 r1 er1 t + βk2 er1 t k2 ter1 t (αr12 + βr1 + γ) + k2 er1 t (2αr1 + β) k2 er1 t (2αr1 + β) = 0.

The last equality follows from the fact that for a ‘double’ root: r=

−β ± (β 2 − 4αγ)1/2 β =− . 2α 2α

Two complex roots. Purely for future reference, in case the roots of the characteristic polynomial are complex values, r = α ± βi, the solutions are given by:
\[
x(t) = e^{\alpha t}(C_1 \cos\beta t + C_2 \sin\beta t).
\]
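Characteristic values are just polynomial roots, so they can be checked numerically. A small sketch (not part of the guide; the example equation y'' − 5y' + 6y = 0 is an invented illustration, not one of the activities):

```python
import numpy as np

# Coefficients of alpha*r**2 + beta*r + gamma for y'' - 5y' + 6y = 0
roots = np.roots([1.0, -5.0, 6.0])
print(roots)                      # 3 and 2
# Real, unequal roots, so the family of solutions is
#   y(t) = k1 * exp(3 t) + k2 * exp(2 t).
```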

8.6.2

Non-homogeneous second order linear equations

We have already mentioned non-homogeneous equations. In the context of second order equations, these are written with a non-zero function of t on the RHS: α¨ x + β x˙ + γx = g(t).

(8.7)

To characterise the family of solutions suppose that at least one solution is known, xp (t). This will serve as a particular solution.



The solution to the homogenised equation: α¨ x + β x˙ + γx = 0 is called the general solution of (8.7) and is given by (8.5) or (8.6). We claim that the entire family of solutions to (8.7) is given by xp (t) + xg (t). To verify this, substitute into (8.7): α(xp + xg )·· + β(xp + xg )· + γ(xp + xg ) = (α¨ x + βxp + γxp ) + (α¨ g + βxg + γxg ) = g(t) + 0 = g(t). How does one go about finding xp (t)? Sometimes this is purely a matter of luck with the method of undetermined coefficients. In general, one conjectures a sufficiently flexible parameterised functional form resembling the function on the RHS of (8.7): for ¯ k (t), a polynomial of the same degree. example for a polynomial g = Λk (t) try xp (t) = Λ αt For an exponential function g = e try xp (t) = keαt , for a logarithmic g = log t a generously parameterised guess is xp = t2 log t + c1 t log t + c2 log t + c3 t + c4 . Notice that overparameterisation is not a problem for the method, as the coefficients that should not have been there will come out as zeros upon imposing the identity (think: why?). Activity 8.2

Find the entire family of solutions to ODEs:

(a) y 00 − y 0 − 6y = 0 (b) 2y 00 + 3y 0 − 2y = 0 (c) y 00 − 3y 0 + 2y = 0 (d) y¨ + 14y˙ + 13y = t (e) y 00 − 3y 0 + 2y = e−x (f) y 00 − 3y 0 + 2y = (x + 1)2 . Activity 8.3

Solve for a particular solution subject to initial conditions:

(a) y 00 − y 0 − 30y = 0, y(0) = 1, y 0 (0) = −4 (b) y 00 − 2y 0 − 15y = 0, y(0) = 2, y 0 (0) = 0.

8.6.3

Phase portraits

So far we focused on solving ODEs in closed form. This, however, is often not possible. Instead we could learn a lot about the behaviour of a model described by an ODE in purely qualitative terms. Consider a non-linear equation: x˙ = (1 − x)x(x + 1).


(8.8)

8.6. Linear second order equations

Figure 8.1: Family of solutions to x˙ = (1 − x)x(x + 1).

The exact solution (with one degree of freedom) is given by: x(t) = ± √

1 (where well-defined) and x(t) ≡ 0 1 + ce−2t

(verify this).

Figure 8.1 plots representative solutions for different values of c. Notice that the solutions as functions of time look essentially the same, indeed, they are replicas of each other obtained by a parallel horizontal shift. This should not be surprising as (8.8) is stationary: if you alter a solution x(t) by defining x˜(t) = x(t + s) for any real number s, you will have obtained another valid solution to (8.8): x(t + s) = ± √

1 1 + ce−2(t+s)

1

= ±p

1 + (ce−2s )e−2t

= ±√

1 . 1 + c˜e−2t

In the applied context a researcher might be interested in the behaviour of the system when ‘restarted’ at a particular point. Definition 37 To ‘restart system’ at y0 at time t0 means to analyse the solution with an initial value condition y(t0 ) = y0 . If x denoted, say, national debt level, it would be meaningful to ask what would happen to a country that initially had neither debt nor foreign assets. The diagram depicts the constant solution x ≡ 0, thus we would expect that foreign asset position to prevail. However, if the country at any given time starts with a small but positive level of debt, it would asymptotically converge to the level given by 1 (unspecified unit). If the debt level is negative (the country is a net creditor to the rest of the world), it is expected to fall to −1; if the debt level is above 1 in absolute value, it is expected to



Figure 8.2: Graph of the RHS of x˙ = (1 − x)x(x + 1).

Figure 8.3: Phase portrait of x˙ = (1 − x)x(x + 1).

regress back to either positive or negative steady state ±1 (the above statements do not follow from any actual model of foreign debt and are purely for illustrating the concept of qualitative analysis). Clearly the diagram helps uncovering important qualitative properties of the model, but it is also quite redundant in doing so. It would be easier to observe the sign of the function on the RHS, f (x) = (1 − x)x(x + 1) to achieve the same end, as made evident by Figure 8.2. The arrows show the direction in which the system should evolve according to the differential ‘laws of motion’ (this is the synonym for the ODE). To make the same information even more succinct, omit the graph of the function f (·), leaving only the indication of its sign (embodied in the arrows). We have Figure 8.3. This figure is the phase diagram of the stationary equation (8.8). The space of the values taken by the endogenous variable (the unknown function of time) is called the phase space.

8.6.4

The notion of stability

Figures 8.2 and 8.3 above highlighted the special nature of certain points in the phase space. At the points −1, 0 and 1 the system described by (8.8) is expected to rest forever. Recall that we call the constant solutions steady states, and we extend the use of the term to the actual constant values these solutions take. In many applications it is of interest whether the system is expected to return to a steady state if it accidentally leaves it (in the formal language of the theory of ODEs, we are interested in the behaviour of the system restarted at various points away from the steady state, to see



whether the respective solutions converge to steady state solutions). Definition 38 A steady state is locally stable if the system converges back to it if restarted from a point in the neighbourhood of that steady state. Formally a steady state x0 s.t. x˙ = 0 is stable if for a solution x˜(t), x˜(t0 ) ≈ x0 , x˜(t0 ) 6= x0 it is true that limt→∞ x˜(t) = x0 . We will expand on the concept in the context of the systems of equations in the next section.

8.7

Systems of equations

We speak about systems of equations whenever there are two or more interdependent differential equations, for example: x˙ = F (x, y, t) y˙ = G(x, y, t). Systems of the first order equations are particularly important in the analysis because higher order equations can be tranformed into (systems of) lower order equations as shown below. Consider the single equation: y˜ = F (y, ˙ y, t).

(8.9)

By introducing a new variable v = y˙ and writing: y˙ = v v˙ = F (y, ˙ y, t) = F (v, y, t) we have achieved an equivalent representation for (8.9).

8.7.1

Systems of linear ODEs: solutions by substitution

We centre our study on the systems of two linear equations: x˙ = αx + βy y˙ = γx + δy.

(8.10) (8.11)

Without loss of generality, assume β 6= 0 and express y through x and x˙ from the first equation. x˙ − αx . (8.12) y= β Differentiate this to obtain: y˙ =

x¨ − αx˙ x˙ − αx = γx + δy = γx + δ . β β



Substituting the resulting expression for y˙ into (8.11), we obtain a second-order linear equation in x, which we know how to solve:   x¨ (α + δ)x˙ αδ − + −γ x = 0 β β β ⇔ x¨ − (α + δ)x˙ + (αδ − γβ)x = 0. The characteristic values are: r1,2 =

α+δ±

p α2 + 2αδ + δ 2 − 4(αδ − βγ) . 2

Assuming that they are distinct: s = c1 er1 t + c2 er2 t . Plug this back into (8.12): x˙ − αx β c1 r1 er1 t c2 r2 er2 t α r1 t α r2 t + − c1 e − c2 e = β β β β c1 c2 r1 t r2 t = (r1 − α)e + (r2 − α)e . β β

y =

Since c1 and c2 are arbitrary constants, we can multiply both expressions by β to get an equivalent representation: x = β(c1 er1 t + c2 er2 t ) y = c1 (r1 − α)er1 t + c2 (r2 − α)er2 t . In vector notation: 

x y



 = c1

β r1 − α



r1 t

e

 + c2

β r2 − α



er2 t .

Remark r1 , r2 are eigenvalues of the associated matrix:   α β A= γ δ while:



β ri − a



are (multiples of) eigenvectors. Example 8.3

Consider the family of solutions to:  x˙ + x + y = 0 y˙ + 2x = 0.

Using the substitution, the characteristic polynomial is constructed as: L(λ) = λ2 + λ − 2. The roots are {−2, 1} and the family of solutions is given by: x(t) = c1 e−2t + c2 et y(t) = c1 e−2t − 2c2 et .



Activity 8.4

8.7.2

Find the entire family of solutions to:  x˙ = x + 2y y˙ = y + 2x + et .

Two-dimensional phase portraits 

Consider a simple system consisting of two independent equations

x˙ = y The y˙ = x.

family of solutions is given by: 

x = c1 et + c2 e−t y = c1 et − c2 e−t .

How would the contours traced by individual solutions (x(t), y(t)) look like, if projected onto (x, y) plane? It helps to notice that: 2c1 x−y = e−t = 2c2 x+y hence: (x − y)(x + y) = 4c1 c2 = constant. For an arbitrary value of the constant this equation describes a phase curve traced by a given solution. As known from geometry, the curves defined by a quadratic equation like this are hyperbolae (see Figure 8.4). The arrows show the direction in which the solution is evolving through time. Notice that exactly two phase curves are straight lines (the diagonals): they correspond, respectively, to the cases when c1 = 0 and c2 = 0. Also notice that along the negative sloped diagonal the system is converging toward the steady state at the origin (limt→+∞ x(t) = limt→+∞ y(t) = limt→+∞ c2 e−t = 0), while along the positive sloped diagonal, the system is diverging (x = y = c1 et → +∞). Stability of steady states (linear systems) Having a phase portrait helps determine the stability properties of steady state(s). Inversely, knowing the possible cases with respect to the stability properties helps sketching an accurate phase portrait. Let us investigate the analytics of the stability and describe possible genera of steady states. x˙ = αx + βy + u y˙ = γx + δy + v. Steady state (x0 , y0 ): x˙ = αx0 + βy0 + u = 0 y˙ = γx0 + δy0 + v = 0. Stable if a solution with initial point in a neighbourhood of (x0 , y0 ) converges to (x0 , y0 ).



Figure 8.4: Planar phase diagram.

Solution (assume real-valued r1,2 ): 

x y



r1 t

= c1~v1 e

r2 t

+ c2~v2 e

 +

x0 y0

 .

(8.13)

As expected, the family of solutions is two-dimensional (c1 , c2 are arbitrary parameters). There are three special solutions that deserve a special mention:

c1 = c2 = 0. This is the steady state. c2 = 0, c1 6= 0. This is a solution where x and y coordinates evolve at the same speed: x = c1 v11 er1 t , y = c1 v12 er1 t , so the evolution takes place along the straight line. Whether the solution is moving away or toward the steady state depends on the sign of the characteristic values r1 . If r1 > 0, then limt→∞ er1 t → +∞, so both x and y are clearly exploding. If r1 < 0, then limt→∞ c1 v11 er1 t = limt→∞ c1 v12 er1 t = 0 and the solution converges to the steady state (x0 , y0 ). c1 = 0, c2 6= 0. Evolution along the straight line, similar to above. The two solutions above trace straight lines in the phase space, which are sometimes referred to as separatrices (singl. separatrix). To summarise the stability properties, the linear system has a stable steady state iff r1 < 0, r2 < 0 (see the diagram below for an example).



A steady state is totally unstable when r1 > 0, r2 > 0. Finally, a steady state is semi-stable or saddle-path stable when r1 < 0, r2 > 0 or vice versa. In this case the phase curve corresponding to the pure solution with the negative characteristic root is called the saddle path, or stable arm. The other solution is the divergent path or unstable arm. The rest of the solutions to the system trace phase curves that look like hyperbolae (in fact, they are), ‘stretched’ on the separatrices, moving away from the saddle path and reaching the divergent path in infinity: lim [c1~v1 er1 t + c2~v2 er2 t ] − c2~v2 er2 t = 0.

t→+∞

See the diagram below for an example.

8.8

Linearisation of non-linear systems of ODEs

For a function f (x), differentiable at x0 , the Taylor expansion is: f (x) = f (x0 ) + f 0 (x0 )(x − x0 ) + R(x) where R(x) = o(x − x0 )2 , that is, second order of magnitude. So, in the immediate vicinity of x0 : f (x) ≈ f (x0 ) + f 0 (x0 )(x − x0 ). If we have an ODE: x˙ = f (x) where f (·) is a non-linear function, then linearising around a point x0 will produce an approximate equation: x˙ = fˆ(x) = f (x0 ) + f 0 (x0 )(x − x0 ).



By the continuity principle, the solution to this approximate equation will be very close to the true solution. Linearisation is mostly done around steady states. Then f (x0 ) = 0. (Why?) x˙ = f 0 (x∗ )(x − x∗ ). Since the solution of the linearised equation is close to the true one, the stability property is verified by observing the sign of the linear coefficient in fˆ: x∗ is stable ⇔ f 0 (x∗ ) < 0. Two-dimensional case:



x˙ y˙



 =

F (x, y) G(x, y)



∂F (x, y) ∂F (x, y) F (x, y) ≈ F (x0 , y0 ) + (x − x0 ) + (y − y0 ). ∂x x0 ,y0 ∂y x0 ,y0 Now, for both functions F and G: ∂F (x, y) ∂F (x, y) F (x, y) ≈ F (x0 , y0 ) + (x − x0 ) + (y − y0 ) ∂x x0 ,y0 ∂y x0 ,y0 ∂G(x, y) ∂G(x, y) G(x, y) ≈ G(x0 , y0 ) + (x − x0 ) + (y − y0 ). ∂x x0 ,y0 ∂y x0 ,y0 At any steady state F (x0 , y0 ) = G(x0 , y0 ) = 0: ∂F (x, y) ∂F (x, y) (x − x0 ) + (y − y0 ) F (x, y) ≈ ∂x x0 ,y0 ∂y x0 ,y0 ∂G(x, y) ∂G(x, y) G(x, y) ≈ (x − x0 ) + (y − y0 ) ∂x x0 ,y0 ∂y x0 ,y0 or: F (x, y) ≈ a(x − x0 ) + b(y − y0 ) G(x, y) ≈ c(x − x0 ) + d(y − y0 ) (x,y) which highlights the fact that coefficients such as a = ∂F∂x

are constants so the x0 ,y0

RHS is linear. The characteristic polynomial is given by: L(λ) = λ2 − λ(d + a) + (ad − bc). If both λ1 , λ2 ∈ R are negative – system is locally stable, otherwise unstable. If λ1 < 0, λ2 > 0 – system has a saddle path property. Activity 8.5 (Question 25.20 (d) from Simon and Blume, Mathematics for Economists.) Analyse the system of non-linear equations: x˙ = x(8 − 4x − 2y) y˙ = y(8 − 2x − 4y).



(a) Find all steady states of the model. (b) Linearise around each steady state. (c) Argue whether each steady state is stable or unstable, using the analysis of the local linearisation. (d) Sketch the stationary loci. Draw the arrows pointing in the directions of the phase curves. Complete the phase portrait by drawing sample phase curves.

8.9

Qualitative analysis of the macroeconomic dynamics under rational expectations

This is perhaps the topic that is least covered in textbooks. The skills for studying rational expectations dynamics are usually passed on in classes and tutorial meetings by means of examples. Here is one such example.

8.9.1

Fiscal expansion in a macro model with sticky prices

Phase portrait In what follows we will illustrate how the ODE theory can help us trace the effects of either unanticipated or pre-announced fiscal expansion in a typical macroeconomic model. The model is described by the following equations: yt mt − pt p˙t r˙t

= = = =

−αrt + ft (IS) βyt − γRt (LM) δ(yt − y¯) (Phillips curve) σ[rt − (Rt − p˙t )] (no arbitrage)

All the economic variables in this sytem and used in logs, in fact it is a long-followed convention to denote levels of variables, such as output Y , by uppercase letters, and their logs by lowercase ones, for example, y = log Y . yt stands for the (log of) potential output, pt is the log of the price level, mt is the log of the money stock, Rt is the nominal short-term interest rate (on bonds), rt is the real long-term interest rate (on business shares), and ft is the log of net public expenditure. Coefficients α, β, γ, δ and σ are positive constants, further, assume for simplicity β/γ = δ. The IS equation captures the negative response of investment to the higher long-term yield and the positive response of aggregate demand to the fiscal stimulus. The LM equation describes the equilibrium in the money/bond market. It is assumed that the aggregate price level pt can only change slowly (‘prices are sticky’), without discontinuous jumps. The rate of change in the price level is given by the Phillips curve as increasing in the ‘output gap’, the discrepancy between the actual and potential output.1 Finally, the last equation is the condition ensuring that no windfall profits could be made exploiting gaps between the yields on bonds and shares. 1

If you are curious about why this is a reasonable assumption, consider the fact that the price level



There are four endogeneous variables in this model: yt , Rt , rt , pt . However, only the latter two variables enter the equilibrium equations with their time derivatives. We want to reduce the dimensionality of the system to analyse a system of pure differential equations. To do that we begin by factoring out yt , Rt : p˙t = δ(−αrt + ft − y¯) 

r˙t

 mt − pt = σ[rt − (Rt − p˙t )] = σ rt + − δ y¯ . γ

Thus we end up with a system in two equations and two unknowns, p and r. The above system is called a dynamical system. ft and mt are exogenous variables, and although potentially time-varying, they are not forced to evolve due to equilibrium conditions of the problem (the same actually goes for y¯). We will consider the equilibrium dynamics for preset fixed values of f and m, thus omitting the time subscript until we explicitly consider an experiment with a change in either variable. Our first objective is to draw the phase diagram of the system on the plane (space p, r). One familiar possibility is to solve the equations explicitly, however, a heuristic method discussed below achieves the same ends with fewer calculations and is at the same time more insightful. We proceed to draw the two stationary paths, or loci of points on the phase plane, where either p˙ = 0, or r˙ = 0. The expressions for these are given by: p˙ = 0



r˙t = 0



f − y¯ α pt − m rt = + δ y¯. γ rt =

(8.14) (8.15)

One of the stationary paths is horizontal, the other is upward-sloping as evident from the expressions. The next step involves graphing the differential laws of motion. Consider point X. Since this point is located below the locus p˙ = 0, its r coordinate is lower than on the locus. With a slight stretch of notation, we denote this fact by r|X < r|p=0 ˙ . Consequently, it must be that: p| ˙ X = δ(−αr|X + f − y¯) > δ(−αr|p=0 + f − y¯) = 0. ˙ This allows us to conclude that the ODE solutions passing through point X must have increasing price level. We capture this graphically with an arrow pointing to the right → in the direction of increasing p. Now, compare point X with point Z directly above it, lying on the r˙ = 0 locus. X and Z have identical p coordinate, but r|X < r|Z .     m−p m−p − δ y¯ < σ r|Z + − δ y¯ = 0. r| ˙ X = σ r|X + γ γ Hence, the interest rate along the ODE solutions passing through point X must be declining. We capture this graphically with an arrow pointing downwards, ↓. As a result we have two arrows protruding from X. We can likewise repeat the same is actually an average of a multitude of individual prices of goods. These individual prices may of course change discontinuously reacting to events, but if only a small fraction of these are adjusted at any given moment, the average will evolve slowly. The lower the proportion of individual price setters who adjust their prices, the greater the degree of inertia the aggregate price level displays.

166

8.9. Qualitative analysis of the macroeconomic dynamics under rational expectations

Figure 8.5: Phase diagram of the macroeconomic model with sticky prices.

167

8. Ordinary differential equations

procedure for any point on the plane, however, a simple realisation is that as long as points lie on the same side of a locus r˙ = 0, they have the same sign of r, ˙ and as long as points lie on the same side of a locus p˙ = 0, they have the same sign of p. ˙ Hence we can easily put arrows corresponding to the laws of motion in all four quadrants of the plane divided by the two stationary paths. Observing the directions of laws of motion provides us with an idea of the stability properties of the solutions to the system of equations (‘dynamical system’, for short). The saddle path can only pass through the Southwestern and Northeastern quadrants of the plane (where the arrows pointing towards the steady state are consistent with the graphed laws of motion). Likewise, the divergent path must pass through the Northwestern and Southeastern quadrants and have a negative slope. This only permits a rough sketch as the exact slope of the stable and unstable arms cannot be deduced from the arrows. However, this is sufficient for most purposes if we concentrate on qualitative analysis. Even though it may seem somewhat non-rigorous, in reality this heuristic method of sketching the two separatrices has never failed anyone (provided one draws the arrows correctly). The rest of the phase curves are added to the diagram in a straightforward fashion as hyperbolae ‘stretched’ on the rays of the separatrices, with the motion indicated by the tiny arrows, away from the saddle path and towards the divergent path. This marks the end of the mathematical preliminaries, from now on the analysis becomes distinctively and essentially economic. Let us see what happens to the system if there is a permanent fiscal expansion, corresponding to an increase in f . For concreteness denote the time of the change by t = 0. First analyse the expressions for the stationary paths (8.14) and (8.15). Only the first one is affected by the change in f : the corresponding locus shifts up. The new steady state E 0 is located above and to the right of the old one O. However, if at t = 0 the system is still at E, what is the adjustment dynamics? The mathematics of the ODEs supplies us with a huge family of solutions, however, it does not suggest which solution the system will actually trace when subjected to a shock like an increase in f . Two important factors come into play to determine that: the jumping/slow-moving nature of the endogeneous variables and the implications of rationality of the markets at all points in time. Slow-moving and jump variables We have already highlighted the slows and inertial movement of the price index pt . In the economics jargon this is called a slow-moving or a crawler variable. It cannot be changed instantly, in zero time. In contrast, the other variable, the yield (interest rate) on shares r can adjust immediately. The reason is that unlike markets for physical goods with a lot of price inertia, stock exchanges resemble perfect markets much more, so the prices of all shares can change instantaneously (for example, on receipt of good news reports), thereby altering the yield. These particular assumptions are not set in stone for the whole body of economics. In a different macromodel that does not assume price rigidities the price level may be a jump variable. The mathematical modelling is driven by the need to best capture a specific economic environment or phenomenon.

168

8.9. Qualitative analysis of the macroeconomic dynamics under rational expectations

Figure 8.6: Adjustment to an unanticipated fiscal expansion.

Among natural candidates for crawler variable in other contexts is capital stock, which adjusts slowly for natural reasons (since this is a stock variable equal to the sum of the flow investments, which are finite). On the other hand, an exchange rate in a flexible exchange rate regime is a jump variable, it may change abruptly if the market sees as reason for that. Returning to the case of fiscal expansion, what does the fact that p is a crawler variable imply for the adjustment dynamics? Merely that whichever phase curve is going to describe the dynamics, at t = 0 it must be that p0 = p|E , in other words, the price level cannot adjust immediately from its pre-expansion steady state level. On the other hand, r can jump, and hence there are no restrictions at all on r0 . Imposing the assumption of rational expectations Consider Figure 8.6. The adjustment dynamics are still undetermined: from E, the system may possibly jump to points A, B or C (among many others), thereafter proceeding along very different phase curves. We will put forward an argument, which shows that only a jump to the new saddle path is consistent with the rationality of the economic agents. For suppose not, and the system jumps to point A. If the system makes no further discontinuous jumps, the joint evolution of r and p in the future will degenerate into r → +∞, p → 0. The fact that the real rate of return on stocks r grows without bound implies that the prices of stocks explode (grow faster than the exponential rate). This means that in finite time the stocks will be worth more that the entire planetary

169

8. Ordinary differential equations

economy, and sooner or later stock holders will want to cash in to consume their gains, only to find out that nobody would want to buy the stocks at that price. This is the familiar ‘end of a bubble’ scenario. The stock price will then have to adjust back to realistic values, so the system will evolve along a different phase curve. However, changing phase curves in the future is not allowed due to the no expected arbitrage argument. A discontinuous adjustment in r and stock prices at a certain time t¯ will mean that there are arbitrage opportunities (an arbitrageur would like to sell stocks short just prior to their devaluation and then buy them low right after the price crash). Since the agents are rational, everyone would like to do the same thing, but that is impossible as the aggregate supply of stocks is positive. Consequently, the price crash would have to happen earlier than at t¯. The argument is prevented by attempts to arbitrage it away leading to an earlier crash. The only time when the stock prices (and consequently, r) may jump, is t0 . Why? Because we are dealing with an unexpected fiscal policy change. No one expected it, hence, no one could arbitrage it away. An argument along the same lines applies to a possible jump to point C. Along the respective phase curve the prices of stocks are expected to shrink to zero. This would have implied that a discontinuous price adjustment was expected in the future, which is ruled out by the rational expectations argument. To summarise, the only possible ODE solution that the system may assume at t0 is the one in which it jumps onto the saddle path (point B). From there on the system gradually moves to the new steady state E 0 (see diagram), asympototically reaching it in infinity. Notice that the adjustment is discontinuous from E to B, and continuous thereafter (from B to E 0 ). A pre-announced expansion The adjustment dynamics are different if the expansion is anticipated in advance. Suppose the actual change in f does not come until t1 , but the market learns this news at t0 < t1 . We will have two distinct phases: t0 < t < t1 : the system obeys the differential laws of motion associated with the ‘old’ steady state E t > t1 : the system obeys the differential laws of motion associated with the ‘new’ steady state E 0 . The above means that if the system jumps up at t0 along the vertical line by any distance (remember that p is still a crawler variable so not allowed to adjust discontinuously), it will still be subject to the ‘old’ system of ODEs and will have to travel along the respective phase curves. The rational expectations argument, on the other hand, implies that at t1 the system must be on the new saddle path. Moreover, no discontinuous jumps in r are allowed after t0 . There is a unique adjustment path (shown in bold) that satisfies all requirements (see Figure 8.7). The system would jump to a point B roughly halfway between the old steady state and the new saddle path, then travel along the (divergent) phase curve passing through B, arrive to a point on the new stable arm exactly at t1 and then gradually move towards the new steady state as before. The adjustment trajectory and the point B are unique. If the jump is too small, say to point A, then by the time t1 the system will only have reached point A0 off

170

8.9. Qualitative analysis of the macroeconomic dynamics under rational expectations

Figure 8.7: Adjustment to a pre-announced fiscal expansion.

the saddle path, requiring a yield adjustment which cannot happen as explained in the preceding subsection. Likewise, if the jump is too large, the system will travel too fast and cross the saddle path before t1 (not depicted) again violating the no-arbitrage requirement. How do we know that the adjustment path through B exists? That it is unique? There is no way to prove this graphically as the argument is essentially about the speed of travel along the phase curves, while the phase diagram is unsuitable for gauging speed. Formally, we need to apply two boundary conditions to our system of ODEs, one is that pl0 = p|E and that rt1 satisfies the equation for the new saddle path. As we know, the general family of solutions to ODEs is two-dimensional, the two boundary conditions are necessary and sufficient to pick a unique solutions out of this family. Activity 8.6 Consider a small developing open economy in which the official foreign exchange market is over-regulated, and as a result the official, pegged exchange rate coexists with the parallel rate freely determined in a ‘black’ market. The official rate applies to transactions authorised by the government, and in practice (since the official rate may be set at an unrealistic level) means that exporters are expected to bring in the hard currency proceeds and surrender some proportion of these to the authorities at the official rate (they can repatriate the

171

8. Ordinary differential equations

remaining proceeds via the black market). All private transactions are made in the black market. Agents are rational, and hold both domestic and foreign-currency balances in their portfolios. Domestic output consists of a single exportable good and is constant. The model is described by the following log-linear equations: m−p m p R˙

= = = =

−αs˙ γR + (1 − γ)d, 0 < γ < 1 δs + (1 − δ)e, 0 < δ < 1 −φ(s − e)

(8.16) (8.17) (8.18) (8.19)

where m denotes the nominal money stock, d is (constant) domestic credit, R is the stock of net foreign reserves of the central bank, p is the domestic price level, e is the (exogenous) official exchange rate, and s is the black market (parallel) exchange rate. (8.16) is the money demand equation (notice that s˙ is the parallel depreciation rate). (8.17) defines the domestic money stock (akin to M2) as a weighted average of domestic credit and foreign reserves. (8.18) indicates that the price level depends on the official and parallel exchange rates (suppose, the state imports certain goods and sells to the public at subsidised prices, while the rest of the consumption basket reflects the true cost of imports s). Finally, (8.19) describes the behaviour of reserves. The negative effect of the black market premium (the difference between the official and the parallel exchange rates) on reserves results from its impact on under-invoicing of exports. The higher the parallel rate is relative to the official one, the greater will be the incentive to falsify export invoices and to divert export proceeds to the black market. (a) Solve the system of (8.16) to (8.19) to obtain a system of ODEs in s, R. Draw stationary paths and determine the steady state. Argue which endogenous variable should be a jump variable and which a predetermined variable. (b) Using the characteristic polynomial of the ODEs, analyse the stability properties of the steady state. Graph stable/unstable arms of the system of ODEs. (c) Suppose the system is initially in steady state. Consider an impact of unanticipated domestic credit expansion (permanent increase in d). Analyse the transition dynamics, draw time paths of all endogenous variables. (d) (More difficult.) How would your answer to (c) change if the expansion was only temporary and after some time period T , d went back to its initial level? (e) Suppose the system is initially in steady state. Consider an impact of a devaluation of the official exchange rate e. Analyse the transition dynamics, draw time paths of all endogenous variables. (Warning: the details of the adjustment depend on the relative position of the saddle path and the stationary loci. In your solution, proceed under the assumption that the new saddle path cuts below the old steady state.) (f) (More difficult.) How would your answer to (e) change if the devaluation was announced in advance (actual increaase in e happened T periods after the announcement)? Draw detailed dynamics diagrams. Describe adjustment in words.

172

8.10. Solutions to activities

8.10

Solutions to activities

Initial value problems: (a) y˙ = t, y(0) = 2. Direct integration: Z y˙ = t ⇔ Initial condition: 

t

τ dτ =

t2 + c. 2

 t2 t2 + c = c ⇔ y = 2 + . 2 2 t=0

(b) y˙ = −3y, y(0) = 10. Separable equation, or just recall linear first order equation: y(t) = ce−3t . ce−3t t=0 = c = 10 ⇔ y = 10e−3t . Solution to Activity 8.1 (a) y˙ = y(2 − 3y), y(0) = 4. Separate the terms: dy = dt y(2 − 3y) Z Z dy = dt y(2 − 3y) Z

dy = y(2 − 3y) = = = =

1 3y + 2 − 3y dy 2 y(2 − 3y) Z Z 1 dy 3 dy + 2 (2 − 3y) 2 y Z Z 1 du 1 dy − + (letting u = 2 − 3y, du = −3dy) 2 u 2 y 1 1 ln |y| − ln |u| + c 2 2 1 y ln + c. 2 2 − 3y Z

Hence: 1 y ln +c = t 2 2 − 3y y 2 − 3y = |K| exp 2t ⇔ y =

2Ke2t 1 + 3Ke2t

for K ∈ R.

173

8. Ordinary differential equations

Solution to Activity 8.2 (a) This is a separable equation: dy 1−t = dt y t Z 1−t ln |y| = dt = ln |t| − t + c t |y| = |K| |t|e−t y = tKe−t . From the initial condition, Ke−1 =

1 e

⇔ y = Kte−t , K = 1.

(b) y 00 − y 0 − 6y = 0. Roots of characteristic polynomial are 3, −2. It follows that y(x) = Ae3x + Be−2x for arbitrary A, B. (c) 2y 00 + 3y 0 − 2y = 0. Roots of characteristic polynomial are 1/2, −2. It follows that y(x) = Aex/2 + Be−2x for arbitrary A, B. (d) y 00 − 3y 0 + 2y = 0. Roots of characteristic polynomial are 1, 2. It follows that y(x) = Aex + Be2x for arbitrary A, B. (e) y¨ + 14y˙ + 13y = t. The general solution to the homogeneous problem is y = C1 e−t + C2 e−13t . Find a particular solution by guessing yp = at + b, hence:   14a + 13b = 0 ⇔ 14a + 13at + 13b = t ⇔ 13at = t 14 1 ⇔ y(t) = − + t + C1 e−t + C2 e−13t . 169 13 (f) y 00 − 3y 0 + 2y = e−x . Homogeneous equation has been solved in (a). For a particular solution, try yp (x) = ae−x , hence: 1 yp00 − 3yp0 + 2yp = a(1 + 3 + 2)e−x = e−x ⇔ a = . 6 Hence:

1 y(x) = Aex + Be2x + e−x . 6

(g) y 00 − 3y 0 + 2y = (x + 1)2 . The RHS is a second-degree polynomial, so look for a similiar particular solution as a polynomial: yp = ax2 + bx + c yp00 − 3yp0 + 2yp = 2a − 3b + 2c + (−6a + 2b)x + 2ax2 = 1 + 2x + x2 1 5 15 ⇒ a = , b= , c= . 2 2 4 The complete solution: 1 5 15 y(x) = Aex + Be2x + x2 + x + . 2 2 4

174

8.10. Solutions to activities

Solution to Activity 8.3 Solve for a particular solution subject to initial conditions: (a) y 00 − y 0 − 30y = 0, y(0) = 1, y 0 (0) = −4. λ = 6, −5. Substitute into the initial conditions to solve for coefficients c1 , c2 . y=

1 6x (e + 10e−5x ). 11

(b) y 00 − 2y 0 − 15y = 0, y(0) = 2, y 0 (0) = 0. λ = 5, −3. Substitute into the initial conditions to solve for coefficients c1 , c2 . 3 5 y = e5x + e−3x . 4 4 Solution to Activity 8.4 Find the whole family of solutions to: 

x˙ = x + 2y . y˙ = y + 2x + et

First solve the homogeneous system: the characteristic values are λ = −1, 3. So, by repeated substitution: x(t) = c1 e3t + c2 e−t y(t) = c1 e3t − c2 e−t . Look for a particular solution xp = c3 et , yp = c4 et . Substituting into the system: x˙ p = c3 et = c3 et + 2c4 et = (c3 + 2c4 )et y˙ p = c4 et = c4 et + 2c3 et + et = (c4 + 2c3 + 1)et . Solve for c3 , c4 to make these identities: c3 = −1/2, c4 = 0. Complete solution: 1 x(t) = c1 e3t + c2 e−t − et 2 y(t) = c1 e3t − c2 e−t . Solution to Activity 8.5 (a) 

x(8 − 4x − 2y) = 0 . y(8 − 2x − 4y) = 0

Solutions: x = 0, y = 0 x = 0, y = 2 y = 0, x = 2 4 x=y= . 3

175

8. Ordinary differential equations

(b) x˙ = 8x − 4x2 − 2xy = F (x, y) y˙ = 8y − 2xy − 4y 2 = G(x, y) ∂F ∂x ∂G ∂x

= 8 − 8x − 2y = −2y

∂F ∂y ∂G ∂y

= −2x = 8 − 8y − 2x.

F (x, y) ≈ (8 − 8x − 2y)|(xSS ,ySS ) (x − xSS ) − 2x|(xSS ,ySS ) (y − ySS ) G(x, y) ≈ −2y|(xSS ,ySS ) (x − xSS ) + (8 − 8y − 2x)|(xSS ,ySS ) (y − ySS ).

i. In the vicinity of (0, 0): F (x, y) ≈ 8x G(x, y) ≈ 8y. Characteristic polynomial λ2 − 16λ + 64 has two identical positive roots. ii. In the vicinity of (0, 2): F (x, y) ≈ 4x G(x, y) ≈ −4x − 8(y − 2). Characteristic polynomial λ2 + 4λ − 32 has two roots of different sign (4 and −8). iii. In the vicinity of (2, 0): F (x, y) ≈ −8(x − 2) − 4y G(x, y) ≈ −4y. Characteristic polynomial λ2 + 4λ − 32 has two roots of different sign (4 and −8). iv. In the vicinity of (4/3, 4/3):    16 4 8 F (x, y) ≈ − x− − y− 3 3 3    8 4 16 x− − y− G(x, y) ≈ − 3 3 3

 4 3  4 . 3

Characteristic polynomial λ2 + 32 λ + 64 has two negative roots (−8/3 and −8). 3 3 (c) (0, 0) is totally unstable. (2, 0) and (0, 2) have saddle-path property. (4/3, 4/3) is locally unstable steady state.

176

8.10. Solutions to activities

(d)

Solution to Activity 8.6 The model is described by the following log-linear equations: m−p m p R˙

− = = =

αs˙ γR + (1 − γ)δ, 0 < γ < 1 δs + (1 − δ)e, 0 < δ < 1 −φ(s − e).

(8.20) (8.21) (8.22) (8.23)

(a) Factor out the other two endogenous variables m and p to get: 1 (1 − δ)e − (1 − γ)d [δs − γR] + α α ˙ R = −φs + φe. s˙ =

(8.24) (8.25)

s is the natural candidate for the jump variable (it’s a market price), while R is predetermined (reserves are depleted/accumulated only gradually). (b) Characteristic polynomial of the ODEs: λ2 −

δ φγ λ− = 0. α α

There are two real-valued roots, one of them positive, one negative, hence the steady state is semi-stable or saddle-path stable.

177

8. Ordinary differential equations

(c) s˙ = 0 locus shift to the left. The economy responds by instantaneous increase in s up to a point on the new saddle path directly above the old steady state. From then on the economy is gradually moving along the saddle path to the new steady state. RSS falls. No change in sSS . f

Figure 8.8: Diagram for the solution to Activity 8.6 (c).

(d) For concreteness assume the increase in d happens at time t0 and d goes back at t1 = t0 + T . The laws of motion pertaining to the system with temporarily high d are only in force between t0 and t1 jumps so as to be on a phase curve (belonging

178

8.10. Solutions to activities

Figure 8.9: Diagram for the solution to Activity 8.6 (d).

to the ‘temporary’ system of ODEs), which at t1 will cross the saddle path (belonging to the ‘permanent’ system of ODEs). From then on the economy is gradually moving along the saddle path to the old steady state. (e) Devaluation is defined as an increase in e, the official exchange rate. From (8.24) and (8.25), s˙ = 0 locus shifts to the right, while R˙ = 0 locus shifts up. The details of adjustment depend on the relative position of the saddle path and the stationary loci, which we do not make precise here. We proceed under the assumption in parameters that the new saddle path cuts below the old steady state. In the aftermath of the devaluation, parallel exchange rate s immediately drops (the economy must be on the saddle path). From then on the economy is gradually moving along the saddle path to the new steady state both s and R increasing.

Figure 8.10: Diagram for the solution to Activity 8.6 (e).

(f) Again, for concreteness assume the devalation is announced at time t0 and actually happens at t1 = t0 + T . At t0 parallel exchange rate s drops, and between t0 and t1 the system is moving along a phase curve (belonging to the old system of ODEs) so

179

8. Ordinary differential equations

as at t1 to be on the new saddle path, which takes the system to the new steady state.

Figure 8.11: Diagram for the solution to Activity 8.6 (f).

A reminder of your learning outcomes By the end of this chapter and the relevant reading, you should be able to: explain the notions of an ordinary differential equation and of the family of solutions solve first order separable differential equations in closed form solve homogeneous and non-homogeneous second order differential equations in closed form conduct the stability analysis of a steady state linearise non-linear system of differential equations around a steady state draw phase portraits for one- and two-dimensional differential equations use the logic of rational expectations to analyse dynamical systems in macroeconomics.

180

Chapter 9 Optimal control 9.1

Learning outcomes

By the end of this chapter and the relevant reading, you should be able to: explain what is meant by an optimal control problem form a Hamiltonian and produce the full set of necessary conditions for a maximum in accordance with Pontryagin’s maximum principle solve tractable optimal control problems by applying the maximum principle and the relevant boundary conditions explain the logic of the Ramsey-Kass-Coopmans model and the Tobin’s q model draw phase diagrams for optimal control problems with one control and one state variable analyse the adjustment dynamics in dynamic micro-founded macroeconomic problems.

9.2

Essential reading

Sydsæter, K., P. Hammond, A. Seierstad and A. Strøm Further mathematics for economic analysis. Chapter 9.

9.3

Further reading

Kamien, M. and N.L. Schwartz Dynamic optimization: the calculus of variations and optimal control in economics and management. 1991. Takayama, A. Analytical methods in economics. 1993. Barro, R. and X. Sala-i-Martin Economic Growth. Mathematical Appendix.

9.4

Introduction

In this chapter we study the solution techniques for the optimisation problems in continuous time, traditionally known as optimal control problems. We apply the

181

9. Optimal control

techniques to the analysis of the central problems of macroeconomic dynamics (not unlike those studied in Chapter 7 in discrete time settings): the optimal consumption/saving decision and the forward-looking investment decision.

9.5 9.5.1

Continuous time optimisation: Hamiltonian and Pontryagin’s maximum principle Finite horizon problem

We start with an abstract problem involving one control and one state variable, which have the interpretation of consumption and capital. Z T max f (ct , kt , t) dt (9.1) 0

s.t. k˙ t = g(ct , kt ) kT ≥ 0, k0 given. We will consider T < ∞. Note that the constraint has to be satisfied for all periods t (∀ t ∈ [0, T )). As in the discrete time case, kt is a state variable, ct is called the control variable. Form the Lagrangian as if you were doing that for the discrete case: Z T [f (ct , kt , t) + λt (g(ct , kt ) − k˙ t )] dt. L= 0

RT Now observe the trick. Integrate the term 0 λt k˙ t dt by parts: Z T Z T T ˙ − λt kt dt = −λt kt |0 + λ˙ t kt dt. 0

(9.2)

0

Let us assume for now that limt→T λt kt and limt→0 λt kt are finite, so that λt kt |T0 is a finite constant: Z T Z T L = [f (ct , kt , t) + λt (g(ct , kt ) − k˙ t )] dt + λ˙ t kt dt − λt kt |∞ 0 0 0 Z T [f (ct , kt , t) + λt g(ct , kt ) + λ˙ t kt ] dt + constant. = 0

The FOCs are:

∂L ∂L = 0, =0 ⇔ ∂ct ∂kt fc (·) + λt gc (·) = 0 fk (·) + λt gk (·) = −λ˙ t .

The Hamiltonian is just a way of obtaining the same result without having to go through the ‘integration by parts’ reasoning. Now define the Hamiltonian: H(ct , kt , λt , t) ≡ f (ct , kt , t) + λt g(ct , kt )

182

(9.3) (9.4)

9.5. Continuous time optimisation: Hamiltonian and Pontryagin’s maximum principle

where λ(t) is called a co-state variable (as k is a state variable). In terms of this functional, the optimality conditions can be written as: ∂H = 0, ∂ct

∂H = −λ˙ t . ∂kt

To confirm this, observe: ∂H ∂ct ∂H ∂kt

= fc (ct , kt , t) + λt gc (ct , kt ) = 0 = fk (·) + λt gk (·) = −λ˙ t .

This is called the co-state equation. Note the symmetry: ∂H(k, λ, t) ˙ = k. ∂λ Finally, notice that at time T , there is no point in hoarding any capital. Therefore, k(T ) = 0. There exist cases, however, in which the terminal value of the state variable need not be zero (for example, if the consumer reaches the satiation point after which no increases in consumption raise utility). The slightly more general condition is written as:

λ(T )k(T ) = 0.

(9.5)

The co-state variable λ(t) has an interesting and economically meaningful interpretation as the marginal value of increasing the state variable at a given point in time t. A quick exposition of this is contained in the appendix to this chapter.1 The control and state variable distinction has the same logic as in the dynamic programming case. The control variable is chosen freely (within an admissible range) in every instant, which determines the evolution of the state variable. The state variable kt is therefore predetermined at time t. It is even easier to distinguish the control and state variables in the optimal control problems because the state variables and only those have an associated dynamic constraint with k˙ t on the LHS. This is called the transversality condition that completes the list of necessary conditions for a maximum. We now state the general theorem that encompasses various assumptions on the end value k(T ).

1

Another potential reason for leaving behind non-zero capital k(T ) > 0 could be salvage value (for example, one may sell the remaining capital on the market for second-hand equipment). We will mention broader classes of problems with non-zero salvage value in Section 9.5 and observe how the transversality condition is modified.

183

9. Optimal control

Theorem 23 Maximum principle The necessary conditions for {c(t)} to solve the optimal control problem are as follows. There exists a function λ(t) such that at each t: i. c(t) maximises the Hamiltonian H(k(t), c(t), λ(t), t) for all t ii. g(k(t), c(t)) = k˙ t ˙ iii. Hk (k(t), c(t), λ(t), t) = −λ(t) iv. If kT is unrestricted, λ(T ) = 0; if kT ≥ 0, λ(T )k(T ) = 0. No transversality condition is needed if kT given.

Example 9.1 Consider a macroeconomic control problem. The stock variable xt takes on value 0 at the initial date 0, thereafter it evolves according to x˙ t = −ut , where ut is the control variable. You are asked to: Z 2 [et xt − u2t ] dt max 0

s.t. x˙ t = −ut x0 = 0, x ≥ 0. Necessary conditions for maximum: Hu = −2ut − λt = 0 λ˙ = −Hx = −et λ2 x2 = 0. The second equation implies λ = −et + k1 for some constant k1 ∈ R. Plug this into the first equation to obtain: et k1 et k1 − ⇔ xt = − + t + k 2 2 2 2 2 where k1 and k2 are unknown constants. ut =

x0 = 0 implies:

1 − + k2 = 0. 2

λ2 x2 = 0 implies:  2   2  e k1 e k1 1 2 − + 2+ (−e2 + k1 ) − + 2 + k2 (−e + k1 ) = 2 2 2 2 2 1 = (−e2 + 2k1 + 1)(−e2 + k1 ). 2 If x2 = 0, then k1 = (e2 − 1)/2, λ2 = (e2 − 1)/2 − e2 = (−e2 − 1)/2 < 0 which violates the optimality condition that λt ≥ 0. Hence the optimal solution has λ2 = 0, k1 = e2 : xt = −

184

1 et e2 + t+ 2 2 2.

9.6. Ramsey-Cass-Koopmans model

You may wonder if there is an analogue of the second order sufficiency conditions for maximisation as in the Lagrange theory. There are indeed theorems that impose certain concavity or convexity requirements on the functions f (·) and g(·), (t) or c∗ (t) and x∗ (t) guaranteeing that the maximum is reached; however, these sufficiency conditions are very demanding and quite often fail to be met by valid optimal solutions. The appendix to this chapter gives examples of these conditions, but in the guide we will just use direct comparison to claim that a given trajectory attains the maximum of the objective function.

9.5.2

Infinite horizon problem

If in the problem (9.1) T = ∞, the optimality condition and the co-state equation are still necessary conditions for a maximum. Under certain conditions, although not always, the condition (9.5) extends naturally to the infinite horizon case, T = ∞: lim λ(t)k(t) = 0.

t→∞

While this condition is applicable in a wide variety of ‘regular’ cases, there are some exceptions. Examples of the regularity conditions under which the transversality condition is necessary are mentioned in the appendix to this chapter. In the application considered in this guide we will not need to impose the condition to reduce the generality of candidate solutions. Activity 9.1

Consider a macroeconomic control problem:  Z T 3 2 1 2 −2t max − xt − ut e dt 4 2 0 s.t. x˙ t = xt + ut x0 = 1.

(a) Set up the Hamiltonian. Write down the necessary conditions for maximum. (b) Manipulate the conditions in (a) to obtain a system of first order linear differential equations in x and the control variable u. (c) Explicitly solve the system in (b) for the family of solutions of the ODE for x and u. How many dimensions does this family have? (d) Which conditions should you impose to identify the particular solution (out of the family in (c)) that is consistent with the optimality principle?

9.6

Ramsey-Cass-Koopmans model

The model presented in this section has been built in the 1920s by Frank Ramsey to answer the question of how much a nation should save. It is now the standard prototype for studying the optimal intertemporal allocation of resources. Consider an economy populated by Nt people, with Nt growing exponentially at rate n. The labour force is equal to the population (no labour force participation decisions).

185

9. Optimal control

Output is produced using capital, K and labour (1 unit per agent). There is no productivity growth. The output is either consumed or invested: Yt = F (Kt , Nt ) = Ct + Kt . For simplicity, we assume that there is no physical depreciation of capital, or that Yt is net rather than gross output. The production function exhibits constant returns to scale. We first rewrite the model in per capita terms (the reason for that will become clear in the end):   Kt F , 1 = f (kt ) = ct + k˙ t + nkt Nt where lower case letters denote per capita values of variables so that k is the capital-labour ratio; we assume f (kt ) to be strictly concave and to satisfy the so-called Inada conditions: f (0) = 0, f 0 (0) = ∞, f 0 (∞) = 0. The preferences of the agents for consumption over time are represented by the present discounted value of utility: Z ∞ U= u(ct )e−θ(t−s) dt. s

The function u(·) is known as the instantaneous utility function, or ‘felicity’; u(·) is non-negative and a concave increasing function of the per capita consumption. The parameter θ is the rate of time preference or the subjective discount rate, assumed to be strictly positive.

9.6.1

The command optimum

Suppose that a central planner wants at time t = 0 to maximise collective welfare. The only choice that has to be made at each moment of time is how much the representative individual should consume and how much it should add to the capital stock to provide consumption in the future. The planner has to find the solution to the following problem: Z



u(ct )e−θt dt

(9.6)

s.t. k˙ t = f (kt ) − ct − nkt kT ≥ 0, k0 given.

(9.7)

max U0 = 0

Using the maximum principle, the solution is obtained by setting up the Hamiltonian function: Ht = u(ct )e−θt + µt (f (kt ) − ct − nkt ). The variable µt is the co-state variable associated with the state variable k. The value of µt , as you recall, is the marginal value (in present value terms) of an additional unit of capital at time t.

186

9.6. Ramsey-Cass-Koopmans model

Necessary and sufficient conditions for a path to be optimal are: Hc = 0 ⇔ e−θt u0 (ct ) = µt Hk = −µ˙ ⇔ µ˙ t = −µt (f 0 (kt ) − n) lim kt µt = 0.

t→∞

(9.8) (9.9) (9.10)

Using (9.8), the transversality condition (9.10) can be rewritten as: lim [e−θt kt u0 (ct )] = 0.

t→∞

Differentiate (9.8) w.r.t. time: e−θt u00 (ct )c˙t − θe−θt u0 (ct ) = µ˙ t divide by (9.8): e−θt u00 (ct )c˙t − θe−θt u0 (ct ) u00 (ct )c˙t µ˙ t = −θ = . −θt 0 0 e u (ct ) u (ct ) µt Now use (9.9) to factor out µ˙ t /µt : θ− Rewrite this as:



u00 (ct )c˙t = f 0 (kt ) − n. u0 (ct )

u00 (ct )ct u0 (ct )



c˙t = θ + n − f 0 (kt ). ct

(9.11)

You may recognise the term in parentheses as the Arrow-Pratt coefficient of relative risk aversion (CRRA). Most of the time we will use the CRRA class of utility functions, for 00 which uu0(c(ctt)c) t = constant = −σ: f 0 (kt ) − θ − n c˙t = . ct σ

(9.12)

In this case the change in consumption is proportional to the excess of the marginal product of capital (net of population growth) over the discount rate. You can verify 00 that for u(c) = ln c, uu0(c(ctt)c) t ≡ −1.

9.6.2

Steady state and dynamics

The optimal path is characterised by (9.10), (9.11) and the constraint (9.7). We start with the steady state. In steady state both the per capita capital stock, k, and the level of consumption per capita, c, are constant. We denote the steady state values of these variables by k ∗ and c∗ , respectively: f 0 (k ∗ ) = θ + n c∗ = f (k ∗ ) − nk ∗ . Notice, that in the steady state, ultimately, the productivity of capital, and (thus the real interest rate), is determined by the rate of time preference and n. Tastes and population growth determine the real interest rate θ + n, and technology then determines the capital stock and level of consumption consistent with that interest rate.

187

9. Optimal control

Figure 9.1: Phase diagram for the Ramsey model.

9.6.3

Dynamics

To study dynamics, we use the phase diagram, drawn in (k, c) space. All points in the positive orthant except the points on the vertical axis, are feasible: without capital, output is zero, and this positive c is not feasible. The locus k˙ = 0 starts from the origin, reaches a maximum at a certain level (known by the special name ‘golden rule’), and crosses the horizontal axis at point A where f (k) = nk. The c˙ = 0 locus is, from (9.12), vertical. Anywhere above the k˙ = 0 locus, the capital-labour ratio k is decreasing: consumption is above the level that would just maintain k constant. Similarly, k is increasing at points below the k˙ = 0 locus. In the case of the c˙ = 0 locus, consumption is increasing to the left of the locus, where f 0 (kt ) > θ + n, and decreasing to the right of the locus. The vertical arrows demonstrate these directions of motion. The system has three steady states, the origin, point E and point A, however, for any starting values of k, only the trajectory DD, the saddle path that converges to E, satisfies the necessary conditions (9.7), (9.10) and (9.11). On all other paths, either (9.11) eventually fails (phase curves to the left of DD) or the transversality condition is not satisfied; finally, paths like DD00 can be improved upon in a straightforward fashion by simply increasing consumption at all times. The central planner’s solution to the optimising problem (9.6) is fully summarised by the path DD. For each initial capital stock, this implies a unique initial level of consumption. For instance, with initial capital stock k0 , the optimal initial level of consumption is c0 . Convergence of c and k to c∗ and k ∗ is monotone. Notice that the solution to the optimisation problem (9.6) is stationary in per capita terms, however, it

188

9.6. Ramsey-Cass-Koopmans model

is trending in terms of total levels of capital and consumption, growing at the same rate as population. Had we not restated the problem in per capita terms, we would have had to deal with a non-stationary system of non-linear differential equations, an overwhelming task. By transforming the problem to a stationary one we were able to use the phase diagram apparatus for the analysis.

9.6.4

Local behaviour around the steady state

Linearisation of the dynamic system (9.12), (9.7) yields further insights into the dynamic behaviour of the economy. Linearising both equations in the neighbourhood of the steady state gives: c˙ = σf 00 (k ∗ )c∗ · (k − k ∗ ) = −β(k − k ∗ ) and: k˙ = [f 0 (k ∗ ) − n](k − k ∗ ) − (c − c∗ ) = θ(k − k ∗ ) − (c − c∗ ). The characteristic matrix for the system in (c, k) is given by:   0 σf 00 (k ∗ )c∗ . −1 f 0 (k ∗ ) − n The characteristic polynomial is: −z(f 0 (k ∗ ) − n − z) + σf 00 (k ∗ )c∗ = z 2 + (−f 0 (k ∗ ) + n)z + σf 00 (k ∗ )c∗ . Since f 00 < 0, the last coefficient in the polynomial is negative, hence the roots are of different sign: one root is positive and the other negative. This implies the saddle point property of the (c∗ , k ∗ ) steady state. Activity 9.2 While discussing the Ramsey model we have analysed the planner’s solution. Consider now an individual optimisation problem, in which the resource constraint is given by k˙ = w + rk − c − nk (flow of income consists of wage receipts and interest on shareholdings). Firms produce using production function F (K, N ) = N f (k) and hire factors of production at wage rate w and interest rate r, determine in a perfectly competitive market. Set up the Hamiltonians of the individual optimisation problem and derive optimality conditions. Compare them to (9.9) and (9.10) above. What do you find?

9.6.5

Fiscal policy in the Ramsey model

Now let us introduce government fiscal policy into the standard Ramsey model: Z ∞ max U0 = u(ct )e−θt dt 0

s.t. k˙ t = f (kt ) − ct k0 > 0 given

189

9. Optimal control

where u(c) = ln c and f (kt ) = k(t)α − g (g is government expenditure). The Hamiltonian is: Ht = ln ct e−θt + µt (k(t)α − g − ct ). The necessary and sufficient conditions for a path to be optimal are: e−θt = µt ct = −µ˙ ⇔ µ˙ t = −µt αk(t)α−1 = 0.

sHc = 0 ⇔ Hk lim kt µt

t→∞

(9.13) (9.14) (9.15)

Using (9.13), the transversality condition (9.15) can be rewritten as:   −θt kt lim e = 0. t→∞ ct Differentiate (9.13) w.r.t. time and divide by (9.13): −θt −1 −e−θt c−2 ct = µ˙ t 1 c˙ t − θe −θt −1 −e−θt c−2 ct 1 µ˙ t t c˙ t − θe = − ( c ˙ + θc ) = . t t −1 ct µt e−θt ct Now use (9.14) to factor out µ˙ t /µt :

1 (c˙t + θct ) = αk(t)α−1 ct c˙t = αktα−1 − θ. ct The steady state values k ∗ , c∗ solve: α(k ∗ )α−1 = θ c∗ = (k ∗ )α − g. Linearising both equations in the neighbourhood of the steady state gives: c˙ = σf 00 (k ∗ )c∗ · (k − k ∗ ) = −β(k − k ∗ ) where β = α(1 − α)(k ∗ )α−2 c∗ , and: k˙ = f 0 (k ∗ )(k − k ∗ ) − (c − c∗ ) = θ(k − k ∗ ) − (c − c∗ ). The k˙ = 0 locus is no longer hump-shaped, it is monotone increasing and is shifted down by the magnitude of g (see Figure 9.2). Assume the economy is in steady state. Consider an unanticipated increase in parameter g from g0 to g1 > g0 . The economy will experience an instantaneous jump down to the new steady state (consumption falls with unchanged capital stock, see Figure 9.3). Again, assume the economy is in steady state. Consider instead that the increase in g happens at t1 > t0 while the news of the future change arrives at t0 . The economy will experience a smaller jump down, as c decreases, then a temporary increase in the capital stock and falling consumption until at t1 the economy is on the saddle path with falling capital and consumption (see Figure 9.4). The adjustment path is unique.

190

9.6. Ramsey-Cass-Koopmans model

Figure 9.2: Phase portrait for the associated system of ODEs.

Figure 9.3: Adjustment to an unanticipated fiscal expansion.

191

9. Optimal control

Figure 9.4: Adjustment to a pre-announced fiscal expansion.

9.6.6

Current value Hamiltonians and the co-state equation

Suppose in an optimal control problem the objective function under the integral is discounted: Z T max e−rt f (ct , kt , t) dt. 0

The canonical Hamiltonian is: H = e−rt f (ct , kt , t) + λt g(ct , kt ). This can be rewritten as: H = e−rt [f (ct , kt , t) + µt g(ct , kt )] where µt = ert λt . Now the co-state equation says: λ˙ = −Hk .

(9.16)

µ˙ = ert˙λt = ert λ˙ t + rert λt = −ert Hk + rµ = −Hk + rµ

(9.17)

Let us rewrite this, using µ:

where H is the current value Hamiltonian, defined as: H = f (ct , kt , t) + µt g(ct , kt ) = ert H. Notice that using regular Hamiltonians, or current value Hamiltonians makes no difference whatsoever to the resulting optimality conditions – this just affects whether they are written in terms of λ or µ. Remember (9.16) is marginally easier than (9.17).

192

9.7. Optimal investment problem: Tobin’s q

9.6.7

Multiple state and control variables

Activity 9.3 Consider the optimisation problem of a household that holds two categories of assets: money and government bonds. Domestic money bears no interest but the transactions technology is such that holding cash balances reduces liquidity costs of purchasing consumption goods, which we capture by introducing real money balances m alongside consumption c in the utility function: Z ∞ e−ρt u(c, m) dt. max 0

The real financial wealth of the representative household, a, is given by a = m + b, where b denotes the real stock of bonds. The flow budget constraint is: a˙ = y + rb − c − πm where r is the real interest rate on bonds, and π is the inflation rate. The utility function is assumed to be: u(c, m) =

c1−η + χ ln m. 1−η

Households treat y, r, π as given. (a) Determine the sets of control variables and state variables. If you have more endogenoues variables, factor them out to obtain a standard optimal control problem. (b) Set up the Hamiltonian. Write down the necessary conditions for maximum. (c) Factor out the co-state variables to obtain the law of motion of c. (d) Characterise the relationship between real money balances and consumption on the optimal path.

9.7

Optimal investment problem: Tobin’s q

Consider the following problem: Z max

+∞

e−rt [πk(t) − it − c(it )] dt

0

s.t. k˙ t = it k0 given where π(k) is the concave monotone increasing profit function (instantaneous profit, achievable in a given period of time with capital k), and c(it ) is the strictly convex cost of installing new capital, assumed to satisfy c00 (i) > o, c0 (0) = c(0) = 0. Using the maximisation principle, the solution is obtained by setting up the current value Hamiltonian with the (modified) co-state variable qt : H = π(kt ) − it − c(it ) + qt it .

193

9. Optimal control

The necessary and sufficient conditions for a path to be optimal are:

Hi = qt − 1 − c0 (it ) = 0 Hk = π 0 (kt ) = rqt − q˙t lim (e−rt qt kt ) = 0.

t→∞

(9.18) (9.19) (9.20)

From these we obtain a system of first order ODEs:

−1 c0 (k˙ t ) = qt − 1 ⇔ k˙ t = c0 (qt − 1) q˙t = rqt − π 0 (kt ).

9.7.1

(9.21) (9.22)

Steady state

We start with the steady state in which the capital stock, kt , and the level of the co-state variable qt are non-zero constants. We denote the steady state values of these variables by k ∗ and q ∗ , respectively:

c0 (0) = 0 = q ∗ − 1 0 = rq ∗ − π 0 (k ∗ ).

The co-state variable qt here has the interpretation of the net current value of an extra unit of capital at time t. It is commonly known in the macroeconomic literature as Tobin’s q. The steady state value of Tobin’s q when the firm should neither invest nor disinvest, is 1, which is the cost of adding an extra infinitesimal unit of k. Hence the intutitive interpretation of the steady state as the one in which marginal value of capital just equals the marginal cost. The steady state value of k ∗ is such that the market return r equals intra-firm return π 0 (k ∗ ).

9.7.2

Phase diagram

To study dynamics, we use the phase diagram, drawn in (k, q) space. The locus k˙ = 0 is horizontal at q = q ∗ = 1. The q˙ = 0 locus is downward sloping as q = π 0 (k)/r on it, and the profit function is concave, π 00 < 0. The arrows demonstrate directions of motion associated with the differential laws (9.21) and (9.22).

194

9.8. Continuous time optimisation: extensions

Only the saddle trajectory that converges to the steady state satisfies the necessary conditions (9.18) and (9.20). On all other paths, either the transversality condition is violated as q → ∞, k → ∞ or arriving at k = 0 the system jumps to the origin discontinuously violating (9.18).

9.7.3

Local behaviour around the steady state

Linearising both equations in the neighbourhood of the steady state produces: k˙ ≈

1 c00 (0)

(q − 1)

q˙ ≈ r(q − 1) − π 00 (k ∗ )(k − k ∗ ). The determinant of the characteristic polynomial formed from this system of ODEs is π 00 (k ∗ )/c00 (0) < 0, implying that one root is positive and the other negative, confirming the saddle point property.

9.8

Continuous time optimisation: extensions

In this section we briefly summarise the extensions to the canonic optimal control problem (9.1). Suppose the state variable (vector) is denoted by x and the control variable (vector) by u. Z T max f (x(t), u(t), t) dt 0

x˙ = Q(x(t), u(t), t).

195

9. Optimal control

There could be both inequality and equality constraints to be satisfied at all point on the interval [0, T ]: g(x(t), u(t), t) ≥ 0 ∀ t

and/or: h(x(t), u(t), t) = 0 ∀ t.

Another kind of constraints are intergral constraints, which can also be written as inequalities or equalities: T

Z

φ(x(t), u(t), t) dt ≥ 0 0

and/or: Z

T

ψ(x(t), u(t), t) dt = 0. 0

Finally, the objective function may be complicated by the terminal value:2

Z max

T

 f (x(t), u(t), t) dt + F (x(T )) .

0

The Hamiltonian apparatus can incorporate all of the above mentioned conditions and modifications as follows:

H = f (x(t), u(t), t) + λ(t)Q(x(t), u(t), t) + p1 g(x(t), u(t), t) +p2 h(x(t), u(t), t) + q1 φ(x(t), u(t), t) + q2 ψ(x(t), u(t), t).

2

Recall the discussion of the scrap value of capital at the beginning of Chapter 8.

196

9.8. Continuous time optimisation: extensions

Theorem 24 The necessary conditions for {u(t)} to solve the optimal control problem are as follows. There exist non-negative functions λ(t), p1,2 (t), q1,2 (t) such that at each t: ∂H ∂u ∂H − ∂x ∂H ∂λ

= 0 = λ˙ = x. ˙

p1 (t) ≥ 0, g(x(t), u(t), t) p1 (t)g(x(t), u(t), t) h(x(t), u(t), t) q1 (t) Z T φ(x(t), u(t), t) dt q1 (t) 0 Z T ψ(x(t), u(t), t) dt

≥ = = ≥

0 0 0 0

= 0 = 0

0

∂F = λ(T ). ∂x(T ) The last condition is the generalised transversality condition.

Example 9.2

Spatial pricing problem.

Suppose a monopolistic firm serves customers distributed on the one-dimensional segment of length T . Let t index the location of a customer and τ (t) be the transport cost of shipping to t, τ (0) = 0. Denote the price charged at location t (inclusive of delivery costs) p(t). Assume the demand at location t is characterised by the local inverse demand function P (y(t)) and suppose that price discrimination across locations is feasible (the monopoly can charge different prices at different locations not fearing arbitrage). If y(t) is the quantity sold at t, then the total sales are given by: Z T Y (T ) = y(t) dt. 0

The total costs of production are given by C(Y (T )). The objective of the firm is to maximise market-wide profits as follows: Z T π = [P (y(t)) − τ (t)]y(t) dt − C(Y (T )) 0

s.t. Y˙ = y(t) H = P (y(t))y(t) − τ (t)y(t) + λ(t)y(t).

197

9. Optimal control

The necessary conditions for optimum: ∂H = 0 = P 0 (yt )yt + P (yt ) − τt + λt = 0 ∂y ∂H − = λ˙ ⇔ λ˙ = 0 ∂Y y(t) = Y˙ . Finally, notice that the transversality condition is now: −C 0 (Y (T )) = λ(T ). This implies that λ is invariant with respect to the location index t: λ ≡ −C 0 (Y (T )). In view of the constant λ, the pricing equation can be rearranged as follows: P 0 y + P = M R(t) = τ (t) + C 0 (Y (T )). For a closed-form solution example, consider a constant elasticity demand described by: P (y) = y −ξ , 0 < ξ < 1. Then: M R(t) = −ξy −ξ + y −ξ = (1 − ξ)y −ξ = (1 − ξ)P (t) (1 − ξ)P (t) = τ (t) + C 0 (Y (T )) τ (t) + C 0 (Y (T )) ⇔ P (t) = . 1−ξ The last equation is the pricing rule. Notice that it implies that the price differentials across locations, in general, do not equal net transport cost differences: P (t) − P (0) =

τ (t) > τ (t). 1−ξ

For the general demand functions with elasticities η(t): P (t) =

τ (t) + C 0 (Y (T )) . 1 − 1/η(t)

If η 0 > 0, a slightly perverse result may be obtained whereupon remote locations may be charged lower prices despite higher transport costs. This is obviously the result of the monopolisitic price setting.

198

9.9. Solutions to activities

9.9

Solutions to activities

Solution to Activity 9.1 Set up the Hamiltonian:   3 2 1 2 −2t H = − xt − ut e + λt (xt + ut ). 4 2 Necessary conditions for maximum: Hu = −ut e−2t + λt = 0 3 λ˙ = −Hx = xt e−2t − λt 2 λT xT = 0 x˙ t = xt + ut ˙ 2t + 2λt e2t u˙ t = λe   3 −2t 2t = e xt e − λt + 2ut 2 3 = xt − ut + 2ut 2 3 = ut + xt . 2 The two-dimensional system is: x˙ t = xt + ut 3 u˙ t = ut + xt . 2 Solving the system yields: 1 √ r1 t 1 √ r2 t C1 6e − C2 6e 3 3 = C1 er1 t + C2 er2 t √ √ = ue−2t = C1 e−t+t 6/2 + C2 e−t−t 6/2

xt = ut λt where r1,2 = 1 ±

√ 6/2.

Verify that xT = 0. For suppose not, then the transversality condition implies λT 6= 0: 1 √ r1 T 1 √ r2 T C1 6e − C2 6e = 0 3 3 r2 T = C2 e√ = C1 e 6T .

xT = ⇒ C1 er1 T ⇒ C2

199

9. Optimal control

1 C1 3





x0 = 1 √ 1 6 − C2 6 = 1 3 r

⇔ C1 − C2 =

3 2



3 √ τ0 . Describe the adjustment trajectory (be explicit about jump variables and crawler variables).

9.11

Comments on the sample examination/practice questions

1. Z max

T

(ut aeαt − u2t eβt )e−rt dt

0

s.t. x˙ t = −ut x0 = x¯, xT = 0. We have: H = ut ae(α−r)t − u2t e(β−r)t − λt ut Hu = ae(α−r)t − 2ut e(β−r)t − λt = 0 −λ˙ = Hx = 0.

205

9. Optimal control

It is desired to extract all of the oil during a given time interval [0, T ] ⇒ x(T ) = 0. There is no transversality condition since x(T ) is not free. λ˙ t = 0 implies λt ≡ constant = K1 . Substitute this into the optimality condition w.r.t. ut to obtain: ae(α−r)t − 2ut e(β−r)t = K1 ae(α−r)t − K1 α (α−β)t K1 (r−β)t ut = e − e . = (β−r)t 2e 2 2 For xt we have: K1 (r−β)t a (α−β)t e − e 2 2 K1 a x = e(r−β)t − e(α−β)t + K2 . 2(r − β) 2(α − β) x˙ = −u =

Impose the two conditions: x(0) = x¯, x(T ) = 0.

K1 e(r−β)T 2(r − β)

a K1 − + K2 = x¯ 2(r − β) 2(α − β) a − e(α−β)T + K2 = 0. 2(α − β)

The solution is: (β − r)(2xβ − 2xα − a + aeT α−T β ) (β − α)(eT r−T β − 1) aeT α−T β − aeT r−T β − 2xαeT r−T β + 2xβeT r−T β = . 2(β − α)(eT r−T β − 1)

K1 = K2 So:

1 (2x(β − α) + a(eT (α−β) − 1)) (r−β)t a e − e(α−β)t T (r−β) 2 (α − β)(e − 1) 2(α − β) [a(eT (α−β) − eT (r−β) ) + 2x(eT (r−β) )(β − α)] + 2(β − α)(eT (r−β) − 1) a (α−β)t (β − r)(2xβ − 2xα − a + aeT α−T β ) (r−β)t = e − e 2 2(β − α)(eT (r−β) − 1) (β − r)(2xβ − 2xα − a + aeT α−T β ) = . (β − α)(eT (r−β) − 1)

x∗t =

u∗t λ∗t 2.



 1 2 max ax(t) − u (t) e−rt dt 2 0 s.t. x˙ = −bx + u x(0) = x0 . Z

206



9.11. Comments on the sample examination/practice questions

(a)  1 2 −rt ax − u e + λ(−bx + u) 2 λ λb − ae−rt 0. 

H = ue−rt = λ˙ = lim λt xt =

t→∞

Use µt = ert λt . (b) x˙ λ˙ x˙ µ˙ µt xt

−bx + λert λb − ae−rt −bx + µ −a + (b + r)µ a = C1 e(b+r)t + b+r e(b+r)t a = C2 e−bt + C1 + . 2b + r (b + r)b = = = =

(c) The solution to the optimal control problem is just one of the solutions of the system of ODEs. Impose the transversality condition to explicitly find the subfamily of solutions consistent with the optimality principle. What does it imply about the evolution of λ? a −rt e λt = ert µt = C1 ebt + b+r    a −rt a e(b+r)t bt −bt λt xt = C1 e + e + . C2 e + C1 b+r 2b + r (b + r)b (2b+r)t

As long as C1 6= 0, there is a term C12 e 2b+r in λt xt , and lim λt xt = ∞. So, C1 = 0. λt =

a −rt e b+r

xt = C2 e−bt +

a . (b + r)b

(d) Initial condition: a (b + r)b a = x0 − (b + r)b   a a = x0 − e−bt + . (b + r)b (b + r)b

x0 = C 2 + ⇒ C2 xt

The solution converges to a steady state.

207

9. Optimal control

3. Z

T

max

[αu2 (t) + βx(t)] dt

0

s.t. x(t) ˙ = u(t) x(0) = 0, x(T ) = B.

(a) H = αu2 + βx + λu. (b) Hu = 2αu + λ = 0 Hx = β = −λ˙ x˙ = u. (c) λ(t) = −βt + K1 . (d) Use the result in (c) to find the family of solutions for u and x (up to constants of integration). 2αu(t) = −λ(t) = βt − K1 βt − K1 u(t) = Z 2α β 2 K1 t βt − K1 x(t) = dt = t − + K2 . 2α 4α 2α (e) Impose x(0) = 0, x(T ) = B. βt(t − T ) t +B 4α T β(2t − T ) B u(t) = + 4α T T B λ(t) = β − 2α − βt. 2 T x(t) =

4. H = (2xt e−t − 2xt ut − u2t ) + λut Hu = −2xt − 2ut + λt = 0 −λ˙ t = Hx = 2e−t − 2ut . Substitute for ut : 1 x˙ t = ut = λt − xt 2 ˙λt = λt − 2xt − 2e−t .

208

9.11. Comments on the sample examination/practice questions

This is a non-homogeneous system of equations. Find the general solution first: 1 x˙ = ut = λt − xt 2 ˙λt = λt − 2xt 1 x0 = −x + λ 2 0 λ = −2x + λ. The matrix is:



−1 1/2 −2 1



characteristic polynomial is r2 and the roots are identical and equal to 0. This implies xgt = K1 + K2 t, λgt = 2(K1 + K2 ) + 2K2 t. To find a particular solution, assume xpt = αe−t , λpt = βe−t and use the method of undetermined coefficients: 1 x˙ t = −αe−t = βe−t − αe−t 2 ˙λt = −βe−t = βe−t − 2αe−t − 2e−t . The first identity implies β = 0, while the second one −2αe−t − 2e−t ≡ 0 ⇒ α = −1. To sum up: x∗t = K1 + K2 t − e−t λ∗t = 2(K1 + K2 ) + 2K2 t. Impose the border conditions x0 = 0, x1 = 1 to obtain: x0 = K 1 − 1 = 0 x1 = K1 + K2 − e−1 = 1. Hence K1 = 1, K2 = e−1 . The complete solution is given by: x∗t = 1 + e−1 t − e−t   1 2 ∗ + t λt = 2 1 + e e ∗ −1 −t ut = e + e . 5. (a) Set up the Hamiltonian: 1 Ht = − exp(−2ct ) + µt (ktα − ct − nkt ). 2 Necessary conditions for the consumption/savings path to be optimal are: Hc = 0 ⇔ exp(−2ct ) = µt Hk = θµt − µ˙ t = (θ + n)µt − µt αktα−1 lim kt e−θt µt = 0.

t→∞

(9.25) (9.26) (9.27)

209

9. Optimal control

Using (9.26), the transversality condition (9.27) can be rewritten as: lim [e−θt−2ct kt ] = 0.

t→∞

Differentiate (9.25) w.r.t. time and divide by (9.25): µ˙ t = −2e−2ct c˙t µ˙ t = −2c˙t = θ + n − αktα−1 . µt Now use (9.26) to factor out µ˙ t /µt : c˙t =

αktα−1 − θ − n . 2

(9.28)

Recalling the capital accumulation equation, we obtain the second equation for the system of ODEs: k˙ t = ktα − ct − nkt . (9.29) (b) The optimal path is characterised by (9.28) and (9.29) (as well as the initial condition and the constraint (9.27)). In steady state both the per capita capital stock, k, and the level of consumption per capita, c, are constant. We denote the steady state values of these variables by k ∗ and c∗ , respectively, and focus on the steady state with non-zero capital and consumption: 1/(α−1)  θ+n ∗ k = α ∗ ∗ α c = (k ) − nk ∗ . Linearising both equations in the neighbourhood of that steady state gives: α(α − 1)(k ∗ )α−2 c˙ = (k − k ∗ ) 2 k˙ = [α(k ∗ )α−1 − n](k − k ∗ ) − (c − c∗ ). (c) The central planner’s solution to the optimising problem is fully summarised by the saddle path. For each initial capital stock, this implies a unique initial level of consumption. For instance, with initial capital stock k0 , the optimal initial level of consumption is c0 . (d) Both stationary loci are changed, c˙ = 0 shifts to the left, while k˙ = 0 flattens. The new steady state capital is less than before (since α < 1). Without formally checking the slope of the saddle path, graphically it could be that the saddle path cuts the old c˙ = 0 locus above or below the old steady state. Assuming that it does so from below: the economy jumps to the new saddle path immediately with lower consumption (because c is a jump variable and k is a crawler variable). Then the capital stock is gradually eroded while consumption falls still more toward the new steady state. (e) Similar to (d), but the initial drop in consumption is smaller, and the capital starts increasing initially before the actual change in the population growth happens, when the capital and consumption are both declining along the saddle path.

210

9.11. Comments on the sample examination/practice questions

Figure 9.5: Diagram for the solution to Activity 5 (c).

Figure 9.6: Diagram for the solution to Activity 5 (d).

211

9. Optimal control

Figure 9.7: Diagram for the solution to Activity 5 (e).

6. Z max {it }



e−rt [π(kt ) − (1 − τ )It − c(It )] dt

0

s.t. k˙ = I k(0) = k0 .

(a) The current value Hamiltonian with the co-state variable qt is written as:

H = ln(1 + kt ) − (1 − τ )It −

eIt + e−It + 1 + qt It . 2

Necessary and sufficient conditions for an investment path to be optimal are:

HI = q − (1 − τ ) −

eI − e−I =0 2

1 = rq − q˙ 1+k lim (e−rt qt kt ) = 0. Hk =

t→∞

212

(9.30) (9.31) (9.32)

9.11. Comments on the sample examination/practice questions

From these we obtain the system of first order ODEs. From (9.30):
$$\frac{e^{I} - e^{-I}}{2} = q - 1 + \tau \;\Rightarrow\; \frac{e^{2I}}{2} - (q - 1 + \tau)e^{I} - \frac{1}{2} = 0$$
$$\Rightarrow\; e^{I} = q - 1 + \tau + \sqrt{1 + (q - 1 + \tau)^2}$$
$$\Rightarrow\; \dot k = I = \log\left(q - 1 + \tau + \sqrt{1 + (q - 1 + \tau)^2}\right) \quad (9.33)$$
and from (9.31):
$$\dot q = rq - \frac{1}{1+k}. \quad (9.34)$$
(b) The steady state: setting $\dot k = 0$ gives
$$\log\left(q - 1 + \tau + \sqrt{1 + (q - 1 + \tau)^2}\right) = 0 \;\Rightarrow\; q - 1 + \tau + \sqrt{1 + (q - 1 + \tau)^2} = 1 \;\Rightarrow\; q^* - 1 + \tau = 0 \;\Rightarrow\; q^* = 1 - \tau,$$
and setting $\dot q = 0$ gives
$$rq^* - \frac{1}{1+k} = 0 \;\Rightarrow\; k^* = \frac{1 - r(1-\tau)}{r(1-\tau)}.$$
To linearise (9.33), note that
$$\frac{d}{dq}\log\left(q - 1 + \tau + \sqrt{1 + (q - 1 + \tau)^2}\right) = \frac{1 + \frac{q-1+\tau}{\sqrt{1+(q-1+\tau)^2}}}{q - 1 + \tau + \sqrt{1 + (q - 1 + \tau)^2}} = \frac{1}{\sqrt{1 + (q - 1 + \tau)^2}} = \left[1 + (q - 1 + \tau)^2\right]^{-1/2}.$$
Linearising both equations in the neighbourhood of the steady state gives:
$$\dot k \approx \left[1 + (q^* - 1 + \tau)^2\right]^{-1/2}(q - q^*) = q - 1 + \tau$$
$$\dot q \approx r(q - 1 + \tau) - \pi''(k^*)(k - k^*) = r(q - 1 + \tau) + \frac{1}{(1 + k^*)^2}(k - k^*) = r(q - 1 + \tau) + r^2(1-\tau)^2(k - k^*).$$
The determinant of the matrix of coefficients
$$\begin{pmatrix} r & r^2(1-\tau)^2 \\ 1 & 0 \end{pmatrix}$$
for this system of ODEs is $-r^2(1-\tau)^2 < 0$, implying that one root is positive and the other negative, confirming the saddle point property.
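As a quick sanity check (not required in the examination), the steady state and the saddle-point property can be verified numerically. The sketch below uses illustrative values of $r$ and $\tau$, which are assumptions since the question leaves them general; the coefficient matrix is written for the ordering $(\dot k, \dot q)$, which has the same determinant and eigenvalues as the matrix above.

```python
# Numerical sanity check for Question 6(b): steady state and saddle property.
import numpy as np

r, tau = 0.05, 0.2                           # assumed parameter values
q_star = 1 - tau
k_star = (1 - r * (1 - tau)) / (r * (1 - tau))

# Both ODEs (9.33)-(9.34) should be (approximately) zero at the steady state.
x = q_star - 1 + tau
print(np.log(x + np.sqrt(1 + x ** 2)))       # k_dot at steady state: 0
print(r * q_star - 1 / (1 + k_star))         # q_dot at steady state: 0

# Linearised coefficient matrix for (k_dot, q_dot) in deviations (k-k*, q-q*).
A = np.array([[0.0, 1.0],
              [r ** 2 * (1 - tau) ** 2, r]])
print("det =", np.linalg.det(A))             # -r^2 (1-tau)^2 < 0
print("eigenvalues:", np.linalg.eigvals(A))  # opposite signs: saddle point
```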



(c) To study dynamics, we use the phase diagram, drawn in (k, q) space.

The locus $\dot k = 0$ is horizontal at $q = q^* = 1 - \tau$. The $\dot q = 0$ locus is downward sloping: $q = 1/(r(1+k))$. The arrows demonstrate the directions of motion associated with the differential equations (9.33) and (9.34). Only the saddle trajectory, which converges to the steady state, is the optimal solution to the problem. (d) The $\dot k = 0$ locus shifts down. Since $q$ is a jump variable, there is an instantaneous jump down to the new saddle path. Capital then starts accumulating while Tobin's $q$ falls further towards its new steady state value. The adjustment path is unique.

9.12 Appendix A: Interpretation of the co-state variable λ(t)

Consider the maximum of (9.1) in Section 9.2 subject to constraints:
$$V(\bar{k}, t_0) = \max \int_0^T f(c_t, k_t, t)\,dt \qquad \text{s.t. } k_0 = \bar{k}.$$
Denote the specific paths of control, state and co-state variables delivering the maximum by $c_t^*$, $k_t^*$ and $\lambda_t$, respectively. Consider altering the initial level of capital by a small magnitude $a$; the maximum for the modified problem, $V(k_0 + a, 0)$, is obtained for $\tilde{c}_t$, $\tilde{k}_t$.
$$V(k_0, 0) = \int_0^T f(c^*, k^*, t)\,dt = \int_0^T \left[f(c^*, k^*, t) + \lambda g(c^*, k^*) - \lambda\dot k^*\right]dt.$$
Integrating by parts as in (9.2):
$$V(k_0, 0) = \int_0^T \left[f(c^*, k^*, t) + \lambda g(c^*, k^*) + \dot\lambda k^*\right]dt + \lambda_0 k_0^* - \lambda_T k_T^*. \quad (9.35)$$
Similarly:
$$V(k_0 + a, 0) = \int_0^T f(\tilde c, \tilde k, t)\,dt = \int_0^T \left[f(\tilde c, \tilde k, t) + \lambda g(\tilde c, \tilde k) + \dot\lambda\tilde k\right]dt + \lambda_0\tilde k_0 - \lambda_T\tilde k_T \quad (9.36)$$
for the same function $\lambda(t)$. Notice, though, that $k_0^* = k_0$, $\tilde k_0 = k_0 + a$. Subtract (9.35) from (9.36):
$$V(k_0 + a, 0) - V(k_0, 0) = \int_0^T \left[f(\tilde c, \tilde k, t) - f(c^*, k^*, t)\right]dt$$
$$= \int_0^T \left[f(\tilde c, \tilde k, t) + \lambda g(\tilde c, \tilde k) + \dot\lambda\tilde k - f(c^*, k^*, t) - \lambda g(c^*, k^*) - \dot\lambda k^*\right]dt + \lambda_0\tilde k_0 - \lambda_T\tilde k_T - \left(\lambda_0 k_0^* - \lambda_T k_T^*\right)$$
$$= \int_0^T \left[f(\tilde c, \tilde k, t) + \lambda g(\tilde c, \tilde k) + \dot\lambda\tilde k - f(c^*, k^*, t) - \lambda g(c^*, k^*) - \dot\lambda k^*\right]dt + \lambda_0 a - \lambda_T(\tilde k_T - k_T^*).$$
Under the integral, substitute the first order Taylor expansion at every point $t$:
$$f(\tilde c, \tilde k, t) \approx f(c^*, k^*, t) + f_c(c^*, k^*, t)(\tilde c - c^*) + f_k(c^*, k^*, t)(\tilde k - k^*)$$
$$g(\tilde c, \tilde k) \approx g(c^*, k^*) + g_c(c^*, k^*)(\tilde c - c^*) + g_k(c^*, k^*)(\tilde k - k^*).$$
Then:
$$V(k_0 + a, 0) - V(k_0, 0) \approx \int_0^T \left[\{f_c(c^*, k^*, t) + \lambda g_c(c^*, k^*)\}(\tilde c - c^*) + \{f_k(c^*, k^*, t) + \lambda g_k(c^*, k^*) + \dot\lambda\}(\tilde k - k^*)\right]dt + \lambda_0 a - \lambda_T(\tilde k_T - k_T^*).$$
By (9.3) and (9.4), both terms under the integral are zero:
$$f_c(c^*, k^*, t) + \lambda g_c(c^*, k^*) = 0$$
$$f_k(c^*, k^*, t) + \lambda g_k(c^*, k^*) + \dot\lambda = 0$$
and finally, by transversality, either $\lambda_T = 0$, or $k_T^* = \tilde k_T = 0$, so:
$$V(k_0 + a, 0) - V(k_0, 0) \approx \lambda_0 a$$
$$\lim_{a\to 0}\frac{V(k_0 + a, 0) - V(k_0, 0)}{a} = V_k(k_0, 0) = \lambda_0.$$
Thus $\lambda_0$ is the marginal valuation in the optimal programme of an extra unit of the state variable at time 0. The discussion has only considered the initial time period 0. It is easily shown that $\lambda_t$ is likewise the marginal value of relaxing the state variable constraint at time $t$.
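The shadow-value interpretation can also be illustrated numerically. The sketch below uses a simple discrete cake-eating problem, which is an assumption introduced purely for illustration (it is not the continuous-time model above), and checks that a finite-difference estimate of $V_k(k_0, 0)$ matches the marginal utility of initial consumption, the discrete analogue of $\lambda_0$.

```python
# Rough numerical illustration of lambda_0 = V_k(k_0, 0) for a stand-in example:
# maximise sum_{t=0}^{T-1} ln(c_t) subject to sum_t c_t = k_0.  With log utility
# and no discounting the optimum is c_t = k_0 / T, and the time-0 co-state is
# the marginal utility of initial consumption, 1 / c_0.
import numpy as np

T = 10

def V(k0):
    c = np.full(T, k0 / T)        # optimal (equal) consumption path
    return np.log(c).sum()        # maximised value

k0, h = 5.0, 1e-6
dV_dk0 = (V(k0 + h) - V(k0 - h)) / (2 * h)   # central finite difference
lambda_0 = 1 / (k0 / T)                      # 1 / c_0 along the optimum

print(dV_dk0, lambda_0)           # both approximately T / k0 = 2.0
```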

+λ0 a − λT (k˜T − kT∗ ). By (9.3) and (9.4), both terms under the integral are zero: fc (c∗ , k ∗ , t) + λgc (c∗ , k ∗ ) = 0 fk (c∗ , k ∗ , t) + λgk (c∗ , k ∗ ) + λ˙ = 0 and finally, by transversality, either λT = 0, or kT∗ = k˜T = 0, so: V (k0 + a, 0) − V (k0 , 0) ≈ λ0 a V (k0 + 1, 0) − V (k0 , 0) = Vk (k0 , 0) = λ0 . lim a→0 a Thus λ0 is the marginal valuation in the optimal program of an extra unit of the state variable at time 0. The discussion has only considered the initial time period 0. It is easily shown that λt is likewise the marginal value of relaxing the state variable constraint at time t.

215

9. Optimal control

9.13

Appendix B: Sufficient conditions for solutions to optimal control problems

Theorem 25 Consider the problem: Z max

T

f (ct , kt , t) dt

(9.37)

0

s.t. k˙ t = g(ct , kt ) k0 given, kT ≥ 0 and suppose that for a triple (c∗t , kt∗ , λ∗t ) the necessary conditions of the maximum principle are satisfied. Suppose further that maxc H(c, k, λ, t) has a solution and the ˆ resulting function H(k, λ, t) is concave in k for any t. Then (c∗t , kt∗ , λ∗t ) solves (9.37). See Section 9.7 in Sydsæter et al.

9.14

Appendix C: Transversality condition in the infinite horizon optimal contol problems

As mentioned in the footnote on page 172, there are circumstances in which the transversality condition is not necessary for a solution to an optimal control problem with no restriction on the terminal value kT . However, as shown by Kamihigashi (2001) ‘Necessity of Transversality Conditions for Infinite Horizon Problems’, Econometrica, Vol. 69, the transversality condition is necessary if on the optimal path c∗t can be written as c∗t = φt (kt∗ , k˙ t∗ , t) (this basically requires that the contraint k˙ t = g(ct , kt ) can be solved as an equation for ct ) and the function: f (c∗t , kt∗ , t) = f (φt (kt∗ , k˙ t∗ , t), kt∗ , t) can be integrated on (0, +∞) with respect to t. He also shows, that even if the integrability condition fails, the transversality condition is still necessary if the function f (φt (kt∗ , k˙ t∗ , t), kt∗ , t) is homogeneous of an arbitrary degree θ in kt and k˙ t . Additionally, in the context of infinite horizon problems, a sufficient condition for maximum akin to the transversality condition is given by the following: lim λ(t)[k(t) − k ∗ (t)] ≥ 0 for all feasible k(t).

t→∞

This condition has to be used in conjunction with the regular maximum principle conditions and the concavity of the Hamiltonian in k, c. See Section 9.11 in Sydsæter et al.

216

Appendix A Sample examination paper Important note: This Sample examination paper reflects the examination and assessment arrangements for this course in the academic year 2014–2015. The format and structure of the examination may have changed since the publication of this subject guide. You can find the most recent examination papers on the VLE where all changes to the format of the examination are posted.

Time allowed: 3 hours. Candidates should answer EIGHT of the following TEN questions: all FIVE from Section A (8 marks each) and THREE from Section B (20 marks each). Workings should be submitted for all questions requiring calculations. Any necessary assumptions introduced in answering a question are to be stated.

217

A. Sample examination paper

Section A Answer all questions from this section (8 marks each). 1. Answer all parts of this question. (a) Define concavity for a real valued function. (b) Define increasing returns to scale for a production function f (K, L), where K and L are capital and labour. (c) Prove that if a production function f (K, L) has f (0, 0) = 0 and is concave, then it cannot have increasing returns to scale. 2. Answer all parts of this question. (a) Define a homogeneous function. (b) Define the uncompensated demand functions and the indirect utility function. (c) Demonstrate that the uncompensated demand functions and the indirect utility function are homogeneous of degree zero in prices and income. 3. Consider the Ramsey model: Z ∞ max u(ct )e−θt dt s.t. k˙ t = f (kt ) − ct , ct

k0 given.

0

(a) Set up the Hamiltonian function H. Denote the co-state variable by λ. (b) Set up the current value Hamiltonian function H. Denote the co-state variable by µ. (c) Show that, in optimising the current value Hamiltonian in (b), the co-state equation which is a necessary condition is given by: µ˙ = −Hk + θµ where Hk is the derivative of the current value Hamiltonian with respect to capital k. (d) Do solving (a) and (b) lead to the same optimality conditions? Briefly explain. 4. For each of the following ordinary differential equations, find the solution which satisfies the given initial conditions: (a) y 00 + 7y 0 − 8y = 0, y(0) = 0, y 0 (0) = −2. (b) y 00 − 10y 0 + 25y = 0, y(0) = 0, y 0 (0) = 1. 5. The Slutsky equation relates the partial derivatives of the uncompensated demand functions x1 (p1 , p2 , m), x2 (p1 , p2 , m) and of the compensated demand functions h1 (p1 , p2 , u), h(p1 , p2 , u) to each other. It provides the effect of a change in the price of good 1 on demand for good 1, as follows: ∂x1 (p1 , p2 , m) ∂h1 (p1 , p2 , u) ∂x1 (p1 , p2 , m) = − x1 (p1 , p2 , m). ∂p1 ∂p1 ∂m By using duality, prove that the Slutsky equation holds true.

218

Section B Answer three questions from this section (20 marks each). 6. Consider a consumer with preferences represented by: u(x1 , x2 ) = −

1 1 − x1 x2

where x1 and x2 are quantities consumed of goods 1 and 2 respectively. Denote by pi the price of good i, for i = 1, 2 and by m the consumer’s income. (a) Calculate the uncompensated demand functions x1 (p1 , p2 , m), x2 (p1 , p2 , m) and the indirect utility. (b) For a given level of utility u solve the dual problem and find the compensated demand functions. (c) Verify that the indirect utility function and the expenditure function are inverses of one another. 7. Suppose an economy has 300 units of labour and 450 units of land. These can be used in the production of wheat and cotton. Each unit of wheat requires 2 of labour and 1 of land; each unit of cotton requires 1 of labour and 2 of land. A plan to produce x units of wheat and y units of cotton is feasible if its requirements of each factor of production are less than the available amount of that factor: 2x + y ≤ 300 x + 2y ≤ 450 The society wants to find the production plan that is feasible and that maximises a social welfare function W defined over the quantities of the two goods: W (x, y) = x1−a y a with 0 < a < 1. (a) State formally the constrained welfare maximisation problem for such a society. (b) Is the condition 0 < a < 1 sufficient for the Kuhn-Tucker theorem to apply? (c) Solve the problem by using the Kuhn-Tucker conditions. 8. Answer all parts of this question. (a) You are planning your finances for the next ten years. For the first four years, you save continuously at a rate of £200 over the year, and the interest rates are 2% per annum. For the next six years, your saving rate is increased to £240 over the year, and the interest rates are 3% per annum. The interest is compounded continuously. Denoting the current account balance at time t by y(t), the equations for the evolution of the bank balance in the two time periods are therefore: y˙ = 0.02y + 200 for 0 ≤ t ≤ 4 y˙ = 0.03y + 240 for 4 ≤ t ≤ 10 By solving the first-order differential equations, find your final balance y(10) at the end of 10 years, if the initial balance y(0) is £10,000.

219

A. Sample examination paper

(b) Pricing of financial derivatives sometimes involves solving the following second-order differential equation: f 00 (x) + bf 0 (x) = ax where f (x) denotes the value of the derivative, when the underlying asset price is x. a and b are constants. Find the general solution for the derivative price. If the property of the derivative is that its price f (x) becomes approximately x2 as x becomes large, what can you say about the parameters a and b? 9. In a habit-formation model of consumption, consumers suffer from reduction in consumption levels. The lifetime utility of an infinitely living representative agent is then given by: ∞ X δ t u(ct − γct−1 ) t=0

where u(·) is a strictly concave function, ct is the consumption at time t, δ ∈ (0, 1) is the discount factor and γ > 0 is the parameter of habit-formation. The consumer’s wealth wt accumulates according to the following transition equation: wt+1 = R(wt − ct ) where R is the constant gross interest rate R = 1 + r. (a) Noting that there are two state variables ct−1 and wt at time t, set up the Bellman equation V (ct−1 , wt , t). (b) Derive the Euler equation. (c) Assume now that u(ct − γct−1 ) = ln ct − γ ln ct−1 . By guessing that the value function is of the form V (ct−1 , wt , t) = A + B ln ct−1 + D ln wt , derive the policy function for ct in terms of δ, γ and wt . (d) Under what condition on the discount factor δ and the gross interest rate R would the agent consume a constant level c∗t = c¯ for all t? Give a brief economic interpretation for your answer. 10. Consider a consumer who expects to live from t = 0 to T . Let c(t) denote his consumption expenditure at time t and w(t) his wealth at time t. His income is constant at y. Assuming that the instantaneous rate of interest is constant at r for all 0 ≤ t ≤ T , the evolution of w(t) is given by: w(t) ˙ = rw(t) + y − c(t)

(∗)

The consumer maximises his lifetime inter-temporal utility, given by: Z

T

e−ρt u (c(t)) dt

0

where ρ > 0 is the consumer’s discount rate, and u0 (c) > 0, u00 (c) < 0 for all c > 0. (a) Determine the control and state variables. Hence set up the Hamiltonian. (b) Show that at the optimum,

220

i. the co-state variable is the discounted value of marginal utility: λ(t) = e−ρt u0 (c∗ (t)) ii. the marginal utility is given by: u0 (c∗ (t)) = λ(0)e(ρ−r)t iii. when ρ = r, the consumer will optimally smooth his consumption: c∗ (t) = c¯. (c) You are also given that the initial wealth is w(0) = w0 , and that the consumer intends to leave no inheritance, w(T ) = 0. By solving the dynamic constraint (*), show that the constant consumption level is given by: c¯ = y +

r w0 1 − e−rT

Why is this greater than y + rw0 , i.e. the sum of the constant income and the interest income from the initial wealth? What would c¯ be if w(T ) = w0 instead?

[END OF PAPER]

221

A. Sample examination paper

222

Appendix B Sample examination paper – Examiners’ commentary 1. Reading for this question Subject guide, Chapter 5. Approaching the question (a) A real valued function f defined on a convex subset U of Rn is concave if for all x and y in U and for all t ∈ [0, 1]: f (tx + (1 − t)y) ≥ tf (x) + (1 − t)f (y). (b) A production function f (K, L) has increasing returns to scale if, for any s > 1, f (sK, sL) > sf (K, L). (c) Take a concave production function f (k, L) with f (0, 0) = 0. Increasing returns would imply that for s > 1, f (sK, sL) > sf (K, L). However, by concavity:     1 1 f (K, L) = f (sK, sL) + 1 − (0, 0) s s   1 1 f (sK, sL) + 1 − ≥ f (0, 0) s s 1 = f (sK, sL). s Hence: f (sK, sL) ≤ sf (K, L), which implies either decreasing or constant returns to scale and rules out increasing returns to scale. 2. Reading for this question Subject guide, Chapter 4. Approaching the question (a) A function f (x1 , x2 , . . . , xn ) is homogeneous of degree d in (x1 , x2 , . . . , xn ) if for all (x1 , x2 , . . . , xn ) and all s > 0: f (sx1 , sx2 , . . . , sxn ) = sd f (x1 , x2 , . . . , xn ). In particular, a function which is homogeneous of degree zero is such that: f (sx1 , sx2 , . . . , sxn ) = f (x1 , x2 , . . . , xn ).

223

B. Sample examination paper – Examiners’ commentary

(b) The uncompensated demand x(p, m) is the solution to the consumer’s utility maximisation problem. It depends upon prices and income. The indirect utility function is the maximum value function for the consumer’s utility maximisation problem: v(p, m) = u(x(p, m)). (c) The uncompensated demand is homogeneous of degree zero in prices and income because if we multiply all prices and income by the same positive factor, the budget constraint is unaffected. In more detail: x(sp, sm) = arg max u(x) s.t. spx = sm x

= arg max u(x) s.t. px = m x

= x(p, m). Also: v(sp, sm) = u(x(sp, sm)) = u(x(p, m)) = v(p, m). 3. Reading for this question Subject guide, Chapter 9. Approaching the question (a) The Hamiltonian is: Ht = u(ct )e−θt + λt (f (kt ) − ct ). (b) The current value Hamiltonian is: Ht = u(ct ) − µt (f (kt ) − ct ), where µt = eθt λt . (c) Optimising the current value Hamiltonian in (b), the co-state equation which is a necessary condition is given by: µ˙ = −Hk + θµ where Hk is the derivative of the current value Hamiltonian with respect to capital k. (d) We can write: Ht = eθt Ht and hence optimality conditions are the same. (See page 180 of the subject guide.)

224

4. Reading for this question Subject guide, Chapter 8. Approaching the question (a) The roots of the characteristic polynomial are 1 and −8. Hence the family of solutions is: y(x) = Aex + Be−8x with first derivative: y 0 (x) = Aex − 8Be−8x . Consider now the initial conditions: y(0) = 0 ⇒ y(0) = A + B = 0 y 0 (0) = −2 ⇒ y 0 (0) = A − 8B = −2. A − 8(−A) = −2 2 A = − 9 2 B = . 9 Hence the particular solution is: 2 2 y(x) = − ex + e−8x . 9 9 (b) The roots of the characteristic polynomial are 5 and 5. Hence the family of solutions is: y(x) = Ae5x + Bxe5x with first derivative: y 0 (x) = 5Ae5x + Be5x (1 + 5x). Consider now the initial conditions: y(0) = 0 y 0 (0) = 1

⇒ ⇒

y(0) = A = 0 y 0 (0) = 5A + B = 1.

A = 0 B = 1. Hence the particular solution is: y(x) = xe5x . 5. Reading for this question Subject guide, Chapter 6. Approaching the question See page 100 of the subject guide.

225

B. Sample examination paper – Examiners’ commentary

6. Reading for this question Subject guide, Chapters 3 and 6. Approaching the question (a) In order to calculate the uncompensated demand functions we need to solve the utility maximisation problem: max − x1 ,x2

1 1 − x1 x2

s.t. p1 x1 + p2 x2 ≤ m, x1 ≥ 0, x2 ≥ 0.

We first note that the non-negativity constraints cannot bind because xi = 0 would give u(.) = −∞. Hence the problem reduces to: L=−

1 1 − − λ(p1 x1 + p2 x2 − m). x1 x2

The first-order conditions are: ∂L 1 = 2 − λp1 = 0 ∂x1 x1 1 ∂L = 2 − λp2 = 0 ∂x2 x2 ∂L = p1 x1 + p2 x2 − m = 0. ∂λ From the first two equalities we get:  2 x2 p1 = . x1 p2 Hence:

r x2 = x1

p1 . p2

By substituting in the budget constraint: r p1 p 1 x1 + p 2 x1 = m p2 x1 =

p1 +

m √

p1 p2

.

Hence the uncompensated demands are: m √ p1 + p1 p2 m x2 (p1 , p2 , m) = . √ p2 + p1 p2 x1 (p1 , p2 , m) =

The indirect utility is: V (p1 , p2 , m) = −

p1 +



p1 p 2

p2 +



− m m √ p1 + p 2 + 2 p 1 p2 = − . m

226

p1 p2

(b) The expenditure minimisation problem (or dual problem) for this consumer is: min p1 h1 + p2 h2

h1 ,h2

s.t. −

1 1 − ≥ u, h1 ≥ 0, h2 ≥ 0. h1 h2

First we notice that the non-negativity constraints cannot bind because this would give utility equal to −∞ which will not meet the first constraint. Hence the problem reduces to:   1 1 L = p1 h1 + p2 h2 − λ − − −u . h1 h2 The first-order conditions are: ∂L 1 = p1 − λ 2 = 0 ∂h1 h1 ∂L 1 = p2 − λ 2 = 0 ∂h2 h2 1 1 ∂L = − − − u = 0. ∂λ h1 h2 From the first two conditions we get: p1 h22 = 2 p2 h1 r p1 h2 = h1 . p2 By substituting in the utility constraint we find: r 1 p2 1 = u − − h1 h1 p1  r  1 p2 h1 (p1 , p2 , u) = − 1+ u p1  r r 1 p2 p1 h2 (p1 , p2 , u) = − 1+ u p1 p2  r  1 p1 = − 1+ . u p2 (c) The expenditure function is: e(p1 , p2 , u) = p1 h1 (p1 , p2 , u) + p2 h2 (p1 , p2 , u)   r  r  1 p2 1 p1 1+ p1 − p 2 1+ = −p1 u p1 u p2    r  r  1 p2 p1 = − p1 1 + + p2 1 + u p1 p2 1 √ = − (p1 + p2 + 2 p1 p2 ) . u

227

B. Sample examination paper – Examiners’ commentary

We verify that: e(p1 , p2 , V (p1 , p2 , m)) = m V (p1 , p2 , e(p1 , p2 , u)) = u. We have that: 1 √ e(p1 , p2 , u) = − (p1 + p2 + 2 p1 p2 ) u 1 √ (p1 + p2 + 2 p1 p2 ) e(p1 , p2 , V (p1 , p2 , m)) = − V (p1 , p2 , m) 1 √ = − p1 +p2 +2√p1 p2 (p1 + p2 + 2 p1 p2 ) − m = m. Also: V (p1 , p2 , m) = V (p1 , p2 , e(p1 , p2 , u)) = = =

√ p1 + p 2 + 2 p 1 p2 − m √ p1 + p2 + 2 p 1 p2 − e(p1 , p2 , u) √ p1 + p2 + 2 p 1 p2  − 1 √ − u p 1 + p2 + 2 p1 p 2 u

as required. 7. Reading for this question Subject guide, Chapter 2. Approaching the question (a) The constrained welfare maximisation problem for this society is: max W (x, y) = x1−a y a x,y

2x + y x + 2y x y

≤ ≤ ≥ ≥

s.t.

300 450 0 0.

The non-negativity constraint will never bind, because this would give welfare equal to zero. Hence we will only consider the two technology constraints. (b) The constraints are convex (linear). The objective function is concave if the Hessian is negative semi-definite:   −a(1 − a)x−a−1 y a a(1 − a)x−a y a−1 2 D Π= . a(1 − a)x−a y a−1 a(a − 1)x1−a y a−2 For the Hessian to be negative semi-definite, we need the two first-order principal minors to be non-positive: −a(1 − a)x−a−1 y a ≤ 0 −a(1 − a)x−a−1 y a a(a − 1)x1−a y a−2 − a(1 − a)x−a y a−1 a(1 − a)x−a y a−1 = 0 ≤ 0.

228

Hence the condition: 0≤a≤1 is sufficient for the Kuhn-Tucker theorem to apply. (c) The Lagrangian is: L = x1−a y a − λ(2x + y − 300) − µ(x + 2y − 450). The first-order conditions are: (1 − a)x−a y a − 2λ − µ = 0 ax1−a y a−1 − λ − 2µ = 0 2x + y − 300 ≤ 0 x + 2y − 450 ≤ 0, λ ≥ 0 and µ ≥ 0 with complementary slackness. The case λ = 0 and µ = 0 can be immediately ruled out. We are left with three cases. Case 1: λ = 0 and µ > 0. The first-order conditions become: (1 − a)x−a y a − µ ax1−a y a−1 − 2µ 2x + y − 300 x + 2y − 450

= = ≤ =

0 0 0 0.

By dividing through the first two first-order conditions, we find: y=

1 a x. 21−a

By substituting into the binding constraint: x+2

1 a x = 450 21−a x = 450(1 − a) 1 a y = 450(1 − a) = 225a. 21−a

This solution is internally consistent if the third first-order condition is satisfied: 2x + y − 300 ≤ 0 2 · 450(1 − a) + 225a − 300 ≤ 0 8 a ≥ . 9

229

B. Sample examination paper – Examiners’ commentary

Case 2: λ > 0 and µ = 0. The first-order conditions become: (1 − a)x−a y a − 2λ ax1−a y a−1 − λ 2x + y − 300 x + 2y − 450

= = = ≤

0 0 0 0.

By dividing through the first two first-order conditions, we find: a y=2 x. 1−a By substituting into the binding constraint: a 2x + 2 x = 300 1−a x = 150(1 − a) a y = 2 150(1 − a) = 300a. 1−a This solution is internally consistent if the third first-order condition is satisfied: x + 2y − 450 ≤ 0 150(1 − a) + 300a − 450 ≤ 0 2 . a ≤ 3 Case 3: λ > 0 and µ > 0. The first-order conditions become: (1 − a)x−a y a − 2λ − µ ax1−a y a−1 − λ − 2µ 2x + y − 300 x + 2y − 450

= = = =

0 0 0 0.

By solving the two binding constraints we find: x = 50 y = 200. This solution is internally consistent if λ > 0 and µ > 0. From the first first-order condition:  y a − 2λ = (1 − a)4a − 2λ. µ = (1 − a) x By replacing in the second first-order condition: a4a−1 − λ − 2(1 − a)4a + 4λ = 0 4a−1 (8(1 − a) − a) λ = 3 4a−1 (8(1 − a) − a) µ = (1 − a)4a − 2 3   1 1 = 22a a− . 2 3

230

Hence λ > 0 and µ > 0 iff: 8 9 2 . a > 3 a <

Hence the solution is: For a ≤ 32 For 23 < a < For a ≥ 98

8 9

x = 150(1 − a), y = 300a x = 50, y = 200 x = 450(1 − a), y = 225a

8. Reading for this question Subject guide, Chapters 7 and 8. Approaching the question (a) Consider the first four years. The complimentary function is yc = c1 e0.02t , where c1 is a constant. The particular integral is yp = −10,000. So the general solution is: y(t) = c1 e0.02t − 10,000. Applying the initial condition y(0) = 10,000: 10,000 = c1 − 10,000



c1 = 20,000.

Hence the solution is: y(t) = 20,000e0.02t − 10,000. In particular y(4) = 20,000e0.08 − 10,000 = 11,665.74. Similarly for the next six years: yc = c2 e0.03t and yp = −8,000, and so: y(t) = c2 e0.03t − 8,000. This time the initial condition is y(4) =11,665.74, and hence: 11,665.74 = c2 e0.12 − 8,000



c2 = (11,665.74 + 8,000) e−0.12 = 17,441.95.

Therefore the solution is: y(t) = 17,441.95e0.03t − 8,000. Thus after ten years, y(10) = 17,441.95e0.3 − 8,000 = 15, 544.17. Note that this can also be calculated by: Z 4 Z 0.02×4+0.03×6 0.02(4−t)+0.03×6 10,000e + 200e dt + 0

6

240e0.03(6−t) dt = 15,544.17.

0

(b) There are two ways of solving this.

231

B. Sample examination paper – Examiners’ commentary

i. Solve directly: the characteristic polynomial is r2 + br = 0, the roots of which are r1 = −b, r2 = 0. Then the complimentary function fc = A1 e−bx + A2 . For the particular integral, try fp = B1 x2 + B2 x1 + B3 . Then fp0 = 2B1 x + B2 and fp00 = 2B1 , and so substituting back: 2B1 +b(2B1 x+B2 ) = 2bB1 x+(2B1 +bB2 ) = ax



B1 =

a a , B2 = − 2 . 2b b

Hence: a a f (x) = A1 e−bx + A2 + x2 − 2 x1 + B3 2b b a a 2 −bx = A1 e + x − 2 x1 + A02 2b b where A02 = A2 + B3 . ii. Let u = f 0 and solve for u in the first-order differential equation u0 (x) + bu(x) = ax. The solution is u(x) = A1 e−bx + ab x − ba2 . Then integrating this: f (x) = A01 e−bx +

a 2 a x − 2 x + A2 , 2b b

where A01 = − Ab1 . Then limx→∞ f (x) =

a 2 x. 2b

Hence a = 2b.

9. Reading for this question Subject guide, Chapters 7 and 8. Approaching the question (a) With the two state variables ct−1 and wt , the Bellman equation is: V (ct−1 , wt , t) = max {u(ct − γct−1 ) + δV (ct , wt+1 , t + 1)} ct

s.t. wt+1 = R(wt − ct ). (b) From the first-order condition: ∂V (., t + 1) ∂wt+1 ∂V (., t + 1) ∂V (., t + 1) ∂V (., t + 1) −δ = δR −δ . ∂wt+1 ∂ct ∂ct ∂wt+1 ∂ct (B.1) Shift one period back: u0 (ct −γct−1 ) = −δ

u0 (ct−1 − γct−2 ) = δR

∂V (., t) ∂V (., t) −δ . ∂wt ∂ct−1

(B.2)

Also as V (ct−1 , wt , t) is an optimal value function w.r.t. ct given state variables ct−1 and wt , the envelope theorem states that for changes in the state variables, their effects on ct can be ignored, i.e.: ∂V (ct−1 , wt , t) ∂V (ct , wt+1 , t + 1) ∂wt+1 ∂V (ct , wt+1 , t + 1) = δ = δR (B.3) ∂wt ∂wt+1 ∂wt ∂wt+1 ∂V (ct−1 , wt , t) = −γu0 (ct − γct−1 ). (B.4) ∂ct−1

232

The Euler equation is derived by factoring out the V ’s. First, applying (B.4) to (B.1) and (B.2): ∂V (., t + 1) + δγu0 (ct+1 − γct ) ∂wt+1 ∂V (., t) u0 (ct−1 − γct−2 ) = δR + δγu0 (ct − γct−1 ). ∂wt u0 (ct − γct−1 ) = δR

Then ∂V∂w(.,t) and t equation:

∂V (.,t+1) ∂wt+1

can be factored out in (B.3) to yield the Euler

u0 (ct − γct−1 ) − δγu0 (ct+1 − γct ) 1 = . 0 0 u (ct−1 − γct−2 ) − δγu (ct − γct−1 ) δR

(c) With u(ct − γct−1 ) = ln ct − γ ln ct−1 : V (ct−1 , wt , t) = max {ln ct − γ ln ct−1 + δV (ct , wt+1 , t + 1)} . ct

Now try V (ct−1 , wt , t) = A + B ln ct−1 + D ln wt as suggested. A + B ln ct−1 + D ln wt = max {ln ct − γ ln ct−1 + δA + δB ln ct + δD ln wt+1 } ct

= max {ln ct − γ ln ct−1 + δA + δB ln ct + δD ln R(wt − ct )} ct

= max {δA + δD ln R − γ ln ct−1 + (1 + δB) ln ct + δD ln(wt − ct )} . ct

The first-order condition is: δD 1 + δB = . ∗ ct wt − c∗t Hence the policy function is: c∗t

 =

1 + δB 1 + δB + δD

 wt .

Now we need to replace the unknown parameters. To do this, substitute c∗t back in the Bellman equation to express it in terms of the state variables ct−1 and wt . A + B ln ct−1 + D ln wt 

1 + δB 1 + δB + δD





= δA + δD ln R − γ ln ct−1 + (1 + δB) ln wt     1 + δB +δD ln wt − wt 1 + δB + δD      1 + δB δRD = δA + (1 + δB) ln + δD ln 1 + δB + δD 1 + δB + δD −γ ln ct−1 + (1 + δB + δD) ln wt . Equating the coefficients: B = −γ D = 1 + δB + δD   (1 + δB)1+δB (δRD)δD A = δA + ln . (1 + δB + δD)1+δB+δD



Solving for $D$:
$$D = \frac{1 - \delta\gamma}{1 - \delta}.$$
Though not required, solving for $A$ gives:
$$(1 - \delta)A = \ln\left[\frac{(1 - \delta\gamma)^{1-\delta\gamma}(\delta RD)^{\delta D}}{(1 - \delta\gamma + \delta D)^{1-\delta\gamma+\delta D}}\right] = (1 - \delta\gamma)\ln\left[(\delta R)^{\delta/(1-\delta)}(1 - \delta)\right]$$
$$\Leftrightarrow\quad A = \frac{1 - \delta\gamma}{1 - \delta}\ln\left[(\delta R)^{\delta/(1-\delta)}(1 - \delta)\right].$$
Therefore the policy function is now, in terms of $\delta$, $\gamma$ and $w_t$:
$$c_t^* = \left[\frac{1 - \delta\gamma}{1 - \delta\gamma + \delta\frac{1-\delta\gamma}{1-\delta}}\right]w_t = (1 - \delta)w_t.$$
Thus the policy function is independent of $\gamma$ in this case.
(d) Substitute into $w_{t+1} = R(w_t - c_t)$:
$$w_{t+1}^* = R\left(w_t^* - (1 - \delta)w_t^*\right) = R\delta w_t^*,$$
and hence:
$$c_{t+1}^* = (1 - \delta)w_{t+1}^* = (1 - \delta)R\delta w_t^* = R\delta c_t^*.$$
Hence the optimal policy is the constant consumption $c_t^* = \bar c$ for all $t$ if $\delta = \frac{1}{R}$. This is when the consumer's discount rate equals the market interest rate, i.e. when the consumer is neither patient nor impatient.
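The closed-form policy just derived is easy to simulate. The sketch below uses illustrative parameter values (an assumption, since none are given) and confirms that consumption is constant exactly when $\delta R = 1$ and otherwise grows or shrinks by the factor $R\delta$ each period, as stated in part (d).

```python
# Simulation of the Question 9(c)-(d) policy c_t = (1 - delta) * w_t
# with wealth transition w_{t+1} = R * (w_t - c_t).
import numpy as np

def simulate(delta, R, w0=100.0, T=6):
    w, cs = w0, []
    for _ in range(T):
        c = (1 - delta) * w
        cs.append(c)
        w = R * (w - c)
    return np.array(cs)

R = 1.04
print(simulate(delta=1 / R, R=R))   # delta * R = 1: constant consumption
print(simulate(delta=0.98, R=R))    # delta * R > 1: consumption grows by R*delta
```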

10. Reading for this question
Subject guide, Chapter 9.
Approaching the question
(a) $c(t)$ is the control variable and $w(t)$ is the state variable. The Hamiltonian is:
$$H = e^{-\rho t}u(c) + \lambda(t)\left[rw(t) + y - c(t)\right].$$
(b) Maximum principle: the necessary conditions for the optimum are:
$$\frac{\partial H}{\partial c(t)} = 0 \;\Leftrightarrow\; e^{-\rho t}u'(c^*(t)) = \lambda(t)$$
$$\dot\lambda(t) = -\frac{\partial H}{\partial w(t)} \;\Leftrightarrow\; \dot\lambda(t) = -r\lambda(t)$$
$$\dot w^*(t) = \frac{\partial H}{\partial\lambda(t)} \;\Leftrightarrow\; \dot w^*(t) = rw^*(t) + y - c(t).$$
The first answers (i). From the second, $\lambda(t) = \lambda(0)e^{-rt}$. Equating this with the first:
$$\lambda(0)e^{-rt} = e^{-\rho t}u'(c^*(t)) \;\Leftrightarrow\; u'(c^*(t)) = \lambda(0)e^{(\rho - r)t}$$
which answers (ii). Then, if $\rho = r$, $u'(c^*(t)) = \lambda(0)$, i.e. constant. Given the strict concavity of $u$, therefore $c^*(t)$ is a constant $\bar c$. This answers (iii).


(c) Returning to the dynamic constraint: $\dot w^*(t) - rw^*(t) = y - \bar c$. The solution to this first-order differential equation is:
$$w^*(t) = Ae^{rt} - \left(\frac{y - \bar c}{r}\right).$$
Given that $w(0) = w_0$:
$$A = w_0 + \left(\frac{y - \bar c}{r}\right),$$
so:
$$w^*(t) = \left[w_0 + \left(\frac{y - \bar c}{r}\right)\right]e^{rt} - \left(\frac{y - \bar c}{r}\right) = w_0 e^{rt} + \left(\frac{y - \bar c}{r}\right)(e^{rt} - 1).$$
Now if at $T$, $w(T) = 0$:
$$0 = w_0 e^{rT} + \left(\frac{y - \bar c}{r}\right)(e^{rT} - 1) \;\Leftrightarrow\; \bar c = y + \frac{r}{(e^{rT} - 1)}\,w_0 e^{rT} = y + \frac{r}{(1 - e^{-rT})}\,w_0.$$
$\bar c > y + rw_0$ as the consumer intends to leave no inheritance. If $w(T) = w_0$, then $\bar c = y + rw_0$.
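As a final cross-check (again, not required in the examination), the formula for $\bar c$ can be verified numerically: with the stated constant consumption the wealth path should hit exactly zero at $T$. The values of $r$, $y$, $w_0$ and $T$ below are illustrative assumptions.

```python
# Numerical check of the Question 10(c) answer.
import numpy as np

r, y, w0, T = 0.03, 1.0, 10.0, 40.0
c_bar = y + r * w0 / (1 - np.exp(-r * T))

def w(t):
    # wealth path w*(t) = w0*e^{rt} + ((y - c_bar)/r)(e^{rt} - 1)
    return w0 * np.exp(r * t) + (y - c_bar) / r * (np.exp(r * t) - 1)

print(c_bar, c_bar > y + r * w0)   # c_bar exceeds y + r*w0
print(w(T))                        # approximately 0: no inheritance is left
```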

