Bayesian Statistical Methods
Bayesian Statistical Methods
CHAPMAN & HALL/CRC Texts in Statistical Science Series

Series Editors
Joseph K. Blitzstein, Harvard University, USA
Julian J. Faraway, University of Bath, UK
Martin Tanner, Northwestern University, USA
Jim Zidek, University of British Columbia, Canada

Recently Published Titles

Theory of Stochastic Objects: Probability, Stochastic Processes and Inference
Athanasios Christou Micheas

Linear Models and the Relevant Distributions and Matrix Algebra
David A. Harville

An Introduction to Generalized Linear Models, Fourth Edition
Annette J. Dobson and Adrian G. Barnett

Graphics for Statistics and Data Analysis with R
Kevin J. Keen

Statistics in Engineering, Second Edition: With Examples in MATLAB and R
Andrew Metcalfe, David A. Green, Tony Greenfield, Mahayaudin Mansor, Andrew Smith, and Jonathan Tuke

A Computational Approach to Statistical Learning
Taylor Arnold, Michael Kane, and Bryan W. Lewis

Introduction to Probability, Second Edition
Joseph K. Blitzstein and Jessica Hwang

Theory of Spatial Statistics: A Concise Introduction
M.N.M. van Lieshout

Bayesian Statistical Methods
Brian J. Reich, Sujit K. Ghosh

Time Series: A Data Analysis Approach Using R
Robert H. Shumway, David S. Stoffer

The Analysis of Time Series: An Introduction, Seventh Edition
Chris Chatfield, Haipeng Xing

Probability and Statistics for Data Science: Math + R + Data
Norman Matloff

Sampling Design and Analysis, Second Edition
Sharon L. Lohr

Practical Multivariate Analysis, Sixth Edition
Abdelmonem Afifi, Susanne May, Robin A. Donatello, Virginia A. Clark

For more information about this series, please visit: https://www.crcpress.com/go/textsseries
Bayesian Statistical Methods
Brian J. Reich Sujit K. Ghosh
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2019 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20190313
International Standard Book Number-13: 978-0-8153-7864-8 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
To Michelle, Sophie, Swagata, and Sabita
Contents
Preface  xi

1 Basics of Bayesian inference  1
 1.1 Probability background  1
  1.1.1 Univariate distributions  2
   1.1.1.1 Discrete distributions  2
   1.1.1.2 Continuous distributions  6
  1.1.2 Multivariate distributions  9
  1.1.3 Marginal and conditional distributions  10
 1.2 Bayes' rule  14
  1.2.1 Discrete example of Bayes' rule  16
  1.2.2 Continuous example of Bayes' rule  18
 1.3 Introduction to Bayesian inference  21
 1.4 Summarizing the posterior  24
  1.4.1 Point estimation  25
  1.4.2 Univariate posteriors  25
  1.4.3 Multivariate posteriors  27
 1.5 The posterior predictive distribution  31
 1.6 Exercises  34

2 From prior information to posterior inference  41
 2.1 Conjugate priors  42
  2.1.1 Beta-binomial model for a proportion  42
  2.1.2 Poisson-gamma model for a rate  45
  2.1.3 Normal-normal model for a mean  47
  2.1.4 Normal-inverse gamma model for a variance  48
  2.1.5 Natural conjugate priors  50
  2.1.6 Normal-normal model for a mean vector  51
  2.1.7 Normal-inverse Wishart model for a covariance matrix  52
  2.1.8 Mixtures of conjugate priors  56
 2.2 Improper priors  58
 2.3 Objective priors  59
  2.3.1 Jeffreys' prior  59
  2.3.2 Reference priors  61
  2.3.3 Maximum entropy priors  62
  2.3.4 Empirical Bayes  62
  2.3.5 Penalized complexity priors  63
 2.4 Exercises  64

3 Computational approaches  69
 3.1 Deterministic methods  70
  3.1.1 Maximum a posteriori estimation  70
  3.1.2 Numerical integration  71
  3.1.3 Bayesian central limit theorem (CLT)  74
 3.2 Markov chain Monte Carlo (MCMC) methods  75
  3.2.1 Gibbs sampling  77
  3.2.2 Metropolis–Hastings (MH) sampling  89
 3.3 MCMC software options in R  97
 3.4 Diagnosing and improving convergence  100
  3.4.1 Selecting initial values  100
  3.4.2 Convergence diagnostics  103
  3.4.3 Improving convergence  108
  3.4.4 Dealing with large datasets  110
 3.5 Exercises  112

4 Linear models  119
 4.1 Analysis of normal means  120
  4.1.1 One-sample/paired analysis  120
  4.1.2 Comparison of two normal means  121
 4.2 Linear regression  124
  4.2.1 Jeffreys prior  125
  4.2.2 Gaussian prior  126
  4.2.3 Continuous shrinkage priors  128
  4.2.4 Predictions  129
  4.2.5 Example: Factors that affect a home's microbiome  130
 4.3 Generalized linear models  133
  4.3.1 Binary data  135
  4.3.2 Count data  137
  4.3.3 Example: Logistic regression for NBA clutch free throws  138
  4.3.4 Example: Beta regression for microbiome data  140
 4.4 Random effects  141
 4.5 Flexible linear models  149
  4.5.1 Nonparametric regression  149
  4.5.2 Heteroskedastic models  152
  4.5.3 Non-Gaussian error models  153
  4.5.4 Linear models with correlated data  153
 4.6 Exercises  158

5 Model selection and diagnostics  163
 5.1 Cross validation  164
 5.2 Hypothesis testing and Bayes factors  166
 5.3 Stochastic search variable selection  170
 5.4 Bayesian model averaging  175
 5.5 Model selection criteria  176
 5.6 Goodness-of-fit checks  186
 5.7 Exercises  192

6 Case studies using hierarchical modeling  195
 6.1 Overview of hierarchical modeling  195
 6.2 Case study 1: Species distribution mapping via data fusion  200
 6.3 Case study 2: Tyrannosaurid growth curves  203
 6.4 Case study 3: Marathon analysis with missing data  211
 6.5 Exercises  213

7 Statistical properties of Bayesian methods  217
 7.1 Decision theory  218
 7.2 Frequentist properties  220
  7.2.1 Bias-variance tradeoff  221
  7.2.2 Asymptotics  223
 7.3 Simulation studies  223
 7.4 Exercises  227

Appendices  231
 A.1 Probability distributions  231
 A.2 List of conjugate priors  239
 A.3 Derivations  241
 A.4 Computational algorithms  250
 A.5 Software comparison  255

Bibliography  265

Index  273
Preface
Bayesian methods are standard in various fields of science including biology, engineering, finance and genetics, and thus they are an essential addition to an analyst's toolkit. In this book, we cover the material we deem indispensable for a practicing Bayesian data analyst. The book covers the most common statistical methods including the t-test, multiple linear regression, mixed models and generalized linear models from a Bayesian perspective and includes many examples and code to implement the analyses. To illustrate the flexibility of the Bayesian approach, the later chapters explore advanced topics such as nonparametric regression, missing data and hierarchical models. In addition to these important practical matters, we provide sufficient depth so that the reader can defend his/her analysis and argue the relative merits of Bayesian and classical methods.

The book is intended to be used as a one-semester course for advanced undergraduate and graduate students. At North Carolina State University (NCSU) this book is used for a course comprised of undergraduate statistics majors, non-statistics graduate students from all over campus (e.g., engineering, ecology, psychology, etc.) and students from the Masters of Science in Statistics Program. Statistics PhD students take a separate course that covers much of the same material but at a more theoretical and technical level. We hope this book and associated computing examples also serve as a useful resource to practicing data analysts. Throughout the book we have included case studies from several fields to illustrate the flexibility and utility of Bayesian methods in practice.

It is assumed that the reader is familiar with undergraduate-level calculus including limits, integrals and partial derivatives and some basic linear algebra. Derivations of some key results are given in the main text when this helps to communicate ideas, but the vast majority of derivations are relegated to the Appendix for interested readers.
Knowledge of introductory statistical concepts through multiple regression would also be useful to contextualize the material, but this background is not assumed and thus not required. Fundamental concepts are covered in detail but with references held to a minimum in favor of clarity; advanced concepts are described concisely at a high level with references provided for further reading. The book begins with a review of probability in the first section of Chapter 1. A solid understanding of this material is essential to proceed through the book, but this section may be skipped for readers with the appropriate background. The remainder of Chapter 1 through Chapter 5 form the core
of the book. Chapter 1 introduces the central concepts of and motivation for Bayesian inference. Chapter 2 provides ways to select the prior distribution, which is the genesis of a Bayesian analysis. For all but the most basic methods, advanced computing tools are required, and Chapter 3 covers these methods with the most weight given to Markov chain Monte Carlo, which is used for the remainder of the book. Chapter 4 applies these tools to common statistical models including multiple linear regression, generalized linear models and mixed effects models (and more complex regression models in Section 4.5, which may be skipped if needed). After cataloging numerous statistical methods in Chapter 4, Chapter 5 treats the problem of selecting an appropriate model for a given dataset and verifying and validating that the model fits the data well. Chapter 6 introduces hierarchical modeling as a general framework to extend Bayesian methods to complex problems, and illustrates this approach using detailed case studies. The final chapter investigates the theoretical properties of Bayesian methods, which is important to justify their use but can be omitted if the course is intended for non-PhD students.

The choice of software is crucial for any modern textbook or statistics course. We elected to use R as the software platform due to its immense popularity in the statistics community and access to online tutorials and assistance. Fortunately, there are now many options within R to conduct a Bayesian analysis and we compare several including JAGS, BUGS, STAN and NIMBLE. We selected the package JAGS as the primary package for no particularly strong reason other than we have found it works well for the courses taught at our university.
In our assessment, JAGS provides a nice combination of ease of use, speed and flexibility for the size and complexity of problems we consider. A repository of code and datasets used in the book is available at https://bayessm.org/. Throughout the book we use R/JAGS, but favor conceptual discussions over computational details, and these concepts transcend software. The course webpage also includes latex/beamer slides.

We wish to thank our NCSU colleagues Kirkwood Cloud, Qian Guan, Margaret Johnson, Ryan Martin, Krishna Pacifici and Ana-Maria Staicu for providing valuable feedback. We also thank the students in Bayesian courses taught at NCSU for their probing questions that helped shape the material in the book. Finally, we thank our families and friends for their enduring support, as exemplified by the original watercolor painting by Swagata Sarkar that graces the front cover of the book.
1 Basics of Bayesian inference
CONTENTS

1.1 Probability background  1
 1.1.1 Univariate distributions  2
  1.1.1.1 Discrete distributions  2
  1.1.1.2 Continuous distributions  6
 1.1.2 Multivariate distributions  9
 1.1.3 Marginal and conditional distributions  10
1.2 Bayes' rule  14
 1.2.1 Discrete example of Bayes' rule  16
 1.2.2 Continuous example of Bayes' rule  18
1.3 Introduction to Bayesian inference  21
1.4 Summarizing the posterior  24
 1.4.1 Point estimation  25
 1.4.2 Univariate posteriors  25
 1.4.3 Multivariate posteriors  27
1.5 The posterior predictive distribution  31
1.6 Exercises  34

1.1 Probability background
Understanding basic probability theory is essential to any study of statistics. Generally speaking, the field of probability assumes a mathematical model for the process of interest and uses the model to compute the probability of events (e.g., what is the probability of flipping five straight heads using a fair coin?). In contrast, the field of statistics uses data to refine the probability model and test hypotheses related to the underlying process that generated the data (e.g., given we observe five straight heads, can we conclude the coin is biased?). Therefore, probability theory is a key ingredient to a statistical analysis, and in this section we review the most relevant concepts of probability for a Bayesian analysis.

Before developing probability mathematically, we briefly discuss probability from a conceptual perspective. The objective is to compute the probability of an event, A, denoted Prob(A). For example, we may be interested in the
probability that the random variable X (random variables are generally represented with capital letters) takes the specific value x (lowercase letter), denoted Prob(X = x), or the probability that X will fall in the interval [a, b], denoted Prob(X ∈ [a, b]). There are two leading interpretations of this statement: objective and subjective. An objective interpretation views Prob(A) as a purely mathematical statement. A frequentist interpretation is that if we repeated the experiment many times and recorded the sample proportion of the times A occurred, this proportion would eventually converge to the number Prob(A) ∈ [0, 1] as the number of samples increases. A subjective interpretation is that Prob(A) represents an individual's degree of belief, which is often quantified in terms of the amount the individual would be willing to wager that A will occur. As we will see, these two conceptual interpretations of probability parallel the two primary statistical frameworks: frequentist and Bayesian. However, a Bayesian analysis makes use of both of these concepts.
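The frequentist reading of Prob(A) as a long-run proportion is easy to check by simulation. The sketch below is purely illustrative and not from the book (whose own code examples use R); here the event A, "a fair coin lands heads," and the number of flips are arbitrary choices:

```python
import random

random.seed(1)

# Event A: a fair coin lands heads, so Prob(A) = 0.5.
n = 100_000
count_A = sum(random.random() < 0.5 for _ in range(n))

# The sample proportion of times A occurred settles near Prob(A).
proportion = count_A / n
print(abs(proportion - 0.5) < 0.01)  # True for this many flips
```

With more flips the sample proportion concentrates ever more tightly around Prob(A), which is exactly the convergence the frequentist interpretation describes.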
1.1.1 Univariate distributions

The random variable X's support is the smallest set S so that X ∈ S with probability one. For example, if X is the number of successes in n trials then S = {0, 1, ..., n}. Probability equations differ based on whether S is a countable set: X is a discrete random variable if S is countable, and X is continuous otherwise. Discrete random variables can have a finite (rainy days in the year) or an infinite (number of lightning strikes in a year) number of possible outcomes as long as the number is countable; e.g., a random count X ∈ {0, 1, 2, ...} has an infinite but countable number of possible outcomes and is thus discrete. An example of a continuous random variable is the amount of rain on a given day, which can be any real nonnegative number, and so S = [0, ∞).

1.1.1.1 Discrete distributions
For a discrete random variable, the probability mass function (PMF) f(x) assigns a probability to each element of X's support, that is,

Prob(X = x) = f(x).  (1.1)

A PMF is valid if all probabilities are nonnegative, f(x) ≥ 0, and sum to one, Σ_{x ∈ S} f(x) = 1. The PMF can also be used to compute probabilities of more complex events by summing over the PMF. For example, the probability that X is either x1 or x2, i.e., X ∈ {x1, x2}, is

Prob(X = x1 or X = x2) = f(x1) + f(x2).  (1.2)

Generally, the probability of the event that X falls in a set A ⊂ S is the sum over elements in A,

Prob(X ∈ A) = Σ_{x ∈ A} f(x).  (1.3)
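To make (1.1)-(1.3) concrete, here is a short sketch (Python, not from the book, which uses R; the PMF on S = {0, 1, 2, 3} is a made-up example):

```python
# A PMF stored as a dictionary over the support S = {0, 1, 2, 3}.
f = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}

# Validity: nonnegative probabilities that sum to one.
assert all(p >= 0 for p in f.values())
assert abs(sum(f.values()) - 1.0) < 1e-12

# (1.1): Prob(X = 1) = f(1)
print(f[1])  # 0.4

# (1.2): Prob(X = 1 or X = 2) = f(1) + f(2)
print(round(f[1] + f[2], 10))  # 0.7

# (1.3): Prob(X in A) for a subset A of S is the sum of f over A.
A = {0, 3}
print(round(sum(f[x] for x in A), 10))  # 0.3
```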
Using this fact defines the cumulative distribution function (CDF)

F(x) = Prob(X ≤ x) = Σ_{c ≤ x} f(c).  (1.4)

A PMF is a function from the support of X to the probability of events. It is often useful to summarize the function using a few interpretable quantities such as the mean and variance. The expected value or mean value is

E(X) = Σ_{x ∈ S} x f(x)  (1.5)

and measures the center of the distribution. The variance measures the spread around the mean via the expected squared deviation from the center of the distribution,

Var(X) = E{[X − E(X)]²} = Σ_{x ∈ S} [x − E(X)]² f(x).  (1.6)

The variance is often converted to the standard deviation SD(X) = √Var(X) to express the variability on the same scale as the random variable.

The central concept of statistics is that the PMF and its summaries (such as the mean) describe the population of interest, and a statistical analysis uses a sample from the population to estimate these functions of the population. For example, we might take a sample of size n from the population. Denote the ith sample value as X_i ∼ f ("∼" means "is distributed as"), and X_1, ..., X_n as the complete sample. We might then approximate the population mean E(X) with the sample mean X̄ = (1/n) Σ_{i=1}^n X_i, the probability of an outcome f(x) with the sample proportion of the n observations that equal x, and the entire PMF f(x) with a sample histogram. However, even for a large sample, X̄ will likely not equal E(X), and if we repeat the sampling procedure again we might get a different X̄ while E(X) does not change. The distribution of a statistic, i.e., a summary of the sample, such as X̄, across random samples from the population is called the statistic's sampling distribution.

A statistical analysis to infer about the population from a sample often proceeds under the assumption that the population belongs to a parametric family of distributions. This is called a parametric statistical analysis. In this type of analysis, the entire PMF is assumed to be known up to a few unknown parameters denoted θ = (θ_1, ..., θ_p) (or simply θ if there is only p = 1 parameter). We then denote the PMF as f(x | θ). The vertical bar "|" is read as "given," and so f(x | θ) gives the probability that X = x given the parameters θ. For example, a common parametric model for count data with S = {0, 1, 2, ...} is the Poisson family. The Poisson PMF with unknown parameter θ ≥ 0 is

Prob(X = x | θ) = f(x | θ) = exp(−θ) θ^x / x!.  (1.7)
FIGURE 1.1
Poisson probability mass function. Plot of the PMF f(x | θ) = exp(−θ) θ^x / x! for θ = 2 and for θ = 4. The PMF is connected by lines for visualization, but the probabilities are only defined for x ∈ {0, 1, 2, ...}.
Clearly f(x | θ) > 0 for all x and it can be shown that Σ_{x=0}^∞ f(x | θ) = 1, and so this is a valid PMF. As shown in Figure 1.1, changing the parameter θ changes the PMF, and so the Poisson is not a single distribution but rather a family of related distributions indexed by θ. A parametric assumption greatly simplifies the analysis because we only have to estimate a few parameters and we can compute the probability of any x in S. Of course, this assumption is only useful if the assumed distribution provides a reasonable fit to the observed data, and thus a statistician needs a large catalog of distributions to be able to find an appropriate family for a given analysis. Appendix A.1 provides a list of parametric distributions, and we discuss a few discrete distributions below.
Bernoulli: If X is binary, i.e., S = {0, 1}, then X follows a Bernoulli(θ) distribution. A binary random variable is often used to model the result of a trial where a success is recorded as a one and a failure as a zero. The parameter θ ∈ [0, 1] is the success probability Prob(X = 1 | θ) = θ, and to be a valid PMF we must have the failure probability Prob(X = 0 | θ) = 1 − θ. These two cases can be written concisely as

f(x | θ) = θ^x (1 − θ)^(1−x).  (1.8)

This gives mean

E(X | θ) = Σ_{x=0}^1 x f(x | θ) = f(1 | θ) = θ  (1.9)

and variance Var(X | θ) = θ(1 − θ).

Binomial: The binomial distribution is a generalization of the Bernoulli to
FIGURE 1.2
Binomial probability mass function. Plot of the PMF f(x | θ) = (n choose x) θ^x (1 − θ)^(n−x) for combinations of the number of trials n (n = 10 and n = 100) and the success probability θ (θ = 0.1, 0.5, 0.9). The PMF is connected by lines for visualization, but the probabilities are only defined for x ∈ {0, 1, ..., n}.
the case of n ≥ 1 independent trials. Specifically, if X_1, ..., X_n are the binary results of the n independent trials, each with success probability θ (so that X_i ∼ Bernoulli(θ) for all i = 1, ..., n), and X = Σ_{i=1}^n X_i is the total number of successes, then X's support is S = {0, 1, ..., n} and X ∼ Binomial(n, θ). The PMF is

f(x | θ) = (n choose x) θ^x (1 − θ)^(n−x),  (1.10)

where (n choose x) is the binomial coefficient. This gives mean and variance E(X | θ) = nθ and Var(X | θ) = nθ(1 − θ). It is certainly reasonable that the expected number of successes in n trials is n times the success probability for each trial, and Figure 1.2 shows that the variance is maximized with θ = 0.5, when the outcome of each trial is the least predictable. Appendix A.1 provides other distributions with support S = {0, 1, ..., n}, including the discrete uniform and beta-binomial distributions.

Poisson: When the support is the counting numbers S = {0, 1, 2, ...}, a common model is the Poisson PMF defined above (and plotted in Figure 1.1). The Poisson PMF can be motivated as the distribution of the number of events that occur in a time interval of length T if the events are independent and equally likely to occur at any time with rate θ/T events per unit of time. The mean and variance are E(X | θ) = Var(X | θ) = θ. Assuming that the mean and variance are equal is a strong assumption, and Appendix A.1 provides alternatives with support S = {0, 1, 2, ...}, including the geometric and negative binomial distributions.
1.1.1.2 Continuous distributions
The PMF does not apply for continuous distributions with support S that is a subinterval of the real line. To see this, assume that X is the daily rainfall (inches) and can thus be any nonnegative real number, S = [0, ∞). What is the probability that X is exactly π/2 inches? Well, within some small range around π/2, say T = (π/2 − ε, π/2 + ε) with ε = 0.001, it seems reasonable to assume that all values in T are equally likely, say Prob(X = x) = q for all x ∈ T. But since there are an uncountable number of values in T, when we sum the probability over the values in T we get infinity, and thus the probabilities are invalid unless q = 0. Therefore, for continuous random variables Prob(X = x) = 0 for all x, and we must use a more sophisticated method for assigning probabilities.

Instead of defining the probability of outcomes directly using a PMF, for continuous random variables we define probabilities indirectly through the cumulative distribution function (CDF)

F(x) = Prob(X ≤ x).  (1.11)

The CDF can be used to compute probabilities for any interval, e.g., in the rain example Prob(X ∈ T) = Prob(X < π/2 + ε) − Prob(X < π/2 − ε) = F(π/2 + ε) − F(π/2 − ε), which converges to zero as ε shrinks if F is a continuous function. Defining the probability of X falling in an interval resolves the conceptual problems discussed above, because it is easy to imagine the proportion of days with rainfall in an interval converging to a nonzero value as the sample size increases. The probability on a small interval is

Prob(x − ε < X < x + ε) = F(x + ε) − F(x − ε) ≈ 2ε f(x)  (1.12)

where f(x) is the derivative of F(x) and is called the probability density function (PDF). If f(x) is the PDF of X, then the probability of X ∈ [a, b] is the area under the density curve between a and b (Figure 1.3),

Prob(a ≤ X ≤ b) = F(b) − F(a) = ∫_a^b f(x) dx.  (1.13)

The distributions of random variables are usually defined via the PDF. To ensure that the PDF produces valid probability statements we must have f(x) ≥ 0 for all x and ∫_{−∞}^{∞} f(x) dx = 1. Because f(x) is not a probability, but rather a function used to compute probabilities via integration, the PDF can be greater than one for some x so long as it integrates to one.

The formulas for the mean and variance of a continuous random variable resemble those for a discrete random variable except that the sum over the
FIGURE 1.3: Computing probabilities using a PDF. The curve is a PDF f(x) and the shaded area is Prob(1 < X < 5) = ∫_1^5 f(x) dx.
PMF is replaced with an integral over the PDF,

E(X) = ∫_{−∞}^{∞} x f(x) dx

Var(X) = E{[X − E(X)]²} = ∫_{−∞}^{∞} [x − E(X)]² f(x) dx.
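These integral formulas are easy to verify numerically. The sketch below is an illustration, not from the text: it assumes a standard normal X and uses Python with SciPy to check Prob(a ≤ X ≤ b) = F(b) − F(a) = ∫_a^b f(x) dx and E(X) = ∫ x f(x) dx.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Prob(a <= X <= b) as F(b) - F(a) and as the area under the PDF f.
a, b = 1.0, 5.0
area, _ = quad(norm.pdf, a, b)            # integral of f(x) over [a, b]
print(area, norm.cdf(b) - norm.cdf(a))    # the two values agree

# E(X) as the integral of x*f(x) over the real line (0 for a standard normal).
mean, _ = quad(lambda x: x * norm.pdf(x), -np.inf, np.inf)
print(mean)
```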
Another summary that is defined for both discrete and continuous random variables (but is much easier to define in the continuous case) is the quantile function Q(τ). For τ ∈ [0, 1], Q(τ) is the solution to

Prob[X ≤ Q(τ)] = F[Q(τ)] = τ.    (1.14)
τ ) is the value so that the probability of X X being That is, is, Q(τ ) being no larger than Q(τ ) τ ) is τ . τ . The quantile function is the inverse of the distribution function, τ ), and gives the median Q(0. Q(τ ) τ ) = F −1 (τ ), (0.5) and a (1 α)% equaltailed interval [Q [Q(α/ α/2) 2),, Q(1 α/ α/2)] 2)] so that Prob[Q Prob[Q(α/ α/2) 2) X Q(1 α/ α/2)] 2)] = 1 α. Gaussian: As with discrete data, parametric models are typically assumed for continuous data and practitioners must be familiar with several parametric , ) is families. The most common parametric family with support = ( the normal (Gaussian) family. The normal distribution has two parameters, the mean E(X E(X θ) = µ and variance Var(X Var(X θ ) = σ 2 , and the familiar bellshaped PDF 1 1 f f ((x θ ) = exp (x µ)2 , (1.15) 2 2σ 2πσ
−
≤ ≤ ≤
−
−
−
S −∞ ∞



2
√
−
−
where θ = (µ, σ ). The Gaussian distribution is famous because of the central
FIGURE 1.4: Plots of the gamma PDF. Plots of the gamma density function f(x | θ) = [b^a/Γ(a)] x^{a−1} exp(−bx) for several combinations of a and b.
limit theorem (CLT). The CLT applies to the distribution of the sample mean X̄_n = (Σ_{i=1}^n X_i)/n, where X_1, ..., X_n are independent samples from some distribution f(x). The CLT says that under fairly general conditions, for large n the distribution of X̄_n is approximately normal even if f(x) is not. Therefore, the Gaussian distribution is a natural model for data that are defined as averages, but can be used for other data as well. Appendix A.1 gives other continuous distributions with S = (−∞, ∞) including the double exponential and Student-t distributions.

Gamma: The gamma distribution has S = [0, ∞). The PDF is

f(x | θ) = [b^a/Γ(a)] x^{a−1} exp(−bx) for x ≥ 0, and f(x | θ) = 0 for x < 0,    (1.16)

where Γ is the gamma function and a > 0 and b > 0 are the two parameters in θ = (a, b). Beware that the gamma PDF is also written with b in the denominator of the exponential function, but we use the parameterization above. Under the parameterization in (1.16) the mean is a/b and the variance is a/b². As shown in Figure 1.4, a is the shape parameter and b is the rate (inverse scale) parameter. Setting a = 1 gives the exponential distribution with PDF f(x | θ) = b exp(−bx), which decays exponentially from the origin, and large a gives approximately a normal distribution. Varying b does not change the shape of the PDF but only affects its spread. Appendix A.1 gives other continuous distributions with S = [0, ∞) including the inverse-gamma distribution.
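As a quick numerical check of this parameterization (an illustration, not from the text): SciPy's gamma distribution takes a shape and a scale, so scale = 1/b maps onto the rate parameterization in (1.16).

```python
import math
from scipy.stats import gamma

a, b = 5.0, 2.0
dist = gamma(a, scale=1/b)        # shape a, rate b  =>  SciPy scale = 1/b
print(dist.mean(), dist.var())    # a/b = 2.5 and a/b^2 = 1.25

# a = 1 reduces to the exponential distribution with PDF b*exp(-b*x).
x = 0.3
print(gamma(1, scale=1/b).pdf(x), b * math.exp(-b * x))
```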
FIGURE 1.5: Plots of the beta PDF. Plots of the beta density function f(x | θ) = [Γ(a + b)/(Γ(a)Γ(b))] x^{a−1}(1 − x)^{b−1} for several a and b.
Beta: The beta distribution has S = [0, 1] and PDF

f(x | θ) = [Γ(a + b)/(Γ(a)Γ(b))] x^{a−1}(1 − x)^{b−1} for x ∈ [0, 1], and f(x | θ) = 0 for x < 0 or x > 1,    (1.17)

where a > 0 and b > 0 are the two parameters in θ = (a, b). As shown in Figure 1.5, the beta distribution is quite flexible and can be left-skewed, right-skewed, or symmetric. The beta distribution also includes the uniform distribution f(x) = 1 for x ∈ [0, 1] by setting a = b = 1.
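A short numerical illustration of this flexibility (not from the text; Python with SciPy):

```python
from scipy.stats import beta

# a = b = 1 recovers the uniform distribution: f(x) = 1 on [0, 1].
print(beta(1, 1).pdf(0.3))                    # 1.0
# The mean is a/(a + b); a < b pushes mass toward 0, a > b toward 1.
print(beta(2, 5).mean(), beta(5, 2).mean())   # 2/7 and 5/7
# The density is zero outside the support [0, 1].
print(beta(2, 5).pdf(1.5))                    # 0.0
```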
1.1.2 Multivariate distributions
Most statistical analyses involve multiple variables with the objective of studying relationships between variables. To model relationships between variables we need multivariate extensions of mass and density functions. Let X₁, ..., X_p be p random variables, S_j be the support of X_j so that X_j ∈ S_j, and X = (X₁, ..., X_p) be the random vector. Table 1.1 describes the joint distribution between p = 2 variables: X₁ indicates that the patient has a positive primary health outcome and X₂ indicates that the patient has a side effect. If all variables are discrete, then the joint PMF is

Prob(X₁ = x₁, ..., X_p = x_p) = f(x₁, ..., x_p).    (1.18)

To be a valid PMF, f must be nonnegative, f(x₁, ..., x_p) ≥ 0, and sum to one, Σ_{x₁∈S₁} ··· Σ_{x_p∈S_p} f(x₁, ..., x_p) = 1.

As in the univariate case, probabilities for continuous random variables are
TABLE 1.1: Hypothetical joint PMF. This PMF f(x₁, x₂) gives the probabilities that a patient has a positive (X₁ = 1) or negative (X₁ = 0) primary health outcome and the patient having (X₂ = 1) or not having (X₂ = 0) a side effect.

            X₂ = 0   X₂ = 1   f₁(x₁)
  X₁ = 0     0.06     0.14     0.20
  X₁ = 1     0.24     0.56     0.80
  f₂(x₂)     0.30     0.70
computed indirectly via the PDF f(x₁, ..., x_p). In the univariate case, probabilities are computed as the area under the density curve. For p > 1, probabilities are computed as the volume under the density surface. For example, for p = 2 random variables, the probability of a₁ < X₁ < b₁ and a₂ < X₂ < b₂ is

∫_{a₁}^{b₁} ∫_{a₂}^{b₂} f(x₁, x₂) dx₂ dx₁.    (1.19)

This gives the probability of the random vector X = (X₁, X₂) lying in the rectangle defined by the endpoints a₁, b₁, a₂, and b₂. In general, the probability of the random vector X falling in region A is the p-dimensional integral ∫_A f(x₁, ..., x_p) dx₁ ··· dx_p. As an example, consider the bivariate PDF on the unit square with f(x₁, x₂) = 1 for x₁ ∈ [0, 1] and x₂ ∈ [0, 1], and f(x₁, x₂) = 0 otherwise. Then Prob(X₁ < 0.5, X₂ < 0.1) = ∫₀^{0.5} ∫₀^{0.1} f(x₁, x₂) dx₂ dx₁ = 0.05.
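This volume can be checked by numerical integration (an illustration, not from the text; `scipy.integrate.dblquad` integrates the inner variable first):

```python
from scipy.integrate import dblquad

# f(x1, x2) = 1 on the unit square; integrate over [0, 0.5] x [0, 0.1].
# dblquad(func, a, b, gfun, hfun) integrates func(x2, x1) with x1 from
# a to b (outer) and x2 from gfun to hfun (inner).
vol, err = dblquad(lambda x2, x1: 1.0, 0, 0.5, 0, 0.1)
print(vol)  # 0.05
```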
1.1.3 Marginal and conditional distributions
The marginal and conditional distributions that follow from a multivariate distribution are key to a Bayesian analysis. To introduce these concepts we assume discrete random variables, but extensions to the continuous case are straightforward by replacing sums with integrals. Further, we assume a bivariate PMF with p = 2. Again, extensions to higher dimensions are conceptually straightforward by replacing sums over one or two dimensions with sums over p − 1 or p dimensions.

The marginal distribution of X_j is simply the distribution of X_j if we consider only a univariate analysis of X_j and disregard the other variable. Denote f_j(x_j) = Prob(X_j = x_j) as the marginal PMF of X_j. The marginal distribution is computed by summing over the other variable in the joint PMF,

f₁(x₁) = Σ_{x₂} f(x₁, x₂) and f₂(x₂) = Σ_{x₁} f(x₁, x₂).    (1.20)
These are referred to as the marginal distributions because in a two-way table
such as Table 1.1, the marginal distributions are the row and column totals of the joint PMF that appear along the table's margins. As with any univariate distribution, the marginal distribution can be summarized with its mean and variance,

µ_j = E(X_j) = Σ_{x_j∈S_j} x_j f_j(x_j) = Σ_{x₁} Σ_{x₂} x_j f(x₁, x₂)

σ_j² = Var(X_j) = Σ_{x_j∈S_j} (x_j − µ_j)² f_j(x_j) = Σ_{x₁} Σ_{x₂} (x_j − µ_j)² f(x₁, x₂).
The marginal mean and variance measure the center and spread of the marginal distributions, respectively, but do not capture the relationship between the two variables. The most common one-number summary of the joint relationship is the covariance, defined as

σ₁₂ = Cov(X₁, X₂) = E[(X₁ − µ₁)(X₂ − µ₂)]    (1.21)
    = Σ_{x₁} Σ_{x₂} (x₁ − µ₁)(x₂ − µ₂) f(x₁, x₂).
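The double sums in these definitions are easy to evaluate for a small table. The following sketch (an illustration, not from the text) computes the marginal means and the covariance (1.21) from the Table 1.1 joint PMF:

```python
import numpy as np

# Joint PMF from Table 1.1 (rows: x1 = 0, 1; columns: x2 = 0, 1).
f = np.array([[0.06, 0.14],
              [0.24, 0.56]])
x = np.array([0.0, 1.0])
f1, f2 = f.sum(axis=1), f.sum(axis=0)   # marginal PMFs of X1 and X2
mu1, mu2 = x @ f1, x @ f2               # marginal means: 0.80 and 0.70
cov = sum((x[i] - mu1) * (x[j] - mu2) * f[i, j]
          for i in range(2) for j in range(2))
print(mu1, mu2, cov)                    # the covariance is 0 for this table
```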
The covariance is often hard to interpret because it depends on the scales of X₁ and X₂, i.e., if we double X₁ we also double the covariance. Correlation is a scale-free summary of the joint relationship, ρ₁₂ = σ₁₂/(σ₁σ₂). In vector notation, the mean of the random vector X is E(X) = (µ₁, µ₂)ᵀ, the covariance matrix is Cov(X) = [σ₁², σ₁₂; σ₁₂, σ₂²], and the correlation matrix is Cor(X) = [1, ρ₁₂; ρ₁₂, 1]. Generalizing for p > 2, the mean vector becomes E(X) = (µ₁, ..., µ_p)ᵀ and the covariance matrix becomes the symmetric p × p matrix with diagonal elements σ₁², ..., σ_p² and (i, j) off-diagonal element σ_ij.

While the marginal distributions sum over columns or rows of a two-way table, the conditional distributions focus only on a single column or row. The conditional distribution of X₁ given that the random variable X₂ is fixed at x₂ is denoted f_{1|2}(x₁ | X₂ = x₂), or simply f(x₁ | x₂). Referring to Table 1.1, the knowledge that X₂ = x₂ restricts the domain to a single column of the two-way table. However, the probabilities in a single column do not define a valid PMF because their sum is less than one. We must rescale these probabilities to sum to one by dividing by the column total, which we have previously defined as f₂(x₂). Therefore, the general expression for the conditional distribution of X₁ | X₂ = x₂, and similarly X₂ | X₁ = x₁, is

f_{1|2}(x₁ | X₂ = x₂) = f(x₁, x₂)/f₂(x₂) and f_{2|1}(x₂ | X₁ = x₁) = f(x₁, x₂)/f₁(x₁).    (1.22)

Atlantic hurricanes example: Table 1.2 provides the counts of Atlantic hurricanes that made landfall between 1990 and 2016, tabulated by their intensity category (1–5) and whether they hit the US or elsewhere. Of course, these are only sampled proportions and not true probabilities,
TABLE 1.2: Table of the Atlantic hurricanes that made landfall between 1990 and 2016. The counts are tabulated by their maximum Saffir–Simpson intensity category and whether they made landfall in the US or elsewhere. The counts are downloaded from http://www.aoml.noaa.gov/hrd/hurdat/.

(a) Counts

               Category
             1     2     3     4     5   Total
  US        14    13    10     1     1      39
  Not US    46    19    20    17     3     105
  Total     60    32    30    18     4     144

(b) Sample proportions

               Category
               1       2       3       4       5    Total
  US        0.0972  0.0903  0.0694  0.0069  0.0069  0.2708
  Not US    0.3194  0.1319  0.1389  0.1181  0.0208  0.7292
  Total     0.4167  0.2222  0.2083  0.1250  0.0278  1.0000
but for this example we treat Table 1.2b as the joint PMF of location, X₁ ∈ {US, Not US}, and intensity category, X₂ ∈ {1, 2, 3, 4, 5}. The marginal distribution of X₁ is given in the final column and is simply the row sums of the joint PMF. The marginal probability of a hurricane making landfall in the US is Prob(X₁ = US) = 0.2708, which is the proportion we would calculate if we had never considered the storms' category. Similarly, the column sums are the marginal probabilities of intensity averaging over location, e.g., Prob(X₂ = 5) = 0.0278.
The conditional distributions tell us about the relationship between the two variables. For example, the marginal probability of a hurricane reaching category 5 is 0.0278, but given that the storm hits the US, the conditional probability is slightly lower, f_{2|1}(5 | US) = Prob(X₁ = US, X₂ = 5)/Prob(X₁ = US) = 0.0069/0.2708 = 0.0255. By definition, the conditional probabilities sum to one,

f_{2|1}(1 | US) = 0.0972/0.2708 = 0.3589,  f_{2|1}(2 | US) = 0.0903/0.2708 = 0.3334,
f_{2|1}(3 | US) = 0.0694/0.2708 = 0.2562,  f_{2|1}(4 | US) = 0.0069/0.2708 = 0.0255,
f_{2|1}(5 | US) = 0.0069/0.2708 = 0.0255.
Given that a storm hits the US, the probability of a category 2 or 3 storm
increases, while the probability of a category 4 or 5 storm decreases, and so there is a relationship between landfall location and intensity.
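The same conditional probabilities can be computed directly from the counts in Table 1.2a (a sketch, not from the text; the small differences from the numbers above come from rounding the proportions to four decimal places):

```python
import numpy as np

# Counts from Table 1.2a: rows US / Not US, columns category 1-5.
counts = np.array([[14, 13, 10, 1, 1],
                   [46, 19, 20, 17, 3]])
joint = counts / counts.sum()      # empirical joint PMF (Table 1.2b)
f_us = joint[0].sum()              # marginal Prob(X1 = US) = 39/144
cond = joint[0] / f_us             # f_{2|1}(x2 | US), i.e., counts[0]/39
print(np.round(cond, 4))           # approx [0.359, 0.3333, 0.2564, 0.0256, 0.0256]
print(cond.sum())                  # conditional probabilities sum to one
```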
Independence example: Consider the joint PMF of the primary health outcome (X₁) and side effect (X₂) in Table 1.1. In this example, the marginal probability of a positive primary health outcome is Prob(X₁ = 1) = 0.80, as is the conditional probability f_{1|2}(1 | X₂ = 1) = 0.56/0.70 = 0.80 given the patient has the side effect and the conditional probability f_{1|2}(1 | X₂ = 0) = 0.24/0.30 = 0.80 given the patient does not have the side effect. In other words, both with and without knowledge of side effect status, the probability of a positive health outcome is 0.80, and thus the side effect is not informative about the primary health outcome. This is an example of two random variables that are independent.

Generally, X₁ and X₂ are independent if and only if the joint PMF (or PDF for continuous random variables) factors into the product of the marginal distributions,

f(x₁, x₂) = f₁(x₁) f₂(x₂).    (1.23)

From this expression it is clear that if X₁ and X₂ are independent then f_{1|2}(x₁ | X₂ = x₂) = f(x₁, x₂)/f₂(x₂) = f₁(x₁), and thus X₂ is not informative about X₁. A special case of joint independence is if all marginal distributions are the same, f_j(x) = f(x) for all j and x. In this case, we say that X₁ and X₂
are independent and identically distributed ("iid"), which is denoted X_j ∼ f, iid. If variables are not independent then they are said to be dependent.

Multinomial: Parametric families are also useful for multivariate distributions. A common parametric family for discrete data is the multinomial, which, as the name implies, is a generalization of the binomial. Consider the case of n independent trials where each trial results in one of p possible outcomes (e.g., p = 3 and each result is either win, lose or draw). Let X_j ∈ {0, 1, ..., n} be the number of trials that resulted in outcome j, and X = (X₁, ..., X_p) be the vector of counts. If we assume that θ_j is the probability of outcome j for each trial, with θ = (θ₁, ..., θ_p) and Σ_{j=1}^p θ_j = 1, then X | θ ∼ Multinomial(n, θ) with

f(x₁, ..., x_p) = [n!/(x₁! ··· x_p!)] θ₁^{x₁} ··· θ_p^{x_p},    (1.24)
where n = Σ_{j=1}^p x_j. If there are only p = 2 categories then this would be a binomial experiment with X₂ ∼ Binomial(n, θ₂).

Multivariate normal: The multivariate normal distribution is a generalization of the normal distribution to p > 1 random variables. For p = 2, the bivariate normal has five parameters, θ = (µ₁, µ₂, σ₁², σ₂², ρ): a mean parameter for each variable, E(X_j) = µ_j, a variance for each variable, Var(X_j) = σ_j² > 0, and the correlation Cor(X₁, X₂) = ρ ∈ [−1, 1]. The density function is
f(x₁, x₂) = [1/(2πσ₁σ₂√(1 − ρ²))] exp[−(z₁² − 2ρz₁z₂ + z₂²)/(2(1 − ρ²))],    (1.25)
where z_j = (x_j − µ_j)/σ_j. As shown in Figure 1.6, the density surface is elliptical with center determined by µ = (µ₁, µ₂) and shape determined by the covariance matrix Σ = [σ₁², σ₁σ₂ρ; σ₁σ₂ρ, σ₂²].

A convenient feature of the bivariate normal distribution is that the marginal and conditional distributions are also normal. The marginal distribution of X_j is Gaussian with mean µ_j and variance σ_j². The conditional distribution, shown in Figure 1.7, is

X₂ | X₁ = x₁ ∼ Normal(µ₂ + ρ(σ₂/σ₁)(x₁ − µ₁), (1 − ρ²)σ₂²).    (1.26)

If ρ = 0 then the conditional distribution is the marginal distribution, as expected. If ρ > 0 (ρ < 0) then the conditional mean increases (decreases) with x₁. Also, the conditional variance (1 − ρ²)σ₂² is less than the marginal variance σ₂², especially for ρ near −1 or 1, and so conditioning on X₁ reduces uncertainty in X₂ when there is strong correlation.

The multivariate normal PDF for p > 2 is most concisely written using matrix notation. The multivariate normal PDF for the random vector X with mean vector µ and covariance matrix Σ is

f(X) = (2π)^{−p/2} |Σ|^{−1/2} exp[−(1/2)(X − µ)ᵀ Σ⁻¹ (X − µ)],    (1.27)

where |A|, Aᵀ and A⁻¹ are the determinant, transpose and inverse, respectively, of the matrix A. From this expression it is clear that the contours of the log PDF are elliptical. All conditional and marginal distributions are normal, as are all linear combinations Σ_{j=1}^p w_j X_j for any w₁, ..., w_p.
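The bivariate formula (1.25) can be checked against the matrix form (1.27) numerically (an illustration, not from the text; the parameter and evaluation values are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu1, mu2, s1, s2, rho = 1.0, -1.0, 2.0, 1.0, 0.8
x1, x2 = 0.5, 0.0

# Direct evaluation of (1.25) with z_j = (x_j - mu_j)/sigma_j.
z1, z2 = (x1 - mu1) / s1, (x2 - mu2) / s2
pdf_direct = np.exp(-(z1**2 - 2*rho*z1*z2 + z2**2) / (2*(1 - rho**2))) \
             / (2*np.pi*s1*s2*np.sqrt(1 - rho**2))

# Matrix form (1.27) via SciPy, using the covariance matrix Sigma.
Sigma = [[s1**2, s1*s2*rho], [s1*s2*rho, s2**2]]
pdf_matrix = multivariate_normal([mu1, mu2], Sigma).pdf([x1, x2])
print(pdf_direct, pdf_matrix)   # the two evaluations agree
```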
1.2 Bayes' rule
As the name implies, Bayes' rule (or Bayes' theorem) is fundamental to Bayesian statistics. However, this rule is a general result from probability and follows naturally from the definition of a conditional distribution. Consider two random variables X₁ and X₂ with joint density function f(x₁, x₂) (a PMF or PDF, as the result holds for both discrete and continuous data). The definition of a conditional distribution gives f(x₁ | x₂) = f(x₁, x₂)/f(x₂) and f(x₂ | x₁) = f(x₁, x₂)/f(x₁). Combining these two expressions gives Bayes' rule

f(x₂ | x₁) = f(x₁ | x₂) f(x₂) / f(x₁).    (1.28)

This result is useful as a means to reverse conditioning from X₁ | X₂ to X₂ | X₁, and also indicates the need to define a joint distribution for this inversion to be valid.
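As a small numerical sketch of (1.28) (an illustration using the Table 1.1 joint PMF, not an example from the text), we can recover f(x₂ | x₁) from f(x₁ | x₂), f(x₂) and f(x₁):

```python
import numpy as np

# Joint PMF from Table 1.1 (rows x1 = 0, 1; columns x2 = 0, 1).
f = np.array([[0.06, 0.14],
              [0.24, 0.56]])
f1, f2 = f.sum(axis=1), f.sum(axis=0)     # marginals of X1 and X2
x1, x2 = 1, 0
f_x1_given_x2 = f[x1, x2] / f2[x2]        # f(x1 | x2) from its definition
bayes = f_x1_given_x2 * f2[x2] / f1[x1]   # Bayes' rule (1.28) for f(x2 | x1)
direct = f[x1, x2] / f1[x1]               # f(x2 | x1) computed directly
print(bayes, direct)                      # both equal 0.24/0.80 = 0.30
```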
FIGURE 1.6: Plots of the bivariate normal PDF. Panel (a) plots the bivariate normal PDF for µ = (0, 0), σ₁ = σ₂ = 1 and ρ = 0. The other panels modify Panel (a) as follows: (b) has µ = (1, 1) and σ₁ = 2, (c) has µ = (1, 1) and ρ = 0.8, and (d) has µ = (1, 1) and ρ = −0.8. The plots are shaded according to the PDF with white indicating the PDF near zero and black indicating the areas with highest PDF; the white dot is the mean vector µ.
FIGURE 1.7: Plots of the joint and conditional bivariate normal PDF. Panel (a) plots the bivariate normal PDF for µ = (1, 1), σ₁ = σ₂ = 1 and ρ = 0.8. The plot is shaded according to the PDF with white indicating the PDF near zero and black indicating the areas with highest PDF; the vertical lines represent x₁ = −1, x₁ = 1 and x₁ = 3. The conditional distributions of X₂ | X₁ = x₁ for these three values of x₁ are plotted in Panel (b).
1.2.1 Discrete example of Bayes' rule
You have a scratchy throat and so you go to the doctor, who administers a rapid strep throat test. Let Y ∈ {0, 1} be the binary indicator of a positive test, i.e., Y = 1 if the test is positive for strep and Y = 0 if the test is negative. The test is not perfect. The false positive rate p ∈ [0, 1] is the probability of testing positive if you do not have strep, and the false negative rate q ∈ [0, 1] is the probability of testing negative given that you actually have strep. To express these probabilities mathematically we must define the true disease status θ ∈ {0, 1}, where θ = 1 if you are truly infected and θ = 0 otherwise. This unknown variable we hope to estimate is called a parameter. Given these error rates and the definition of the model parameter, the data distribution can be written

Prob(Y = 1 | θ = 0) = p and Prob(Y = 0 | θ = 1) = q.    (1.29)
Generally, the PMF (or PDF) of the observed data given the model parameters is called the likelihood function. To formally analyze this problem we must determine which components should be treated as random variables. Is the test result Y a random variable? Before the exam, Y is clearly random and (1.29) defines its distribution. This is aleatoric uncertainty because the results may differ if we repeat the test. However, after learning the test result, Y is determined and you must
decide how to proceed given the value of Y at hand. In this sense, Y is known and no longer random at the analysis stage.

Is the true disease status θ a random variable? Certainly θ is not a random variable in the sense that it changes from second to second or minute to minute, and so it is reasonable to assume that the true disease status is a fixed quantity for the purpose of this analysis. However, because our test is imperfect we do not know θ. This is epistemic uncertainty because θ is a quantity that we could theoretically know, but at the analysis stage we do not and cannot know θ using only noisy data. Despite our uncertainty about θ, we have to decide what to do next, and so it is useful to quantify our uncertainty using the language of probability. If the test is reliable and p and q are both small, then in light of a positive test we might conclude that θ is more likely to be one than zero. But how much more likely? Twice as likely? Three times? In Bayesian statistics we quantify uncertainty about fixed but unknown parameters using probability theory by treating them as random variables. As (1.28) suggests, for formal inversion of conditional probabilities we would need to treat both variables as random.

The probabilities in (1.29) supply the distribution of the test result given disease status, Y | θ. However, we would like to quantify uncertainty in the disease status given the test results; that is, we require the distribution of θ | Y. Since this is the uncertainty distribution after collecting the data, this is referred to as the posterior distribution. As discussed above, Bayes' rule can be applied to reverse the order of conditioning,

Prob(θ = 1 | Y = 1) = Prob(Y = 1 | θ = 1) Prob(θ = 1) / Prob(Y = 1),    (1.30)
where the marginal probability Prob(Y = 1) is

Prob(Y = 1) = Σ_{θ=0}^{1} f(1, θ) = Prob(Y = 1 | θ = 1) Prob(θ = 1) + Prob(Y = 1 | θ = 0) Prob(θ = 0).    (1.31)

To apply Bayes' rule requires specifying the unconditional probability of having strep throat, Prob(θ = 1) = π ∈ [0, 1]. Since this is the probability of infection before we conduct the test, we refer to this as the prior probability. We can then compute the posterior using Bayes' rule,

Prob(θ = 1 | Y = 1) = (1 − q)π / [(1 − q)π + p(1 − π)].    (1.32)
To understand this equation consider a few extreme scenarios. Assuming the error rates p and q are not zero or one, if π = 1 (π = 0) then the posterior probability of θ = 1 (θ = 0) is one for any value of Y. That is, if we have no prior uncertainty then the imperfect data does not update the prior. Conversely, if the test is perfect and q = p = 0, then for any prior π ∈ (0, 1) the posterior probability that θ = Y is one. That is, with perfect data the prior
TABLE 1.3: Strep throat data. Number of patients that are truly positive and tested positive in the rapid strep throat test data taken from Table 1 of [26].

              Truly positive,   Truly positive,   Truly negative,   Truly negative,
              test positive     test negative     test positive     test negative
  Children          80                38                23               349
  Adults            43                10                14               261
  Total            123                48                37               610
is irrelevant. Finally, if p = q = 1/2, then the test is a random coin flip and the posterior is the prior, Prob(θ = 1 | Y) = π.

For a more realistic scenario we use the data in Table 1.3 taken from [26]. We plug in the sample error rates from these data, p = 37/(37 + 610) = 0.057 and q = 48/(48 + 123) = 0.281. Of course these data represent only a sample and the sample proportions are not exactly the true error rates, but for illustration we assume these error rates are correct. Then if we assume the prior probability of disease is π = 0.5, the posterior probabilities are Prob(θ = 1 | Y = 0) = 0.230 and Prob(θ = 1 | Y = 1) = 0.927. Therefore, beginning with a prior probability of 0.5, a negative test moves the probability down to 0.230 and a positive test increases the probability to 0.927.

Of course, in reality the way individuals process test results is complicated and subjective. If you have had strep many times before and you went to the doctor because your current symptoms resemble previous bouts with the disease, then perhaps your prior is π = 0.8 and the posterior is Prob(θ = 1 | Y = 1) = 0.981. On the other hand, if you went to the doctor only at the urging of your friend and your prior probability is π = 0.2, then Prob(θ = 1 | Y = 1) = 0.759.

This simple example illustrates a basic Bayesian analysis. The objective is to compute the posterior distribution of the unknown parameters θ. The
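The strep-test posterior (1.32) and its Y = 0 analogue are one-liners in code. The sketch below (Python; it reproduces the numbers quoted above using the plug-in rates p = 0.057 and q = 0.281) treats the posterior as a function of the prior π:

```python
def posterior_prob(prior, p=0.057, q=0.281, y=1):
    """Posterior Prob(theta = 1 | Y = y) for the strep test via (1.30)-(1.32).

    p is the false positive rate and q the false negative rate."""
    like1 = 1 - q if y == 1 else q    # Prob(Y = y | theta = 1)
    like0 = p if y == 1 else 1 - p    # Prob(Y = y | theta = 0)
    return like1 * prior / (like1 * prior + like0 * (1 - prior))

print(f"{posterior_prob(0.5, y=1):.3f}")   # 0.927
print(f"{posterior_prob(0.5, y=0):.3f}")   # 0.230
print(f"{posterior_prob(0.8, y=1):.3f}")   # 0.981
print(f"{posterior_prob(0.2, y=1):.3f}")   # 0.759
```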
1.2.2 Continuous example of Bayes' rule
Let θ ∈ [0, 1] be the proportion of the population in a county that has health insurance. It is known that the proportion varies across counties following a Beta(a, b) distribution, and so the prior is θ ∼ Beta(a, b). We take a sample of size n = 20 from your county and assume that the number of respondents with insurance, Y ∈ {0, 1, ..., n}, is distributed as Y | θ ∼ Binomial(n, θ). Joint
FIGURE 1.8: Joint distribution for the beta-binomial example. Plot of f(θ, y) for the example with θ ∼ Beta(8, 2) and Y | θ ∼ Binomial(20, θ). The marginal distributions f(θ) (top) and f(y) (right) are plotted in the margins. The horizontal line is the Y = 12 line.
probabilities for θ and Y can be computed from

f(θ, y) = f(y | θ) f(θ)
        = [n!/(y!(n − y)!)] θ^y (1 − θ)^{n−y} · [Γ(a + b)/(Γ(a)Γ(b))] θ^{a−1} (1 − θ)^{b−1}
        = c θ^{y+a−1} (1 − θ)^{n−y+b−1},

where c = [n!/(y!(n − y)!)] Γ(a + b)/[Γ(a)Γ(b)] is a constant that does not depend on θ. Figure 1.8 plots f(θ, y) and the marginal distributions for θ and Y. By the way we have defined the problem, the marginal distribution of θ, f(θ), is a Beta(a, b) PDF, which could also be derived by summing f(θ, y) over y. The marginal distribution of Y, plotted on the right of Figure 1.8, is f(y) = ∫₀¹ f(θ, y) dθ. In this case the marginal distribution of Y follows a beta-binomial distribution, but as we will see this is not needed in the Bayesian analysis.

In this problem we are given the unconditional distribution of the insurance rate (prior) and the distribution of the sample given the true proportion (likelihood), and Bayes' rule gives the (posterior) distribution of the true proportion
given the sample. Say we observe Y = 12. The horizontal line in Figure 1.8 traces over the conditional distribution f(θ | Y = 12). The conditional distribution is centered around the sample proportion Y/n = 0.60 but has nontrivial mass from 0.4 to 0.8. More formally, the posterior is

f(θ | y) = f(y | θ) f(θ)/f(y)
         = [c/f(y)] θ^{y+a−1} (1 − θ)^{n−y+b−1}
         = C θ^{A−1} (1 − θ)^{B−1},    (1.33)

where C = c/f(y), A = y + a, and B = n − y + b.

We note the resemblance between f(θ | y) and the PDF of a Beta(A, B) density. Both include θ^{A−1}(1 − θ)^{B−1} but differ in the normalizing constant: C for f(θ | y) compared to Γ(A + B)/[Γ(A)Γ(B)] for the Beta(A, B) PDF. Since both f(θ | y) and the Beta(A, B) PDF are proper, they both integrate to one, and thus
∫₀¹ C θ^{A−1} (1 − θ)^{B−1} dθ = ∫₀¹ [Γ(A + B)/(Γ(A)Γ(B))] θ^{A−1} (1 − θ)^{B−1} dθ = 1    (1.34)

and so

C ∫₀¹ θ^{A−1} (1 − θ)^{B−1} dθ = [Γ(A + B)/(Γ(A)Γ(B))] ∫₀¹ θ^{A−1} (1 − θ)^{B−1} dθ,    (1.35)

and thus C = Γ(A + B)/[Γ(A)Γ(B)]. Therefore, f(θ | y) is in fact the Beta(A, B) PDF and θ | Y = y ∼ Beta(y + a, n − y + b).

Dealing with the normalizing constant makes posterior calculations quite tedious. Fortunately this can often be avoided by discarding terms that do not involve the parameter of interest and comparing the remaining terms with known distributions. The derivation above can be simplified to (using
"∝" to mean "proportional to")

f(θ | y) ∝ f(y | θ) f(θ) ∝ θ^{(y+a)−1} (1 − θ)^{(n−y+b)−1},

and immediately concluding that θ | Y = y ∼ Beta(y + a, n − y + b).

Figure 1.9 plots the posterior distribution for two priors and Y ∈ {0, 5, 10, 15, 20}. The plots illustrate how the posterior combines information from the prior and the likelihood. In both plots, the peak of the posterior distribution increases with the observation Y. Comparing the plots shows that the prior also contributes to the posterior. When we observe Y = 0 successes, the posterior under the Beta(8, 2) prior (left) is pulled away from zero to the right by the prior (thick line). Under the Beta(1, 1), i.e., the uniform, prior, when Y = 0 the posterior is concentrated around θ = 0.
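The conjugacy argument can also be confirmed numerically. The sketch below (Python with NumPy/SciPy; the values a = 8, b = 2, n = 20, Y = 12 follow the text's example, while the grid is an implementation choice) normalizes likelihood × prior on a grid and compares it with the closed-form Beta(y + a, n − y + b) posterior:

```python
import numpy as np
from scipy.stats import beta, binom

a, b, n, y = 8, 2, 20, 12
theta = np.linspace(0.0005, 0.9995, 1000)   # midpoint grid on (0, 1)
dx = theta[1] - theta[0]

# Unnormalized posterior: likelihood times prior, as in (1.33).
unnorm = binom.pmf(y, n, theta) * beta.pdf(theta, a, b)
grid_post = unnorm / (unnorm.sum() * dx)        # normalize numerically

exact_post = beta.pdf(theta, y + a, n - y + b)  # Beta(20, 10) here
print(np.max(np.abs(grid_post - exact_post)))   # small discretization error
```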
Basics of Bayesian inference
[Figure 1.9 appears here: two panels ("Beta(1,1) prior" and "Beta(8,2) prior") plotting prior and posterior densities against θ for Y = 0, 5, 10, 15, 20.]
FIGURE 1.9
Posterior distribution for the beta-binomial example. The thick lines are the beta prior for success probability θ and the thin lines are the posterior assuming Y|θ ∼ Binomial(20, θ) for various values of Y.
1.3 Introduction to Bayesian inference
A parametric statistical analysis models the random process that produced the data, Y = (Y1, ..., Yn), in terms of fixed but unknown parameters θ = (θ1, ..., θp). The PDF (or PMF) of the data given the parameters, f(Y|θ), is called the likelihood function and links the observed data with the unknown parameters. Statistical inference is concerned with the inverse problem of using the likelihood function to estimate θ. Of course, if the data are noisy then we cannot perfectly estimate θ, and a Bayesian quantifies uncertainty about the unknown parameters by treating them as random variables. Treating θ as a random variable requires specifying the prior distribution, π(θ), which represents our uncertainty about the parameters before we observe the data. If we view θ as a random variable, we can apply Bayes' rule to obtain the posterior distribution

p(θ|Y) = f(Y|θ)π(θ) / ∫ f(Y|θ)π(θ) dθ ∝ f(Y|θ)π(θ).  (1.36)

The posterior is proportional to the likelihood times the prior, and quantifies the uncertainty about the parameters that remains after accounting for prior knowledge and the new information in the observed data.

Table 1.4 establishes the notation we use throughout for the prior, likelihood and posterior. We will not adhere to the custom (e.g., Section 1.1) that random variables are capitalized because in a Bayesian analysis more often
than not it is the parameters that are the random variables, and capital Greek letters, e.g., Prob(Θ = θ), are unfamiliar to most readers. We will however follow the custom to use bold to represent vectors and matrices. Also, assume independence unless otherwise noted. For example, if we say "the priors are θ1 ∼ Uniform(0, 1) and θ2 ∼ Gamma(1, 1)," you should assume that θ1 and θ2 have independent priors. The Bayesian framework provides a logically consistent framework to use all available information to quantify uncertainty about model parameters.

TABLE 1.4
Notation used throughout the book for distributions involving the parameter vector θ = (θ1, ..., θp) and data vector Y = (Y1, ..., Yn).

  Prior density of θ:               π(θ)
  Likelihood function of Y given θ: f(Y|θ)
  Marginal density of Y:            m(Y) = ∫ f(Y|θ)π(θ) dθ
  Posterior density of θ given Y:   p(θ|Y) = f(Y|θ)π(θ)/m(Y)
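The mechanics of (1.36) are easy to see on a toy discrete parameter space: multiply the likelihood by the prior, then renormalize. A minimal sketch (ours, not the book's; Python, with hypothetical prior weights and a hypothetical observation):

```python
import numpy as np
from scipy.stats import binom

# Hypothetical three-point parameter space with prior pi(theta)
theta = np.array([0.2, 0.5, 0.8])
prior = np.array([0.3, 0.4, 0.3])

# Likelihood of a hypothetical observation: Y = 7 successes in 10 trials
like = binom.pmf(7, 10, theta)

# Bayes' rule (1.36): posterior proportional to likelihood times prior
post = like * prior
post /= post.sum()          # dividing by the marginal density m(Y)

print(post.round(3))
```

With 7 successes in 10 trials, the posterior mass shifts from the symmetric prior toward θ = 0.8, exactly as the likelihood dictates.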
However, applying Bayes' rule requires specifying the prior distribution and the likelihood function. How do we pick the prior? In many cases prior knowledge from experience, expert opinion or similar studies is available and can be used to specify an informative prior. It would be a waste to discard this information. In other cases where prior information is unavailable, the prior should be uninformative to reflect this uncertainty. For instance, in the beta-binomial example in Section 1.2 we might use a uniform prior that puts equal mass on all possible parameter values. The choice of prior distribution is subjective, i.e., driven by the analyst's past experience and personal preferences. If a reader does not agree with your prior then they are unlikely to be persuaded by your analysis. Therefore, the prior, especially an informative prior, should be carefully justified, and a sensitivity analysis comparing the posteriors under different priors should be presented.

How do we pick the likelihood? The likelihood function is the same as in a classical analysis, e.g., a maximum likelihood analysis. The likelihood function for multiple linear regression is the product of Gaussian PDFs defined by the model

Yi | θ ∼ indep. Normal(β0 + Σ_{j=1}^p Xij βj, σ²),  (1.37)

where Xij is the value of the jth covariate for the ith observation and θ = (β0, ..., βp, σ²) are the unknown parameters. A thoughtful application of multiple linear regression must consider many questions, including
• Which covariates to include?
• Are the errors Gaussian? Independent? Do they have equal variance?
• Should we include quadratic or interaction effects?
• Should we consider a transformation of the response (e.g., model log(Yi))?
• Which observations are outliers? Should we remove them?
• How should we handle the missing observations?
• What p-value threshold should be used to define statistical significance?
As with specifying the prior, these concerns are arguably best resolved using subjective subject-matter knowledge. For example, while there are statistical methods to select covariates (Chapter 5), a more reliable strategy is to ask a subject-matter expert which covariates are the most important to include, at least as an initial list to be refined in the statistical analysis. As another example, it is hard to determine (without a natural ordering as in time series data) whether the observations are independent without consulting someone familiar with the data collection and the study population. Other decisions are made based on visual inspections of the data (such as scatter plots and histograms of the residuals) or ad hoc rules of thumb (thresholds on outliers' z-scores or p-values for statistical significance). Therefore, in a typical statistical analysis there are many subjective choices to be made, and the choice of prior is far from the most important.

Bayesian statistical methods are often criticized as being subjective. Perhaps an objective analysis that is free from personal preferences or beliefs is an ideal we should strive for (and this is the aim of objective Bayesian methods, see Section 2.3), but it is hard to make the case that non-Bayesian methods are objective, and it can be argued that almost any scientific knowledge and theories are subjective in nature. In an interesting article, Press and Tanur (2001) cite many scientific theories (mainly from physics) where subjectivity played a major role. They conclude "Subjectivity occurs, and should occur, in the work of scientists; it is not just a factor that plays a minor role that we need to ignore as a flaw..." and further add that "Total objectivity in science is a myth. Good science inevitably involves a mixture of subjective and objective parts."

The Bayesian inferential framework provides a logical foundation to accommodate the objective and subjective parts involved in data analysis. Hence, a good scientific practice would be to state upfront all assumptions and then make an effort to validate such assumptions using the current data or, preferably, future test cases. There is nothing wrong with a subjective but reasonably flexible model as long as we can exhibit some form of sensitivity analysis when the assumptions of the model are mildly violated.

In addition to explicitly acknowledging subjectivity, another important difference between Bayesian and frequentist (classical) methods is their notion of uncertainty. While a Bayesian considers only the data at hand, a frequentist views uncertainty as arising from repeating the process that generated the data many times. That is, a Bayesian might give a posterior probability that a population mean µ (a parameter) is positive given the data we have observed,
whereas a frequentist would give a probability that the sample mean Ȳ (a statistic) exceeds a threshold given a specific value of the parameters if we repeated the experiment many times (as is done when computing a p-value). The frequentist view of uncertainty is well-suited for developing procedures that have desirable error rates when applied broadly. This is reasonable in many settings. For instance, a regulatory agency might want to advocate statistical procedures that ensure only a small proportion of the medications made available to the public have adverse side effects. In some cases, however, it is hard to see why repeating the sampling is a useful thought experiment. For example, [14] study the relationship between a region's climate and the type of religion that emerged from that region. Assuming the data set consists of the complete list of known cultures, it is hard to imagine repeating the process that led to these data, as it would require replaying thousands of years of human history.

Bayesians can and do study the frequentist properties of their methods. This is critical to build trust in the methods. If a Bayesian weather forecaster gives the posterior predictive 95% interval every day for a year, but at the end of the year these intervals included the observed temperature only 25% of the time, then the forecaster would lose all credibility. It turns out that Bayesian methods often have desirable frequentist properties, and Chapter 7 examines these properties. Developing Bayesian methods with good frequentist properties is often called calibrated Bayes (e.g., [52]). According to Rubin [52, 71]: "The applied statistician should be Bayesian in principle and calibrated to the real world in practice; appropriate frequency calculations help to define such a tie... frequency calculations are useful for making Bayesian statements scientific, scientific in the sense of capable of being shown wrong by empirical test; here the technique is the calibration of Bayesian probabilities to the frequencies of actual events."
1.4 Summarizing the posterior
The final output of a Bayesian analysis is the posterior distribution of the model parameters. The posterior contains all the relevant information from the data and the prior, and thus all statistical inference should be based on the posterior distribution. However, when there are many parameters, the posterior distribution is a high-dimensional function that is difficult to display graphically, and for complicated statistical models the mathematical form of the posterior may be challenging to work with. In this section, we discuss some methods to summarize a high-dimensional posterior with low-dimensional summaries.
1.4.1 Point estimation
One approach to summarizing the posterior is to use a point estimate, i.e., a single value that represents the best estimate of the parameters given the data and (for a Bayesian analysis) the prior. The posterior mean, median and mode are all sensible choices. Thinking of the Bayesian analysis as a procedure that can be applied to any dataset, the point estimator is an example of an estimator, i.e., a function that takes the data as input and returns an estimate of the parameter of interest. Bayesian estimators such as the posterior mean can then be seen as competitors to other estimators such as the sample mean estimator for a population mean or the sample variance for a population variance, or more generally as competitors to the maximum likelihood estimator. We study the properties of these estimators in Chapter 7.

A common point estimator is the maximum a posteriori (MAP) estimator, defined as the value that maximizes the posterior (i.e., the posterior mode),

θ̂_MAP = arg max_θ log[p(θ|Y)] = arg max_θ {log[f(Y|θ)] + log[π(θ)]}.  (1.38)

The second equality holds because the normalizing constant m(Y) does not depend on the parameters and thus does not affect the optimization. If the prior is uninformative, i.e., mostly flat as a function of the parameters, then the MAP estimator should be similar to the maximum likelihood estimator (MLE)

θ̂_MLE = arg max_θ log[f(Y|θ)].  (1.39)

In fact, this relationship is often used to intuitively justify maximum likelihood estimation. The addition of the log prior log[π(θ)] in (1.38) can be viewed as a regularization or penalty term that adds stability or prior knowledge. Point estimators are often useful as fast methods to estimate the parameters for the purpose of making predictions. However, a point estimate alone does not quantify uncertainty about the parameters. Sections 1.4.2 and 1.4.3 provide more thorough summaries of the posterior for univariate and multivariate problems, respectively.
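For the beta-binomial model, (1.38) and (1.39) can be carried out with a generic optimizer, and the closed forms provide a check: the MLE is y/n, while the MAP estimate under a Beta(a, b) prior is (y + a − 1)/(n + a + b − 2), pulled toward the prior mode by the penalty term. A sketch (ours, not the book's; Python, with arbitrary data and prior):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: y successes in n trials, with a Beta(a, b) prior
n, y, a, b = 20, 2, 8, 2

def neg_log_like(t):
    return -(y * np.log(t) + (n - y) * np.log(1 - t))

def neg_log_post(t):
    # log-likelihood plus log-prior: the log-prior acts as a penalty
    return neg_log_like(t) - ((a - 1) * np.log(t) + (b - 1) * np.log(1 - t))

opts = dict(bounds=(1e-6, 1 - 1e-6), method="bounded")
mle = minimize_scalar(neg_log_like, **opts).x
map_est = minimize_scalar(neg_log_post, **opts).x

print(mle)      # near y/n = 0.10
print(map_est)  # near (y + a - 1)/(n + a + b - 2) = 9/28
```

Here the informative Beta(8, 2) prior moves the estimate from 0.10 to about 0.32, a concrete instance of the log prior acting as regularization.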
1.4.2 Univariate posteriors
A univariate posterior (i.e., from a model with p = 1 parameter) is best summarized with a plot because this retains all information about the parameter. Figure 1.10 shows a hypothetical univariate posterior with PDF centered at 0.8 and most of its mass on θ > 0.4. Point estimators such as the posterior mean or median summarize the center of the posterior, and should be accompanied by a posterior variance or standard deviation to convey uncertainty. The posterior standard deviation resembles a frequentist standard error in that if the posterior is approximately Gaussian then the posterior probability that the parameter is within two posterior standard deviation units of the posterior mean is roughly 0.95. However,
[Figure 1.10 appears here: two panels plotting a posterior density against θ; the left panel marks the posterior mean and median, and the right panel shades the region θ > 0.5 with the annotation P(H1|Y) = 0.98.]
FIGURE 1.10
Summaries of a univariate posterior. The plot on the left gives the posterior mean (solid vertical line), median (dashed vertical line) and 95% equal-tailed interval (shaded area). The plot on the right shades the posterior probability of the hypothesis H1 : θ > 0.5.
the standard error is the standard deviation of the estimator (e.g., the sample mean) if we repeatedly sample different data sets and compute the estimator for each data set. In contrast, the posterior standard deviation quantifies uncertainty about the parameter given only the single data set under consideration.

The interval E(θ|Y) ± 2SD(θ|Y) is an example of a credible interval. A (1 − α)% credible interval is any interval (l, u) so that Prob(l < θ < u|Y) = 1 − α. There are infinitely many intervals with this coverage, but the easiest to compute is the equal-tailed interval, with l and u set to the α/2 and 1 − α/2 posterior quantiles. An alternative is the highest posterior density (HPD) interval, which searches over all l and u to minimize the interval width u − l while maintaining the appropriate posterior coverage. The HPD interval thus has the highest average posterior density of all intervals of the form (l, u) that have the nominal posterior probability. As opposed to equal-tailed intervals, the HPD interval requires an additional optimization step, but it can be computed using the R package HDInterval.

Interpreting a posterior credible interval is fairly straightforward. If (l, u) is a posterior 95% interval, this means "given my prior and the observed data, I am 95% sure that θ is between l and u." In a Bayesian analysis we express our subjective uncertainty about unknown parameters by treating them as random variables, and in this subjective sense it is reasonable to assign probabilities to θ. This is in contrast with frequentist confidence intervals, which have a more nuanced interpretation. A confidence interval is a procedure that defines an
interval for a given data set in a way that ensures the procedure's intervals will include the true value 95% of the time when applied to random datasets.

The posterior distribution can also be used for hypothesis testing. Since the hypotheses are functions of the parameters, we can assign posterior probabilities to each hypothesis. Figure 1.10 (right) plots the posterior probability of the null hypothesis that θ < 0.5 (white) and the posterior probability of the alternative hypothesis that θ > 0.5 (shaded). These probabilities summarize the weight of evidence in support of each hypothesis, and can be used to guide future decisions. Hypothesis testing, and more generally model selection, is discussed in greater detail in Chapter 5.

Summarizing a univariate posterior using R: We have seen that if Y|θ ∼ Binomial(n, θ) and θ ∼ Beta(a, b), then θ|Y ∼ Beta(A, B), where A = Y + a and B = n − Y + b. Listing 1.1 specifies a data set with Y = 40 and n = 100 and summarizes the posterior using R. Since the posterior is the beta density, the functions dbeta, pbeta and qbeta can be used to summarize the posterior. The posterior median and 95% credible set are 0.401 and (0.309, 0.498).

Monte Carlo sampling: Although univariate posterior distributions are best summarized by a plot, higher-dimensional posterior distributions call for other methods such as Monte Carlo (MC) sampling from the posterior, so we introduce this approach here. MC sampling draws S samples θ^(1), ..., θ^(S) from the posterior, and uses these samples to approximate the posterior. For example, the posterior mean is approximated using the mean of the S samples, E(θ|Y) ≈ Σ_{s=1}^S θ^(s)/S, the posterior 95% credible set is approximated using the 0.025 and 0.975 quantiles of the S samples, etc. Listing 1.1 provides an example of using MC sampling to approximate the posterior mean and 95% credible set.
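The same Monte Carlo samples that approximate the posterior mean and quantiles can also produce an HPD interval: the HPD is the narrowest window containing (1 − α) of the draws. A sketch (ours, not the book's; Python rather than the HDInterval R package, with a skewed Beta posterior chosen to make the difference from the equal-tailed interval visible):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
A, B, alpha = 2, 8, 0.05              # a skewed Beta(2, 8) posterior

# Equal-tailed interval: the alpha/2 and 1 - alpha/2 posterior quantiles
eq = beta.ppf([alpha / 2, 1 - alpha / 2], A, B)

# HPD from posterior samples: the narrowest window containing 95% of draws
s = np.sort(rng.beta(A, B, 100_000))
k = int(np.ceil((1 - alpha) * len(s)))          # number of draws per window
widths = s[k - 1:] - s[:len(s) - k + 1]
i = int(np.argmin(widths))
hpd = (s[i], s[i + k - 1])

print(eq)   # equal-tailed interval
print(hpd)  # HPD interval: narrower, shifted toward the mode
```

For a right-skewed posterior the HPD trims more mass from the long right tail than the equal-tailed interval does, so it is strictly shorter while keeping 95% posterior coverage.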
1.4.3 Multivariate posteriors
Unlike the univariate case, a simple plot of the posterior will not suffice, especially for large p, because plotting in high dimensions is challenging. The typical remedy for this is to marginalize out the other parameters and summarize the univariate marginal distributions with plots, point estimates, and credible sets, and perhaps plots of a few bivariate marginal distributions (i.e., integrating over the other p − 2 parameters) of interest.

Consider the model Yi | µ, σ ∼ iid Normal(µ, σ²) with independent priors µ ∼ Normal(0, 100²) and σ ∼ Uniform(0, 10) (other priors are discussed in Section 4.1.1). The likelihood

f(Y|µ, σ) ∝ Π_{i=1}^n (1/σ) exp[−(Yi − µ)²/(2σ²)] ∝ σ^(−n) exp[−Σ_{i=1}^n (Yi − µ)²/(2σ²)]  (1.40)

factors as the product of n terms because the observations are assumed to be independent. The prior is f(µ, σ) = f(µ)f(σ) because µ and σ have indepen-
Listing 1.1 Summarizing a univariate posterior in R.

> # Load the data
> n <- 100
> Y <- 40
> a <- 1
> b <- 1
> A <- Y + a
> B <- n - Y + b

> # Plot the posterior
> theta <- seq(0, 1, length = 100)
> pdf <- dbeta(theta, A, B)
> plot(theta, pdf, type = "l", ylab = "Posterior", xlab = expression(theta))

> # Posterior mean
> A / (A + B)
[1] 0.4019608

> # Posterior median (0.5 quantile)
> qbeta(0.5, A, B)
[1] 0.4013176

> # Posterior probability P(theta < 0.5 | Y)
> pbeta(0.5, A, B)

> # Equal-tailed 95% credible interval
> qbeta(c(0.025, 0.975), A, B)
[1] 0.3093085 0.4982559

> # Monte Carlo approximation
> S <- 1000000
> samples <- rbeta(S, A, B)
> mean(samples)
[1] 0.402181
> quantile(samples, c(0.025, 0.975))
     2.5%     97.5%
0.3092051 0.4973871
TABLE 1.5
Bivariate posterior distribution. Summaries of the marginal posterior distributions for the model with Yi ∼ iid Normal(µ, σ²), priors µ ∼ Normal(0, 100²) and σ ∼ Uniform(0, 10), and n = 5 observations Y1 = 2.68, Y2 = 1.18, Y3 = −0.97, Y4 = −0.98, Y5 = −1.03.

  Parameter   Posterior mean   Posterior SD   95% credible set
  µ           0.17             1.31           (−2.49, 2.83)
  σ           2.57             1.37           (1.10, 6.54)
dent priors, and since f(σ) = 1/10 for all σ ∈ [0, 10], the prior becomes

π(µ, σ) ∝ exp[−µ²/(2 · 100²)]  (1.41)

for σ ∈ [0, 10] and f(µ, σ) = 0 otherwise. The posterior is proportional to the likelihood times the prior, and thus
p(µ, σ|Y) ∝ σ^(−n) exp[−Σ_{i=1}^n (Yi − µ)²/(2σ²)] exp[−µ²/(2 · 100²)]  (1.42)
for σ ∈ [0, 10]. Figure 1.11 plots this bivariate posterior assuming there are n = 5 observations: Y1 = 2.68, Y2 = 1.18, Y3 = −0.97, Y4 = −0.98, Y5 = −1.03. The two parameters in Figure 1.11 depend on each other. If σ = 1.5 (i.e., the conditional distribution traced by the horizontal line at σ = 1.5 in Figure 1.11) then the posterior of µ concentrates between −1 and 1, whereas if σ = 3 the posterior of µ spreads from −3 to 3. It is difficult to describe this complex bivariate relationship, so we often summarize the univariate marginal distributions instead. The marginal distributions

p(µ|Y) = ∫_0^10 p(µ, σ|Y) dσ  and  p(σ|Y) = ∫_{−∞}^∞ p(µ, σ|Y) dµ  (1.43)
are plotted on the top (for µ) and right (for σ) of Figure 1.11; they are the row and column sums of the joint posterior. By integrating over the other parameters, the marginal distribution of a parameter accounts for posterior uncertainty in the remaining parameters. The marginal distributions are usually summarized with point and interval estimates as in Table 1.5.

The marginal distributions and their summaries above were computed by evaluating the joint posterior (1.42) for values of (µ, σ) that form a grid (i.e., pixels in Figure 1.11) and then simply summing over columns or rows of the grid. This is a reasonable approximation for p = 2 variables but quickly becomes infeasible as p increases. Thus, it was only with the advent of more
FIGURE 1.11
Bivariate posterior distribution. The bivariate posterior (center) and univariate marginal posteriors (top for µ and right for σ) for the model with Yi ∼ iid Normal(µ, σ²), priors µ ∼ Normal(0, 100²) and σ ∼ Uniform(0, 10), and n = 5 observations Y1 = 2.68, Y2 = 1.18, Y3 = −0.97, Y4 = −0.98, Y5 = −1.03.
efficient computing algorithms in the 1990s that Bayesian statistics became feasible for even medium-sized applications. These exciting computational developments are the subject of Chapter 3.
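The grid evaluation of (1.42) and the row/column sums behind Table 1.5 can be sketched in a few lines (ours, not the book's; Python, with an arbitrary 400 × 400 grid):

```python
import numpy as np

Y = np.array([2.68, 1.18, -0.97, -0.98, -1.03])
n = len(Y)

# Grid over (mu, sigma); sigma is restricted to (0, 10) by its uniform prior
mu = np.linspace(-10, 10, 400)
sigma = np.linspace(0.01, 10, 400)
M, S = np.meshgrid(mu, sigma)

# Log of the unnormalized joint posterior (1.42), then normalize over the grid
log_post = (-n * np.log(S)
            - ((Y[:, None, None] - M) ** 2).sum(axis=0) / (2 * S ** 2)
            - M ** 2 / (2 * 100 ** 2))
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Marginals: sum over the rows (for mu) or the columns (for sigma)
p_mu = post.sum(axis=0)
p_sigma = post.sum(axis=1)

print((p_mu * mu).sum())        # posterior mean of mu, about 0.17
print((p_sigma * sigma).sum())  # posterior mean of sigma, about 2.6
```

The grid means recover the entries of Table 1.5 to within discretization error, but the cost of such a grid grows exponentially in the number of parameters, which is exactly why the sampling algorithms of Chapter 3 are needed.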
1.5 The posterior predictive distribution
Often the objective of a statistical analysis is to build a stochastic model that can be used to make predictions of future events or impute missing values. Let Y* be the future observation we would like to predict. Assuming that the observations are independent given the parameters and that Y* follows the same model as the observed data, then given θ we have Y* ∼ f(y|θ) and prediction is straightforward. Unfortunately, we do not know θ exactly, even after observing Y. A remedy for this is to plug in a value of θ, say, the posterior mean θ̂ = E(θ|Y), and then sample Y* ∼ f(Y|θ̂). However, this ignores uncertainty about the unknown parameters. If the posterior variance of θ is small then its uncertainty is negligible; otherwise a better approach is needed. For the sake of prediction, the parameters are not of interest themselves, but rather they serve as vehicles to transfer information from the data to the predictive model. We would rather bypass the parameters altogether and simply use the posterior predictive distribution (PPD)

Y*|Y ∼ f*(Y*|Y).  (1.44)
The PPD is the distribution of a new outcome given the observed data. In a parametric model, the PPD naturally accounts for uncertainty in the model parameters; this is an advantage of the Bayesian framework. The PPD accounts for parametric uncertainty because it can be written

f*(Y*|Y) = ∫ f*(Y*, θ|Y) dθ = ∫ f(Y*|θ) p(θ|Y) dθ,  (1.45)

where f is the likelihood density (here we assume that the observations are independent given the parameters and so f(Y*|θ, Y) = f(Y*|θ)) and p is the posterior density.

To further illustrate how the PPD accounts for parametric uncertainty, we consider how to draw a sample from the PPD. If we first draw a sample θ* from the posterior, θ* ∼ p(θ|Y), and then a prediction from the likelihood, Y*|θ* ∼ f(Y|θ*), then Y* follows the PPD. A Monte Carlo approximation (Section 3.2) repeats these steps many times to approximate the PPD. Unlike the plug-in predictor, each prediction uses a different value of the parameters and thus accurately reflects parametric uncertainty. It can be shown that Var(Y*|Y) ≥ Var(Y*|Y, θ), with equality holding only if there is no posterior uncertainty in the mean of Y*|Y, θ.
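The two-step sampler (θ* from the posterior, then Y* from the likelihood) and the variance inequality can both be checked by simulation. A sketch (ours, not the book's; Python, using the beta-binomial setting of the next example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, Y, a, b = 5, 1, 1, 1           # Y = 1 success in n = 5 trials, Beta(1, 1) prior
A, B = Y + a, n - Y + b           # posterior theta | Y ~ Beta(A, B)
S = 200_000

# Plug-in predictor: fix theta at its posterior mean
theta_hat = A / (A + B)
plug_in = rng.binomial(n, theta_hat, S)

# PPD: draw theta* from the posterior, then Y* | theta* from the likelihood
theta_star = rng.beta(A, B, S)
ppd = rng.binomial(n, theta_star)

print(plug_in.var(), ppd.var())   # the PPD draws are more dispersed
```

Both predictors have the same mean, but the PPD draws have a larger variance because each one uses a different θ*, which is the simulation version of Var(Y*|Y) ≥ Var(Y*|Y, θ).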
[Figure 1.12 appears here: two panels comparing the "Plug-in" PMF and the full PPD, plotted against y.]
FIGURE 1.12
Posterior predictive distribution for a beta-binomial example. Plots of the posterior predictive distribution (PPD) from the model Y|θ ∼ Binomial(n, θ) and θ ∼ Beta(1, 1). The "plug-in" PMF is the binomial density evaluated at the posterior mean θ̂, f(y|θ̂). This is compared with the full PPD for Y = 1 success in n = 5 trials (left) and Y = 4 successes in n = 20 trials (right). The PMFs are connected by lines for visualization, but the probabilities are only defined for y = 0, 1, ..., n.

As an example, consider the model Y|θ ∼ Binomial(n, θ) and θ ∼ Beta(1, 1). Given the data we have observed (Y and n) we would like to predict the outcome if we repeat the experiment, Y* ∈ {0, 1, ..., n}. The posterior of θ is θ|Y ∼ Beta(Y + 1, n − Y + 1) and the posterior mean is θ̂ = (Y + 1)/(n + 2). The solid lines in Figure 1.12 show the plug-in prediction Y* ∼ Binomial(n, θ̂) versus the full PPD (Listing 1.2) that accounts for uncertainty in θ (which is a Beta-Binomial(n, Y + 1, n − Y + 1) distribution). For both n = 5 and n = 20, the PPD is considerably wider than the plug-in predictive distribution, as expected.
Listing 1.2 Summarizing a posterior predictive distribution (PPD) in R.

> # Load the data
> n <- 5
> Y <- 1
> a <- 1
> b <- 1
> A <- Y + a
> B <- n - Y + b

> # Plug-in
> theta_hat <- A / (A + B)
> y <- 0:n
> PPD <- dbinom(y, n, theta_hat)
> names(PPD) <- y
> round(PPD, 2)
   0    1    2    3    4    5
0.19 0.37 0.30 0.12 0.02 0.00

> # Draws from the PPD, Y_star[i] ~ Binomial(n, theta_star[i])
> S <- 1000000
> theta_star <- rbeta(S, A, B)
> Y_star <- rbinom(S, n, theta_star)
> PPD <- table(Y_star) / S
> round(PPD, 2)
   0    1    2    3    4    5
0.27 0.30 0.23 0.13 0.05 0.01
1.6 Exercises
1. If X has support S = [1, ∞), find the constant c (as a function of θ) that makes f(x) = c exp(−x/θ) a valid PDF.
2. Assume that X ∼ Uniform(a, b), so the support is S = [a, b] and the PDF is f(x) = 1/(b − a) for any x ∈ S.
(a) Prove that this is a valid PDF.
(b) Derive the mean and variance of X.
3. Expert knowledge dictates that a parameter must be positive and that its prior distribution should have mean 5 and variance 3. Find a prior distribution that satisfies these constraints.

4. X1 and X2 have joint PMF

   x1   x2   Prob(X1 = x1, X2 = x2)
   0    0    0.15
   1    0    0.15
   2    0    0.15
   0    1    0.15
   1    1    0.20
   2    1    0.20

(a) Compute the marginal distribution of X1.
(b) Compute the marginal distribution of X2.
(c) Compute the conditional distribution of X1|X2.
(d) Compute the conditional distribution of X2|X1.
(e) Are X1 and X2 independent? Justify your answer.
5. If (X1, X2) is bivariate normal with E(X1) = E(X2) = 0, Var(X1) = Var(X2) = 1, and Cor(X1, X2) = ρ:
(a) Derive the marginal distribution of X1.
(b) Derive the conditional distribution of X1|X2.
6. Assume (X1, X2) have bivariate PDF

   f(x1, x2) = (1/(2π)) (1 + x1² + x2²)^(−3/2).

(a) Plot the conditional distribution of X1|X2 = x2 for x2 ∈ {−3, −2, −1, 0, 1, 2, 3} (preferably on the same plot).
(b) Do X1 and X2 appear to be correlated? Justify your answer.
(c) Do X1 and X2 appear to be independent? Justify your answer.
7. According to insurance.com, the 2017 auto theft rate was 135 per 10,000 residents in Raleigh, NC compared to 214 per 10,000 residents in Durham/Chapel Hill. Assuming Raleigh's population is twice as large as Durham/Chapel Hill's and that a car has been stolen somewhere in the triangle (i.e., one of these two areas), what is the probability it was stolen in Raleigh?

8. Your daily commute is distributed uniformly between 15 and 20 minutes if there is no convention downtown. However, conventions are scheduled for roughly 1 in 4 days, and your commute time is distributed uniformly from 15 to 30 minutes if there is a convention. Let Y be your commute time this morning.
(a) What is the probability that there was a convention downtown given Y = 18?
(b) What is the probability that there was a convention downtown given Y = 28?

9. For this problem pretend we are dealing with a language with a six-word dictionary
{fun, sun, sit, sat, fan, for}.
An extensive study of literature written in this language reveals that all words are equally likely except that "for" is α times as likely as the other words. Further study reveals that:
i. Each keystroke is an error with probability θ.
ii. All letters are equally likely to produce errors.
iii. Given that a letter is typed incorrectly it is equally likely to be any other letter.
iv. Errors are independent across letters.
For example, the probability of correctly typing "fun" (or any other word) is (1 − θ)^3, the probability of typing "pun" or "fon" when intending to type "fun" is θ(1 − θ)^2, and the probability of typing "foo" or "nnn" when intending to type "fun" is θ^2(1 − θ). Use Bayes' rule to develop a simple spell checker for this language. For each of the typed words "sun", "the", "foo", give the probability that each word in the dictionary was the intended word. Perform this for the parameters below:
(a) α = 2 and θ = 0.1.
(b) α = 50 and θ = 0.1.
(c) α = 2 and θ = 0.95.
Comment on the changes you observe in these three cases.
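The spell checker in problem 9 reduces to Bayes' rule over the six dictionary words; a minimal Python sketch (Python rather than the R used elsewhere in the book), assuming, as the problem's worked examples imply, that each specific mismatched letter carries probability θ:

```python
DICTIONARY = ["fun", "sun", "sit", "sat", "fan", "for"]

def word_prob(intended, typed, theta):
    # P(typed | intended): per the problem's examples, a matching letter
    # contributes (1 - theta) and a specific mismatched letter contributes theta
    if len(intended) != len(typed):
        return 0.0
    p = 1.0
    for a, b in zip(intended, typed):
        p *= (1 - theta) if a == b else theta
    return p

def posterior(typed, alpha, theta):
    # Prior: all words equally likely except "for", which is alpha times as likely
    prior = {w: (alpha if w == "for" else 1.0) for w in DICTIONARY}
    unnorm = {w: prior[w] * word_prob(w, typed, theta) for w in DICTIONARY}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

print(posterior("sun", alpha=2, theta=0.1))
```

With α = 2 and θ = 0.1, "sun" itself dominates the posterior; once θ exceeds 1/2 (case (c)), mismatches become more likely than matches and the ranking reverses.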
10. Let X_1 ~ Bernoulli(θ) be the indicator that a tree species occupies a forest, where θ ∈ [0, 1] denotes the prior occupancy probability. The researcher gathers a sample of n trees from the forest, X_2 of which belong to the species of interest. The model for the data is X_2 | X_1 ~ Binomial(n, λX_1), where λ ∈ [0, 1] is the probability of detecting the species given it is present. Give expressions in terms of n, θ and λ for the following joint, marginal and conditional probabilities:
(a) Prob(X_1 = X_2 = 0)
(b) Prob(X_1 = 0)
(c) Prob(X_2 = 0)
(d) Prob(X_1 = 0 | X_2 = 0)
(e) Prob(X_2 = 0 | X_1 = 0)
(f) Prob(X_1 = 0 | X_2 = 1)
(g) Prob(X_2 = 0 | X_1 = 1)
(h) Provide intuition for how (d)-(g) change with n, θ and λ.
(i) Assuming θ = 0.5, λ = 0.1, and X_2 = 0, how large must n be before we can conclude with 95% confidence that the species does not occupy the forest?
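For part (i), marginalizing over X_1 gives Prob(X_2 = 0) = (1 − θ) + θ(1 − λ)^n, and Bayes' rule then gives Prob(X_1 = 0 | X_2 = 0); a small Python search for the required sample size (a sketch, not part of the text):

```python
def prob_absent_given_no_detections(n, theta=0.5, lam=0.1):
    # Prob(X1 = 0 | X2 = 0) = (1 - theta) / [(1 - theta) + theta * (1 - lam)**n]
    return (1 - theta) / ((1 - theta) + theta * (1 - lam) ** n)

n = 1
while prob_absent_given_no_detections(n) < 0.95:
    n += 1
print(n)  # smallest n with at least 95% posterior confidence of absence
```

Under θ = 0.5 and λ = 0.1 this search returns n = 28.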
11. In a study that uses Bayesian methods to forecast the number of species that will be discovered in future years, [24] report that the number of marine bivalve species discovered each year from 2010-2015 was 64, 13, 33, 18, 30 and 20. Denoting Y_t as the number of species discovered in year t and assuming Y_t | λ ~ Poisson(λ) independently and λ ~ Uniform(0, 100), plot the posterior distribution of λ.
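Since the Uniform(0, 100) prior is flat, the posterior is proportional to the Poisson likelihood on (0, 100) and can be evaluated on a grid; a Python sketch (the book's examples use R; the 0.1 grid spacing is an arbitrary choice):

```python
import math

y = [64, 13, 33, 18, 30, 20]  # discoveries per year, 2010-2015

def log_lik(lam):
    return sum(yt * math.log(lam) - lam - math.lgamma(yt + 1) for yt in y)

grid = [0.1 * k for k in range(1, 1000)]       # covers (0, 100)
log_post = [log_lik(lam) for lam in grid]      # flat prior adds only a constant
m = max(log_post)
w = [math.exp(lp - m) for lp in log_post]
z = sum(w)
post = [wi / z for wi in w]                    # normalized grid posterior

post_mean = sum(l * p for l, p in zip(grid, post))
print(round(post_mean, 2))
```

The posterior concentrates near the sample mean of about 29.7; in fact it is a Gamma(179, 6) density truncated to (0, 100).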
12. Assume that (X, Y) follow the bivariate normal distribution and that both X and Y have marginal mean zero and marginal variance one. We observe six independent and identically distributed data points: (3.3, 2.6), (0.1, 0.2), (1.1, 1.5), (2.7, 1.5), (2.0, 1.9) and (0.4, 0.3). Make a scatter plot of the data and, assuming the correlation parameter ρ has a Uniform(−1, 1) prior, plot the posterior distribution of ρ.
13. The normalized difference vegetation index (NDVI) is commonly used to classify land cover using remote sensing data. Hypothetically, say that NDVI follows a Beta(25, 10) distribution for pixels in a rain forest, and a Beta(10, 15) distribution for pixels in a deforested area now used for agriculture. Assuming about 10% of the rain forest has been deforested, your objective is to build a rule to classify individual pixels as deforested based on their NDVI.
(a) Plot the PDF of NDVI for forested and deforested pixels, and the marginal distribution of NDVI averaging over categories.
(b) Give an expression for the probability that a pixel is deforested given its NDVI value, and plot this probability by NDVI.
(c) You will classify a pixel as deforested if you are at least 90% sure it is deforested. Following this rule, give the range of NDVI that will lead to a pixel being classified as deforested.

14. Let n be the unknown number of customers that visit a store on the day of a sale. The number of customers that make a purchase is Y | n ~ Binomial(n, θ), where θ is the known probability of making a purchase given the customer visited the store. The prior is n ~ Poisson(5). Assuming θ is known and n is the unknown parameter, plot the posterior distribution of n for all combinations of Y ∈ {0, 5, 10} and θ ∈ {0.2, 0.5} and comment on the effect of Y and θ on the posterior.
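Problem 14's posterior over the discrete parameter n can be tabulated directly; a Python sketch (the truncation point n_max = 60 is an arbitrary choice, large enough for the Poisson(5) prior):

```python
import math

def posterior_n(Y, theta, prior_rate=5.0, n_max=60):
    # p(n | Y) is proportional to Binomial(Y; n, theta) * Poisson(n; prior_rate),
    # supported on n >= Y
    unnorm = {}
    for n in range(Y, n_max + 1):
        binom = math.comb(n, Y) * theta**Y * (1 - theta) ** (n - Y)
        pois = math.exp(-prior_rate) * prior_rate**n / math.factorial(n)
        unnorm[n] = binom * pois
    z = sum(unnorm.values())
    return {n: p / z for n, p in unnorm.items()}

post = posterior_n(Y=5, theta=0.5)
print(sum(n * p for n, p in post.items()))  # posterior mean
```

A standard thinning result says n − Y | Y ~ Poisson(5(1 − θ)), so for Y = 5 and θ = 0.5 the posterior mean is 7.5, which the tabulation reproduces.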
15. Last spring your lab planted ten seedlings and two survived the winter. Let θ be the probability that a seedling survives the winter.
(a) Assuming a uniform prior distribution for θ, compute its posterior mean and standard deviation.
(b) Assuming the same prior as in (a), compute and compare the equal-tailed and highest density 95% posterior credible intervals.
(c) If you plant another 10 seedlings next year, what is the posterior predictive probability that at least one will survive the winter?

16. X_1 and X_2 are binary indicators of failure for two parts of a machine. Independent tests have shown that X_1 ~ Bernoulli(1/2) and X_2 ~ Bernoulli(1/3). Y_1 and Y_2 are binary indicators of two system failures. We know that Y_1 = 1 if both X_1 = 1 and X_2 = 1 and Y_1 = 0 otherwise, and Y_2 = 0 if both X_1 = 0 and X_2 = 0 and Y_2 = 1 otherwise. Compute the following probabilities:
(a) The probability that X_1 = 1 and X_2 = 1 given Y_1 = 1.
(b) The probability that X_1 = 1 and X_2 = 1 given Y_2 = 1.
(c) The probability that X_1 = 1 given Y_1 = 1.
(d) The probability that X_1 = 1 given Y_2 = 1.

17. The table below has the overall free throw proportion and results of free throws taken in pressure situations, defined as "clutch" (https://stats.nba.com/), for ten National Basketball Association players (those that received the most votes for the Most Valuable Player Award) for the 2016-2017 season. Since the overall proportion is computed using a large sample size, assume it is fixed and analyze the clutch data for each player separately using Bayesian methods. Assume a uniform prior throughout this problem.
Player                  Overall proportion   Clutch makes   Clutch attempts
Russell Westbrook       0.845                64             75
James Harden            0.847                72             95
Kawhi Leonard           0.880                55             63
LeBron James            0.674                27             39
Isaiah Thomas           0.909                75             83
Stephen Curry           0.898                24             26
Giannis Antetokounmpo   0.770                28             41
John Wall               0.801                66             82
Anthony Davis           0.802                40             54
Kevin Durant            0.875                13             16
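Under a uniform prior, each clutch probability in the table above has a Beta(1 + makes, 1 + misses) posterior by the beta-binomial result of Section 1.2; a Python sketch of the posterior means, one way to begin parts (a)-(c):

```python
# Uniform Beta(1, 1) prior => Beta(1 + makes, 1 + misses) posterior per player
clutch = {
    "Russell Westbrook": (64, 75), "James Harden": (72, 95),
    "Kawhi Leonard": (55, 63), "LeBron James": (27, 39),
    "Isaiah Thomas": (75, 83), "Stephen Curry": (24, 26),
    "Giannis Antetokounmpo": (28, 41), "John Wall": (66, 82),
    "Anthony Davis": (40, 54), "Kevin Durant": (13, 16),
}
post_means = {}
for player, (makes, attempts) in clutch.items():
    a, b = 1 + makes, 1 + (attempts - makes)
    post_means[player] = a / (a + b)
print(post_means["Russell Westbrook"])
```

The posterior means (and, with a beta quantile function, credible intervals) can then be compared with each player's fixed overall proportion for part (d).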
(a) Describe your model for studying the clutch success probability, including the likelihood and prior.
(b) Plot the posteriors of the clutch success probabilities.
(c) Summarize the posteriors in a table.
(d) Do you find evidence that any of the players have a different clutch percentage than overall percentage?
(e) Are the results sensitive to your prior? That is, do small changes in the prior lead to substantial changes in the posterior?

18. In the early twentieth century, it was generally agreed that Hamilton and Madison (ignore Jay for now) wrote 51 and 14 Federalist Papers, respectively. There was dispute over how to attribute 12 other papers between these two authors. In the 51 papers attributed to Hamilton the word "upon" was used 3.24 times per 1,000 words, compared to 0.23 times per 1,000 words in the 14 papers attributed to Madison (for historical perspective on this problem, see [58]).
(a) If the word "upon" is used three times in a disputed text of length 1,000 words and we assume the prior probability 0.5, what is the posterior probability the paper was written by Hamilton?
(b) Give one assumption you are making in (a) that is likely unreasonable. Justify your answer.
(c) In (a), if we changed the number of instances of "upon" to one, do you expect the posterior probability to increase, decrease or stay the same? Why?
(d) In (a), if we changed the text length to 10,000 words and the number of instances of "upon" to 30, do you expect the posterior probability to increase, decrease or stay the same? Why?
(e) Let Y be the number of observed instances of "upon" in 1,000 words. Compute the posterior probability that the paper was written by Hamilton for each Y ∈ {0, 1, ..., 20}, plot these posterior probabilities versus Y, and give a rule for the number of instances of "upon" needed before the paper should be attributed to Hamilton.
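For problem 18, one common approach (an assumption here, not stated in the exercise) is to model the count of "upon" as Poisson with the historical rate scaled to the text length; a Python sketch of part (a):

```python
import math

def pois(y, lam):
    # Poisson PMF
    return math.exp(-lam) * lam**y / math.factorial(y)

def prob_hamilton(y, words=1000, prior=0.5):
    # "upon" rates per 1,000 words, scaled to the text length
    lam_h = 3.24 * words / 1000
    lam_m = 0.23 * words / 1000
    num = prior * pois(y, lam_h)
    return num / (num + (1 - prior) * pois(y, lam_m))

print(round(prob_hamilton(3), 3))  # part (a)
```

Consistent with part (c), with only one "upon" the posterior probability of Hamilton falls below one half under these rates.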
2 From prior information to posterior inference
CONTENTS
2.1 Conjugate priors ............................................. 42
    2.1.1 Beta-binomial model for a proportion ................. 42
    2.1.2 Poisson-gamma model for a rate ....................... 45
    2.1.3 Normal-normal model for a mean ....................... 47
    2.1.4 Normal-inverse gamma model for a variance ............ 48
    2.1.5 Natural conjugate priors ............................. 50
    2.1.6 Normal-normal model for a mean vector ................ 51
    2.1.7 Normal-inverse Wishart model for a covariance matrix . 52
    2.1.8 Mixtures of conjugate priors ......................... 56
2.2 Improper priors .............................................. 58
2.3 Objective priors ............................................. 59
    2.3.1 Jeffreys' prior ...................................... 59
    2.3.2 Reference priors ..................................... 61
    2.3.3 Maximum entropy priors ............................... 62
    2.3.4 Empirical Bayes ...................................... 62
    2.3.5 Penalized complexity priors .......................... 63
2.4 Exercises .................................................... 64
One of the most controversial and yet crucial aspects of a Bayesian model is the construction of a prior distribution. Often a user is faced with questions like Where does the prior distribution come from? or What is the true or correct prior distribution? and so on. There is no concept of the "true, correct or best" prior distribution; rather, the prior distribution can be viewed as an initialization of a statistical (in this case a Bayesian) inferential procedure that gets updated as the data accrue. The choice of a prior distribution is necessary (as you would need to initiate the inferential machine) but there is no notion of the 'optimal' prior distribution. Choosing a prior distribution is similar in principle to initializing any other sequential procedure (e.g., iterative optimization methods like Newton-Raphson, EM, etc.). The choice of such initialization can be good or bad in the sense of the rate of convergence of the procedure to its final value, but as long as the procedure is guaranteed to converge, the choice of prior does not have a permanent impact. As discussed
in Section 7.2, the posterior is guaranteed to converge to the true value under very general conditions on the prior distribution. In this chapter we discuss several general approaches for selecting prior distributions. We begin with conjugate priors, which lead to simple expressions for the posterior distribution and thus illustrate how prior information affects a Bayesian analysis. Conjugate priors are useful when prior information is available, but can also be used when it is not. We conclude with objective Bayesian priors, which attempt to remove the subjectivity in prior selection through conventions to be adopted when prior information is not available.
2.1 Conjugate priors
Conjugate priors are the most convenient choice. A prior and likelihood pair are conjugate if the resulting posterior is a member of the same family of distributions as the prior. Therefore, conjugacy refers to a mathematical relationship between two distributions and not to a deeper theme of the appropriate way to express prior beliefs. For example, if we select a beta prior then Figure 1.5 shows that by changing the hyperparameters (i.e., the parameters that define the prior distribution, in this case a and b) the prior could be concentrated around a single value or spread equally across the unit interval. Because both the prior and posterior are members of the same family, the update from the prior to the posterior affects only the parameters that index the family. This provides an opportunity to build intuition about Bayesian learning through simple examples. Conjugate priors are not unique. For example, the beta prior is conjugate for both the binomial and negative binomial likelihoods, and both a gamma prior and a Bernoulli prior (trivially) are conjugate for a Poisson likelihood. Also, conjugate priors are somewhat limited because not all likelihood functions have a known conjugate prior, and most conjugacy pairs are for small examples with only a few parameters. These limitations are abated through hierarchical modeling (Chapter 6) and Gibbs sampling (Chapter 3), which provide a framework to build rich statistical models by layering simple conjugacy pairs. Below we discuss several conjugate priors and the mathematical tools needed to derive the corresponding posteriors. Detailed derivations of many of the posterior distributions are deferred to Appendix A.3, and Appendix A.2 has an abridged list of conjugacy results.
2.1.1 Beta-binomial model for a proportion
Returning to the beta-binomial example in Section 1.2, the data Y ∈ {0, 1, ..., n} is the number of successes in n independent trials, each with success probability θ. The likelihood is then Y | θ ~ Binomial(n, θ). Since we are only interested in the terms in the likelihood that involve θ, we focus on its kernel, i.e., the terms that involve θ. The kernel of the binomial PMF is

    f(Y | θ) ∝ θ^Y (1 − θ)^(n−Y).    (2.1)
If we view the likelihood as a function of θ, it resembles a beta distribution, and so we might suspect that a beta distribution is the conjugate prior for the binomial likelihood. If we select the prior θ ~ Beta(a, b), then (as shown in Section 1.2) this combination of likelihood and prior leads to the posterior distribution

    θ | Y ~ Beta(A, B),    (2.2)

where the updated parameters are A = Y + a and B = n − Y + b. Since both the prior and posterior belong to the beta family of distributions, this is an example of a conjugate prior.

The Beta(A, B) distribution has mean A/(A + B), and so the prior and posterior means are

    E(θ) = θ̂_0 = a/(a + b)   and   E(θ | Y) = θ̂_1 = (Y + a)/(n + a + b).    (2.3)
The prior and posterior means are both estimators of the population proportion, and thus we denote them as θ̂_0 and θ̂_1, respectively. The prior mean θ̂_0 is an estimator of θ before observing the data, and this is updated to θ̂_1 by the observed data. A natural estimator of the population proportion θ is the sample proportion θ̂ = Y/n, which is the number of successes divided by the number of trials. Comparing the sample proportion to the posterior mean, the posterior mean adds a to the number of successes in the numerator and a + b to the number of trials in the denominator. Therefore, we can think of a as the prior number of successes, a + b as the prior number of trials, and thus b as the prior number of failures (see Section 2.1.5 for a tie to natural conjugate priors). Viewing the hyperparameters as the prior number of successes and failures provides a means of balancing the information in the prior with the information in the data. For example, the Beta(0.5, 0.5) prior in Figure 1.5 has one prior observation and the uniform Beta(1, 1) prior contributes one prior success and one prior failure. If the prior is meant to reflect prior ignorance then we should select small a and b, and if there is strong prior information that θ is approximately θ_0, then we should select a and b so that θ_0 = a/(a + b) and a + b is large.

The posterior mean can also be written as

    θ̂_1 = (1 − w_n) θ̂_0 + w_n θ̂,    (2.4)

where w_n = n/(n + a + b) is the weight given to the sample proportion and 1 − w_n is the weight given to the prior mean. This confirms the intuition that
FIGURE 2.1
Posterior distributions from the beta-binomial model. Plot of the posterior of θ from the model Y | θ ~ Binomial(n, θ) and θ ~ Beta(1, 1) for various n and Y: (Y, n) = (0, 0), (3, 10), (12, 50), (21, 100) and (39, 200).
for any prior (a and b), if the sample size n is small the posterior mean is approximately the prior mean, and as the sample size increases the posterior mean becomes closer to the sample proportion. Also, as a + b → 0, w_n → 1, so as we make the prior vague with large variance, the posterior mean coincides with the sample proportion for any sample size n.

Figure 2.1 plots the posterior for various n and Y. This plot is meant to illustrate a sequential analysis. In most cases the entire data set is analyzed in a single analysis, but in some cases data are analyzed as they arrive, and a Bayesian analysis provides a framework to analyze data sequentially. In Figure 2.1, before any data are collected θ has a uniform prior. After 10 observations, there are three successes and the posterior concentrates below 0.5. After an additional 40 samples, the posterior centers on the sample proportion 12/50. As data accrue, the posterior converges to θ ≈ 0.2 and the posterior variance decreases.
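The sequential updating in Figure 2.1 and the weighted-average identity (2.4) can both be checked numerically; a short Python sketch (batch sizes taken from the figure legend):

```python
# Beta(1, 1) prior updated batch-by-batch; each posterior is the next prior
batches = [(3, 10), (9, 40), (9, 50), (18, 100)]  # (successes, trials) per batch
a0, b0 = 1.0, 1.0
a, b = a0, b0
for Y, n in batches:
    a, b = a + Y, b + (n - Y)
print(a, b)  # same as a single update with Y = 39, n = 200: Beta(40, 162)

# Check the weighted-average form (2.4) for the cumulative data
Y, n = 39, 200
w = n / (n + a0 + b0)
weighted = (1 - w) * (a0 / (a0 + b0)) + w * (Y / n)
print(abs(a / (a + b) - weighted) < 1e-12)  # True
```

Batch-by-batch updating and one update with the pooled data give the same posterior, which is exactly why the sequential reading of Figure 2.1 is legitimate.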
2.1.2 Poisson-gamma model for a rate
The Poisson-gamma conjugate pair is useful for estimating the rate of events per unit of observation effort, denoted θ. For example, an ecologist may survey N acres of a forest and observe Y ∈ {0, 1, ...} individuals of the species of interest, or a company may observe Y employee injuries in N person-hours on the job. For these data, we might assume the model Y | θ ~ Poisson(Nθ), where N is the known sampling effort and θ > 0 is the unknown event rate. The likelihood is

    f(Y | θ) = exp(−Nθ)(Nθ)^Y / Y! ∝ exp(−Nθ) θ^Y.    (2.5)

The kernel of the likelihood resembles a Gamma(a, b) distribution,

    π(θ) = [b^a / Γ(a)] θ^(a−1) exp(−bθ) ∝ θ^(a−1) exp(−bθ),    (2.6)

in the sense that θ appears in the PDF raised to a power and in the exponent. Combining the likelihood and prior gives the posterior

    p(θ | Y) ∝ [exp(−Nθ) θ^Y] · [exp(−bθ) θ^(a−1)] ∝ exp(−Bθ) θ^(A−1),    (2.7)

where A = Y + a and B = N + b. The posterior of θ is thus θ | Y ~ Gamma(A, B), and the gamma prior is conjugate.

A simple estimate of the expected number of events per unit of effort is the sample event rate θ̂ = Y/N. The mean and variance of a Gamma(A, B) distribution are A/B and A/B², respectively, and so the posterior mean under the Poisson-gamma model is

    E(θ | Y) = (Y + a)/(N + b).    (2.8)

Therefore, compared to the sample event rate we add a events to the numerator and b units of effort to the denominator. As in the beta-binomial example, the posterior mean can be written as a weighted average of the sample rate and the prior mean,

    E(θ | Y) = (1 − w_N)(a/b) + w_N (Y/N),    (2.9)

where w_N = N/(N + b) is the weight given to the sample rate Y/N and 1 − w_N is the weight given to the prior mean a/b. As b → 0, w_N → 1, and so for a vague prior with large variance the posterior mean coincides with the sample rate. A common setting for the hyperparameters is a = b = ε for some small value ε, which gives prior mean 1 and large prior variance 1/ε.

NFL concussions example: Concussions are an increasingly serious concern in the National Football League (NFL). The NFL has 32 teams and each team plays 16 regular-season games per year, for a total of N = 32 · 16/2 = 256 games. According to Frontline/PBS (http://apps.frontline.
FIGURE 2.2
Posterior distributions for the NFL concussion example. Plot of the posterior of θ from the model Y | θ ~ Poisson(Nθ) and θ ~ Gamma(a, b), where N = 256 is the number of games played in an NFL season, Y is the number of concussions in a year, and a = b = 0.1 are the hyperparameters. The posterior is shown for years 2012-2015, which had 171, 152, 123, and 199 concussions, respectively.
org/concussionwatch) there were Y_1 = 171 concussions in 2012, Y_2 = 152 concussions in 2013, Y_3 = 123 concussions in 2014, and Y_4 = 199 concussions in 2015. Figure 2.2 plots the posterior of the concussion rate for each year assuming a = b = 0.1. Comparing only 2014 with 2015, there does appear to be some evidence of an increase in the concussion rate per game; for 2014 the posterior mean and 95% interval are 0.48 (A/B) and (0.40, 0.57) (computed using the Gamma(A, B) quantile function), compared to 0.78 and (0.67, 0.89) for 2015. However, 2015 does not appear to be significantly different than 2012 or 2013.
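The posterior quantities quoted above follow directly from the Gamma(Y + a, N + b) posterior; a Python sketch of the means and standard deviations (the 95% intervals in the text additionally require a gamma quantile function, e.g., scipy.stats.gamma.ppf):

```python
# Posterior for each season: theta | Y ~ Gamma(Y + a, N + b), per (2.7)
a = b = 0.1
N = 256
concussions = {2012: 171, 2013: 152, 2014: 123, 2015: 199}

post = {yr: (Y + a, N + b) for yr, Y in concussions.items()}
means = {yr: A / B for yr, (A, B) in post.items()}       # Gamma mean A/B
sds = {yr: A**0.5 / B for yr, (A, B) in post.items()}    # sqrt(A/B^2)
print(round(means[2014], 2), round(means[2015], 2))
```

For 2014 this reproduces the posterior mean 0.48 quoted in the text, and 0.78 for 2015; mean ± 2 sd roughly matches the quoted 95% intervals.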
2.1.3 Normal-normal model for a mean
Gaussian responses play a key role in applied and theoretical statistics. The t-test, ANalysis Of VAriance (ANOVA), and linear regression all assume Gaussian responses. We develop these methods in Chapter 4, but here we discuss conjugate priors for the simpler model Y_i | µ, σ² ~ Normal(µ, σ²), independently for i = 1, ..., n. This model has two parameters, the mean µ and the variance σ². In this subsection we assume that σ² is fixed and focus on estimating µ; in the following subsection we analyze σ² given µ, and Chapter 4 derives the joint posterior of both parameters.

Assuming σ² to be fixed, a conjugate prior for the unknown mean µ is µ ~ Normal(µ_0, σ²/m). The prior variance is proportional to σ² to express prior uncertainty on the scale of the data. The hyperparameter m > 0 controls the strength of the prior, with small m giving large prior variance and vice versa. Appendix A.3 shows that the posterior is

    µ | Y ~ Normal( w Ȳ + (1 − w) µ_0 , σ²/(n + m) ),    (2.10)

where w = n/(n + m) ∈ [0, 1] is the weight given to the sample mean Ȳ = (1/n) Σ_{i=1}^n Y_i. Letting m → 0 gives w → 1, so the posterior mean coincides with the sample mean as the prior variance increases. The posterior standard deviation is σ/√(n + m), which is less than the standard error of the sample mean, σ/√n. The prior with hyperparameter m reduces the posterior standard deviation by the same amount as adding an additional m observations, and this stabilizes a Bayesian analysis. For Gaussian data analyses such as ANOVA or regression the prior mean µ_0 is often set to zero. In the normal-normal model with µ_0 = 0 the posterior mean estimator is E(µ | Y) = w Ȳ. More generally, E(µ | Y) = w(Ȳ − µ_0) + µ_0. This is an example of a shrinkage estimator because the sample mean is shrunk towards the prior mean by the shrinkage factor w ∈ [0, 1]. As will be discussed in Chapter 4, shrinkage estimators have advantages, particularly in hard problems such as regression with many predictors and/or small sample size.

Blood alcohol concentration (BAC) example: The BAC level is the percentage of your blood that is concentrated with alcohol. The legal limit for operating a vehicle is BAC ≤ 0.08 in most US states. Let Y be the measured BAC and µ be your true BAC. Of course, the BAC test has error, and the error standard deviation for a sample near the legal limit has been established in (hypothetical) laboratory tests to be σ = 0.01, so that the likelihood of the data is Y | µ ~ Normal(µ, 0.01²). Your BAC is measured to be Y = 0.082, which is above the legal limit. Your defense is that you had two drinks, and that the BAC of someone your size after two drinks has been shown to follow a Normal(0.05, 0.02²) distribution depending on the person's metabolism and the timing of the drinks.

Figure 2.3 plots the prior µ ~ Normal(0.05, 0.02²), i.e., with m = 0.25. The prior probability that your BAC exceeds 0.08 is 0.067.

FIGURE 2.3
Posterior distributions for the blood alcohol content example. Plot of the prior and posterior PDF, with the prior (0.067) and posterior (0.311) probabilities of exceeding the legal limit of 0.08 shaded.

The posterior distribution with n = 1, Y = 0.082, σ = 0.01 and m = 0.25 is

    µ | Y ~ Normal(0.0756, 0.0089²).    (2.11)
Therefore, Prob(µ > 0.08 | Y) = 0.311, and so there is considerable uncertainty about whether your BAC exceeds the legal limit; in fact, the posterior odds that your BAC is below the legal limit,

    Prob(µ ≤ 0.08 | Y) / Prob(µ > 0.08 | Y),

are greater than two.
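The BAC posterior summaries can be reproduced from (2.10) with n = 1; a Python sketch using only the standard library (the normal CDF via math.erf):

```python
import math

def norm_cdf(x, mu, sd):
    # Standard-library normal CDF
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

Y, sigma, mu0, m, n = 0.082, 0.01, 0.05, 0.25, 1
w = n / (n + m)                          # weight on the data, 0.8
post_mean = w * Y + (1 - w) * mu0        # 0.0756
post_sd = sigma / math.sqrt(n + m)       # about 0.0089

prob_over = 1 - norm_cdf(0.08, post_mean, post_sd)
odds_under = (1 - prob_over) / prob_over
print(round(prob_over, 3), round(odds_under, 1))
```

This recovers the posterior exceedance probability 0.311 quoted in the text and posterior odds of roughly 2.2 in favor of being under the limit.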
2.1.4 Normal-inverse gamma model for a variance
Next we turn to estimating a Gaussian variance assuming the mean is fixed. As before, the sampling density is Y_i ~ Normal(µ, σ²), independently for i = 1, ..., n. With the mean fixed, the likelihood is

    f(Y | σ²) ∝ ∏_{i=1}^n (1/σ) exp( −(Y_i − µ)²/(2σ²) ) ∝ (σ²)^(−n/2) exp( −SSE/(2σ²) ),    (2.12)

where SSE = Σ_{i=1}^n (Y_i − µ)². The likelihood has σ² raised to a negative power and σ² in the denominator of the exponent. Of the distributions in Appendix A.1 with support [0, ∞), only the inverse gamma PDF has these properties. Taking the prior σ² ~ InvGamma(a, b) gives
    π(σ²) ∝ (σ²)^(−(a+1)) exp( −b/σ² ).    (2.13)
Combining the likelihood and prior gives the posterior

    p(σ² | Y) ∝ f(Y | σ²) π(σ²) ∝ (σ²)^(−(A+1)) exp( −B/σ² ),    (2.14)

where A = n/2 + a and B = SSE/2 + b are the updated parameters, and therefore

    σ² | Y ~ InvGamma(A, B).    (2.15)
n/22 + a > 1) is The posterior mean (if n/ E (σ 2 Y) =

SSE/2 SSE/2 + b S S E + + 2b . = n/22 + a 1 n/ n 1 + 2a 1
−
−
−
(2.16)
m/22 and b /22 for Therefore, if we take the hyperparameters to be a be a = = 1/2 + m/ and b = = / m and ,, then the posteriormean estimator is E(σ small m small and E( σ 2 Y) = (S S E +)/(n m)) and compared to the usual sample variance SSE/(n 1) the small 1 + m variance SSE/( values and values and m are added to the numerator and denominator, respectively. In this sense, the prior adds an additional m degrees of freedom for estimating the variance, which can stabilize the estimator if n is small. Conjug Con jugate ate pri prior or for a pre precis cision ion: We have introduced the Gaussian distribution as having two parameters, the mean and the variance. However, it can also be parameterized in terms of its mean and precision (inverse variance),

−
−
τ τ = 1/σ 2 . In particular, the JAGS package used in this book employs this µ, τ ) τ ) has PDF parameterization. In this parameterization, parameterization, Y µ, τ Normal( Normal(µ, τ 1/2 f (Y µ, τ τ )) = exp 2π
√

 ∼∼ − τ 2 (Y − − µ)2
.
(2.17)
This parameterization makes derivations and computations slightly easier, especially for the multivariate normal distribution, where using a precision matrix (inverse covariance matrix) avoids some matrix inversions. Not surprisingly, the conjugate prior for the precision is the gamma family. If Y_i ~ Normal(µ, τ), independently for i = 1, ..., n, then the likelihood is

    f(Y | τ) ∝ ∏_{i=1}^n τ^(1/2) exp( −(τ/2)(Y_i − µ)² ) ∝ τ^(n/2) exp( −τ SSE/2 ),    (2.18)
and the Gamma(a, b) prior is π(τ) ∝ τ^(a−1) exp(−τb). Combining the likelihood and prior gives

    p(τ | Y) ∝ f(Y | τ) π(τ) ∝ τ^(A−1) exp(−τB),    (2.19)
where A = n/2 + a and B = SSE/2 + b are the updated parameters. Therefore,

    τ | Y ~ Gamma(A, B).    (2.20)
The InvGamma(a, b) prior for the variance and the Gamma(a, b) prior for the precision give the exact same posterior distribution. That is, if we use the InvGamma(a, b) prior for the variance and then convert this to obtain the posterior of 1/σ², the results are identical to those from an analysis with a Gamma(a, b) prior for the precision. Throughout the book we use the mean-variance parameterization except for cases involving JAGS code, when we adopt its mean-precision parameterization.
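This equivalence is just a change of variables; a Python check that the Gamma(a, b) density of τ = 1/σ² matches the InvGamma(a, b) density of σ² transformed with the Jacobian 1/τ² (the values of a, b, and τ below are arbitrary test points):

```python
import math

def invgamma_pdf(x, a, b):
    # InvGamma(a, b) density: b^a / Gamma(a) * x^{-(a+1)} * exp(-b/x)
    return b**a / math.gamma(a) * x ** (-(a + 1)) * math.exp(-b / x)

def gamma_pdf(x, a, b):
    # Gamma(a, b) density with rate b: b^a / Gamma(a) * x^{a-1} * exp(-b*x)
    return b**a / math.gamma(a) * x ** (a - 1) * math.exp(-b * x)

# If sigma^2 ~ InvGamma(a, b) and tau = 1/sigma^2, change of variables gives
# p_tau(t) = p_sigma2(1/t) / t^2, which should equal the Gamma(a, b) density
a, b = 2.5, 1.3
tau = 0.7
lhs = gamma_pdf(tau, a, b)
rhs = invgamma_pdf(1 / tau, a, b) / tau**2
print(abs(lhs - rhs) < 1e-12)  # True
```

The same identity is why converting the InvGamma posterior for σ² yields exactly the Gamma posterior for τ.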
2.1.5 Natural conjugate priors
We have thus far catalogued a series of popular sampling distributions and the corresponding conjugate families of priors. Is there a natural way of constructing a class of conjugate priors given a specific sampling density f(y | θ)? It turns out that for many familiar choices of sampling densities (e.g., the exponential family) we can construct a class of conjugate priors, and priors constructed in this manner are called natural conjugate priors.

Let Y_i ~ f(y | θ), independently for i = 1, ..., n, and consider a class of priors defined by

    π(θ | y_1^0, ..., y_m^0, m) ∝ ∏_{j=1}^m f(y_j^0 | θ),    (2.21)

where the y_j^0 are some arbitrary fixed values in the support of the sampling distribution for j = 1, ..., m, and m ≥ 1 is a fixed integer. The pseudo-observations y_j^0 can be seen as the hyperparameters of the prior distribution, and such a prior is a proper distribution if there exists m such that ∫ ∏_{j=1}^m f(y_j^0 | θ) dθ < ∞.

To see that the prior defined in (2.21) is indeed conjugate, notice that

    p(θ | Y) ∝ π(θ | y_1^0, ..., y_m^0, m) ∏_{i=1}^n f(Y_i | θ) ∝ ∏_{j=1}^M f(y_j* | θ),    (2.22)

where M = m + n, y_j* = y_j^0 for j = 1, ..., m, and y_j* = Y_{j−m} for j = m + 1, ..., m + n. Thus, the posterior distribution belongs to the same class of distributions as in (2.21). Below we revisit some previous examples using this method of creating natural conjugate priors.
Bernoulli trials (Section 2.1.1): When Y | θ ∼ Bernoulli(θ), we have
From prior information to posterior inference
f(y | θ) = θ^y (1 − θ)^{1−y}, and so the natural conjugate prior with the first s_0 pseudo-observations equal to y_j^0 = 1 and the remaining m − s_0 pseudo-observations set to y_j^0 = 0 gives the prior π(θ | y_1^0, ..., y_m^0, m) ∝ θ^{s_0} (1 − θ)^{m−s_0}, that is, θ ∼ Beta(s_0 + 1, m − s_0 + 1). This beta prior is restricted to have integer-valued hyperparameters, but once we see the form we can relax the assumption that m and s_0 are integer-valued.

Poisson counts (Section 2.1.2): When Y | θ ∼ Poisson(θ), we have f(y | θ) ∝ θ^y e^{−θ}, and so the natural conjugate prior given by equation (2.21) is π(θ | y_1^0, ..., y_m^0, m) ∝ θ^{s_0} e^{−mθ}, where s_0 = ∑_{j=1}^m y_j^0. This naturally leads to the gamma prior θ ∼ Gamma(s_0 + 1, m). Therefore, we can view the prior as consisting of m pseudo-observations with sample rate s_0/m. Again, the restriction that m is an integer can be relaxed once it is revealed that the prior is from the gamma family.

Normal distribution with fixed variance (Section 2.1.3): Assuming Y | µ ∼ Normal(µ, σ^2) with σ^2 fixed, we have f(y | µ) ∝ exp{−(y − µ)^2/(2σ^2)}, and so the natural conjugate prior is π(µ | y_1^0, ..., y_m^0, m) ∝ exp{−∑_{j=1}^m (y_j^0 − µ)^2/(2σ^2)} ∝ exp{−m(µ − ȳ^0)^2/(2σ^2)}, where ȳ^0 = ∑_{j=1}^m y_j^0/m. This naturally leads to the prior µ ∼ Normal(ȳ^0, σ^2/m). Therefore, the prior can be viewed as consisting of m pseudo-observations with mean ȳ^0.
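The Bernoulli case above is easy to verify numerically. The following Python grid check (not from the book, whose code is in R; the values of m and s_0 are arbitrary) normalizes the unnormalized natural conjugate prior θ^{s_0}(1 − θ)^{m−s_0} and compares it with the Beta(s_0 + 1, m − s_0 + 1) density:

```python
import numpy as np
from scipy import stats

# Arbitrary pseudo-observation counts: s0 successes out of m.
m, s0 = 10, 3

theta = np.linspace(0.0005, 0.9995, 2000)
h = theta[1] - theta[0]

# Unnormalized natural conjugate prior from (2.21), normalized on the grid.
prior = theta**s0 * (1 - theta)**(m - s0)
prior /= prior.sum() * h

# It matches the Beta(s0 + 1, m - s0 + 1) density.
beta_pdf = stats.beta.pdf(theta, s0 + 1, m - s0 + 1)
print(np.max(np.abs(prior - beta_pdf)))
```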
This systematic way of obtaining a conjugate prior takes away the mystery of first guessing the form of the conjugate prior and then verifying its conjugacy. The procedure works well when faced with problems that do not have a familiar likelihood, and it works even when we have vector-valued parameters.
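The conjugacy claim in (2.22) can also be checked directly for Bernoulli data: updating the natural prior with n observations gives exactly the natural prior built from the M = m + n pooled pseudo-observations. A small Python check (the pseudo-observation counts and the data vector below are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical pseudo-observations (s0 ones out of m) and data.
m, s0 = 6, 2
Y = np.array([1, 0, 1, 1])
n, sy = len(Y), int(Y.sum())

theta = np.linspace(0.001, 0.999, 999)

# Posterior: the Beta(s0 + 1, m - s0 + 1) natural prior updated by the data.
posterior = stats.beta.pdf(theta, s0 + 1 + sy, m - s0 + 1 + n - sy)

# The natural prior built from the M = m + n pooled pseudo-observations,
# of which s0 + sy are ones -- the right-hand side of (2.22).
pooled = stats.beta.pdf(theta, (s0 + sy) + 1, (m + n) - (s0 + sy) + 1)

print(np.max(np.abs(posterior - pooled)))
```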
2.1.6 Normal-normal model for a mean vector
In Bayesian linear regression the parameter of interest is the vector of regression coefficients. This is discussed extensively in Chapter 4, but here we provide the conjugacy relationship that underlies the regression analysis. Although still a bit cumbersome, linear regression notation is far more concise using matrices. Say the n-vector Y is multivariate normal,

Y | β ∼ Normal(Xβ, Σ).   (2.23)

The mean of Y is decomposed as the known n × p matrix X times the unknown p-vector β, and Σ is the n × n covariance matrix. The prior for β is multivariate normal with mean µ and p × p covariance matrix Ω,

β ∼ Normal(µ, Ω).   (2.24)

As shown in Appendix A.3, the posterior of β is multivariate normal,

β | Y ∼ Normal(Σ_β (X^T Σ^{-1} Y + Ω^{-1} µ), Σ_β),   (2.25)

where Σ_β = (X^T Σ^{-1} X + Ω^{-1})^{-1}. In standard linear regression the errors are assumed to be independent and identically distributed and thus the covariance
Bayesian Statistical Methods
is proportional to the identity matrix, Σ = σ^2 I_n. In this case, if the prior is uninformative with Ω^{-1} ≈ 0, then the posterior mean reduces to the familiar least squares estimator (X^T X)^{-1} X^T Y, and the posterior covariance reduces to the covariance of the sampling distribution of the least squares estimator, σ^2 (X^T X)^{-1}.
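This limiting behavior is easy to confirm numerically. The Python sketch below (simulated data; the prior covariance Ω is simply set very large so that Ω^{-1} ≈ 0) computes the posterior mean from (2.25) and compares it with the least squares estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))
Y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

sigma2 = 1.0                       # Sigma = sigma2 * I_n
mu = np.zeros(p)
Omega = np.eye(p) * 1e6            # nearly flat prior, so Omega^{-1} ~ 0

# Posterior (2.25): Sigma_beta = (X^T Sigma^{-1} X + Omega^{-1})^{-1}.
Omega_inv = np.linalg.inv(Omega)
Sigma_beta = np.linalg.inv(X.T @ X / sigma2 + Omega_inv)
post_mean = Sigma_beta @ (X.T @ Y / sigma2 + Omega_inv @ mu)

# With a vague prior, this matches the least squares estimator.
ols = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.max(np.abs(post_mean - ols)))
```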
2.1.7 Normal-inverse Wishart model for a covariance matrix
Say Y_1, ..., Y_n are vectors of length p that are independently distributed as multivariate normal with known mean vectors µ_i and p × p unknown covariance matrix Σ. The conjugate prior for the covariance matrix is the inverse Wishart prior. The inverse Wishart family's support is the set of symmetric positive definite matrices (i.e., covariance matrices), and it reduces to the inverse gamma family if p = 1. The inverse Wishart prior with degrees of freedom ν > p − 1 and p × p symmetric and positive definite scale matrix R has PDF

π(Σ) ∝ |Σ|^{−(ν+p+1)/2} exp{−Trace(Σ^{-1} R)/2}   (2.26)

and mean E(Σ) = R/(ν − p − 1), assuming ν > p + 1. The prior concentrates around R/(ν − p − 1) as ν increases, and therefore a small ν, say ν = p + 1, gives the least informative prior. An interesting special case is when ν = p + 1 and R is a diagonal matrix. This induces a uniform prior on each off-diagonal element of the correlation matrix corresponding to the covariance matrix Σ. As shown in Appendix A.3, the posterior is

Σ | Y ∼ InvWishart(n + ν, ∑_{i=1}^n (Y_i − µ_i)(Y_i − µ_i)^T + R).   (2.27)
The posterior mean is [∑_{i=1}^n (Y_i − µ_i)(Y_i − µ_i)^T + R]/(n + ν − p − 1). For R ≈ 0 and ν ≈ p + 1, the posterior mean is approximately [∑_{i=1}^n (Y_i − µ_i)(Y_i − µ_i)^T]/n, which is the sample covariance matrix assuming the means µ_i are known and not replaced by the sample means.

Marathon example: Figure 2.4 plots the data for several of the top female runners in the 2016 Boston Marathon. Let Y_ij be the speed (minutes/mile) for runner i = 1, ..., n and mile j = 1, ..., p = 26. For this analysis, we have discarded all runners with missing data (for a missing-data analysis see Section 6.4), leaving n = 59 observations Y_i = (Y_i1, ..., Y_ip)^T. We analyze the covariance of the runners' data to uncover patterns and possibly strategy. For simplicity we conduct the analysis conditioning on µ_ij = Ȳ_j = ∑_{i=1}^n Y_ij/n, i.e., the sample mean for mile j. For the prior, we take ν = p + 1 and R = I_p/ν so that the elements of the correlation matrix have Uniform(−1, 1) priors. The code in Listing 2.1 generates S samples from the posterior of Σ and uses the samples to approximate the posterior mean. To avoid storage problems, rather than storing all S samples the code simply retains the running
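A quick numerical check of the posterior-mean approximation (a Python sketch on simulated data; R is set near zero purely to illustrate the limit and is not a proper prior choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
mu = np.zeros(p)
Y = rng.normal(size=(n, p))      # simulated data with known means mu

S = (Y - mu).T @ (Y - mu)        # sum of (Y_i - mu_i)(Y_i - mu_i)^T
nu = p + 1                       # least informative choice
R = np.eye(p) * 1e-6             # R ~ 0 (illustration of the limit only)

# Posterior mean of Sigma under (2.27): (S + R) / (n + nu - p - 1).
post_mean = (S + R) / (n + nu - p - 1)
sample_cov = S / n               # sample covariance with known means

print(np.max(np.abs(post_mean - sample_cov)))
```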
[Figure 2.4 appears here. Panels: (a) spaghetti plot of the marathon data (speed in minutes/mile by mile), (b) posterior mean covariance matrix, (c) posterior mean correlation matrix, (d) significant conditional correlations.]

FIGURE 2.4: Covariance analysis of the 2016 Boston Marathon data. Panel (a) plots the minutes/mile for each runner (the winner is in black), Panels (b) and (c) show the posterior mean of the covariance and correlation matrices, respectively, and Panel (d) is shaded gray (black) if the 95% (99%) credible interval for the elements of the precision matrix excludes zero.
mean of the samples. However, Monte Carlo sampling produces the full joint distribution of all elements of Σ. In particular, for each draw from the posterior, Listing 2.1 converts the simulated covariance matrix to the corresponding correlation matrix and computes the posterior mean of the correlation matrix. The estimated (posterior mean) variance (the diagonal elements of Figure 2.4b) increases with the mile as the top runners separate themselves from the pack. The correlation between speeds at different miles (Figure 2.4c) is high for all pairs of miles in the first half (miles 1–13) of the race as the runners maintain a fairly steady pace. The correlation between a runner's first-half and second-half mile times is low, and the strongest correlations in the second half of the race are between subsequent miles.

Normal-Wishart model for a precision matrix: Correlation is clearly a central concept in statistics, but it should not be confused with causation. For example, consider three variables that follow the distribution Z_1 ∼ Normal(0, 1), Z_2 | Z_1 ∼ Normal(Z_1, 1), and Z_3 | Z_1, Z_2 ∼ Normal(Z_2, 1). In this toy example, the first variable has a causal effect on the second, and the second has a causal effect on the third. The shared relationship with Z_2 results in a correlation of 0.57 between Z_1 and Z_3. However, if we condition on Z_2, then Z_1 and Z_3 are independent.

Statistical inference about the precision matrix (inverse covariance matrix) Ω = Σ^{-1} is a step closer to uncovering causality than inference about the correlation/covariance matrix. For a normal random variable Y = (Y_1, ..., Y_p)^T ∼ Normal(µ, Ω^{-1}), the (j, k) element of the precision matrix Ω measures the strength of the correlation between Y_j and Y_k after accounting for the effects of the other p − 2 elements of Y. We say that variables j and k are conditionally correlated if and only if the (j, k) element of Ω is nonzero. Therefore, association tests based on Ω rather than Σ eliminate spurious correlation (e.g., between Z_1 and Z_3) induced by lingering variables (e.g., Z_2). Assuming all variables relevant to the problem are included in the p variables under consideration, these conditional correlations have causal interpretations.

The Wishart distribution is the conjugate prior for a normal precision matrix. If Σ ∼ InvWishart(ν, R), then Ω = Σ^{-1} ∼ Wishart(ν, R^{-1}), and the Wishart(ν, R) distribution has
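The toy chain Z_1 → Z_2 → Z_3 is easy to check by simulation (a Python sketch, not from the book): the marginal correlation between Z_1 and Z_3 is 1/√3 ≈ 0.577, while the (1, 3) element of the precision matrix is zero.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000

# The causal chain from the text: Z1 -> Z2 -> Z3.
Z1 = rng.normal(size=N)
Z2 = Z1 + rng.normal(size=N)     # Z2 | Z1 ~ Normal(Z1, 1)
Z3 = Z2 + rng.normal(size=N)     # Z3 | Z2 ~ Normal(Z2, 1)

# Marginal correlation between Z1 and Z3 is 1/sqrt(3) ~ 0.577.
r13 = np.corrcoef(Z1, Z3)[0, 1]
print(r13)

# The (1,3) element of the precision matrix is (near) zero, reflecting
# the conditional independence of Z1 and Z3 given Z2.
Omega = np.linalg.inv(np.cov(np.vstack([Z1, Z2, Z3])))
print(Omega[0, 2])
```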
prior density

π(Ω) ∝ |Ω|^{(ν−p−1)/2} exp{−Trace(Ω R^{-1})/2}.   (2.28)

Given a sample Y_1, ..., Y_n ∼ Normal(µ_i, Ω^{-1}) and conditioning on the mean vectors µ_i, the posterior is

Ω | Y ∼ Wishart(n + ν, [∑_{i=1}^n (Y_i − µ_i)(Y_i − µ_i)^T + R^{-1}]^{-1}).   (2.29)
Marathon example: Listing 2.1 computes and Figure 2.4d plots the conditional correlations with credible sets that exclude zero for the Boston Marathon example. Many of the nonzero correlations in Figure 2.4c do not
Listing 2.1: Monte Carlo analysis of the Boston Marathon data.

# Hyperpriors
[Only the opening lines of the R listing, which set the hyperpriors nu and R, survive in this copy.]
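As a stand-in for the R listing, the following Python sketch (simulated data and a reduced dimension p chosen purely for illustration; the real analysis has n = 59 runners and p = 26 miles) implements the same Monte Carlo scheme described in the text: draw Σ from its inverse Wishart posterior, convert each draw to a correlation matrix, and retain only running means to avoid storing every sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical stand-in data for the marathon speeds.
n, p = 59, 5
Y = rng.normal(size=(n, p))
mu = Y.mean(axis=0)            # condition on the sample means, as in the text

# Hyperpriors as described in the text: nu = p + 1 and R = I_p / nu.
nu = p + 1
R = np.eye(p) / nu

# Posterior (2.27): Sigma | Y ~ InvWishart(n + nu, S + R).
S = (Y - mu).T @ (Y - mu)
post = stats.invwishart(df=n + nu, scale=S + R)

# Retain only running means rather than storing every draw.
n_draws = 2000
mean_cov = np.zeros((p, p))
mean_cor = np.zeros((p, p))
for _ in range(n_draws):
    Sigma = post.rvs(random_state=rng)
    d = np.sqrt(np.diag(Sigma))
    mean_cov += Sigma / n_draws
    mean_cor += Sigma / np.outer(d, d) / n_draws

print(np.round(np.diag(mean_cor), 3))
```

The same loop could also accumulate draws of the precision matrix Ω = Σ^{-1} and track quantiles of its off-diagonal elements to reproduce the credible-interval test behind Figure 2.4d.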