P1.T2.Quantitative Analysis FRM 2012 Study Notes – Vol. II
By David Harper, CFA FRM CIPM www.bionicturtle.com
Table of Contents

Stock, Chapter 2: Review of Probability ................................................ 2
Stock, Chapter 3: Review of Statistics ................................................ 28
Stock, Chapter 4: Linear Regression with One Regressor ................................ 50
Stock, Chapter 5: Single Regression: Hypothesis Tests and Confidence Intervals ........ 59
Stock, Chapter 6: Linear Regression with Multiple Regressors .......................... 63
Stock, Chapter 7: Hypothesis Tests and Confidence Intervals in Multiple Regression .... 68
Rachev, Menn, and Fabozzi, Chapter 2: Discrete Probability Distributions .............. 72
Rachev, Menn, and Fabozzi, Chapter 3: Continuous Probability Distributions ............ 76
Jorion, Chapter 12: Monte Carlo Methods ............................................... 87
Hull, Chapter 22: Estimating Volatilities and Correlations ............................ 98
Allen, Boudoukh, and Saunders, Chapter 2: Quantifying Volatility in VaR Models ....... 108
www.bionicturtle.com
FRM 2012 QUANTITATIVE ANALYSIS 1
Stock, Chapter 2: Review of Probability

In this chapter…

Define random variables, and distinguish between continuous and discrete random variables.
Define the probability of an event.
Define, calculate, and interpret the mean, standard deviation, and variance of a random variable.
Define, calculate, and interpret the skewness and kurtosis of a distribution.
Describe joint, marginal, and conditional probability functions.
Explain the difference between statistical independence and statistical dependence.
Calculate the mean and variance of sums of random variables.
Describe the key properties of the normal, standard normal, multivariate normal, Chi-squared, Student t, and F distributions.
Define and describe random sampling and what is meant by i.i.d.
Define, calculate, and interpret the mean and variance of the sample average.
Describe, interpret, and apply the Law of Large Numbers and the Central Limit Theorem.
Define random variables, and distinguish between continuous and discrete random variables.
We characterize (describe) a random variable with a probability distribution. The random variable can be discrete or continuous; and in either the discrete or continuous case, the probability can be local (PMF, PDF) or cumulative (CDF). A random variable is a variable whose value is determined by the outcome of an experiment (a.k.a., stochastic variable).

“A random variable is a numerical summary of a random outcome. The number of times your computer crashes while you are writing a term paper is random and takes on a numerical value, so it is a random variable.”—S&W

             Probability function (PMF/PDF)      Cumulative distribution function (CDF)
Discrete     Pr(X = 3)                           Pr(X ≤ 3)
Continuous   Pr(c1 ≤ Z ≤ c2) = Φ(c2) − Φ(c1)     Pr(Z ≤ c) = Φ(c)
Continuous random variable
A continuous random variable (X) has an infinite number of values within an interval:

P(a ≤ X ≤ b) = ∫[a to b] f(x) dx
Discrete random variable
A discrete random variable (X) assumes a value among a finite set including x1, x2, x3 and so on. The probability function is expressed by:

P(X = xk) = f(xk)
Notes on continuous versus discrete random variables
Discrete random variables can be counted. Continuous random variables must be measured.
Examples of a discrete random variable include: coin toss (head or tails, nothing in between); roll of the dice (1, 2, 3, 4, 5, 6); and “did the fund beat the benchmark?”(yes, no). In risk, common discrete random variables are default/no default (0/1) and loss frequency.
Examples of continuous random variables include: distance and time. A common example of a continuous variable, in risk, is loss severity.
Note the similarity between the summation (∑ ) under the discrete variable and the integral (∫) under the continuous variable. The summation (∑) of all discrete outcomes must equal one. Similarly, the integral (∫) captures the area under the continuous distribution function. The total area “under this curve,” from (-∞) to (∞), must equal one.
All four of the so-called sampling distributions—that each converge to the normal—are continuous: normal, student’s t, chi-square, and F distribution.
Summary

                      Discrete                              Continuous
Nature                Are counted; finite                   Are measured; infinite
Examples in finance   Default (1, 0); frequency of loss     Distance, time; severity of loss; asset returns
For example           Bernoulli (0/1); binomial (series     Normal; student’s t; chi-square; F distribution;
                      of i.i.d. Bernoullis); Poisson;       lognormal; exponential; gamma, beta;
                      logarithmic                           EVT distributions (GPD, GEV)
Define the probability of an event.
Probability: Classical or “a priori” definition
The probability of outcome (A) is given by:

P(A) = (Number of outcomes favorable to A) / (Total number of outcomes)
For example, consider a craps roll of two six-sided dice. What is the probability of rolling a seven; i.e., P[X=7]? There are six outcomes that generate a roll of seven: 1+6, 2+5, 3+4, 4+3, 5+2, and 6+1. Further, there are 36 total outcomes. Therefore, the probability is 6/36. In this case, the outcomes need to be mutually exclusive, equally likely, and “cumulatively exhaustive” (i.e., all possible outcomes included in total). A key property of a probability is that the sum of the probabilities for all (discrete) outcomes is 1.0.
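The craps arithmetic above can be checked by brute-force enumeration; a minimal sketch in Python (variable names are illustrative):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two six-sided dice
outcomes = list(product(range(1, 7), repeat=2))

# Outcomes favorable to a total of seven: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)
favorable = [o for o in outcomes if sum(o) == 7]

p_seven = Fraction(len(favorable), len(outcomes))  # 6/36 = 1/6
```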
Probability: Relative frequency or empirical definition Relative frequency is based on an actual number of historical observations (or Monte Carlo simulations). For example, here is a simulation (produced in Excel) of one hundred (100) rolls of a single six-sided die:
Empirical Distribution

Roll      1     2     3     4     5     6    Total
Freq.    11    17    18    21    18    15      100
%       11%   17%   18%   21%   18%   15%     100%
Note the difference between an a priori probability and an empirical probability:
The a priori (classical) probability of rolling a three (3) is 1/6,
But the empirical frequency, based on this sample, is 18%. If we generate another sample, we will produce a different empirical frequency.
This relates also to sampling variation. The a priori probability is based on population properties; in this case, the a priori probability of rolling any number is clearly 1/6. However, a sample of 100 trials will exhibit sampling variation: the number of threes (3s) rolled above varies from the parametric probability of 1/6. We do not expect the sample to produce exactly 1/6 for each outcome.
Define, calculate, and interpret the mean, standard deviation, and variance of a random variable. If we can characterize a random variable (e.g., if we know all outcomes and that each outcome is equally likely—as is the case when you roll a single die)—the expectation of the random variable is often called the mean or arithmetic mean.
Mean (expected value)
Expected value is the weighted average of possible values. In the case of a discrete random variable, expected value is given by:

E(Y) = y1·p1 + y2·p2 + … + yk·pk = Σ(i=1 to k) yi·pi
In the case of a continuous random variable, expected value is given by:

E(X) = ∫ x·f(x) dx

Variance
Variance and standard deviation are the second-moment measures of dispersion. The variance of a discrete random variable Y is given by:

σ²Y = variance(Y) = E[(Y − μY)²] = Σ(i=1 to k) (yi − μY)²·pi
Variance is also expressed as the difference between the expected value of Y² and the square of the expected value of Y. This is the more useful variance formula:

σ²Y = E[(Y − μY)²] = E(Y²) − [E(Y)]²

Please memorize the variance formula above: it comes in handy! For example, if the probability of loan default (PD) is a Bernoulli trial, what is the variance of PD? We can solve with E[PD²] − (E[PD])². As E[PD²] = p and E[PD] = p, E[PD²] − (E[PD])² = p − p² = p(1 − p).
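A quick numeric check of this identity for a Bernoulli variable (the 4% PD below is an arbitrary illustrative value):

```python
# Variance of a Bernoulli via the shortcut E[X^2] - (E[X])^2
p = 0.04                            # hypothetical probability of default
e_x = p * 1 + (1 - p) * 0           # E[PD] = p
e_x2 = p * 1**2 + (1 - p) * 0**2    # E[PD^2] = p, because 1^2 = 1 and 0^2 = 0
variance = e_x2 - e_x**2            # p - p^2 = p(1 - p)
```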
Example: Variance of a single six-sided die
For example, what is the variance of a single six-sided die? First, we need to solve for the expected value of X-squared, E[X²]. This is given by:

E[X²] = (1/6)(1²) + (1/6)(2²) + (1/6)(3²) + (1/6)(4²) + (1/6)(5²) + (1/6)(6²) = 91/6

Then, we need to square the expected value of X, [E(X)]². The expected value of a single six-sided die is 3.5 (the average outcome). So, the variance of a single six-sided die is given by:

Variance(X) = E(X²) − [E(X)]² = 91/6 − (3.5)² ≈ 2.92
[Table: the same derivation of the variance of a single six-sided die (which has a uniform distribution) in tabular format]
What is the variance of the total of two six-sided dice cast together? It is simply the Variance(X) plus the Variance(Y), or about 5.83. The reason we can simply add them together is that they are independent random variables.
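The two results above (single die ≈ 2.92; two independent dice ≈ 5.83) can be reproduced directly:

```python
faces = range(1, 7)
p = 1 / 6                              # each face equally likely

e_x = sum(x * p for x in faces)        # E[X] = 3.5
e_x2 = sum(x**2 * p for x in faces)    # E[X^2] = 91/6
var_one_die = e_x2 - e_x**2            # about 2.92
var_two_dice = 2 * var_one_die         # independent variances add: about 5.83
```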
Sample Variance: The unbiased estimate of the sample variance is given by:

s²x = [1/(k − 1)] · Σ(i=1 to k) (yi − Ȳ)²
Properties of variance

1. σ²(constant) = 0
2a. σ²(X + Y) = σ²X + σ²Y, only if independent
2b. σ²(X − Y) = σ²X + σ²Y, only if independent
3. σ²(X + b) = σ²X
4. σ²(aX) = a²·σ²X
5. σ²(aX + b) = a²·σ²X
6. σ²(aX + bY) = a²·σ²X + b²·σ²Y, only if independent
7. σ²X = E(X²) − [E(X)]²
Standard deviation: Standard deviation is given by:

σY = √var(Y) = √E[(Y − μY)²] = √[Σ (yi − μY)²·pi]

As variance = standard deviation², standard deviation = √variance.

Sample Standard Deviation: The unbiased estimate of the sample standard deviation is given by:

sX = √{[1/(k − 1)] · Σ(i=1 to k) (yi − Ȳ)²}
This is merely the square root of the sample variance. This formula is important because this is the technically precise way to calculate volatility.
Define, calculate, and interpret the skewness and kurtosis of a distribution.
Skewness (asymmetry)
Skewness refers to whether a distribution is symmetrical. An asymmetrical distribution is skewed, either positively (to the right) or negatively (to the left). The measure of “relative skewness” is given by the equation below, where zero indicates symmetry (no skewness):

Skewness = E[(X − μ)³] / σ³
For example, the gamma distribution has positive skew (skew > 0):
[Figure: Gamma distribution PDFs for (alpha=1, beta=1), (alpha=2, beta=0.5), and (alpha=4, beta=0.25), each exhibiting positive (right) skew]
Skewness is a measure of asymmetry If a distribution is symmetrical, mean = median = mode. If a distribution has positive skew, the mean > median > mode. If a distribution has negative skew, the mean < median < mode.
Kurtosis Kurtosis measures the degree of “peakedness” of the distribution, and consequently of “heaviness of the tails.” A value of three (3) indicates normal peakedness. The normal distribution has kurtosis of 3, such that “excess kurtosis” equals (kurtosis – 3).
Kurtosis = E[(X − μ)⁴] / σ⁴
Note that technically skew and kurtosis are not, respectively, equal to the third and fourth moments; rather they are functions of the third and fourth moments.
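As a sketch, the moment-based definitions above can be applied to a simulated right-skewed, heavy-tailed sample (an exponential, whose true skew is 2 and kurtosis is 9; the seed and sample size are arbitrary):

```python
import random
import statistics

random.seed(42)
data = [random.expovariate(1.0) for _ in range(100_000)]  # right-skewed sample

mu = statistics.fmean(data)
sd = statistics.pstdev(data)

# Skewness = E[(X - mu)^3] / sigma^3; kurtosis = E[(X - mu)^4] / sigma^4
skew = sum((x - mu) ** 3 for x in data) / len(data) / sd ** 3
kurt = sum((x - mu) ** 4 for x in data) / len(data) / sd ** 4
# skew is positive (longer right tail); kurt exceeds 3 (heavy tails, leptokurtic)
```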
A normal distribution has relative skewness of zero and kurtosis of three (or, the same idea put another way: excess kurtosis of zero). Relative skewness > 0 indicates positive skewness (a longer right tail) and relative skewness < 0 indicates negative skewness (a longer left tail). Kurtosis greater than three (> 3), which is the same thing as saying “excess kurtosis > 0,” indicates high peaks and fat tails (leptokurtic). Kurtosis less than three (< 3), or excess kurtosis < 0, indicates lower peaks and lighter tails (platykurtic). Financial asset returns are typically considered leptokurtic (i.e., heavy- or fat-tailed). For example, the logistic distribution exhibits leptokurtosis (heavy tails; kurtosis > 3.0):
[Figure: Logistic distribution PDFs for (alpha=0, beta=1), (alpha=2, beta=1), and (alpha=0, beta=3) versus N(0,1), illustrating heavy tails (excess kurtosis > 0)]
Univariate versus multivariate probability density functions
A single-variable (univariate) probability distribution is concerned with only a single random variable; e.g., roll of a die, default of a single obligor. A multivariate probability density function concerns the outcome of an experiment with more than one random variable. This includes, in the simplest case, two variables (i.e., a bivariate distribution).
             Density                       Cumulative
Univariate   f(x) = P(X = x)               F(x) = P(X ≤ x)
Bivariate    f(x,y) = P(X = x, Y = y)      F(x,y) = P(X ≤ x, Y ≤ y)
Describe joint, marginal, and conditional probability functions. Stock & Watson illustrate with two variables:
The age of the computer (A), a Bernoulli such that the computer is old (0) or new (1)
The number of times the computer crashes (M)
Marginal probability functions
A marginal (or unconditional) probability is the simple case: it is the probability that does not depend on a prior event or prior information. The marginal probability is also called the unconditional probability. In the following table, please note that ten joint outcomes are possible because the age variable (A) has two outcomes and the “number of crashes” variable (M) has five outcomes. Each of the ten outcomes is mutually exclusive and the sum of their probabilities is 1.0 or 100%. For example, the probability that a new computer crashes once is 0.035 or 3.5%. The marginal (unconditional) probability that a computer is new (A = 1) is the sum of joint probabilities in the second row:

Pr(Y = y) = Σ(i=1 to l) Pr(X = xi, Y = y)

Pr(A = 1) = 0.50

             M=0     M=1     M=2     M=3     M=4     Total
Old (A=0)    0.35    0.065   0.05    0.025   0.01    0.50
New (A=1)    0.45    0.035   0.01    0.005   0.00    0.50
Total        0.80    0.100   0.06    0.030   0.01    1.00
“The marginal probability distribution of a random variable Y is just another name for its probability distribution. This term distinguishes the distribution of Y alone (marginal distribution) from the joint distribution of Y and another random variable. The marginal distribution of Y can be computed from the joint distribution of X and Y by adding up the probabilities of all possible outcomes for which Y takes on a specified value”—S&W
Joint probability functions
The joint probability is the probability that the random variables (in this case, both random variables) take on certain values simultaneously.

Pr(X = x, Y = y)

For example, Pr(A = 0, M = 0) = 0.35:

             M=0     M=1     M=2     M=3     M=4     Total
Old (A=0)    0.35    0.065   0.05    0.025   0.01    0.50
New (A=1)    0.45    0.035   0.01    0.005   0.00    0.50
Total        0.80    0.100   0.06    0.030   0.01    1.00
“The joint probability distribution of two discrete random variables, say X and Y, is the probability that the random variables simultaneously take on certain values, say x and y. The probabilities of all possible ( x, y) combinations sum to 1. The joint probability distribution can be written as the function Pr(X = x, Y = y).” —S&W
Conditional probability functions
Conditional probability is the probability of an outcome given (conditional on) another outcome:

Pr(Y = y | X = x) = Pr(X = x, Y = y) / Pr(X = x)

For example, Pr(M = 0 | A = 0) = 0.35 / 0.50 = 0.70:

             M=0     M=1     M=2     M=3     M=4     Total
Old (A=0)    0.35    0.065   0.05    0.025   0.01    0.50
New (A=1)    0.45    0.035   0.01    0.005   0.00    0.50
Total        0.80    0.100   0.06    0.030   0.01    1.00
“The distribution of a random variable Y conditional on another random variable X taking on a specific value is called the conditional distribution of Y given X. The conditional probability that Y takes on the value y when X takes on the value x is written: Pr(Y = y | X = x).” –S&W
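The joint/marginal/conditional mechanics can be sketched in Python, using the probabilities of the Stock & Watson computer-crash table above:

```python
# Joint probabilities Pr(A = a, M = m); rows: A=0 (old), A=1 (new); columns: M = 0..4
joint = {
    0: [0.35, 0.065, 0.05, 0.025, 0.01],
    1: [0.45, 0.035, 0.01, 0.005, 0.00],
}

# Marginal: sum the joint probabilities over the other variable
pr_a0 = sum(joint[0])                 # Pr(A = 0) = 0.50
pr_m0 = joint[0][0] + joint[1][0]     # Pr(M = 0) = 0.80

# Conditional = joint / marginal
pr_m0_given_a0 = joint[0][0] / pr_a0  # 0.35 / 0.50 = 0.70
```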
Conditional probability = Joint Probability / Marginal Probability
What is the probability of B occurring, given that A has already occurred?

P(B | A) = P(A ∩ B) / P(A), since P(A ∩ B) = P(A)·P(B | A)
Conditional and unconditional expectation
An unconditional expectation is the expected value of the variable without any restrictions (or lacking any prior information). A conditional expectation is an expected value for the variable conditional on prior information or some restriction (e.g., the value of a correlated variable). The conditional expectation of Y, conditional on X = x, is given by:

E(Y | X = x)

The conditional variance of Y, conditional on X = x, is given by:

var(Y | X = x)

The two-variable regression is an important conditional expectation. In this case, we say the expected Y is conditional on X:

E(Y | Xi) = B1 + B2·Xi
For Example: Two Stocks (S) and (T)
For example, consider two stocks. Assume that both Stock (S) and Stock (T) can each only reach three price levels: Stock (S) can achieve $10, $15, or $20; Stock (T) can achieve $15, $20, or $30. Historically, assume we witnessed 26 outcomes, distributed as follows:
T=$15 T=$20 T=$30 Total
S= $10 0 3 3 6
S= $15 2 4 6 12
S=$20 2 3 3 8
Total 4 10 12 26
What is the joint probability? A joint probability is the probability that both random variables will have a certain outcome. Here the joint probability P(S=$20, T=$30) = 3/26.
What is the marginal (unconditional) probability?
The unconditional probability of the outcome where S=$20 is 8/26 because there are eight events out of 26 total events that produce S=$20: P(S=20) = 8/26.
What is the conditional probability?
Instead we can ask a conditional probability question: “What is the probability that S=$20 given that T=$20?” The probability that S=$20 conditional on the knowledge that T=$20 is 3/10 because, among the 10 events that produce T=$20, three are S=$20.

P(S=$20 | T=$20) = P(S=$20, T=$20) / P(T=$20) = 3/10
In summary:
The unconditional probability P(S=20) = 8/26
The conditional probability P(S=20 | T=20) = 3/10
The joint probability P(S=20,T=30) = 3/26
Explain the difference between statistical independence and statistical dependence.
X and Y are independent if the conditional distribution of Y given X equals the marginal distribution of Y; i.e., independence implies:

Pr(Y = y | X = x) = Pr(X = x, Y = y) / Pr(X = x) = Pr(Y = y)

The most useful test of statistical independence is given by:

Pr(X = x, Y = y) = Pr(X = x) · Pr(Y = y)

X and Y are independent if their joint distribution is equal to the product of their marginal distributions. Statistical independence is when the value taken by one variable has no effect on the value taken by the other variable. If the variables are independent, their joint probability will equal the product of their marginal probabilities. If they are not independent, they are dependent.
For example, when rolling two dice, the second die is independent of the first. This independence implies that the probability of rolling double sixes is equal to the product of P(rolling one six) and P(rolling one six). If two dice are independent, then P(first roll = 6, second roll = 6) = P(rolling a six) × P(rolling a six). And, indeed: 1/36 = (1/6) × (1/6).
Calculate the mean and variance of sums of random variables.
Mean

E(a + bX + cY) = a + b·μX + c·μY

Variance
In regard to the sum of correlated variables, the variance of correlated variables is given by the following (note the two expressions; the second merely substitutes the covariance with the product of correlation and volatilities. Please make sure you are comfortable with this substitution):

σ²(X+Y) = σ²X + σ²Y + 2·cov(X,Y), and given that cov(X,Y) = ρXY·σX·σY:
σ²(X+Y) = σ²X + σ²Y + 2·ρXY·σX·σY

In regard to the difference between correlated variables, the variance is given by:

σ²(X−Y) = σ²X + σ²Y − 2·cov(X,Y) = σ²X + σ²Y − 2·ρXY·σX·σY

Variance with constants (a) and (b)
Variance of a sum includes the covariance (X,Y):

variance(aX + bY) = a²·σ²X + 2ab·cov(X,Y) + b²·σ²Y

If X and Y are independent, the covariance term drops out and the variances simply add:

variance(X + Y) = σ²X + σ²Y
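The sum and difference formulas can be sketched numerically; the volatilities and correlation below are arbitrary illustrative values:

```python
sigma_x, sigma_y, rho = 0.20, 0.10, 0.30   # hypothetical volatilities and correlation

cov_xy = rho * sigma_x * sigma_y                  # covariance = rho * sigma_x * sigma_y
var_sum = sigma_x**2 + sigma_y**2 + 2 * cov_xy    # variance of X + Y
var_diff = sigma_x**2 + sigma_y**2 - 2 * cov_xy   # variance of X - Y

# With rho = 0 (independence), both reduce to sigma_x^2 + sigma_y^2
```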
Describe the key properties of the normal, standard normal, multivariate normal, Chi-squared, Student t, and F distributions.
Normal distribution

f(x) = [1 / (σ√(2π))] · e^(−(x − μ)² / (2σ²))

[Figure: Standard normal PDF plotted from −4.0 to +4.0]
Key properties of the normal:
Symmetrical around mean; skew = 0
Parsimony: Only requires (is fully described by) two parameters: mean and variance
Summation stability: a linear combination (function) of two normally distributed random variables is itself normally distributed
Kurtosis = 3 (excess kurtosis = 0)
The normal distribution is commonplace for at least three reasons:
The central limit theorem (CLT) says that the sampling distribution of sample means tends to be normal (i.e., converges toward a normally shaped distribution) regardless of the shape of the underlying distribution; this explains much of the “popularity” of the normal distribution.
The normal is economical (elegant) because it only requires two parameters (mean and variance). The standard normal is even more economical: it requires no parameters.
The normal is tractable: it is easy to manipulate (especially in regard to closed-form equations like the Black-Scholes)
Standard normal distribution
A normal distribution is fully specified by two parameters, mean and variance (or standard deviation). We can transform a normal into a unit or standardized variable:

Z = (X − μ) / σ

The standard normal has mean = 0 and variance = 1: no parameters required! This unit or standardized variable is normally distributed with zero mean and variance of one (1.0). Its standard deviation is also one (variance = 1.0 and standard deviation = 1.0). This is written as: variable Z is approximately (“asymptotically”) normally distributed: Z ~ N(0,1).
Standard normal distribution: Critical Z values: Key locations on the normal distribution are noted below. In the FRM curriculum, the choice of one-tailed 5% significance and 1% significance (i.e., 95% and 99% confidence) is common, so please pay particular attention to the yellow highlights:
Critical z value     Two-sided confidence     One-sided significance
1.00                 ~ 68%                    ~ 15.87%
1.645 (~1.65)        ~ 90%                    ~ 5.0%
1.96                 ~ 95%                    ~ 2.5%
2.327 (~2.33)        ~ 98%                    ~ 1.0%
2.58                 ~ 99%                    ~ 0.5%
Memorize two common critical values: 1.65 and 2.33. These correspond to confidence levels, respectively, of 95% and 99% for a one-tailed test. For VAR, the one-tailed test is relevant because we are concerned only about losses (left-tail) not gains (right-tail).
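These critical values can be recovered from the inverse CDF; Python’s standard library provides one via statistics.NormalDist:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal N(0,1)

z_95_one_tailed = z.inv_cdf(0.95)    # about 1.645
z_99_one_tailed = z.inv_cdf(0.99)    # about 2.326
z_95_two_tailed = z.inv_cdf(0.975)   # about 1.96
```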
Multivariate normal distributions
The normal can be generalized to a joint distribution of normals; e.g., the bivariate normal distribution. Properties include:

1. If X and Y are bivariate normal, then aX + bY is normal; any linear combination is normal.
2. If a set of variables has a multivariate normal distribution, the marginal distribution of each is normal.
3. If variables with a multivariate normal distribution have covariances that equal zero, then the variables are independent.
Chi-squared distribution

[Figure: Chi-square PDFs for k = 2, k = 5, and k = 29 degrees of freedom]
For the chi-square distribution, we observe a sample variance and compare to a hypothetical population variance. This variable has a chi-square distribution with (n − 1) d.f.:

(n − 1)·s² / σ² ~ χ²(n − 1)

The chi-squared distribution is the sum of m squared independent standard normal random variables. Properties of the chi-squared distribution include:
Nonnegative (>0)
Skewed right, but as d.f. increases it approaches normal
Expected value (mean) = k, where k = degrees of freedom
Variance = 2k, where k = degrees of freedom
The sum of two independent chi-square variables is also a chi-squared variable
Chi-squared distribution: For example (Google’s stock return variance)
Google’s sample variance over 30 days is 0.0263%. We can test the hypothesis that the population variance (Google’s “true” variance) is 0.02%. The chi-square variable = 38.14:

Sample variance (30 days)      0.0263%
Degrees of freedom (d.f.)      29
Population variance?           0.0200%
Chi-square variable            38.14      = 0.0263% / 0.02% × 29
=CHIDIST() = p value           11.93%
Area under curve (1 − p)       88.07%     @ 29 d.f., Pr[.1] = 39.0875
With 29 degrees of freedom (d.f.), 38.14 corresponds to roughly 10% (i.e., it lies just to the left of the 0.10 critical value, 39.0875, on the lookup table). Therefore, we can reject the null with only 88% confidence; i.e., we are likely to accept that the true variance is 0.02%.
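The test statistic can be reproduced as follows (the critical value 39.0875 is taken from the chi-square lookup table, as in the text):

```python
s2 = 0.000263        # sample variance, 0.0263%
sigma0_2 = 0.000200  # hypothesized population variance, 0.02%
n = 30               # sample size; d.f. = n - 1 = 29

chi_sq = (n - 1) * s2 / sigma0_2   # about 38.14

crit_10pct = 39.0875                    # critical chi-square at 10%, 29 d.f. (lookup)
reject_at_10pct = chi_sq > crit_10pct   # False: cannot reject at 10% significance
```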
Student’s t distribution

[Figure: Student’s t PDFs for 2 and 20 degrees of freedom versus the normal]
The student’s t distribution (t distribution) is among the most commonly used distributions. As the degrees of freedom (d.f.) increase, the t-distribution converges with the normal distribution. It is similar to the normal, except it exhibits slightly heavier tails (the lower the d.f., the heavier the tails). The student’s t variable is given by:

t = (X̄ − μ) / (SX / √n)
Properties of the t-distribution:
Like the normal, it is symmetrical
Like the standard normal, it has mean of zero (mean = 0)
Its variance = k/(k-2) where k = degrees of freedom. Note, as k increases, the variance approaches 1.0. Therefore, as k increases, the t-distribution approximates the standard normal distribution.
Always slightly heavy-tailed (kurtosis > 3.0) but converges to normal. However, the student’s t is not considered a really heavy-tailed distribution.

In practice, the student’s t is the most commonly used distribution. When we test the significance of regression coefficients, the central limit theorem (CLT) justifies the normal distribution (because the coefficients are effectively sample means). But we rarely know the population variance, such that the student’s t is the appropriate distribution. When the d.f. is large (e.g., a sample over ~30), as the student’s t approximates the normal, we can use the normal as a proxy. In the assigned Stock & Watson, the sample sizes are large (e.g., 420 students), so they tend to use the normal.
Student’s t distribution: For example
For example, Google’s average periodic return over a ten-day sample period was +0.02% with sample standard deviation of 1.54%. Here are the statistics:

Sample mean      0.02%
Sample std dev   1.54%
Days (n)         10
Confidence       95%
Significance     5%
Critical t       2.262
Lower limit      −1.08%
Upper limit      1.12%
The sample mean is a random variable. If we know the population variance, we assume the sample mean is normally distributed. But if we do not know the population variance (typically the case!), the sample mean is a random variable following a student’s t distribution. In the Google example above, we can use this to construct a confidence (random) interval:
X̄ ± t · (s / √n)
We need the critical (lookup) t value. The critical t value is a function of:
Degrees of freedom (d.f.); e.g., 10-1 =9 in this example, and
Significance; e.g., 1-95% confidence = 5% in this example
The 95% confidence interval can be computed. The upper limit is given by:

X̄ + 2.262 × (1.54% / √10) = 1.12%

And the lower limit is given by:

X̄ − 2.262 × (1.54% / √10) = −1.08%
Please make sure you can take a sample standard deviation, compute the critical t value and construct the confidence interval.
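The interval above can be reproduced directly (the critical t of 2.262 for 9 d.f. comes from a lookup table, as in the text):

```python
import math

mean = 0.0002    # sample mean, +0.02%
s = 0.0154       # sample standard deviation, 1.54%
n = 10           # ten daily returns; d.f. = 9
t_crit = 2.262   # two-tailed 95% critical t at 9 d.f. (lookup)

half_width = t_crit * s / math.sqrt(n)
lower = mean - half_width   # about -1.08%
upper = mean + half_width   # about +1.12%
```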
Both the normal (Z) and student’s t (t) distributions characterize the sampling distribution of the sample mean. The difference is that the normal is used when we know the population variance; the student’s t is used when we must rely on the sample variance. In practice, we don’t know the population variance, so the student’s t is typically appropriate.

Z = (X̄ − μ) / (σX / √n)

t = (X̄ − μ) / (SX / √n)
F-Distribution

[Figure: F distribution PDFs for (9, 9) and (19, 19) degrees of freedom]
The F distribution is also called the variance ratio distribution (it may be helpful to think of it as the variance ratio!). The F ratio is the ratio of sample variances, with the greater sample variance in the numerator:

F = s²x / s²y
Properties of F distribution:
Nonnegative (>0)
Skewed right
Like the chi-square distribution, as d.f. increases, approaches normal
The square of a t-distributed r.v. with k d.f. has an F distribution with (1, k) d.f.
As n increases, m × F(m, n) approaches the chi-square distribution with m d.f.: m·F(m, n) → χ²(m)
F-Distribution: For example
For example, based on two 10-day samples, we calculated the sample variance of Google and Yahoo. Google’s variance was 0.0237% and Yahoo’s was 0.0084%. The F ratio, therefore, is 2.82 (divide the higher variance by the lower variance; the F ratio must be greater than, or equal to, 1.0).

                GOOG      YHOO
=VAR()          0.0237%   0.0084%
=COUNT()        10        10
F ratio         2.82
Confidence      90%
Significance    10%
=FINV()         2.44
At 10% significance, with (10-1) and (10-1) degrees of freedom, the critical F value is 2.44. Because our F ratio of 2.82 is greater than (>) 2.44, we reject the null (i.e., that the population variances are the same). We conclude the population variances are different.
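A minimal sketch of the same variance-ratio test (the critical F of 2.44 for (9, 9) d.f. is from the lookup table):

```python
var_goog = 0.000237   # Google sample variance, 0.0237% (n = 10)
var_yhoo = 0.000084   # Yahoo sample variance, 0.0084% (n = 10)

# The larger sample variance goes in the numerator, so F >= 1
f_ratio = max(var_goog, var_yhoo) / min(var_goog, var_yhoo)   # about 2.82

f_crit = 2.44                    # critical F at 10% significance, (9, 9) d.f. (lookup)
reject_null = f_ratio > f_crit   # True: conclude the population variances differ
```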
Moments of a distribution
The k-th moment about the mean (μ) is given by:

k-th moment = Σ(i=1 to n) (xi − μ)^k / n

In this way, the difference of each data point from the mean is raised to a power (k=1, k=2, k=3, and k=4). These are the four moments of the distribution:
If k=1, refers to the first moment about zero: the mean.
If k=2, refers to the second moment about the mean: the variance.
If k=3, refers to the third moment about the mean: skewness
If k=4, refers to the fourth moment about the mean: tail density and peakedness.
Define and describe random sampling and what is meant by i.i.d.
A random sample is a sample of random variables that are independent and identically distributed (i.i.d.):

Independent: not (auto-)correlated
Identical: same mean, same variance (homoskedastic)

Independent and identically distributed (i.i.d.) variables:

Each random variable has the same (identical) probability distribution (PDF/PMF, CDF)
Each random variable is drawn independently of the others; no serial- or autocorrelation
The concept of independent and identically distributed (i.i.d.) variables is a key assumption we often encounter: to scale volatility by the square root of time requires i.i.d. returns. If returns are not i.i.d., then scaling volatility by the square root of time will give an incorrect answer.
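A minimal sketch of square-root-of-time scaling under the i.i.d. assumption (the 1.26% daily volatility and 250-day year are illustrative values):

```python
import math

daily_vol = 0.0126    # hypothetical 1.26% daily volatility
trading_days = 250    # assumed trading days per year

# Under i.i.d. returns, variance scales linearly with time,
# so volatility scales with the square root of time
annual_vol = daily_vol * math.sqrt(trading_days)   # about 19.9%
```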
Define, calculate, and interpret the mean and variance of the sample average.
The sample mean is given by:

E(Ȳ) = (1/n) Σ(i=1 to n) E(Yi) = μY

The variance of the sample mean is given by:

variance(Ȳ) = σ²Y / n

Std Dev(Ȳ) = σȲ = σY / √n
We expect the sample mean to equal the population mean
The sample mean is denoted by Ȳ. The expected value of the sample mean is, as you might expect, the population mean:

E(Ȳ) = μY

This formula says, “we expect the average of our sample will equal the average of the population” (the over-bar signifies sample; Greek mu signifies the population mean).
Sampling distribution of the sample mean
If either: (i) the population is infinite and sampling is random, or (ii) the population is finite and sampling is with replacement, then the variance of the sampling distribution of means is:

E[(Ȳ − μY)²] = σ²Ȳ = σ²Y / n

This says, “The variance of the sample mean is equal to the population variance divided by the sample size.” For example, the (population) variance of a single six-sided die is 2.92. If we roll three dice (i.e., sampling “with replacement”), then the variance of the sampling distribution = 2.92 / 3 = 0.97. If the population is of size (N), the sample size n ≤ N, and sampling is conducted “without replacement,” then the variance of the sampling distribution of means is given by:

σ²Ȳ = (σ²Y / n) · (N − n) / (N − 1)
Standard error is the standard deviation of the sample mean
The standard error is the standard deviation of the sampling distribution of the estimator, and the sampling distribution of an estimator is a probability (frequency) distribution of the estimator (i.e., a distribution of the set of values of the estimator obtained from all possible same-size samples from a given population). For a sample mean (per the central limit theorem!), the variance of the estimator is the population variance divided by the sample size. The standard error is the square root of this variance; the standard error is a standard deviation:

se = √(σ²Y / n) = σY / √n
If the population is distributed with mean μ and variance σ² but the distribution is not a normal distribution, then the standardized variable given by Z below is asymptotically normal; i.e., as (n) approaches infinity (∞) the distribution becomes normal:

$$Z = \frac{\bar{Y}-\mu_Y}{\sigma_{\bar{Y}}} \sim N(0,1) \qquad se = \sigma_{\bar{Y}} = \frac{\sigma_Y}{\sqrt{n}}$$

The denominator is the standard error, which is simply the name for the standard deviation of the sampling distribution.
Describe, interpret, and apply the Law of Large Numbers and the Central Limit Theorem. In brief:
Law of large numbers: under general conditions, the sample mean (Ӯ) will be near the population mean.
Central limit theorem (CLT): As the sample size increases, regardless of the underlying distribution, the sampling distribution of the sample mean approximates (tends toward) the normal.
Central limit theorem (CLT)
We assume a population with a known mean and finite variance, but not necessarily a normal distribution (we may not even know the distribution!). Random samples of size (n) are then drawn from the population. The expected value of each sample mean is the population's mean. Further, the variance of each sample mean is equal to the population's variance divided by n (note: this is equivalent to saying the standard deviation of each sample mean is equal to the population's standard deviation divided by the square root of n). The central limit theorem says that this random variable (i.e., the mean of a sample of size n drawn from the population) is itself approximately normally distributed, regardless of the shape of the underlying population. Given a population described by any probability distribution having mean (μ) and finite variance (σ²), the distribution of the sample mean computed from samples (where each sample equals size n) will be approximately normal. Generally, if the size of the sample is at least 30 (n ≥ 30), then we can assume the sample mean is approximately normal!
Not normal (individually), but the sample mean (and sum) tends toward the normal distribution (if variance is finite).
Each sample has a sample mean. There are many sample means. The sample means have variation: a sampling distribution. The central limit theorem (CLT) says the sampling distribution of sample means is asymptotically normal.
Summary of central limit theorem (CLT):
We assume a population with a known mean and finite variance, but not necessarily a normal distribution.
Random samples (size n) drawn from the population.
The expected value of each random variable is the population mean
The distribution of the sample mean computed from samples (where each sample equals size n) will be approximately (asymptotically) normal.
The variance of each random variable is equal to population variance divided by n (equivalently, the standard deviation is equal to the population standard deviation divided by the square root of n).
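A brief simulation sketch of the CLT (the exponential population is my choice, not from the reading): even though individual draws are highly skewed, means of samples of size n = 30 behave approximately normally.

```python
import random

# Skewed population: exponential with mean 1 and variance 1
random.seed(7)
n, trials = 30, 50_000
sample_means = [sum(random.expovariate(1.0) for _ in range(n)) / n
                for _ in range(trials)]

# Sample mean should have mean ~1 and standard deviation ~1/sqrt(30)
grand = sum(sample_means) / trials
sd = (sum((m - grand) ** 2 for m in sample_means) / trials) ** 0.5

# Rough normality check: roughly 68% of sample means within one standard error
within_1sd = sum(abs(m - 1.0) <= 1 / 30 ** 0.5 for m in sample_means) / trials
print(round(grand, 3), round(sd, 3), round(within_1sd, 2))
```

The standard deviation of the sample means should be near 1/√30 ≈ 0.183, and about two-thirds of the means fall within one standard error of the population mean, as the normal distribution predicts.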
Sample Statistics and Sampling Distributions
When we draw (or take) a sample, the sample statistic is a random variable with its own characteristics. The "standard deviation of a sampling distribution" is called the standard error. The sample mean is a random variable defined by:

$$\bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}$$
Stock, Chapter 3:
Review of Statistics In this chapter…
Describe and interpret estimators of the sample mean and their properties.
Describe and interpret the least squares estimator.
Define, interpret and calculate the critical t-values.
Define, calculate and interpret a confidence interval.
Describe the properties of point estimators:
Distinguish between unbiased and biased estimators
Define an efficient estimator and consistent estimator
Explain and apply the process of hypothesis testing:
Define and interpret the null hypothesis and the alternative hypothesis
Distinguish between one-sided and two-sided hypotheses
Describe the confidence interval approach to hypothesis testing
Describe the test of significance approach to hypothesis testing
Define, calculate and interpret type I and type II errors
Define and interpret the p value
Define, calculate, and interpret the sample variance, sample standard deviation, and standard error.
Define, calculate, and interpret confidence intervals for the population mean.
Perform and interpret hypothesis tests for the difference between two means.
Define, describe, apply, and interpret the t-statistic when the sample size is small.
Interpret scatterplots.
Define, describe, and interpret the sample covariance and correlation.
Describe and interpret estimators of the sample mean and their properties. An estimator is a function of a sample of data to be drawn randomly from a population. An estimate is the numerical value of the estimator when it is actually computed using data from a specific sample. An estimator is a random variable because of randomness in selecting the sample, while an estimate is a nonrandom number.
The sample mean, Ӯ, is the best linear unbiased estimator (BLUE). In the Stock & Watson example, the average (mean) wage among 200 people is $22.64:
Sample Mean: $22.64
Sample Standard Deviation: $18.14
Sample size (n): 200
Standard Error: 1.28
H0: Population Mean = $20.00
Test t statistic: 2.06
p value: 4.09%
Please note:
The average wage of the (n =) 200 observations is $22.64
The standard deviation of this sample is $18.14
The standard error of the sample mean is $1.28 because $18.14/SQRT(200) = $1.28
The degrees of freedom (d.f.) in this case are 199 = 200 − 1
“An estimator is a recipe for obtaining an estimate of a population parameter. A simple analogy explains the core idea: An estimator is like a recipe in a cook book; an estimate is like a cake baked according to the recipe.” Barreto & Howland, Introductory Econometrics. In the above example, the sample mean is an estimator of the unknown, true population mean (in this case, the sample mean estimator gives an estimate of $22.64). What makes one estimator superior to another?
Unbiased: the mean of the sampling distribution is the population mean (mu)
Consistent: when the sample size is large, the uncertainty about the value of μY arising from random variations in the sample is very small.
Variance and efficiency: among all unbiased estimators, the estimator that has the smallest variance is "efficient."
If the sample is random (i.i.d.), the sample mean is the Best Linear Unbiased Estimator (BLUE). The sample mean is:
Consistent, AND
The most EFFICIENT among all linear UNBIASED estimators of the population mean
Describe and interpret the least squares estimator.
The estimator (m) that minimizes the sum of squared gaps (Yi − m) is called the least squares estimator:

$$\sum_{i=1}^{n}(Y_i - m)^2$$

The value of (m) that minimizes the sum of squared gaps in the formula above is the least squares estimator; for this problem, the minimizer is the sample mean, m = Ȳ.
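A quick numerical sketch (the data values are hypothetical) confirms that the minimizer of the sum of squared gaps is the sample mean:

```python
# Hypothetical sample of six observations
Y = [22.0, 19.5, 25.0, 18.0, 30.5, 21.0]
sample_mean = sum(Y) / len(Y)

def ssq(m):
    """Sum of squared gaps between each observation and candidate m."""
    return sum((y - m) ** 2 for y in Y)

# Brute-force grid search over candidate values of m (step 0.01)
candidates = [i / 100 for i in range(1500, 3500)]
best = min(candidates, key=ssq)

print(round(sample_mean, 2), round(best, 2))   # the two agree
```

The grid minimizer lands on the sample mean (up to the grid step), which is what the calculus solution delivers in closed form.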
Define, interpret and calculate critical t-values.
The t-statistic or t-ratio is given by:

$$t = \frac{\bar{Y} - \mu_{Y,0}}{SE(\bar{Y})}$$
The critical t-value or “lookup” t-value is the t-value for which the test just rejects the null hypothesis at a given significance level. For example:
95% two-tailed (2T) critical t-value with 20 d.f. is 2.086
Significance test: is the absolute value of the t-statistic greater than the critical (lookup) t?
The critical t-values bound a region within the student's t distribution that is a specific percentage (90%? 95%? 99%?) of the total area under the student's t distribution curve. The student's t distribution with (n − 1) degrees of freedom (d.f.) gives a confidence interval:

$$\bar{Y} - t\,\frac{S_Y}{\sqrt{n}} \;\le\; \mu_Y \;\le\; \bar{Y} + t\,\frac{S_Y}{\sqrt{n}}$$
For example: critical t
If the (small) sample size is 20, then the 95% two-tailed critical t is 2.093. That is because the degrees of freedom are 19 (d.f. = n − 1), and if we review the lookup table on the following page (corresponds to Gujarati A-2) at the column for 0.025 one-tailed / 0.05 two-tailed and the row for 19 d.f., we find the cell value 2.093. Therefore, given 19 d.f., 95% of the area under the student's t distribution is bounded by +/− 2.093. Specifically, P(−2.093 ≤ t ≤ 2.093) = 95%. Please note further that because the distribution is symmetrical (skew = 0), 5% split among both tails implies 2.5% in the left tail.
Student’s t Lookup Table
Excel function: = TINV(two-tailed probability [larger #], d.f.)

d.f.  1T:0.25  0.10    0.05    0.025   0.01    0.005   0.001
      2T:0.50  0.20    0.10    0.05    0.02    0.01    0.002
1     1.000    3.078   6.314   12.706  31.821  63.657  318.309
2     0.816    1.886   2.920   4.303   6.965   9.925   22.327
3     0.765    1.638   2.353   3.182   4.541   5.841   10.215
4     0.741    1.533   2.132   2.776   3.747   4.604   7.173
5     0.727    1.476   2.015   2.571   3.365   4.032   5.893
6     0.718    1.440   1.943   2.447   3.143   3.707   5.208
7     0.711    1.415   1.895   2.365   2.998   3.499   4.785
8     0.706    1.397   1.860   2.306   2.896   3.355   4.501
9     0.703    1.383   1.833   2.262   2.821   3.250   4.297
10    0.700    1.372   1.812   2.228   2.764   3.169   4.144
11    0.697    1.363   1.796   2.201   2.718   3.106   4.025
12    0.695    1.356   1.782   2.179   2.681   3.055   3.930
13    0.694    1.350   1.771   2.160   2.650   3.012   3.852
14    0.692    1.345   1.761   2.145   2.624   2.977   3.787
15    0.691    1.341   1.753   2.131   2.602   2.947   3.733
16    0.690    1.337   1.746   2.120   2.583   2.921   3.686
17    0.689    1.333   1.740   2.110   2.567   2.898   3.646
18    0.688    1.330   1.734   2.101   2.552   2.878   3.610
19    0.688    1.328   1.729   2.093   2.539   2.861   3.579
20    0.687    1.325   1.725   2.086   2.528   2.845   3.552
21    0.686    1.323   1.721   2.080   2.518   2.831   3.527
22    0.686    1.321   1.717   2.074   2.508   2.819   3.505
23    0.685    1.319   1.714   2.069   2.500   2.807   3.485
24    0.685    1.318   1.711   2.064   2.492   2.797   3.467
25    0.684    1.316   1.708   2.060   2.485   2.787   3.450
26    0.684    1.315   1.706   2.056   2.479   2.779   3.435
27    0.684    1.314   1.703   2.052   2.473   2.771   3.421
28    0.683    1.313   1.701   2.048   2.467   2.763   3.408
29    0.683    1.311   1.699   2.045   2.462   2.756   3.396
30    0.683    1.310   1.697   2.042   2.457   2.750   3.385
Notice the "sweet spot" in the table: for confidence levels less than 99% and d.f. > 13, the critical t is always less than 3.0. So, for example, a computed t of 7 or 13 will generally be significant. Keep this in mind because in many cases you do not need to refer to the lookup table if the computed t is large; you can simply reject the null.
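The table values can be reproduced programmatically (a sketch, assuming SciPy is available): for a 95% two-tailed test (2.5% in each tail), the critical value is the 97.5th percentile of the t distribution.

```python
from scipy.stats import t

# Reproduce a few critical t values from the lookup table
crit_19 = t.ppf(0.975, df=19)   # table: 2.093
crit_20 = t.ppf(0.975, df=20)   # table: 2.086
crit_27 = t.ppf(0.975, df=27)   # table: 2.052 (used in the P/E example below)

print(round(crit_19, 3), round(crit_20, 3), round(crit_27, 3))
```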
Define, calculate and interpret a confidence interval.
The confidence interval uses the product of [standard error × critical "lookup" t]. In the Stock & Watson example, the 95% confidence interval is 22.64 ± (1.28)(1.972), where 1.28 is the standard error and 1.972 is the critical t with 199 d.f. (for large samples this approaches the critical Z of 1.96):

Sample Mean: $22.64
Sample Std Deviation: $18.14
Sample size (n): 200
Standard Error: 1.28
Confidence: 95%
Critical t: 1.972
Lower limit: $20.11
Upper limit: $25.17

95% CI for μY: Ȳ ± 1.972 × SE(Ȳ) = 22.64 ± 1.972 × 1.28
Confidence Intervals: Another example with a sample of 28 P/E ratios Assume we have price-to-earnings ratios (P/E ratios) of 28 NYSE companies:
Mean: 23.25
Variance: 90.13
Std Dev: 9.49
Count: 28
d.f.: 27
Confidence (1−α): 95%
Significance (α): 5%
Critical t: 2.052
Standard error: 1.794
Lower limit: 19.6 = 23.25 − (2.052)(1.794)
Upper limit: 26.9 = 23.25 + (2.052)(1.794)
Hypothesis (null mean): 18.5
t value: 2.65 = (23.25 − 18.5)/1.794
p value: 1.3%
Reject null with: 98.7% confidence
The confidence coefficient is selected by the user; e.g., 95% (0.95) or 99% (0.99). The significance = 1 – confidence coefficient.
To construct a confidence interval with the dataset above:
Determine degrees of freedom (d.f.). d.f. = sample size – 1. In this case, 28 – 1 = 27 d.f.
Select confidence. In this case, confidence coefficient = 0.95 = 95%
We are constructing an interval, so we need the critical t value for 5% significance with two-tails.
The critical t value is equal to 2.052. That's the value with 27 d.f. and either 2.5% one-tailed significance or 5% two-tailed significance (see how they are the same provided the distribution is symmetrical?)
The standard error is equal to the sample standard deviation divided by the square root of the sample size (not d.f.!). In this case, 9.49/SQRT(28) ≈ 1.794.
The lower limit of the confidence interval is given by: the sample mean minus the critical t (2.052) multiplied by the standard error (9.49/SQRT[28]).
The upper limit of the confidence interval is given by: the sample mean plus the critical t (2.052) multiplied by the standard error (9.49/SQRT[28]).
$$\bar{X} - t\,\frac{S_X}{\sqrt{n}} \;\le\; \mu_X \;\le\; \bar{X} + t\,\frac{S_X}{\sqrt{n}}$$

$$23.25 - 2.052\,\frac{9.49}{\sqrt{28}} \;\le\; \mu_X \;\le\; 23.25 + 2.052\,\frac{9.49}{\sqrt{28}}$$
This confidence interval is a random interval. Why? Because it will vary randomly with each sample, whereas we assume the population mean is static. We don’t say the probability is 95% that the “true” population mean lies within this interval. That implies the true mean is variable. Instead, we say the probability is 95% that the random interval contains the true mean. See how the population mean is trusted to be static and the interval varies?
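The P/E interval above can be reproduced programmatically (a sketch, assuming SciPy is available):

```python
from scipy.stats import t

# Summary statistics from the P/E example: n = 28, mean = 23.25, s = 9.49
n, mean, s = 28, 23.25, 9.49
se = s / n ** 0.5                      # ~1.794
crit = t.ppf(0.975, df=n - 1)          # ~2.052 (27 d.f., 95% two-tailed)
lower, upper = mean - crit * se, mean + crit * se

print(round(se, 3), round(crit, 3), round(lower, 1), round(upper, 1))
```

The interval recovers the 19.6 to 26.9 limits shown in the example.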
Describe the properties of point estimators:
An estimator is a function of a sample of data to be drawn randomly from a population.
An estimate is the numerical value of the estimator when it is actually computed using data from a specific sample.
The key properties of point estimators include:
Linearity: estimator is a linear function of sample observations. For example, the sample mean is a linear function of the observations.
Unbiasedness: the average or expected value of the estimator is equal to the true value of the parameter.
Minimum variance: the variance of the estimator is smaller than any “competing” estimator. Note: an estimator can have minimum variance yet be biased.
Efficiency: Among the set of unbiased estimators, the estimator with the minimum variance is the efficient estimator (i.e., it has the smallest variance among unbiased estimators)
Best linear unbiased estimator (BLUE): the estimator that combines three properties: (i) linear, (ii) unbiased, and (iii) minimum variance
Consistency: an estimator is consistent if, as the sample size increases, it approaches (converges on) the true value of the parameter
Distinguish between unbiased and biased estimators
An estimator is unbiased if:

$$E(\bar{Y}) = \mu_Y$$

Otherwise the estimator is biased. If the expected value of the estimator is the population parameter, the estimator is unbiased. If, in repeated applications of a method, the mean value of the estimators coincides with the true parameter value, that estimator is called an unbiased estimator. Unbiasedness is a repeated-sampling property: if we draw several samples of size (n) from a population and compute the unbiased sample statistic for each sample, the average of these sample statistics will tend to approach (converge on) the population parameter.
Define an efficient estimator and consistent estimator An efficient estimate is both unbiased (i.e., the mean or expectation of the statistic is equal to the parameter) and its variance is smaller than the alternatives (i.e., all other things being equal, we would prefer a smaller variance). A statement of the error or precision of an estimate is often called its reliability
Efficient: among unbiased estimators, the efficient estimator has the smallest variance. If Ỹ is any other unbiased estimator of μY:

$$\mathrm{var}(\bar{Y}) < \mathrm{var}(\tilde{Y})$$

Consistent: consistency is a property of the estimator as the sample size increases; the estimator converges in probability on the true parameter value:

$$\bar{Y} \xrightarrow{\;p\;} \mu_Y$$
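As an illustration (a sketch, not from the reading): for normally distributed data, both the sample mean and the sample median are unbiased estimators of the population mean, but the mean is more efficient (smaller sampling variance).

```python
import random
import statistics

# Simulate many samples from N(0, 1) and compare the sampling variance
# of the sample mean against the sample median
random.seed(1)
n, trials = 25, 20_000
means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.pvariance(means)      # theory: 1/n = 0.040
var_median = statistics.pvariance(medians)  # theory: ~pi/(2n) ≈ 0.063
print(round(var_mean, 3), round(var_median, 3))
```

The mean's sampling variance comes out near 1/n while the median's is roughly π/2 times larger, which is why the mean is the efficient choice here.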
Explain and apply the process of hypothesis testing: Define & interpret the null hypothesis and the alternative
Distinguish between one‐sided and two‐sided hypotheses
Describe the confidence interval approach to hypothesis testing
Describe the test of significance approach to hypothesis testing
Define, calculate and interpret type I and type II errors
Define and interpret the p value
Define and interpret the null hypothesis and the alternative hypothesis
Please note the null must contain the equal sign ("="):

$$H_0: E(Y) = \mu_{Y,0} \qquad H_1: E(Y) \ne \mu_{Y,0}$$

For example:

$$H_0: E(Y) = \$20 \qquad H_1: E(Y) \ne \$20$$

The null hypothesis, denoted by H0, is tested against the alternative hypothesis, which is denoted by H1 or sometimes HA.

Often, we test for the significance of the intercept or a partial slope coefficient in a linear regression. Typically, in this case, our null hypothesis is: "the slope is zero" or "there is no correlation between X and Y" or "the regression coefficients jointly are not significant." In that case, if we reject the null, we find the statistic to be significant which means "significantly different from zero." Statistical significance implies our null hypothesis (i.e., the parameter equals zero) was rejected; we conclude the parameter is nonzero. For example, a "significant" slope estimate means we rejected the null hypothesis that the true slope is zero.
Distinguish between one-sided and two-sided hypotheses
Your default assumption should be a two-sided hypothesis. If unsure, assume two-sided.

Here is a one-sided null hypothesis:

$$H_0: E(Y) \le \mu_{Y,0} \qquad H_1: E(Y) > \mu_{Y,0}$$

Specifically, "the one-sided null hypothesis is that the population average wage is less than or equal to $20.00":

$$H_0: E(Y) \le \$20 \qquad H_1: E(Y) > \$20$$

The null hypothesis always includes the equal sign (=), regardless! The null cannot include only a strict inequality (< or >).
Describe the confidence interval approach to hypothesis testing
In the confidence interval approach, instead of computing the test statistic, we define the confidence interval as a function of our confidence level; i.e., higher confidence implies a wider interval. Then we simply ascertain whether the null hypothesized value lies within the interval (within the "acceptance region"):

90% CI for $\mu_Y$: $\bar{Y} \pm 1.64\,SE(\bar{Y})$
95% CI for $\mu_Y$: $\bar{Y} \pm 1.96\,SE(\bar{Y})$
99% CI for $\mu_Y$: $\bar{Y} \pm 2.58\,SE(\bar{Y})$
Describe the test of significance approach to hypothesis testing
In the significance approach, instead of defining the confidence interval, we compute the standardized distance, in standard deviations, from the observed mean to the null hypothesis: this is the test statistic (or computed t value). We compare it to the critical (or lookup) value. If the test statistic exceeds the critical (lookup) value, we reject the null:

Reject $H_0$ at 90% if $|t^{act}| > 1.64$
Reject $H_0$ at 95% if $|t^{act}| > 1.96$
Reject $H_0$ at 99% if $|t^{act}| > 2.58$
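The wage example earlier (H0: population mean = $20.00) can be reproduced with the test-of-significance approach (a sketch, assuming SciPy is available):

```python
from scipy.stats import t

# Stock & Watson wage example: n = 200, mean = $22.64, s = $18.14
n, mean, s, mu0 = 200, 22.64, 18.14, 20.00
se = s / n ** 0.5                        # ~1.28
t_stat = (mean - mu0) / se               # ~2.06
p_value = 2 * (1 - t.cdf(abs(t_stat), df=n - 1))   # two-tailed, ~4.1%

print(round(t_stat, 2), round(p_value, 4))
```

Because |t| ≈ 2.06 exceeds the 95% critical value (~1.97 with 199 d.f.) and the p-value is below 5%, we reject the null at 95% confidence, matching the table earlier in the notes.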
Define, calculate and interpret type I and type II errors
If we reject a hypothesis which is actually true, we have committed a Type I error. If, on the other hand, we accept a hypothesis that should have been rejected, we have committed a Type II error.

Type I error = significance level = α = Pr[reject H0 | H0 is true]
Type II error = β = Pr["accept" H0 | H0 is false]

Type I: to reject a true hypothesis
Type II: to accept a false hypothesis

Type I and Type II errors: for example
Suppose we want to hire a portfolio manager who has produced an average return of +8% versus an index that returned +7%. We conduct a statistical test to determine whether the "excess +1%" is due to luck or "alpha" (skill). We set a 95% confidence level for our test. In technical parlance, our null hypothesis is that the manager adds no skill (i.e., the expected return is 7%).

Under the circumstances, a Type I error is the following: we decide that the excess is significant and the manager adds value, but actually the out-performance was random (he did not add skill). In technical terms, we mistakenly rejected the null.

Under the circumstances, a Type II error is the following: we decide the excess is random, but actually it was not random and he did add value. In technical terms, we falsely accepted the null.
Define and interpret the p value
The p-value is the "exact significance level": the lowest significance level at which the null can be rejected. We can reject the null with (1 − p)% confidence.

The p-value is an abbreviation that stands for "probability value." Suppose our hypothesis is that a population mean is 10; another way of saying this is "our null hypothesis is H0: mean = 10 and our alternative hypothesis is H1: mean ≠ 10." Suppose we conduct a two-tailed test, given the results of a sample drawn from the population, and the test produces a p-value of 0.03. This means that we can reject the null hypothesis with 97% confidence; in other words, we can be fairly confident that the true population mean is not 10.
Our example was a two-tailed test, but recall we have three possible tests:

The parameter is greater than (>) the stated value (right-tailed test), or
The parameter is less than (<) the stated value (left-tailed test), or
The parameter is not equal to (≠) the stated value (two-tailed test).
Allen, Boudoukh, and Saunders, Chapter 2: Quantifying Volatility in VaR Models

Stochastic behavior of returns
Risk measurement (VaR) concerns the tail of a distribution, where losses occur. We want to impose a mathematical curve (a "distributional assumption") on asset returns so we can estimate losses. The parametric approach uses parameters (i.e., a formula with parameters) to make a distributional assumption, but actual returns rarely conform to the distribution curve. A parametric distribution plots a curve (e.g., the normal bell-shaped curve) that approximates a range of outcomes, but actual returns are not so well-behaved: they rarely "cooperate."
Value at Risk (VaR) – 2 asset, relative vs. absolute Know how to compute two-asset portfolio variance & scale portfolio volatility to derive VaR:
Inputs (per annum):
  Trading days/year: 252
  Initial portfolio value (W): $100
  VaR time horizon (days) (h): 10
  VaR confidence interval: 95%
  Asset A volatility (per year): 10.0%
  Asset A expected return (per year): 12.0%
  Asset A portfolio weight (w): 50%
  Asset B volatility (per year): 20.0%
  Asset B expected return (per year): 25.0%
  Asset B portfolio weight (1-w): 50%
  Correlation (A,B): 0.30
  Autocorrelation (h-1, h): 0.25  [independent = 0; mean reverting = negative]

Outputs:
  Annual covariance (A,B): 0.0060  [COV = (correlation A,B)(volatility A)(volatility B)]
  Portfolio variance: 0.0155
  Expected portfolio return: 18.5%
  Portfolio volatility (per year): 12.4%
  Period (h = 10 days):
  Expected periodic return (u): 0.73%
  Std deviation (h), i.i.d.: 2.48%
  Scaling factor: 15.78  [used for AR(1); don't need to know this]
  Std deviation (h), autocorrelation: 3.12%
  Normal deviate (critical z value): 1.645
  Expected future value: 100.73
  Relative VaR, i.i.d.: $4.08  [doesn't include the mean return]
  Absolute VaR, i.i.d.: $3.35  [includes return; i.e., loss from zero]
  Relative VaR, AR(1): $5.12  [if autocorrelation incorporated; note VaR is higher!]
  Absolute VaR, AR(1): $4.39

Relative VaR, iid = $100 value × 2.48% 10-day sigma × 1.645 normal deviate
Absolute VaR, iid = $100 × (−0.73% + 2.48% × 1.645)
Relative VaR, AR(1) = $100 value × 3.12% 10-day AR sigma × 1.645 normal deviate
Absolute VaR, AR(1) = $100 × (−0.73% + 3.12% × 1.645)
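The calculation above can be sketched in a few lines. The AR(1) scaling factor uses a variance-scaling formula, h + 2·Σ(h−i)ρⁱ, that reproduces the 15.78 in the table; all other inputs are taken from the example.

```python
from math import sqrt

# Inputs from the example above
W, h, z = 100.0, 10, 1.645
vol_a, vol_b, w = 0.10, 0.20, 0.50
mu_a, mu_b = 0.12, 0.25
rho, auto = 0.30, 0.25

cov = rho * vol_a * vol_b                                        # 0.0060
var_p = w**2 * vol_a**2 + (1-w)**2 * vol_b**2 + 2*w*(1-w)*cov    # 0.0155
vol_p = sqrt(var_p)                                              # ~12.4% per year
mu_p = w * mu_a + (1 - w) * mu_b                                 # 18.5% per year

mu_h = mu_p * h / 252                                            # ~0.73% per 10 days
sigma_h_iid = vol_p * sqrt(h / 252)                              # ~2.48%

# Scaling with autocorrelation: h + 2 * sum_{i=1}^{h-1} (h - i) * rho_auto^i
scale = h + 2 * sum((h - i) * auto**i for i in range(1, h))      # ~15.78
sigma_h_ar = vol_p * sqrt(scale / 252)                           # ~3.12%

rel_var_iid = W * sigma_h_iid * z                                # ~$4.08
abs_var_iid = W * (sigma_h_iid * z - mu_h)                       # ~$3.35
rel_var_ar = W * sigma_h_ar * z                                  # ~$5.12
abs_var_ar = W * (sigma_h_ar * z - mu_h)                         # ~$4.39
print(round(rel_var_iid, 2), round(abs_var_iid, 2),
      round(rel_var_ar, 2), round(abs_var_ar, 2))
```

Note how positive autocorrelation (0.25) fattens the 10-day sigma from 2.48% to 3.12% and therefore raises every VaR figure.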
Discuss how asset return distributions tend to deviate from the normal distribution. Compared to a normal (bell-shaped) distribution, actual asset returns tend to be:
Fat-tailed (a.k.a., heavy tailed): A fat-tailed distribution is characterized by having more probability weight (observations) in its tails relative to the normal distribution.
Skewed: A skewed distribution refers—in this context of financial returns—to the observation that declines in asset prices are more severe than increases. This is in contrast to the symmetry that is built into the normal distribution.
Unstable: the parameters (e.g., mean, volatility) vary over time due to variability in market conditions.
NORMAL RETURNS      ACTUAL FINANCIAL RETURNS
Symmetrical         Skewed
"Normal" tails      Fat-tailed (leptokurtosis)
Stable              Unstable (time-varying)
Interest rate distributions are not constant over time
Ten years of interest rate data were collected (1982–1993). The distribution plots the daily change in the three-month treasury rate. The average change is approximately zero, but the "probability mass" is greater at both tails. It is also greater at the mean; i.e., the actual mean occurs more frequently than predicted by the normal distribution.

[Figure: histogram of daily changes versus the normal curve. Annotations: 1st moment = mean ("location"); 2nd moment = variance ("scale"); 3rd moment = skew; 4th moment = kurtosis. Actual returns are: 1. Skewed, 2. Fat-tailed (kurtosis > 3), 3. Unstable.]
Explain potential reasons for the existence of fat tails in a return distribution and discuss the implications fat tails have on analysis of return distributions. A distribution is unconditional if tomorrow’s distribution is the same as today’s distribution. But fat tails could be explained by a conditional distribution: a distribution that changes over time. Two things can change in a normal distribution: mean and volatility. Therefore, we can explain fat tails in two ways:
Conditional mean is time-varying; but this is unlikely given the assumption that markets are efficient
Conditional volatility is time-varying; Allen says this is the more likely explanation!
The normal distribution might say, for example, that a −10% loss is the 95th percentile; if the actual tails are fatter, the expected VaR loss is understated!

Explain how outliers can really be indications that the volatility varies with time.
We observe that actual financial returns tend to exhibit fat tails. Jorion (like Allen et al.) offers two possible explanations:
The true distribution is stationary. Therefore, fat-tails reflect the true distribution but the normal distribution is not appropriate
The true distribution changes over time (it is “time-varying”). In this case, outliers can in reality reflect a time-varying volatility.
Distinguish between conditional and unconditional distributions. An unconditional distribution is the same regardless of market or economic conditions; for this reason, it is likely to be unrealistic. A conditional distribution is not always the same: it is different, or conditional on, some economic or market or other state. It is measured by parameters such as its conditional mean, conditional standard deviation (conditional volatility), conditional skew, and conditional kurtosis.
Discuss the implications regime switching has on quantifying volatility. A typical example is a regime-switching volatility model: the regime (state) switches from low to high volatility, but is never in between. A distribution is "regime-switching" if it changes from high to low volatility (or vice versa). The problem: a risk manager may assume (and measure) an unconditional volatility when the distribution is actually regime switching. In this case, the distribution is conditional (i.e., it depends on conditions) and might be normal but regime-switching; e.g., volatility is 10% during a low-volatility regime and 20% during a high-volatility regime, yet during both regimes the distribution may be normal. However, the risk manager may incorrectly assume a single 15% unconditional volatility. In this case, the measured unconditional distribution is likely to exhibit fat tails because it does not account for the regime switching.
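A quick simulation (a sketch; the 50/50 regime mix and zero mean are my assumptions, not from the reading) shows how mixing two normal regimes produces unconditional fat tails:

```python
import random
import statistics

# Each day is drawn from one of two normal regimes (low vol or high vol);
# within each regime the returns are perfectly normal
random.seed(3)
returns = []
for _ in range(100_000):
    sigma = 0.10 if random.random() < 0.5 else 0.20   # regime switch
    returns.append(random.gauss(0, sigma))

# Unconditional (mixture) kurtosis; a single normal would give 3.0
mean = statistics.fmean(returns)
var = statistics.pvariance(returns)
kurt = sum((r - mean) ** 4 for r in returns) / (len(returns) * var ** 2)
print(round(kurt, 2))   # > 3: fatter tails than any single normal
```

The mixture's kurtosis lands near 4 rather than 3, illustrating the point: an analyst fitting one unconditional normal to regime-switching data would see "fat tails" even though each regime is normal.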
Explain the various approaches for estimating VaR.
Volatility versus Value at Risk (VaR)
Volatility is an input into our (parametric) value at risk (VaR):

$$VaR_{\$} = W_{\$}\, z\, \sigma \qquad\qquad VaR_{\%} = z\, \sigma$$

Linda Allen's historical-based approaches
The common attribute of all the approaches within this class is their use of historical time series data in order to determine the shape of the conditional distribution.
Parametric approach. The parametric approach imposes a specific distributional assumption on conditional asset returns. A representative member of this class of models is the conditional (log) normal case with time-varying volatility, where volatility is estimated from recent past data.
Nonparametric approach. This approach uses historical data directly, without imposing a specific set of distributional assumptions. Historical simulation is the simplest and most prominent representative of this class of models.
Implied volatility based approach. This approach uses derivative pricing models and current derivative prices in order to impute an implied volatility without having to resort to historical data. The use of implied volatility obtained from the Black–Scholes option pricing model as a predictor of future volatility is the most prominent representative of this class of models.
Jorion’s Value at Risk (VaR) typology Please note that Jorion’s taxonomy approaches from the perspective of local versus full valuation. In that approach, local valuation tends to associate with parametric approaches:
Risk Measurement
  Local valuation
    Linear models
      Full covariance matrix
      Factor models
      Diagonal models
    Nonlinear models
      Gamma
      Convexity
  Full valuation
    Historical simulation
    Monte Carlo simulation
Value at Risk (VaR)
  Parametric: delta normal
  Non-parametric: historical simulation (HS), bootstrap, Monte Carlo
  Hybrid (semi-parametric): HS + EWMA
  EVT: POT (GPD), block maxima (GEV)
Volatility
1. Implied volatility
2. Equally weighted returns or unweighted (STDEV)
3. More weight to recent returns: GARCH(1,1), EWMA
4. MDE (more weight to similar states!)
Historical approaches An historical-based approach can be non-parametric, parametric or hybrid (both). Nonparametric directly uses a historical dataset (historical simulation, HS, is the most common). Parametric imposes a specific distributional assumption (this includes historical standard deviation and exponential smoothing)
Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: HISTORICAL STANDARD DEVIATION

Historical standard deviation is the simplest and most common way to estimate or predict future volatility. Given a history of an asset's continuously compounded returns, we take a specific window of the M most recent returns. This standard deviation is called a moving average (MA) by Jorion. The estimate requires a window of fixed length; e.g., 30 or 60 trading days. If we observe returns (r_t) over M days, the volatility estimate is constructed from a moving average (MA):

σ_t² = (1/M) Σ_{i=1}^{M} r²_{t-i}

Each day, the forecast is updated by adding the most recent day and dropping the furthest day. In a simple moving average, all weights on past returns are equal and set to (1/M). Note that raw returns are used instead of returns around the mean (i.e., the expected mean is assumed to be zero). This is common for short time intervals, where it makes little difference to the volatility estimate.

For example, assume the previous four daily returns for a stock are 6% (t-1), 5% (t-2), 4% (t-3) and 3% (t-4). What is the current volatility estimate, applying the moving average, given that our short trailing window is only four days (M = 4)? If we square each return, the series is 0.0036, 0.0025, 0.0016 and 0.0009. If we sum this series of squared returns, we get 0.0086. Divide by 4 (since M = 4) and we get 0.00215. That is the moving-average variance, such that the moving-average volatility is about 4.64%.

The above example illustrates a key weakness of the moving average (MA): since all returns are weighted equally, the trend does not matter. In the example above, notice that volatility is trending down, but the MA does not reflect this trend in any way. We could reverse the order of the historical series and the MA estimate would be the same.
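A minimal Python sketch of the moving-average estimate above (the function name is illustrative):

```python
import math

def ma_volatility(returns):
    """Equal-weight moving-average (MA) volatility: square the raw returns
    (mean assumed zero), average them, and take the square root."""
    return math.sqrt(sum(r * r for r in returns) / len(returns))

# The four-day example from the text: 6%, 5%, 4%, 3%
print(round(ma_volatility([0.06, 0.05, 0.04, 0.03]), 4))  # 0.0464, i.e., ~4.64%
```

Reversing the list produces the same estimate, which is exactly the trend-blindness noted above.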
The moving average (MA) series is simple but has two drawbacks
The MA series ignores the order of the observations. Older observations may no longer be relevant, but they receive the same weight.
The MA series has a so-called ghosting feature: data points are dropped arbitrarily due to length of the window.
Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: GARCH APPROACH AND EXPONENTIAL SMOOTHING (EWMA)

Exponential smoothing (conditional parametric). Modern methods place more weight on recent information. Both EWMA and GARCH place more weight on recent information. Further, as EWMA is a special case of GARCH, both employ exponential smoothing.
GARCH (p, q) and in particular GARCH (1, 1) GARCH (p, q) is a general autoregressive conditional heteroskedastic model:
Autoregressive (AR): tomorrow’s variance (or volatility) is a regressed function of today’s variance—it regresses on itself
Conditional (C): tomorrow’s variance depends—is conditional on—the most recent variance. An unconditional variance would not depend on today’s variance
Heteroskedastic (H): variances are not constant, they flux over time
GARCH regresses on "lagged" or historical terms. The lagged terms are either variances or squared returns. The generic GARCH (p, q) model regresses on (p) squared returns and (q) variances. Therefore, GARCH (1, 1) "lags" or regresses on last period's squared return (i.e., just 1 return) and last period's variance (i.e., just 1 variance). GARCH (1, 1) is given by the following equation:

h_t = ω + α₁ r²_{t-1} + β h_{t-1}

where:
h_t (or σ_t²) = conditional variance (i.e., we're solving for it)
ω = weighted long-run (average) variance
r²_{t-1} = previous squared return
h_{t-1} (or σ²_{t-1}) = previous variance
Persistence is a feature embedded in the GARCH model. In the formula above, persistence is (α₁ + β). Persistence refers to how quickly (or slowly) the variance reverts or "decays" toward its long-run average. High persistence equates to slow decay and slow "regression toward the mean"; low persistence equates to rapid decay and quick "reversion to the mean." A persistence of 1.0 implies no mean reversion. A persistence of less than 1.0 implies "reversion to the mean," where a lower persistence implies greater reversion to the mean.

As above, persistence is the sum of the weights assigned to the lagged variance and the lagged squared return. A high persistence (close to, but less than, one) implies slow reversion to the mean. But if the sum of these weights is greater than one, the model is non-stationary: if (α₁ + β) > 1, the model is non-stationary and, according to Hull, unstable. In that case, EWMA is preferred.

Linda Allen says about GARCH (1, 1):
GARCH is both “compact” (i.e., relatively simple) and remarkably accurate. GARCH models predominate in scholarly research. Many variations of the GARCH model have been attempted, but few have improved on the original.
The drawback of the GARCH model is its nonlinearity.
For example: solve for the long-run variance in GARCH (1,1). Consider the GARCH (1, 1) equation below:

σ_n² = 0.2 + 0.2 u²_{n-1} + 0.7 σ²_{n-1}

Assume that:
- the alpha parameter = 0.2, and
- the beta parameter = 0.7.

Note that omega is 0.2, but don't mistake omega (0.2) for the long-run variance! Omega is the product of gamma and the long-run variance. So, if alpha + beta = 0.9, then gamma must be 0.1. Given that omega is 0.2, we know that the long-run variance must be 2.0 (0.2 ÷ 0.1 = 2.0).
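The long-run variance arithmetic above can be sketched in Python (the function names are illustrative):

```python
def garch_update(omega, alpha, beta, prev_return_sq, prev_variance):
    """One GARCH(1,1) step: omega + alpha * r^2_{n-1} + beta * sigma^2_{n-1}."""
    return omega + alpha * prev_return_sq + beta * prev_variance

def long_run_variance(omega, alpha, beta):
    """Omega is gamma times the long-run variance, and gamma = 1 - alpha - beta,
    so divide omega by the implied gamma."""
    return omega / (1.0 - alpha - beta)

# Parameters from the example: omega = 0.2, alpha = 0.2, beta = 0.7
print(round(long_run_variance(0.2, 0.2, 0.7), 4))  # 2.0
```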
EWMA
EWMA is a special case of GARCH (1,1). Here is how we get from GARCH (1,1) to EWMA:

GARCH(1,1): σ_t² = a + b r²_{t-1} + c σ²_{t-1}

Then we let a = 0 and (b + c) = 1, such that the above equation simplifies to:

σ_t² = b r²_{t-1} + (1 − b) σ²_{t-1}

This is now equivalent to the formula for the exponentially weighted moving average (EWMA):

EWMA: σ_t² = λ σ²_{t-1} + (1 − λ) r²_{t-1}

In EWMA, the lambda parameter determines the "decay": a lambda that is close to one (high lambda) exhibits slow decay.
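The EWMA recursion can be sketched as follows (the input values are just an illustration):

```python
def ewma_update(lam, prev_variance, prev_return):
    """EWMA: sigma_t^2 = lambda * sigma_{t-1}^2 + (1 - lambda) * r_{t-1}^2."""
    return lam * prev_variance + (1.0 - lam) * prev_return ** 2

# e.g., lambda = 0.94 (the RiskMetrics daily decay factor), prior daily
# volatility of 1%, most recent daily return of 2%
var_next = ewma_update(0.94, 0.01 ** 2, 0.02)
print(round(var_next ** 0.5, 4))  # 0.0109, i.e., the estimate rises toward 1.09%
```

Note how a single shock pulls the estimate up only partially; with a high lambda the old variance dominates (slow decay).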
RiskMetricsTM Approach RiskMetrics is a branded form of the exponentially weighted moving average (EWMA) approach. The optimal (theoretical) lambda varies by asset class, but the overall optimal parameter used by RiskMetrics has been 0.94. In practice, RiskMetrics only uses one decay factor for all series:
0.94 for daily data 0.97 for monthly data (month defined as 25 trading days)
Technically, the daily and monthly models are inconsistent. However, they are both easy to use, they approximate the behavior of actual data quite well, and they are robust to misspecification. GARCH (1, 1), EWMA, and RiskMetrics are each parametric and recursive.
Advantages and Disadvantages of MA (i.e., STDEV) vs. GARCH

GARCH estimations can provide estimates that are more accurate than MA. (Jorion's moving average (MA) = Allen's STDEV.)

Moving average (MA/STDEV) weaknesses versus the GARCH improvement:
- MA has a ghosting feature; GARCH assigns greater weights to more recent data.
- MA does not incorporate trend information; GARCH adds a term to incorporate mean reversion.

However, Linda Allen warns: GARCH (1,1) needs more parameters and may pose greater MODEL RISK ("chases a moving target") when forecasting out-of-sample.
Graphical summary of the parametric methods that assign more weight to recent returns (GARCH & EWMA)
Summary tips: GARCH (1, 1) is generalized RiskMetrics; conversely, RiskMetrics is a restricted case of GARCH (1,1) where a = 0 and (b + c) = 1. GARCH (1, 1) is given by:

σ_n² = γ V_L + α u²_{n-1} + β σ²_{n-1}

The three parameters are weights and therefore must sum to one:

γ + α + β = 1

Be careful about the first term in the GARCH (1, 1) equation: omega (ω) = gamma (γ) × (average long-run variance). If you are asked for the variance, you may need to divide out the weight in order to compute the average variance.

Determine when and whether a GARCH or EWMA model should be used in volatility estimation

In practice, variance rates tend to be mean reverting; therefore, the GARCH (1, 1) model is theoretically superior ("more appealing than") to the EWMA model. Remember, that's the big difference: GARCH adds the parameter that weights the long-run average and therefore incorporates mean reversion. GARCH (1, 1) is preferred unless the first parameter is negative (which is implied if alpha + beta > 1). In this case, GARCH (1,1) is unstable and EWMA is preferred.
Explain how the GARCH estimations can provide forecasts that are more accurate. The moving average computes variance based on a trailing window of observations; e.g., the previous ten days, the previous 100 days. There are two problems with moving average (MA):
Ghosting feature: volatility shocks (sudden increases) are abruptly incorporated into the MA metric and then, when the trailing window passes, they are abruptly dropped from the calculation. Because of this, the MA metric shifts in relation to the chosen window length.
Trend information is not incorporated
GARCH estimates improve on these weaknesses in two ways:
More recent observations are assigned greater weights. This overcomes ghosting because a volatility shock will immediately impact the estimate but its influence will fade gradually as time passes
A term is added to incorporate reversion to the mean
Explain how persistence is related to the reversion to the mean. Given the GARCH (1, 1) equation:

h_t = ω + α₁ r²_{t-1} + β h_{t-1}

Persistence = α₁ + β

GARCH (1, 1) is unstable if the persistence > 1. A persistence of 1.0 indicates no mean reversion. A low persistence (e.g., 0.6) indicates rapid decay and high reversion to the mean.

GARCH (1, 1) has three weights assigned to three factors. Persistence is the sum of the weights assigned to both the lagged variance and the lagged squared return. The other weight is assigned to the long-run variance. If P = persistence and G = weight assigned to the long-run variance, then P + G = 1. Therefore, if P (persistence) is high, then G (mean reversion) is low: the persistent series is not strongly mean reverting; it exhibits "slow decay" toward the mean. If P is low, then G must be high: the impersistent series does strongly mean revert; it exhibits "rapid decay" toward the mean.

The average, unconditional variance in the GARCH (1, 1) model is given by:

V_L = ω / (1 − α₁ − β)
Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: HISTORIC SIMULATION Historical simulation is easy: we only need to determine the “lookback window.” The problem is that, for small samples, the extreme percentiles (e.g., the worst one percent) are less precise. Historical simulation effectively throws out useful information. “The most prominent and easiest to implement methodology within the class of nonparametric methods is historical simulation (HS). HS uses the data directly. The only thing we need to determine up front is the lookback window. Once the window length is determined, we order returns in descending order, and go directly to the tail of this ordered vector. For an estimation window of 100 observations, for example, the fifth lowest return in a rolling window of the most recent 100 returns is the fifth percentile. The lowest observation is the first percentile. If we wanted, instead, to use a 250 observations window, the fifth percentile would be somewhere between the 12th and the 13th lowest observations (a detailed discussion follows), and the first percentile would be somewhere between the second and third lowest returns.” –Linda Allen
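The lookup described above can be sketched with the simple "k-th worst observation" convention (as the quote notes, other conventions interpolate between observations); the function name is illustrative:

```python
def hs_var(returns, worst_k):
    """Historical simulation VaR: sort returns ascending and read off the
    k-th lowest; report the loss as a positive number."""
    return -sorted(returns)[worst_k - 1]

# 100 illustrative returns: five large losses plus 95 small gains
rets = [-0.059, -0.055, -0.0485, -0.0429, -0.0425] + [0.001] * 95
print(hs_var(rets, 5))  # 0.0425 -> a 95% VaR of ~4.25% with n = 100
```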
Compare and contrast the use of historic simulation, multivariate density estimation, and hybrid methods for volatility forecasting.

Nonparametric volatility forecasting:
- Historic simulation (HS): sort returns and look up the worst; if n = 100, for the 95th percentile look between the bottom 5th and 6th returns.
- MDE: like ARCH(m), but weights are based on a function of the current versus historical state; if state(n−50) ≈ state(today), heavy weight is given to that squared return.
- Hybrid (HS & EWMA): sort returns (like HS), but weight them, with greater weight to recent returns (like EWMA).
Historical simulation
- Advantage: easiest to implement (simple, convenient)
- Disadvantage: uses data inefficiently (much data is not used)

Multivariate density estimation
- Advantage: very flexible; weights are a function of the state (e.g., economic context such as interest rates), not constant
- Disadvantages: onerous model (weighting scheme; conditioning variables; number of observations); data intensive

Hybrid approach
- Advantage: unlike the HS approach, better incorporates more recent information
- Disadvantage: requires model assumptions; e.g., number of observations
Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: MULTIVARIATE DENSITY ESTIMATION

Multivariate Density Estimation (MDE). The key feature of multivariate density estimation is that the weights (assigned to historical squared returns) are not a constant function of time. Rather, the current state (as parameterized by a state vector) is compared to the historical state: the more similar the states (current versus historical period), the greater the assigned weight. The relative weighting is determined by the kernel function:

σ_t² = Σ_{i=1}^{K} ω(x_{t-i}) u²_{t-i}

where ω(·) is the kernel function and x_{t-i} is the vector describing the economic state at time t−i. Instead of weighting the squared returns by time, MDE weights them by proximity to the current state.
Compare EWMA to MDE:
Both assign weights to historical squared returns (squared returns = variance approximation);
Where EWMA assigns the weight as an exponentially declining function of time (i.e., the nearer to today, the greater the weight), MDE assigns the weight based on the nature of the historical period (i.e., the more similar to the historical state, the greater the weight)
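A toy MDE sketch under loudly stated assumptions: a one-dimensional state variable and a Gaussian kernel (the actual state vector, kernel, and bandwidth are modeling choices, and the names here are illustrative):

```python
import math

def mde_variance(squared_returns, states, current_state, bandwidth=1.0):
    """Weight each historical squared return by how close its economic state
    was to today's state (Gaussian kernel), normalize, and average."""
    raw = [math.exp(-0.5 * ((s - current_state) / bandwidth) ** 2)
           for s in states]
    total = sum(raw)
    return sum((w / total) * r2 for w, r2 in zip(raw, squared_returns))

# Two periods: one whose state matches today's (state 5.0), one far away.
# The matching period's squared return dominates the estimate.
print(mde_variance([0.04, 0.0004], [5.0, 1.0], current_state=5.0))
```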
Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: HYBRID METHODS

The hybrid approach is a variation on historical simulation (HS). Consider the ten (10) illustrative returns below. In simple HS, the returns are sorted from best to worst (or worst to best) and the quantile determines the VaR. Simple HS amounts to giving equal weight to each return (last column). Given 10 returns, the worst return (-31.8%) earns a 10% weight under simple HS.
Sorted    Periods   Hybrid    Cum'l Hybrid   Compare
Return    Ago       Weight    Weight         to HS
-31.8%    7         8.16%     8.16%          10%
-28.8%    9         6.61%     14.77%         20%
-25.5%    6         9.07%     23.83%         30%
-22.3%    10        5.95%     29.78%         40%
5.7%      1         15.35%    45.14%         50%
6.1%      2         13.82%    58.95%         60%
6.5%      3         12.44%    71.39%         70%
6.9%      4         11.19%    82.58%         80%
12.1%     5         10.07%    92.66%         90%
60.6%     8         7.34%     100.00%        100%
However, under the hybrid approach, the EWMA weighting scheme is instead applied. Since the worst return happened seven (7) periods ago, the weight applied is given by the following, assuming a lambda of 0.9 (90%): Weight (7 periods prior) = 90%^(7-1)*(1-90%)/(1-90%^10) = 8.16%
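The weight formula above generalizes as follows; a quick sketch that also confirms the weights sum to one across the window:

```python
def hybrid_weight(lam, periods_ago, n):
    """Hybrid (HS + EWMA) weight: lam^(i-1) * (1-lam) / (1 - lam^n)."""
    return lam ** (periods_ago - 1) * (1 - lam) / (1 - lam ** n)

print(round(hybrid_weight(0.9, 7, 10), 4))  # 0.0816, as computed above

# Sanity check: the ten weights sum to one
assert abs(sum(hybrid_weight(0.9, i, 10) for i in range(1, 11)) - 1.0) < 1e-12
```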
Note that because the return happened further in the past, the weight is below the 10% that is assigned under simple HS.

[Chart: hybrid weights versus equal HS weights for each of the ten sorted returns]

Hybrid method using Google stock's prices and returns:

Google (GOOG)                |       Sorted   Days  | Cum'l   Hybrid  Cum'l
Date       Close    Return   | Rank  Return   ago   | HS      Weight  Hybrid
6/24/2009  409.29   0.89%    | 1     -5.90%   76    | 1.0%    0.2%    0.2%
6/23/2009  405.68   -0.41%   | 2     -5.50%   94    | 2.0%    0.1%    0.3%
6/22/2009  407.35   -3.08%   | 3     -4.85%   86    | 3.0%    0.1%    0.4%
6/19/2009  420.09   1.45%    | 4     -4.29%   90    | 4.0%    0.1%    0.5%
6/18/2009  414.06   -0.27%   | 5     -4.25%   78    | 5.0%    0.2%    0.7%
6/17/2009  415.16   -0.20%   | 6     -3.35%   47    | 6.0%    0.6%    1.3%
6/16/2009  416.00   -0.18%   | 7     -3.26%   81    | 7.0%    0.2%    1.4%
6/15/2009  416.77   -1.92%   | 8     -3.08%   3     | 8.0%    3.7%    5.1%
6/12/2009  424.84   -0.97%   | 9     -3.01%   88    | 9.0%    0.1%    5.2%
6/11/2009  429.00   -0.84%   | 10    -2.64%   55    | 10.0%   0.4%    5.7%
In this case:
Sample includes 100 returns (n=100)
We are solving for the 95th percentile (95%) value at risk (VaR)
For the hybrid approach, lambda = 0.96
Sorted returns are shown in the purple column
The HS 95% VaR = ~ 4.25% because it is the fifth-worst return (actually, the quantile can be determined in more than one way)
However, the hybrid approach returns a 95% VaR of 3.08% because the “worst returns” that inform the dataset tend to be further in the past (i.e., days ago = 76, 94, 86, 90…). Due to this, the individual weights are generally less than 1%.
Explain the process of return aggregation in the context of volatility forecasting methods.

The question is: how do we compute VaR for a portfolio which consists of several positions?

The first approach is the variance-covariance approach: if we make (parametric) assumptions about the covariances between each position, then we extend the parametric approach to the entire portfolio. The problem with this approach is that correlations tend to increase (or change) during stressful market events; portfolio VaR may underestimate VaR in such circumstances.

The second approach is to extend the historical simulation (HS) approach to the portfolio: apply today's weights to yesterday's returns. In other words, "what would have happened if we held this portfolio in the past?"

The third approach is to combine these two approaches: aggregate the simulated returns and then apply a parametric (normal) distributional assumption to the aggregated portfolio.

The first approach (variance-covariance) requires the dubious assumption of normality for the positions "inside" the portfolio. The text says the third approach is gaining in popularity and is justified by the law of large numbers: even if the components (positions) in the portfolio are not normally distributed, the aggregated portfolio will converge toward normality.
Explain how implied volatility can be used to predict future volatility

To impute volatility is to derive volatility (to reverse-engineer it, really) from the observed market price of a traded option. A typical example uses the Black-Scholes option pricing model to compute the implied volatility of a stock option; i.e., option traders will average at-the-money implied volatility from traded puts and calls. The advantages of implied volatility are:

Truly predictive (reflects the market's forward-looking consensus)
Does not require, nor is restrained by, historical distribution patterns
The shortcomings (or disadvantages) of implied volatility include:
Model-dependent
Options on the same underlying asset may trade at different implied volatilities; e.g., volatility smile/smirk
Stochastic volatility; i.e., the model assumes constant volatility, but volatility tends to change over time
Limited availability because it requires traded (set by market) price
Explain how to use option prices to derive forecasts of volatilities

This requires that a market mechanism (e.g., an exchange) can provide a market price for the option. If a market price can be observed, then instead of solving for the price of an option, we use an option pricing model (OPM) to reveal the implied (implicit) volatility. We solve ("goal seek") for the volatility that produces a model price equal to the market price:

c_market = f(ISD)

where the implied standard deviation (ISD) is the volatility input into the option pricing model (OPM). Similarly, implied correlations can also be "recovered" (reverse-engineered) from options on multiple assets. According to Jorion, ISD is a superior approach to volatility estimation. He says, "Whenever possible, VAR should use implied parameters" [i.e., ISD or market implied volatility].
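The "goal seek" can be sketched with a simple bisection solver (the Black-Scholes call formula is standard; the solver choice and tolerances are illustrative, and practitioners often use Newton's method with vega instead):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(c_market, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: the call price is increasing in sigma, so halve the bracket
    until the model price matches the market price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < c_market:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price the option at sigma = 20%, then recover the ISD
c = bs_call(100, 100, 1.0, 0.05, 0.20)
print(round(implied_vol(c, 100, 100, 1.0, 0.05), 4))  # 0.2
```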
Discuss implied volatility as a predictor of future volatility and its shortcomings. Many risk managers describe the application of historical volatility as similar to “driving by looking in the rear-view mirror.” Another flaw is the assumption of stationarity; i.e., the assumption that the past is indicative of the future. Implied volatility, “an intriguing alternative,” can be imputed from derivative prices using a specific derivative pricing model. The simplest example is the Black–Scholes implied volatility imputed from equity option prices.
In the presence of multiple implied volatilities for various option maturities and exercise prices, it is common to take the at-the-money (ATM) implied volatility from puts and calls and extrapolate an average implied; this implied is derived from the most liquid (ATM) options
The advantage of implied volatility is that it is a forward-looking, predictive measure. "A particularly strong example of the advantage obtained by using implied volatility (in contrast to historical volatility) as a predictor of future volatility is the GBP currency crisis of 1992. During the summer of 1992, the GBP came under pressure as a result of the expectation that it should be devalued relative to the European Currency Unit (ECU) components, the deutschmark (DM) in particular (at the time the strongest currency within the ECU). During the weeks preceding the final drama of the GBP devaluation, many signals were present in the public domain … This was the case many times prior to this event, especially with the Italian lira's many devaluations. Therefore, the market was prepared for a crisis in the GBP during the summer of 1992. Observing the thick solid line depicting option-implied volatility, the growing pressure on the GBP manifests itself in options prices and volatilities. Historical volatility is trailing, "unaware" of the pressure. In this case, the situation is particularly problematic since historical volatility happens to decline as implied volatility rises. The fall in historical volatility is due to the fact that movements close to the intervention band are bound to be smaller by the fact of the intervention bands' existence and the nature of intervention, thereby dampening the historical measure of volatility just at the time that a more predictive measure shows increases in volatility." – Linda Allen
Is implied volatility a superior predictor of future volatility? “It would seem as if the answer must be affirmative, since implied volatility can react immediately to market conditions. As a predictor of future volatility this is certainly an important feature.”
Why does implied volatility tend to be greater than historical volatility? According to Linda Allen, “empirical results indicate, strongly and consistently, that implied volatility is, on average, greater than realized volatility.” There are two common explanations.
Market inefficiency due to supply and demand forces.
Rational markets: implied volatility is greater than realized volatility due to stochastic volatility. “Consider the following facts: (i) volatility is stochastic; (ii) volatility is a priced source of risk; and (iii) the underlying model (e.g., the Black–Scholes model) is, hence, misspecified, assuming constant volatility. The result is that the premium required by the market for stochastic volatility will manifest itself in the forms we saw above – implied volatility would be, on average, greater than realized volatility.”
But implied volatility has shortcomings.
Implied volatility is model-dependent. A mis-specified model can result in an erroneous forecast.
“Consider the Black–Scholes option-pricing model. This model hinges on a few assumptions, one of which is that the underlying asset follows a continuous time lognormal diffusion process. The underlying assumption is that the volatility parameter is constant from the present time to the maturity of the contract. The implied volatility is supposedly this parameter. In reality, volatility is not constant over the life of the options contract. Implied volatility varies through time. Oddly, traders trade options in “vol” terms, the volatility of the underlying, fully aware that (i) this vol is implied from a constant volatility model, and (ii) that this very same option will trade tomorrow at a different vol, which will also be assumed to be constant over the remaining life of the contract.” –Linda Allen
At any given point in time, options on the same underlying may trade at different vols. An example is the [volatility] smile effect – deep out of the money (especially) and deep in the money (to a lesser extent) options trade at a higher volatility than at the money options.
Explain long horizon volatility/VaR and the process of mean reversion according to an AR(1) model. Explain the implications of mean reversion in returns and return volatility The key idea refers to the application of the square root rule (S.R.R. says that variance scales directly with time such that the volatility scales directly with the square root of time). The square root rule, while mathematically convenient, doesn’t really work in practice because it requires that normally distributed returns are independent and identically distributed (i.i.d.).
What I mean is, we use it on the exam, but in practice, when applying the square root rule to scaling delta normal VaR/volatility, we should be sensitive to the likely error introduced. Allen gives two scenarios that each illustrate “violations” in the use of the square root rule to scale volatility over time:
If there is mean reversion in returns: the square root rule overstates the long-run volatility.
If there is mean reversion in return volatility: when current volatility > long-run volatility, the square root rule overstates it; when current volatility < long-run volatility, it understates it.
For FRM purposes, three definitions of mean reversion are used:
Mean reversion in the asset dynamics. The price/return tends towards a long-run level; e.g., interest rate reverts to 5%, equity log return reverts to +8%
Mean reversion in variance. Variance reverts toward a long-run level; e.g., volatility reverts to a long-run average of 20%. We can also refer to this as negative autocorrelation, but it's a little trickier. Negative autocorrelation refers to the fact that a high variance is likely to be followed in time by a low variance. The reason it's tricky is due to short/long timeframes: the current volatility may be high relative to the long run mean, but it may be "sticky" or cluster in the short-term (positive autocorrelation) yet, in the longer term it may revert to the long run mean. So, there can be a mix of (short-term) positive and negative autocorrelation on the way being pulled toward the long run mean.
Autoregression in the time series. The current estimate (variance) is informed by (a function of) the previous value; e.g., in GARCH(1,1) and exponentially weighted moving average (EWMA), the variance is a function of the previous variance.
Square root rule
The simplest approach to extending the horizon is to use the "square root rule":

σ(r_{t,t+J}) = σ(r_{t,t+1}) × √J

J-period VaR = √J × 1-period VaR

For example, if the 1-period VaR is $10, then the 2-period VaR is $14.14 ($10 × square root of 2) and the 5-period VaR is $22.36 ($10 × square root of 5). Under the two assumptions below, VaR scales with the square root of time: extend one-period VaR to J-period VaR by multiplying by the square root of J. The square root rule (i.e., variance is linear with time) only applies under restrictive i.i.d. conditions. Extending the time horizon with the square root rule requires two key assumptions:

Random walk (acceptable)
Constant volatility (unlikely)
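The scaling arithmetic in the example above, as a short sketch:

```python
import math

def scale_var(var_one_period, J):
    """Square-root rule: J-period VaR = sqrt(J) * 1-period VaR (i.i.d. only)."""
    return var_one_period * math.sqrt(J)

print(round(scale_var(10.0, 2), 2))  # 14.14
print(round(scale_var(10.0, 5), 2))  # 22.36
```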