Cointegration Part II
Cointegration – Hypothesis testing and identification

Testing restrictions

We have so far seen estimation and related issues in a cointegrated VAR model. But, as in any empirical application, in a cointegrated VAR system we can systematically test many hypotheses of interest, mainly dictated by theoretical considerations. Although such restrictions on the cointegrating vectors and on the loadings matrix do not by themselves identify these vectors, they are of interest in their own right. As an example, by testing for the exclusion of a particular variable from a cointegrating vector, potentially irrelevant variables may be tested out of the system, thus reducing the dimension of the analysis. Many other interesting propositions, like the concept of weak exogeneity, can be tested on the loadings matrix too. In this module we shall see how to formulate such restrictions in two different ways – one in terms of free or unrestricted parameters, and the other, more traditional, way in terms of explicit restrictions. Which way one should follow is entirely a matter of taste; but, as a practice, we shall demonstrate the use of both.

———————

Formulating hypotheses as restrictions on β: Restrictions on the β vectors can be imposed in terms of s_i free parameters or in terms of m_i restrictions. We first specify them in terms of free parameters. Let ϕ_i be the (s_i × 1) redefined coefficient vector and H_i an (N1 × s_i) design matrix of known elements, where N1 is the dimension of Z_t in the VAR model, that is, N plus any deterministic variables and constant included in the VAR model, and i = 1, . . . , r. These are the notations for formulating the hypotheses in terms of free parameters. In terms of restrictions, we instead specify matrices R_i of size (N1 × m_i), where m_i = N1 − s_i is the number of restrictions on β_i, such that R′_1 β_1 = 0, . . . , R′_r β_r = 0.

Let us illustrate these with the help of a vector of variables, Z_t = (m^r_t, y^r_t, ∇p_t, R_m,t, R_b,t, D_s)′.
These variables are typically used in macro/monetary relations, and D_s is a dummy variable. Let us suppose there are 3 cointegrating relations. Note that, when we estimate, a cointegrating relation will contain all the variables in the Z vector. It may happen that the coefficient attached to a particular variable is very near zero, and we may like to check whether it can statistically be considered zero; this is exactly what we check in any hypothesis test. Hence the first cointegrating relation, β′_1 Z_t, should actually look like

    β′_1 Z_t = β11 m^r_t + β12 y^r_t + β13 ∇p_t + β14 R_m,t + β15 R_b,t + β16 D_s,   (Unrestricted).
But to illustrate hypothesis testing involving cointegrating vectors, let us suppose, for exposition's sake, that the first cointegrating relation looks like

    β′_1 Z_t = (m^r_t − y^r_t) − b1 (R_m,t − R_b,t) − b2 D_s,   (Restricted).
Note here that the inflation rate has been omitted. Let us demonstrate how we arrived at the restricted cointegrating vector from the unrestricted one, first by using only the free parameters and then by using the restrictions only. Treating (m^r_t − y^r_t) and (R_m,t − R_b,t) each as one variable, we may say that there are three free parameters. Note also that in this expositional cointegrating relation the coefficients are such that β12 = −β11 and β15 = −β14. This simply means that the coefficients of m^r_t and y^r_t are equal in size but opposite in sign, and the same is true of the coefficients of R_m,t and R_b,t. Now, to move from the unrestricted to the restricted vector, let us redefine the coefficients of β_1 such that β11 = −β12 = ϕ11, β14 = −β15 = ϕ12, and ϕ13 = β16. With this, the redefined coefficient vector ϕ_1 is a (3 × 1) vector. Hence,
    β_1 = H_1 ϕ_1 =

    [  1   0   0 ]               [  ϕ11 ]
    [ -1   0   0 ]   [ ϕ11 ]     [ -ϕ11 ]
    [  0   0   0 ] · [ ϕ12 ]  =  [   0  ]
    [  0   1   0 ]   [ ϕ13 ]     [  ϕ12 ]
    [  0  -1   0 ]               [ -ϕ12 ]
    [  0   0   1 ]               [  ϕ13 ]
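The mapping β_1 = H_1 ϕ_1 can be checked numerically. Below is a minimal numpy sketch; the values chosen for ϕ_1 are hypothetical, and the design matrix is the H_1 of this example.

```python
import numpy as np

# Hypothetical values for the free parameters phi1 = (phi11, phi12, phi13)'
phi1 = np.array([1.0, 0.5, 0.2])

# Design matrix H1 (6 x 3); rows ordered as (m^r, y^r, inflation, Rm, Rb, Ds)
H1 = np.array([
    [ 1,  0,  0],   # m^r gets  phi11
    [-1,  0,  0],   # y^r gets -phi11
    [ 0,  0,  0],   # inflation is excluded from this relation
    [ 0,  1,  0],   # Rm  gets  phi12
    [ 0, -1,  0],   # Rb  gets -phi12
    [ 0,  0,  1],   # Ds  gets  phi13
])

beta1 = H1 @ phi1   # -> (phi11, -phi11, 0, phi12, -phi12, phi13)'
```

Any choice of ϕ_1 produces a β_1 with the required equal-and-opposite coefficient pattern and a zero on inflation.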
With this we can write the expositional cointegrating relation as ϕ11 (m^r_t − y^r_t) + ϕ12 (R_m,t − R_b,t) + ϕ13 D_s. Notice next that in the expositional cointegrating relation we have normalized on the first variable – that is, we have set the coefficient of the first variable to 1 – so that the normalized cointegrating vector is (1, −1, 0, ϕ12/ϕ11, −ϕ12/ϕ11, ϕ13/ϕ11)′. We simplify further and set b1 = −ϕ12/ϕ11 and b2 = −ϕ13/ϕ11, so that the normalized cointegrating vector is (1, −1, 0, −b1, b1, −b2)′. Thus we get the first expositional restricted cointegrating relation as

    (1, −1, 0, −b1, b1, −b2) Z_t = (m^r_t − y^r_t) − b1 (R_m,t − R_b,t) − b2 D_s.

———————————

We shall now demonstrate, for the same vector, how to arrive at the restricted cointegrating vector using only the implied restrictions on the first cointegrating vector. First we fix the dimension of R_1: since s_1 = 3, we have m_1 = 6 − 3 = 3 and R_1 is a (6 × 3) matrix. In terms of restrictions, notice that −β11 = β12, so β11 + β12 = 0; β13 = 0; and −β14 = β15, so β14 + β15 = 0. With this we get the first restricted cointegrating vector as

              [ 1  1  0  0  0  0 ]
    R′_1 β_1 = [ 0  0  1  0  0  0 ] β_1 = 0.
              [ 0  0  0  1  1  0 ]

———————————

Using the same logic on the second restricted cointegrating relation, β′_2 Z_t = y^r_t − b3 (∇p_t − R_b,t), we can write it in terms of free parameters. Notice that there are only two free parameters, so s_2 = 2 and H_2 is a (6 × 2) matrix:

    β_2 = H_2 ϕ_2 =

    [ 0   0 ]               [   0  ]
    [ 1   0 ]   [ ϕ21 ]     [  ϕ21 ]
    [ 0   1 ] · [ ϕ22 ]  =  [  ϕ22 ]
    [ 0   0 ]               [   0  ]
    [ 0  -1 ]               [ -ϕ22 ]
    [ 0   0 ]               [   0  ]
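The free-parameter form and the restriction form describe the same linear subspace, linked by R_i = H_{i,⊥}. A quick numpy check of this orthogonality for the first vector (a sketch; the matrices are the ones used in this example):

```python
import numpy as np

# H1: free-parameter form; R1: restriction form (each column is one restriction)
H1 = np.array([[1, 0, 0], [-1, 0, 0], [0, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1]])
R1 = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1], [0, 0, 0]])
# Columns of R1 encode: beta11 + beta12 = 0, beta13 = 0, beta14 + beta15 = 0

# R_i = H_{i,perp}: every restriction annihilates every admissible beta1 = H1 phi1
assert np.all(R1.T @ H1 == 0)
```

Since R′_1 H_1 = 0, any β_1 = H_1 ϕ_1 automatically satisfies R′_1 β_1 = 0, whatever ϕ_1 is.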
Similarly, in terms of restrictions, with R_2 now a (6 × 4) matrix (m_2 = 4), we have

              [ 1  0  0  0  0  0 ]
    R′_2 β_2 = [ 0  0  1  0  1  0 ] β_2 = 0.
              [ 0  0  0  1  0  0 ]
              [ 0  0  0  0  0  1 ]

———————————
And, for the third restricted cointegrating relation, given by β′_3 Z_t = (R_m,t − R_b,t) + b4 D_s, one can show that, using free parameters and with H_3 a (6 × 2) matrix,

    β_3 = H_3 ϕ_3 =

    [ 0   0 ]               [   0  ]
    [ 0   0 ]   [ ϕ31 ]     [   0  ]
    [ 0   0 ] · [ ϕ32 ]  =  [   0  ]
    [ 1   0 ]               [  ϕ31 ]
    [-1   0 ]               [ -ϕ31 ]
    [ 0   1 ]               [  ϕ32 ]
And, in terms of restrictions, we have R_3 as a (6 × 4) matrix, arranged as

              [ 1  0  0  0  0  0 ]
    R′_3 β_3 = [ 0  1  0  0  0  0 ] β_3 = 0.
              [ 0  0  1  0  0  0 ]
              [ 0  0  0  1  1  0 ]

Note the following:

• R_i = H_{i,⊥}, that is, R′_i H_i = 0.
• Since such testing is normally done after the rank has been determined, these restrictions are null hypotheses on the stationary linear combinations of the variables.

———————————

Same restrictions on all cointegrating vectors

For some reason, we may be interested in testing for the exclusion of a particular variable from all cointegrating vectors. This results in the 'same' exclusion restrictions on all the cointegrating vectors. Or we may want to check whether some well known economic relation is common to all relations. To be more specific, suppose we want to test whether the relation (m^r_t − y^r_t) is common to all cointegrating relations. How do we test it? We continue with the same vector of variables, Z_t, as before and again assume three cointegrating relations; but we shall not refer back to the three expositional restricted cointegrating vectors used above. For this set-up, the cointegrating relations are simply given as β′_1 Z_t, β′_2 Z_t, β′_3 Z_t respectively. Writing out the first cointegrating relation explicitly, we have β′_1 Z_t = β11 m^r_t + β12 y^r_t + β13 ∇p_t + β14 R_m,t + β15 R_b,t + β16 D_s. The restriction we want to test implies that, in this vector, the first two terms collapse to β11 (m^r_t − y^r_t). Since our aim is to check whether this is common to all cointegrating relations, the other two vectors similarly contain β21 (m^r_t − y^r_t) and β31 (m^r_t − y^r_t). The restrictions thus imply that, in each vector, the coefficient on y^r_t equals minus the coefficient on m^r_t. With this, we can impose the restrictions either using free parameters or using the restrictions. Using free parameters, for example, we have a common (6 × 5) H matrix, and the restrictions can be expressed compactly as

    β = Hϕ = [ 1  0  0  0  0 ]   [ ϕ11 ϕ12 ϕ13 ]
             [-1  0  0  0  0 ]   [ ϕ21 ϕ22 ϕ23 ]
             [ 0  1  0  0  0 ] · [ ϕ31 ϕ32 ϕ33 ]
             [ 0  0  1  0  0 ]   [ ϕ41 ϕ42 ϕ43 ]
             [ 0  0  0  1  0 ]   [ ϕ51 ϕ52 ϕ53 ]
             [ 0  0  0  0  1 ]
However, expressing this in terms of restrictions alone is easier, since the R matrix is of dimension (N1 × m), where m is the number of restrictions in each vector. For the present case this simply means R′β = 0 with R′ = (1 1 0 0 0 0). And the transformed data vector for this set-up becomes

    H′Z_t = ( (m^r − y^r)_t, ∇p_t, R_m,t, R_b,t, D_s )′.
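Both the single common restriction and the implied data transformation can be sketched in numpy; the data values below are made up for illustration.

```python
import numpy as np

# 6 x 5 design matrix imposing (m^r - y^r) as one combined variable
H = np.array([
    [ 1, 0, 0, 0, 0],
    [-1, 0, 0, 0, 0],
    [ 0, 1, 0, 0, 0],
    [ 0, 0, 1, 0, 0],
    [ 0, 0, 0, 1, 0],
    [ 0, 0, 0, 0, 1],
])
R = np.array([[1, 1, 0, 0, 0, 0]]).T   # the single restriction per vector
assert np.all(R.T @ H == 0)            # R = H_perp

# A hypothetical observation Z_t = (m^r, y^r, inflation, Rm, Rb, Ds)'
Z = np.array([4.0, 3.0, 0.02, 0.05, 0.06, 1.0])
HZ = H.T @ Z   # -> ((m^r - y^r), inflation, Rm, Rb, Ds) = (1.0, 0.02, 0.05, 0.06, 1.0)
```

Pre-multiplying the data by H′ collapses m^r and y^r into the single variable (m^r − y^r), reducing the dimension from six to five.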
Note that, if this restriction is accepted, an important consequence is that, instead of using the two variables m^r_t and y^r_t separately, we use the relation (m^r_t − y^r_t) as one variable, so that the dimension of the system is now 5. Such restrictions are common in economic theory. The relation (m^r − y^r)_t is generally understood as measuring money income velocity; (R_m,t − R_b,t) is defined as the rate spread; and (R − ∇p) measures the real interest rate. However, the following important points need to be noted.

• The number of restrictions m that we can impose on the endogenous variables is constrained by the requirement that N − m ≥ r. This means that we can impose only one more restriction. For example, we can check whether the interest rate spread (R_m − R_b) is common to all cointegrating relations. And if this is also accepted, then the transformed data vector becomes H′Z_t = ( (m^r − y^r)_t, ∇p_t, (R_m − R_b)_t, D_s )′. This model will produce exactly three eigenvalues and is testable.

• Remember that the restricted model is not going to re-estimate the number of cointegrating relations. What we are interested in is the log likelihood value of the restricted model – given that we have already estimated three cointegrating vectors – against the 'unrestricted' model which identified these cointegrating relations with the full set of variables. So what we are basically looking at are restrictions within the estimated cointegrating vectors; the number of such vectors does not change in the restricted version, but the number of variables within each vector may. For instance, if the restriction that both (m^r − y^r)_t and the interest rate spread (R_m − R_b)_t are common to all three cointegrating vectors is accepted, then the number of variables in each cointegrating relation will be four instead of the original six. The cointegrating rank, however, will remain the same.

————————
Estimation in restricted cointegrated systems

We have not yet mentioned anything about how to estimate restricted models. Note that in all cases of hypothesis testing involving the cointegrated VAR model, the statistical test generally used is the likelihood ratio (LR) test. Thus, following general practice, we shall estimate both the restricted and the unrestricted versions and use the respective log likelihood values in the test.

————————

We outline below the steps involved in the estimation of the restricted model, subject to the condition that the same restriction is applied to all the cointegrating vectors. Let us take, for example, the case where the relation (m^r − y^r)_t is common to all the vectors.

• Estimate the unrestricted model following the steps given on pages 22 through 25 and calculate the log likelihood value using the first three eigenvalues.
• Before estimating the restricted model, note that the transformed VECM looks like

    ∇Z_t = αϕ′H′Z_{t−1} + Γ1 ∇Z_{t−1} + Γ2 ∇Z_{t−2} + · · · + Γ_{p−1} ∇Z_{t−p+1} + e_t,

implying that the unrestricted cointegrating matrix β has now been replaced by the restricted one, β^c = Hϕ.
• We again go through the same steps outlined on pages 25 through 27, with the only difference that in I.2 we regress H′Z_{t−1} on the short run dynamics, and then go through the rest of the steps.
• Now we calculate the LR statistic as follows, where L*_A and L*_0 are the maximized log likelihoods of the unrestricted and restricted models, λ̂_i the unrestricted and λ^c_i the restricted eigenvalues:

    2(L*_A − L*_0) = T Σ_{i=1}^{r} [ ln(1 − λ^c_i) − ln(1 − λ̂_i) ].

• Recall that the entire restricted modelling is done on stationary cointegrating relations, and hence all statistical testing can be done using Gaussian properties. Hence this statistic is distributed as χ²(j), where j = rm and m is the number of restrictions in each cointegrating vector; so there are j degrees of freedom. In the present case there was only one restriction per cointegrating vector, hence j = 3.
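The LR calculation in the last two bullets can be sketched as follows. The eigenvalues and sample size are hypothetical; the χ²(3) 5% critical value of 7.815 is a standard table entry.

```python
import numpy as np

# Hypothetical eigenvalues from the unrestricted and restricted estimations
lam_hat = np.array([0.45, 0.30, 0.18])   # unrestricted: lambda_hat_1..3
lam_c   = np.array([0.43, 0.27, 0.15])   # restricted: lambda^c_1..3 (never larger)
T, r, m = 100, 3, 1                      # sample size, rank, restrictions per vector

# LR = T * sum_i [ ln(1 - lambda^c_i) - ln(1 - lambda_hat_i) ]
LR = T * np.sum(np.log(1.0 - lam_c) - np.log(1.0 - lam_hat))
df = r * m                               # degrees of freedom j = r*m = 3
reject = LR > 7.815                      # chi-square(3) 5% critical value
```

Since restricting the model can only lower the eigenvalues, each bracketed term is nonnegative and the statistic is nonnegative by construction.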
• Suffice it to say that the steps above are, in general, those used to estimate any restricted model, so we shall not spell out the steps for every restricted model.

————————

There are many other interesting restrictions that can be tested on the β vectors. For illustration, we can conduct a joint test that the interest rate spread is common to all the relations and that the dummy variable can be excluded from all the relations. One can also check whether a particular relation is stationary. For example, one can check whether ∇p_t is stationary; in this case the cointegrating vector is known (a unit coefficient on ∇p_t and zeros elsewhere), and this known vector can be tested for its correctness. The basic framework is the same as before; hence we shall not pursue these here. Interested readers can refer to the book by K. Juselius on this subject. We shall next examine the implications of testing restrictions on the α matrix.

————————
Formulating hypotheses as restrictions on α:

Restrictions on the α matrix are closely associated with the concept of weak exogeneity. A test of a zero row in α is equivalent to testing whether a particular variable is weakly exogenous with respect to the long run parameters. When the null is accepted, that particular variable is singled out as a common driving trend. One can also test for a known vector in α.

—————

Testing for long run weak exogeneity

The hypothesis that a particular variable influences, but at the same time is not influenced by, the other variables in the system is called the 'no levels feedback' hypothesis, or the concept of weak exogeneity. The way to test it is as before, with either free parameters or restrictions. In terms of free parameters we express the hypothesis as

    H_α: α = Hα_c,

where α is an (N × r) matrix, H is an (N × s) matrix, s is the number of free rows, and α_c is an (s × r) matrix of unrestricted α coefficients. The equivalent form in terms of restrictions on the α matrix is

    H_α: R′α = 0,

where R = H_⊥.
Note the following important point.

• When the null of a zero row has been accepted, it means that the particular variable does not adjust to deviations from the long run relations, meaning that the variable can be considered a common stochastic trend. And since there cannot be more than N − r common trends in a system with r cointegrating relations, the number of such zero row restrictions can be at most (N − r).

Empirical illustration

Let us work again with the same Z_t vector, where now the vector is Z_t = (m^r_t, y^r_t, ∇p_t, R_m,t, R_b,t)′. We want to test whether the bond rate, R_b,t, is long run weakly exogenous for the long run parameters in the data – that is, we want to test whether α51 = α52 = α53 = 0. So here s = 4 and m = 1. Our VECM set-up will be

    ∇Z_t = ( ∇m^r_t, ∇y^r_t, ∇²p_t, ∇R_m,t, ∇R_b,t )′,

and the levels part of the VECM is

            [ α11 α12 α13 ]
            [  ·   ·   ·  ]   [ β11 Z_{1,t−1} + β12 Z_{2,t−1} + β13 Z_{3,t−1} + β14 Z_{4,t−1} + β15 Z_{5,t−1} ]
    ∇Z_t = ··· + [  ·   ·   ·  ] · [ β21 Z_{1,t−1} + β22 Z_{2,t−1} + β23 Z_{3,t−1} + β24 Z_{4,t−1} + β25 Z_{5,t−1} ]
            [  ·   ·   ·  ]   [ β31 Z_{1,t−1} + β32 Z_{2,t−1} + β33 Z_{3,t−1} + β34 Z_{4,t−1} + β35 Z_{5,t−1} ]
            [ α51 α52 α53 ]
                                                        = β′Z_{t−1}

Since our interest is in the equation for R_b,t, we write it out explicitly:

    ∇R_b,t = ··· + α51 (β11 Z_{1,t−1} + β12 Z_{2,t−1} + β13 Z_{3,t−1} + β14 Z_{4,t−1} + β15 Z_{5,t−1})
                 + α52 (β21 Z_{1,t−1} + β22 Z_{2,t−1} + β23 Z_{3,t−1} + β24 Z_{4,t−1} + β25 Z_{5,t−1})
                 + α53 (β31 Z_{1,t−1} + β32 Z_{2,t−1} + β33 Z_{3,t−1} + β34 Z_{4,t−1} + β35 Z_{5,t−1}).

If the null of α51 = α52 = α53 = 0 is accepted in the equation for R_b,t, then we consider that the bond market is weakly exogenous. Hence we restrict α the following way:

                 [ 1 0 0 0 ]                           [ α^c11 α^c12 α^c13 ]   [ β′_1 Z_{t−1} ]
                 [ 0 1 0 0 ]   [ α^c11 α^c12 α^c13 ]   [   ·     ·     ·   ]   [ β′_2 Z_{t−1} ]
    ∇Z_t = ··· + [ 0 0 1 0 ] · [   ·     ·     ·   ] β′Z_{t−1} = ··· + [   ·     ·     ·   ] · [ β′_3 Z_{t−1} ]
                 [ 0 0 0 1 ]   [ α^c41 α^c42 α^c43 ]   [ α^c41 α^c42 α^c43 ]
                 [ 0 0 0 0 ]                           [   0     0     0   ]
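The restriction α = Hα_c can be sketched in numpy; the α_c entries below are made up for illustration.

```python
import numpy as np

# Hypothetical unrestricted loadings for the four adjusting equations
alpha_c = np.array([
    [-0.20,  0.05,  0.10],
    [ 0.15, -0.10,  0.02],
    [ 0.01,  0.30, -0.05],
    [-0.02,  0.04,  0.25],
])

# H (5 x 4): first four rows free, fifth (bond rate) row forced to zero
H = np.vstack([np.eye(4), np.zeros((1, 4))])
alpha = H @ alpha_c          # last row is (0, 0, 0): Rb does not adjust

# Equivalent restriction form with R' = (0, 0, 0, 0, 1)
R = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
assert np.allclose(R @ alpha, 0.0)
```

Whatever the values in α_c, the bond rate equation carries no levels feedback, which is exactly the null being tested.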
———————————–

The same specification in terms of the R matrix is easily seen to be R′ = (0, 0, 0, 0, 1).
Some issues about the estimation of the VECM under this restriction are worth noting.

• Assuming that the bond rate is weakly exogenous implies that valid statistical inference on β can be obtained from the four dimensional system consisting of all variables but the bond rate. Such an analysis is called partial system analysis; the model that uses those four variables is called the partial model, and the lone equation explaining the bond rate is called the marginal model. Thus evidence of weak exogeneity gives us a condition under which a partial model can be used to estimate β efficiently, without loss of information. This argument is based on a partitioning of the density function into conditional and marginal densities. We shall explain this below.

• The implication of the above property is that, when there are m zero rows in the α matrix (note that m ≤ N − r), we can partition the N equations into N − m equations that exhibit levels feedback and m equations that do not. Since the m equations do not contain information about the long run relations, one can estimate a system of N − m variables conditional on the m marginal models of the weakly exogenous variables.

• Note that, ironically, if we want to estimate β from a partial model, we have to estimate the full system first and test for weak exogeneity of a variable! If the null has been accepted, it may then be profitable to re-estimate the partial model conditioned on the weakly exogenous variable. Why re-estimate? Because re-estimating a partial system, after accepting the null of weak exogeneity of a variable, sometimes results in a more balanced model. This may be true especially if there are nonlinearities or nonconstant parameters in the system.

• However, in many cases it may be of interest to estimate the partial model from the outset. In our systems we typically include variables which we know a priori to be weakly exogenous. For example, we know that US interest rates affect Indian rates, but we also know for sure that Indian rates do not influence US rates at all! A variable like the oil price affects all other macro variables in a system without being affected by them. If we have such variables in our model, it is profitable to go for the partial model right from the outset. This is normally the strategy adopted by researchers, especially when they have a large number of variables.

• However, if we go for the partial model on such a priori considerations from the outset, we have to refer to a different set of asymptotic critical values. Assuming that our initial classification of weak exogeneity is correct, we can refer to the tables calculated by Johansen et al. in the Journal of Business and Economic Statistics, 1998, pp. 388-399.

• Now, just what is this partial model, and how is it related to the concept of weak exogeneity? To understand the link, we digress a bit and recall some basic statistical results. Details follow.

—————————–

Weak exogeneity and partial models

We recall some basic statistical results.

Marginal distributions: Let X ≡ (X1, X2) be a bivariate random vector with joint distribution function F(x1, x2). The question that naturally arises is whether we could separate X1 and X2 and consider them as individual random variables. The answer to this question leads to the concept of the marginal distribution.
Given that the probability model has been defined in terms of the joint density function, it is necessary to define the marginal density functions from it. The marginal density functions of X1 and X2 are

    f1(x1) = ∫_{−∞}^{∞} f(x1, x2) dx2,
    f2(x2) = ∫_{−∞}^{∞} f(x1, x2) dx1.
Literally, this means the marginal density of Xi (i = 1, 2) is obtained by integrating out, or throwing out, Xj (j ≠ i) from the joint density. The algebra behind this assertion is available in any elementary book on statistics.

Conditional distributions: Another useful idea is to derive the density of a subset of the random vector by conditioning on some other subset, given the joint density; this leads us to the concept of conditional distributions. This is of great value in the context of the probability model because it offers a way to decompose the joint density function. Formally, the conditional density of X2 given X1 is defined through

    f(x1, x2) = f1(x1) · f(x2 | x1).

Needless to say, if X1 and X2 are independent, f(x1, x2) = f1(x1) f2(x2).
Given the importance of these concepts, we shall define them in the context of the bivariate normal density function, which takes the form f(x1, x2; μ, Σ); we write X ∼ N(μ, Σ), where

    μ = [ μ1 ]     Σ = [ σ1²  σ12 ]          σ12
        [ μ2 ],        [ σ21  σ2² ],    ρ = ———— ,
                                            σ1 σ2

so that det Σ = σ1² σ2² (1 − ρ²) > 0 for −1 < ρ < 1. The marginal and conditional distributions in this case are

    X1 ∼ N(μ1, σ1²),   X2 ∼ N(μ2, σ2²),                              (1)
    (X1 | X2) ∼ N( μ1 + ρ (σ1/σ2)(x2 − μ2),  σ1² (1 − ρ²) ),         (2)
    (X2 | X1) ∼ N( μ2 + ρ (σ2/σ1)(x1 − μ1),  σ2² (1 − ρ²) ).         (3)
How does one retrieve the model implied by these distributions? From (2) we can write the model for X1 given X2 as

    X1 = a + b X2 + υ1,   X2 = μ2 + υ2,   υ2 ∼ N(0, σ2²),

where a = μ1 − b μ2, b = ρ σ1/σ2, and υ1 ∼ N(0, σ²) with σ² = σ1² (1 − ρ²). Similarly, from (3) we can write the model for X2 given X1 as

    X2 = a* + b* X1 + υ2*,   X1 = μ1 + υ1*,   υ1* ∼ N(0, σ1²),

where a* = μ2 − b* μ1, b* = ρ σ2/σ1, and υ2* ∼ N(0, σ̃²) with σ̃² = σ2² (1 − ρ²).

————————–

Now we can generalize these points to an N-vector to get the multivariate marginal and conditional density functions. Let X ∼ N(μ, Σ) with the partition

    X = [ X1 ]     μ = [ μ1 ]     Σ = [ Σ11  Σ12 ]
        [ X2 ],        [ μ2 ],        [ Σ21  Σ22 ].

The marginal distributions of X1 and X2 are easily seen to be

    X1 ∼ N(μ1, Σ11)   and   X2 ∼ N(μ2, Σ22),

and, for the same partition, the conditional distributions are given by

    (X1 | X2) ∼ N( μ1 + Σ12 Σ22⁻¹ (X2 − μ2),  Σ11 − Σ12 Σ22⁻¹ Σ21 ),
    (X2 | X1) ∼ N( μ2 + Σ21 Σ11⁻¹ (X1 − μ1),  Σ22 − Σ21 Σ11⁻¹ Σ12 ).

——————————–
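These conditional formulae are easy to implement and check. Below is a small numpy sketch of the multivariate rule, verified against the bivariate formula (3) with hypothetical parameter values (σ1 = 2, σ2 = 1, ρ = 0.6).

```python
import numpy as np

def conditional_normal(mu, Sigma, idx1, idx2, x2):
    """Mean and covariance of X1 | X2 = x2 for X ~ N(mu, Sigma);
    idx1, idx2 are index lists partitioning X into X1 and X2."""
    mu1, mu2 = mu[idx1], mu[idx2]
    S11 = Sigma[np.ix_(idx1, idx1)]
    S12 = Sigma[np.ix_(idx1, idx2)]
    S21 = Sigma[np.ix_(idx2, idx1)]
    S22inv = np.linalg.inv(Sigma[np.ix_(idx2, idx2)])
    mean = mu1 + S12 @ S22inv @ (x2 - mu2)
    cov = S11 - S12 @ S22inv @ S21     # Schur complement
    return mean, cov

# Check against (3): X2 | X1 with mu = (1, 2), sigma1 = 2, sigma2 = 1, rho = 0.6
mu = np.array([1.0, 2.0])
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
mean, cov = conditional_normal(mu, Sigma, [1], [0], np.array([3.0]))
# Formula (3): mu2 + rho*(sigma2/sigma1)*(x1 - mu1) = 2 + 0.6*0.5*2 = 2.6
#              sigma2^2 * (1 - rho^2) = 1 * (1 - 0.36) = 0.64
```

The same function applies unchanged to any partition of an N-dimensional normal vector, which is exactly the step used next for the VECM.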
Now we shall use these concepts to establish how to derive the partial model in the cointegrated VAR framework. Let Z_t = (Z′_1t, Z′_2t)′ and partition

    α = [ α1 ]     e_t = [ e1t ]     Γ_i = [ Γ1i ]
        [ α2 ],          [ e2t ],          [ Γ2i ].

With these partitions given, the VECM splits as

    ∇Z1t = α1 β′Z_{t−1} + Σ_{i=1}^{p−1} Γ1i ∇Z_{t−i} + e1t,
    ∇Z2t = α2 β′Z_{t−1} + Σ_{i=1}^{p−1} Γ2i ∇Z_{t−i} + e2t.

For this partition scheme, we have

    Ω = [ Ω11  Ω12 ]
        [ Ω21  Ω22 ].

And the means, conditional on the past, are

    μ1 = E(∇Z1t | past) = α1 β′Z_{t−1} + Σ_{i=1}^{p−1} Γ1i ∇Z_{t−i},
    μ2 = E(∇Z2t | past) = α2 β′Z_{t−1} + Σ_{i=1}^{p−1} Γ2i ∇Z_{t−i}.
Mapping into the definition of the conditional density given before, set X1 = ∇Z1t, X2 = ∇Z2t and Σ = Ω. Then, from the formula for the conditional density of (X1 | X2), the conditional model for ∇Z1t, given ∇Z2t and given the past, is

    ∇Z1t = ω ∇Z2t + (α1 − ω α2) β′Z_{t−1} + Σ_{i=1}^{p−1} Γ̃1i ∇Z_{t−i} + ẽ1t,

where

    ω = Ω12 Ω22⁻¹,   Γ̃1i = Γ1i − ω Γ2i,   ẽ1t = e1t − ω e2t,

and this partial model has error variance Ω11.2 = Ω11 − Ω12 Ω22⁻¹ Ω21. Since β enters the equations for both ∇Z1t and ∇Z2t, we cannot analyse the conditional model for ∇Z1t alone unless α2 = 0. If we can show this, then

    ∇Z1t = ω ∇Z2t + α1 β′Z_{t−1} + Σ_{i=1}^{p−1} Γ̃1i ∇Z_{t−i} + ẽ1t,
    ∇Z2t = Σ_{i=1}^{p−1} Γ2i ∇Z_{t−i} + e2t.
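The algebra of the partial model amounts to a Schur-complement calculation on Ω. A numpy sketch with a randomly generated (hence hypothetical) positive-definite Ω, partitioned with ∇Z1t as the first four variables and ∇Z2t as the last:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Omega = A @ A.T + 5.0 * np.eye(5)     # a positive-definite 5 x 5 error covariance

O11, O12 = Omega[:4, :4], Omega[:4, 4:]
O21, O22 = Omega[4:, :4], Omega[4:, 4:]

omega = O12 @ np.linalg.inv(O22)              # coefficient on dZ2 in the partial model
O11_2 = O11 - O12 @ np.linalg.inv(O22) @ O21  # conditional variance Omega_11.2

# Conditioning cannot increase the variance: O11 - O11_2 is positive semidefinite
assert np.all(np.linalg.eigvalsh(O11 - O11_2) >= -1e-10)

# The transformed error e~1t = e1t - omega e2t is uncorrelated with e2t:
# Cov(e~1t, e2t) = O12 - omega O22 = 0
assert np.allclose(O12 - omega @ O22, 0.0)
```

The second assertion is the reason the conditioning works: after subtracting ω ∇Z2t, the remaining error carries no information about the marginal equation.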
With this, a fully efficient estimate of β can be obtained from the partial model given by the equation for ∇Z1t. We estimate it by the usual method of concentrating out the short run dynamics as well as ∇Z2t. Such an estimation delivers a total of (N − m) eigenvalues, from which we use the r nonzero eigenvalues to determine the cointegrating vectors. More details are available in the book by Johansen.
Identification in cointegrated systems

With nonstationary data, cointegration is a real possibility. We saw in the previous discussion the issues connected with a cointegrated data set-up. Recall that one could get r < N cointegrating relations from a vector of N variables. In a cointegrated model we have both a long run structure (given by the cointegrating relations) and a short run structure (given by the equations in differences). The classical concept of identification is related to prior economic structure; but here Johansen approaches identification as a purely statistical problem and lists three different meanings:

• generic identification, which is related to a statistical model;
• empirical identification, which is related to the estimated parameter values, so that we do not accept just any overidentifying restriction on the parameters; and
• economic identification, which relates to the economic interpretability of the estimated coefficients of an empirically identified structure.

Ideally all three must be fulfilled for an empirical model to be considered satisfactory. We shall start with a VAR and the associated VECM. These being reduced form models, how does one retrieve the so-called structure behind them? Let us demonstrate this with the simplest of VECM models:

    ∇Z_t = αβ′Z_{t−1} + Γ1 ∇Z_{t−1} + e_t,   e_t ∼ N(0, Ω).

A structural model is defined by the economic formulation of the problem and can, for instance, be given by

    B0 ∇Z_t = B β′Z_{t−1} + B1 ∇Z_{t−1} + v_t,   v_t ∼ N(0, Σ),

with

    Γ1 = B0⁻¹B1,   α = B0⁻¹B,   e_t = B0⁻¹v_t,   Ω = B0⁻¹ Σ B0⁻¹′.

In a VECM, for a unique identification of the short run structural parameters, given by the set {B0, B1, B, Σ}, we normally have to impose N(N − 1) restrictions on the N equations. Note, however, that the set of long run parameters is the same in both forms, implying that identification of the long run structure can be done in either form. In order to identify the long run relations, we formulate restrictions on the individual cointegrating relations. The problem of identifying the long run structure is similar to the one encountered in econometrics in connection with identifying a simultaneous equations system. The classical identification result is given by a rank condition (see Goldberger, 1964, Econometric Theory), and this has been extended to the VECM context by Johansen (1995, Journal of Econometrics, 69, 111-132). Just as in the classical case, where we impose restrictions on the parameters such that the parameter matrix satisfies a rank condition, in the VECM context we have to impose (r − 1) restrictions on each cointegrating vector, so that in general we need to impose r(r − 1) just-identifying restrictions on β. Since Goldberger's scheme of identification is based on parameters, which are generally unknown, Johansen defines the rank conditions on the known matrices H and R. The idea is to choose these matrices in such a way that the linear restrictions implied by them satisfy a rank condition. Let us demonstrate this with the help of both free parameters and restrictions in a cointegrating relation. Accordingly, let H_i = R_{i,⊥} be an (N1 × s_i) matrix of full rank, and let R_i be an (N1 × m_i) matrix of full rank, with s_i + m_i = N1, so that R′_i H_i = 0.
There are thus m_i restrictions and s_i free parameters to be estimated in the i-th relation. The cointegrating relations are assumed to satisfy the restrictions R′_i β_i = 0 or, equivalently, β_i = H_i ϕ_i for some s_i-vector ϕ_i; that is, β = (H1 ϕ1, . . . , Hr ϕr),
where the matrices H1, . . . , Hr express some linear economic hypotheses to be tested against the data. Herein we specify the condition for identification: the first cointegrating relation is identified if and only if

    rank(R′_1 β_1, R′_1 β_2, . . . , R′_1 β_r) = rank(R′_1 H1 ϕ1, . . . , R′_1 Hr ϕr) = r − 1.

The intuitive meaning behind this is very simple. When applying the restrictions of one cointegrating vector to the other cointegrating vectors, we get a matrix of rank r − 1. Hence it is not possible to obtain a linear combination of β_2, . . . , β_r constructed in the same way as β_1 that could be confused with β_1: β_1 can be recognized among all linear combinations of β_1, . . . , β_r as the only one that satisfies the restrictions R_1. But how does one check this if the parameter values are unknown? And we can estimate the parameters only if the restrictions are identifying. So, to make the condition operational, Johansen provides a condition for checking which of the cointegrating vectors are identified based only on the known coefficient matrices R_i and H_i. The condition is: for each i, for all k = 1, . . . , r − 1 and any set of indices 1 ≤ i1 < . . . < ik ≤ r not containing i, it holds that

    rank(R′_i H_{i1}, . . . , R′_i H_{ik}) ≥ k.

If the condition is satisfied for a particular i, then the restrictions identify that particular cointegrating vector. If all β_i similarly satisfy this rank condition, then the model is generically identified. Basically, this is the first criterion that must be satisfied if one is interested in identifying a particular cointegrating relation. As an example, consider r = 2, where the condition that must be satisfied is

    r_{i.j} = rank(R′_i H_j) ≥ 1,   i ≠ j.

For r = 3, we have the conditions

    r_{i.j}  = rank(R′_i H_j) ≥ 1,          i ≠ j,
    r_{i.jm} = rank(R′_i (H_j, H_m)) ≥ 2,   i, j, m all different.
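Johansen's operational condition is purely a matter of matrix ranks and is easy to automate. The sketch below (the helper name `generically_identified` is ours, not standard) checks the condition for every vector, followed by a toy three-variable, two-relation example.

```python
import numpy as np
from itertools import combinations

def generically_identified(R_list, H_list):
    """Check rank(R_i' [H_j1 ... H_jk]) >= k for every i and every set of
    k indices j1 < ... < jk not containing i (k = 1, ..., r-1)."""
    r = len(R_list)
    for i in range(r):
        others = [j for j in range(r) if j != i]
        for k in range(1, r):
            for combo in combinations(others, k):
                M = np.hstack([R_list[i].T @ H_list[j] for j in combo])
                if np.linalg.matrix_rank(M) < k:
                    return False
    return True

# Toy example, N = 3, r = 2: beta1 ~ (1, -1, 0)', beta2 ~ (0, 0, 1)'
H1 = np.array([[1.0], [-1.0], [0.0]])
R1 = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # R1 = H1_perp
H2 = np.array([[0.0], [0.0], [1.0]])
R2 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # R2 = H2_perp

print(generically_identified([R1, R2], [H1, H2]))   # True
print(generically_identified([R1, R1], [H1, H1]))   # False: identical restrictions
```

The second call fails precisely because two relations carrying the same restrictions cannot be told apart, which is the intuition behind the rank condition.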
So, if one is interested in identifying structures in a cointegrated model, the above rank condition should be verified before one proceeds with model estimation. Note that this condition handles only equation-by-equation restrictions. More specifically, only exclusion or zero restrictions are allowed; cross-equation restrictions or restrictions on the covariance matrix are not.

————————

We shall fix this with an example. Let us suppose that we have the following set of variables: Z_t = (p1, p2, e12, i1, i2)′, where the first two variables are prices in two different countries A and B, e12 is the exchange rate between the two countries, and the last two are the interest rates prevailing in the two countries. The vector (p1 − p2, ∇p1, e12, i1, i2)′ is found to be I(1), and the restrictions below are formulated on this transformed vector. Let us suppose we have found three cointegrating relations. We want to check whether these are identified by restrictions that reflect some stylized facts, like the long run PPP relation, (p1 − p2 − e12), and the uncovered interest rate parity (UIP) relation, (i1 − i2). It was found that, while PPP stationarity was accepted, UIP stationarity was rejected. Our intention is next to check whether a combination or modification of these two hypotheses would give us stationary relations. Accordingly, let us impose the following restrictions: β = (H1 ϕ1, H2 ϕ2, H3 ϕ3), where

    H1 = [  1  0 ]     H2 = [ 1  0  0 ]     H3 = [ 0  0 ]
         [  0  0 ]          [ 0  0  0 ]          [ 1  0 ]
         [ -1  0 ]          [ 0  1  0 ]          [ 0  0 ]
         [  0  1 ]          [ 0  0  0 ]          [ 0  1 ]
         [  0 -1 ]          [ 0  0  1 ]          [ 0  0 ].
The first describes a relation between the real exchange rate and the interest differential, the second is a modified PPP relation, and the third describes a relation between price inflation and the nominal interest rate. The rank condition will tell us whether these restrictions identify the parameters of the long run relations. We do not know the parameter values, and hence we go for generic identification. With R_i = H_{i,⊥}, we calculate the matrices R′_i H_j and R′_i (H_j : H_m) and find that

    rank(R′_1 H2) = 2,   rank(R′_1 H3) = 2,   rank(R′_1 (H2 : H3)) = 3,
    rank(R′_2 H1) = 1,   rank(R′_2 H3) = 2,   rank(R′_2 (H1 : H3)) = 2,
    rank(R′_3 H1) = 2,   rank(R′_3 H2) = 3,   rank(R′_3 (H1 : H2)) = 3.
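These ranks can be reproduced numerically. The sketch below builds the three H matrices of the example, obtains each R_i = H_{i,⊥} as a null-space basis via the SVD, and checks the stated ranks.

```python
import numpy as np

def perp(H):
    """Orthonormal basis of the orthogonal complement of span(H): perp(H).T @ H = 0."""
    _, _, vh = np.linalg.svd(H.T)
    return vh[H.shape[1]:].T

# Design matrices on the transformed vector (p1 - p2, Dp1, e12, i1, i2)'
H1 = np.array([[1, 0], [0, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
H2 = np.array([[1, 0, 0], [0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]], dtype=float)
H3 = np.array([[0, 0], [1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)

R1, R2, R3 = perp(H1), perp(H2), perp(H3)
rk = np.linalg.matrix_rank

assert rk(R1.T @ H2) == 2 and rk(R1.T @ H3) == 2
assert rk(R2.T @ H1) == 1 and rk(R2.T @ H3) == 2
assert rk(R3.T @ H1) == 2 and rk(R3.T @ H2) == 3
assert rk(R1.T @ np.hstack([H2, H3])) == 3
assert rk(R2.T @ np.hstack([H1, H3])) == 2
assert rk(R3.T @ np.hstack([H1, H2])) == 3
```

The particular entries of each R_i depend on the basis chosen for the complement, but the ranks, which are all the condition needs, do not.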
Thus we see that, for the proposed set of restrictions, all the cointegrating relations satisfy the generic rank condition and are identified. Next one may proceed to the empirical and economic identification of these relations.

———————

Just identification and normalisation

Johansen (1994, Journal of Econometrics, 63, 7-36) suggests a normalisation procedure for the cointegrating vectors that is also just identifying. In this case, the generic rank condition for identification of the cointegrating vectors is automatically satisfied. However, one still has to justify the restrictions imposed by the normalisation scheme as economically meaningful. If not, one can impose restrictions that satisfy some economic theory; but in such cases one may have to test whether the restrictions satisfy the generic rank condition. The necessity of imposing restrictions to identify the cointegrating relations arises because the cointegrating linear combinations are not unique. For example, we can always rewrite Π = αβ′ as

    Π = α Q Q⁻¹ β′ = α̃ β̃′,   where α̃ = αQ and β̃ = β (Q⁻¹)′.

We have to choose Q in such a way that it imposes (r − 1) restrictions on each cointegrating vector, so that the rank condition for generic identification is automatically satisfied. Johansen suggests Q = β′_1, where β_1 is the (r × r) nonsingular matrix defined by the partition β′ = [β′_1 : β′_2]. In this case

    αβ′ = (αβ′_1)(β′_1)⁻¹β′ = α̃ β̃′,   where α̃ = αβ′_1 and β̃′ = [ I_r : (β′_1)⁻¹ β′_2 ].

For example, assume that β is a (5 × 3) matrix and partition it the following way:

        [ β11  β12  β13 ]
        [ β21  β22  β23 ]   [ β_1 ]
    β = [ β31  β32  β33 ] = [ ··· ],
        [ β41  β42  β43 ]   [ β_2 ]
        [ β51  β52  β53 ]

with β_1 the upper (3 × 3) block and β_2 the lower (2 × 3) block, so that β̃ = β β_1⁻¹ implies

         [  1     0     0  ]
         [  0     1     0  ]
    β̃ =  [  0     0     1  ]
         [ β̃41  β̃42  β̃43 ]
         [ β̃51  β̃52  β̃53 ].
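The normalisation β̃ = β β_1⁻¹ and its invariance property can be sketched with a randomly drawn (hence hypothetical) β:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = rng.normal(size=(5, 3))    # an arbitrary (5 x 3) cointegration matrix
beta1 = beta[:3, :]               # leading (3 x 3) block, nonsingular almost surely

beta_tilde = beta @ np.linalg.inv(beta1)
# The top block is now I_3: two zeros and one unit coefficient per column
assert np.allclose(beta_tilde[:3, :], np.eye(3))

# The product alpha beta' (hence the cointegration space) is unchanged
alpha = rng.normal(size=(5, 3))
alpha_tilde = alpha @ beta1.T     # alpha~ = alpha beta1'
assert np.allclose(alpha @ beta.T, alpha_tilde @ beta_tilde.T)
```

The second assertion shows why the normalisation is 'free': it rotates (α, β) without changing Π = αβ′, which is all the data can identify.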
Notice that our choice of Q = β′_1 in our example has in fact imposed two zero restrictions and one normalisation on each cointegrating relation.

We shall consider a just identified structure describing long run relationships involving the endogenous variables real money, inflation and the short-term interest rate, and the exogenous variables real income and the bond rate, corresponding to the following restrictions on β: β = (H1 ϕ1, H2 ϕ2, H3 ϕ3), where

    H1 = [ 1 0 0 0 ]     H2 = [ 0 0 0 0 ]     H3 = [ 0 0 0 0 ]
         [ 0 1 0 0 ]          [ 1 0 0 0 ]          [ 1 0 0 0 ]
         [ 0 0 0 0 ]          [ 0 1 0 0 ]          [ 0 0 0 0 ]
         [ 0 0 0 0 ]          [ 0 0 0 0 ]          [ 0 1 0 0 ]
         [ 0 0 1 0 ]          [ 0 0 1 0 ]          [ 0 0 1 0 ]
         [ 0 0 0 1 ]          [ 0 0 0 1 ]          [ 0 0 0 1 ].
H1 picks up real money, H2 explains the inflation rate and H3 explains the short rate, and the two weakly exogenous variables enter all three relations. Note that this structure describes the long run 'reduced form' model for the endogenous variables in terms of the weakly exogenous variables. Note also that no testing of the generic rank condition is involved in this case, because the r − 1 = 2 restrictions per vector have been achieved by linear combinations of the unrestricted cointegrating relations, that is, by rotating the cointegrating space.