We shall re-do the example from the lecture slides.
Say we have data collected on a monthly basis over five years (i.e., 60 months) on the following variables:
Y - market capitalization of Company B ($000)
X - the price of oil (dollars per barrel) above the benchmark price
suppressPackageStartupMessages({
library(readxl)
library(gdata)
})
txt1 <- "http://uosis.mif.vu.lt/~rlapinskas/(data%20R&GRETL/"
txt2 <- "badnews.xls"
tmp = tempfile(fileext = ".xls")
#Download the file
download.file(url = paste0(txt1, txt2),
destfile = tmp, mode = "wb")
#Read it as an excel file
BADNEWS <- read_excel(path = tmp)
BADNEWS <- rename.vars(data.frame(BADNEWS),
from = c("Y", "X"),
to = c("capit", "price"), info = FALSE)
BADNEWS <- ts(BADNEWS, freq = 12)
plot.ts(BADNEWS)
Since this is time series data and it is likely that previous months' news about the oil price affects current market capitalization, it is necessary to include lags of X in the regression. Below we present OLS estimates of the coefficients in a distributed lag model in which market capitalization is allowed to depend on present news about the oil price and on news up to \(q_{max} = 4\) months ago. That is: \[capit_t = \alpha + \beta_0 price_t + \beta_1 price_{t-1} + ... + \beta_4 price_{t-4} + \epsilon_t\]
suppressPackageStartupMessages({library(dynlm)})
mod.L4 <- dynlm(capit ~ L(price, 0:4), data = BADNEWS)
round(summary(mod.L4)$coef, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 91173.3150 1949.8502 46.7591 0.0000
## L(price, 0:4)0 -131.9943 47.4361 -2.7826 0.0076
## L(price, 0:4)1 -449.8597 47.5566 -9.4595 0.0000
## L(price, 0:4)2 -422.5183 46.7778 -9.0324 0.0000
## L(price, 0:4)3 -187.1041 47.6409 -3.9274 0.0003
## L(price, 0:4)4 -27.7710 47.6619 -0.5827 0.5627
Just looking at the coefficient values, what can we conclude about the effect of news about the oil price on Company B’s market capitalization?
Increasing the oil price by one dollar per barrel in a given month is associated with:

an immediate fall in market capitalization of $131,994, ceteris paribus;
a fall in market capitalization of $449,860 one month later, ceteris paribus;
a fall in market capitalization of $422,518 two months later, ceteris paribus;
a fall in market capitalization of $187,104 three months later, ceteris paribus;

and so on. To provide some intuition about what the ceteris paribus condition implies in this context, note that, for example, we can also express the second statement as: 'Increasing the oil price by one dollar in a given month will tend to reduce the market capitalization in the following month by $449,860, assuming that no other change in the oil price occurs'.
Since the p-value corresponding to the explanatory variable \(price_{t-4}\) is greater than 0.05, we cannot reject the null hypothesis that \(\beta_4 = 0\) at the 5% level of significance. Accordingly, we drop this variable from the model and re-estimate it with the lag length set to 3, yielding the following results:
mod.L3 <- dynlm(capit ~ L(price, 0:3), data = BADNEWS)
round(summary(mod.L3)$coef, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 90402.2210 1643.1828 55.0165 0.0000
## L(price, 0:3)0 -125.9000 46.2405 -2.7227 0.0088
## L(price, 0:3)1 -443.4918 45.8816 -9.6660 0.0000
## L(price, 0:3)2 -417.6089 45.7332 -9.1314 0.0000
## L(price, 0:3)3 -179.9043 46.2520 -3.8896 0.0003
The p-value for testing \(\beta_3 = 0\) is 0.0003, which is much less than 0.05. We therefore conclude that the variable \(price_{t-3}\) does indeed belong in the distributed lag model. Hence \(q = 3\) is the lag length we select for this model.
In a formal report, we would present this table of results or the equation: \[capit_t = 90402.22 -125.9 price_t -443.49 price_{t-1} -417.61 price_{t-2} -179.90 price_{t-3}\]
Since these results are similar to those discussed above, we will not repeat their interpretation.
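As an aside (not part of the original example), the cumulative effect of a sustained one-dollar increase in the oil price implied by the chosen model is simply the sum of its lag coefficients. A small sketch using the mod.L3 object estimated above (the result is in $000):
#Total (cumulative) effect of a sustained $1 oil price increase, in $000
sum(coef(mod.L3)[-1])  # about -1166.9, i.e. a drop of roughly $1.17 million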
In general, considering the following model:
\[Y_t = \alpha + \phi Y_{t-1} + \beta_0 X_t + \beta_1 X_{t-1} + \epsilon_t, \quad 0 < \phi < 1\]
The short-run multiplier can be calculated by taking the partial derivative: \[\dfrac{\partial Y_t}{\partial X_t} = \beta_0\] which shows that a one-unit increase in \(X\) has an immediate impact of \(\beta_0\) units on \(Y\).
The long-run multiplier (or equilibrium multiplier):
In order to calculate the long-run multiplier, we need to see how a one-unit increase in \(X\) at time \(t\) affects \(Y\) as \(t\) increases.
The effect after one period is: \[\dfrac{\partial Y_{t+1}}{\partial X_t} = \phi \dfrac{\partial Y_t}{\partial X_t} + \beta_1 = \phi \beta_0 + \beta_1\]
Similarly, after two periods:
\[\dfrac{\partial Y_{t+2}}{\partial X_t} = \phi \dfrac{\partial Y_{t+1}}{\partial X_t} = \phi (\phi \beta_0 + \beta_1)\]
and so on. This shows that after the first period, the effect is decreasing if \(|\phi| < 1\).
Imposing this so-called stability condition (i.e. \(|\phi| < 1\)) allows us to determine the long-run effect of a unit change in \(X_t\).
\[\beta_0 + (\phi \beta_0 + \beta_1) + \phi (\phi \beta_0 + \beta_1) + ... = \beta_0 + (1 + \phi + \phi^2 +...)(\phi\beta_0 + \beta_1) = \beta_0 + \dfrac{\phi\beta_0 + \beta_1}{1 - \phi} = \dfrac{\beta_0 + \beta_1}{1 - \phi}\]
This says that if \(X\) increases by one unit for one period, the expected cumulative change (increase or decrease) in \(Y\) is given by \((\beta_0 + \beta_1)/(1-\phi)\).
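As a quick numerical check of this formula (using arbitrary illustrative values for \(\phi\), \(\beta_0\) and \(\beta_1\), not estimated from any data), we can sum the period-by-period effects and compare the result with the closed-form expression:
#Illustrative values (chosen only for this example)
phi <- 0.5; b0 <- 1; b1 <- 0.3
#Period-by-period effects of a one-unit change in X at time t:
#beta0 at impact, then phi^(k-1) * (phi*b0 + b1) for k = 1, 2, ...
effects <- c(b0, phi^(0:99) * (phi * b0 + b1))
sum(effects)            # cumulative effect
(b0 + b1) / (1 - phi)   # long-run multiplier formula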
Note that we can calculate the symbolic expressions of the partial derivatives in R! Unfortunately, we cannot combine the expressions in an intuitive way (as we can in Python):
expr_t0 = quote(phi*Y.lag.1 + b0*X + b1*X.lag.1)
expr_t1 = substitute(phi * expr_t0 + beta_0 * X_t_plus_1 + beta_1 * X, list(expr_t0 = expr_t0))
expr_t2 = substitute(phi * expr_t1 + beta_0 * X_t_plus_2 + beta_1 * X_t_plus_1, list(expr_t1 = expr_t1))
print("Y_{t}:"); print(expr_t0)
## [1] "Y_{t}:"
## phi * Y.lag.1 + b0 * X + b1 * X.lag.1
print("Y_{t+1}:"); print(expr_t1)
## [1] "Y_{t+1}:"
## phi * (phi * Y.lag.1 + b0 * X + b1 * X.lag.1) + beta_0 * X_t_plus_1 +
## beta_1 * X
print("Y_{t+2}:"); print(expr_t2)
## [1] "Y_{t+2}:"
## phi * (phi * (phi * Y.lag.1 + b0 * X + b1 * X.lag.1) + beta_0 *
## X_t_plus_1 + beta_1 * X) + beta_0 * X_t_plus_2 + beta_1 *
## X_t_plus_1
print(paste0("Incorrect expression: ", expression(phi*expr_t0 + b0*X.1 + b1*X)))
## [1] "Incorrect expression: phi * expr_t0 + b0 * X.1 + b1 * X"
#Partial derivative of Y_t with respect to X_t
D(expr_t0, "X")
## b0
#Partial derivative of Y_{t+1} with respect to X_t
D(expr_t1, "X")
## phi * b0 + beta_1
#Partial derivative of Y_{t+2} with respect to X_t
D(expr_t2, "X")
## phi * (phi * b0 + beta_1)
We can even evaluate the expression for specific values:
val <- D(expression(phi*Y.lag.1 + b0*X + b1*X.lag.1), "X")
eval(val, envir = list(b0 = 1))
## [1] 1
val2 <- D(expression(phi*(phi*Y.lag.1 + b0*X + b1*X.lag.1) + b0*X.1 + b1*X), "X")
eval(val2, envir = list(b0 = 1, b1 = 0.5, phi = -0.1))
## [1] 0.4
The dataset contains yearly data on the Puerto Rican employment rate, minimum wage and other variables.
suppressPackageStartupMessages({require(readxl)})
txt1 <- "http://uosis.mif.vu.lt/~rlapinskas/(data%20R&GRETL/"
txt2 <- "prmin.data.xls"
tmp = tempfile(fileext = ".xls")
#Download the file
download.file(url = paste0(txt1, txt2),
destfile = tmp, mode = "wb")
#Read it as an excel file
puerto <- ts(read_excel(path = tmp, col_names = FALSE), start=1950, freq = 1)
## Warning in data.matrix(data): NAs introduced by coercion
## Warning in data.matrix(data): NAs introduced by coercion
colnames(puerto)=c("year","avgmin","avgwage","kaitz","avgcov","covt","mfgwage",
"prdef","prepop","prepopf","prgnp","prunemp","usgnp","tt","post74",
"lprunemp","lprgnp","lusgnp","lkaitz","lprun_1","lprepop","lprep_1",
"mincov","lmincov","lavgmin")
We will estimate the following model: \[ \begin{aligned} lprepop_t &= \beta_0 + \beta_1 lmincov_t + \beta_2 lusgnp_t + \epsilon_t \end{aligned} \]
where avgmin is the average minimum wage, avgwage is the average overall wage, and avgcov is the average coverage rate (the proportion of workers actually covered by the minimum wage law).
plot(puerto[,c(21,24,18)])
All the variables have a trend close to linear (an accurate analysis would require testing for a unit root first!).
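A minimal sketch of such a preliminary check, using ur.df from the urca package (the deterministic specification and lag settings are illustrative assumptions; output omitted, and any missing values would need handling first):
suppressPackageStartupMessages({library(urca)})
#Sketch only: ADF tests with a trend for each plotted series
summary(ur.df(puerto[, "lprepop"], type = "trend", lags = 4, selectlags = "AIC"))
summary(ur.df(puerto[, "lmincov"], type = "trend", lags = 4, selectlags = "AIC"))
summary(ur.df(puerto[, "lusgnp"], type = "trend", lags = 4, selectlags = "AIC"))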
round(summary(lm(lprepop ~ lmincov + lusgnp, data = puerto))$coefficients, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.0544 0.7654 -1.3776 0.1771
## lmincov -0.1544 0.0649 -2.3797 0.0229
## lusgnp -0.0122 0.0885 -0.1377 0.8913
Thus, if the minimum wage increases, employment declines, which matches classical economics. On the other hand, GNP appears to be insignificant, but we shall test this claim right away by adding a trend term:
round(summary(lm(lprepop~lmincov+lusgnp+tt,data=puerto))$coefficients, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -8.6963 1.2958 -6.7113 0.0000
## lmincov -0.1687 0.0442 -3.8126 0.0006
## lusgnp 1.0573 0.1766 5.9860 0.0000
## tt -0.0324 0.0050 -6.4415 0.0000
Here, we interpret the coefficient of lusgnp as follows: if usgnp rises 1% above its long-run trend, then prepop will rise by an extra 1.057%.
The above regression is equivalent to the following (the initial equation rewritten in terms of deviations from the trend):
round(
summary(lm(lm(lprepop~tt)$res~lm(lmincov~tt)$res+lm(lusgnp~tt)$res,data=puerto))$coefficients,
4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.0000 0.0061 0.0000 1.0000
## lm(lmincov ~ tt)$res -0.1687 0.0436 -3.8683 0.0005
## lm(lusgnp ~ tt)$res 1.0573 0.1741 6.0734 0.0000
We can see that accounting for a time trend changes the significance of the exogenous variables in our model.
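To make this comparison explicit, the two specifications above can be compared with a standard F test for the trend term. A small sketch (output omitted):
#Compare the models with and without the trend term tt
m.no.trend <- lm(lprepop ~ lmincov + lusgnp, data = puerto)
m.trend    <- lm(lprepop ~ lmincov + lusgnp + tt, data = puerto)
anova(m.no.trend, m.trend)   # F test for adding the trend term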
Let us analyze daily IBM stock prices spanning May 17, 1961 to November 2, 1962 (369 days in all) and daily closing prices of the German DAX index starting at the 130th day of 1991.
suppressPackageStartupMessages({
library(dynlm)
library(waveslim)
library(datasets)
})
#?ibm
data(ibm)
#?EuStockMarkets
data(EuStockMarkets)
DAX=EuStockMarkets[1:369, 1]
par(mfrow = c(1,3))
plot(ibm); plot(DAX, type = "l")
iD = lm(ibm ~ DAX)
plot(DAX, ibm); abline(iD)
round(summary(iD)$coefficients, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 31.9930 74.0349 0.4321 0.6659
## DAX 0.2735 0.0453 6.0403 0.0000
Although, because of their nature, ibm and DAX should not have any relationship, the regression coefficient is highly significant. This is an example of a spurious regression, which can be explained by the fact that the errors of the model have a unit root.
The following should be carried out:
(1) test whether ibm has a unit root;
(2) test whether DAX has a unit root;
(3) test whether the errors of the iD model have a unit root.
Steps (1) and (2) can be carried out in the same manner as in the previous lectures (a sketch is given at the end of this example). Regarding (3):
iD.res=ts(iD$res)
round(summary(dynlm(d(iD.res)~L(iD.res)+time(iD.res)))$coefficients, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.8987 1.0885 1.7443 0.0820
## L(iD.res) -0.0134 0.0071 -1.8838 0.0604
## time(iD.res) -0.0113 0.0054 -2.1066 0.0358
Recall that the unit root hypothesis \(H_0\) claims that the coefficient of L(iD.res) equals zero. When testing for a unit root in errors, there is a further complication: the t-statistic follows not the Dickey-Fuller but the Engle-Granger distribution, whose critical values are given in this table:
| No. of variables (including Y) | 1% | 5% | 10% |
|---|---|---|---|
| 2 | -3.90 | -3.34 | -3.04 |
| 3 | -4.29 | -3.74 | -3.45 |
| 4 | -4.64 | -4.10 | -3.81 |
| 5 | -4.96 | -4.42 | -4.13 |
In order to test the cointegration of these two variables, note that the t-statistic equals -1.884, which is closer to 0 than -3.34; therefore there are no grounds to reject \(H_0\). Thus the errors have a unit root or, in other words, ibm and DAX are not cointegrated and the strong regression ties are only spurious.
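For completeness, steps (1) and (2) could be carried out along the following lines. A minimal sketch with ur.df from urca (the deterministic terms and lag settings are illustrative assumptions; output omitted):
suppressPackageStartupMessages({library(urca)})
#Sketch only: ADF tests for a unit root in each individual series
summary(ur.df(ibm, type = "drift", lags = 5, selectlags = "AIC"))
summary(ur.df(DAX, type = "drift", lags = 5, selectlags = "AIC"))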
It is well known that many economic time series are DS and therefore a regression in levels is often misleading (the standard Student or Wald tests provide wrong p-values). On the other hand, if these DS series are cointegrated, the OLS method is fine.
Recall that if Y and X have unit roots (i.e., are nonstationary), but some linear combination \(Y_t - \alpha - \beta X_t\) is (trend) stationary, then we say that Y and X are cointegrated.
In order to establish cointegration, we estimate unknown coefficients \(\alpha\) and \(\beta\) by means of OLS and then test whether the errors of the model \(Y_t = \alpha+ \beta X_t (+ \gamma t) + \epsilon_t\) have a unit root (as always, respective test is applied to the residuals \(e_t = \hat{\epsilon}_t = Y_t - \hat{\alpha} - \hat{\beta} X_t (-\hat{\gamma}t)\)).
The dataset contains quarterly data, 1947:Q1 - 1989:Q3, where
lc - logarithm of the real quarterly personal consumption expenditure
ly - logarithm of the real quarterly aggregate disposable income
txt1 <- "http://uosis.mif.vu.lt/~rlapinskas/(data%20R&GRETL/"
txt2 <- "hamilton.txt"
#Read it as a text file
ham <- read.csv(paste0(txt1, txt2), header = TRUE, sep = " ")[, -1]
lc=ts(ham[,1],start=1947,freq=4)
ly=ts(ham[,2],start=1947,freq=4)
par(mfrow=c(1,3))
plot(lc,ylab="lc&ly");lines(ly,col=2)
plot(ly,lc)
mod1 = lm(lc ~ ly + time(ly)) # cointegration equation (time included)
round(summary(mod1)$coefficients, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2003.8047 152.3933 -13.1489 0
## ly 0.6557 0.0253 25.9632 0
## time(ly) 1.1410 0.0867 13.1541 0
lines(as.numeric(ly),mod1$fit,col = 2)
plot(lm(lc~ly+time(ly))$res,type="l");abline(0,0)
It is easy to verify that both series have unit roots and their final models are:
suppressPackageStartupMessages({
library(dynlm)
library(urca)
})
#Test for unit root in lc
mod2c = dynlm(d(lc) ~ L(lc) + L(d(lc),1:5) + time(lc))
#Continue the procedure until only the significant variables remain...
mod2c = dynlm(d(lc) ~ L(lc) + L(d(lc), 1:2) + time(lc))
#Do the same for ly
mod4y = dynlm(d(ly) ~ L(ly) + L(d(ly), 1:4))
#Model output
round(summary(mod2c)$coefficients, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -307.4920 129.8662 -2.3678 0.0191
## L(lc) -0.0520 0.0218 -2.3918 0.0179
## L(d(lc), 1:2)1 0.0655 0.0759 0.8628 0.3895
## L(d(lc), 1:2)2 0.2395 0.0759 3.1571 0.0019
## time(lc) 0.1755 0.0739 2.3752 0.0187
round(summary(mod4y)$coefficients, 4)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.8023 1.4982 1.8705 0.0632
## L(ly) -0.0025 0.0020 -1.2216 0.2236
## L(d(ly), 1:4)1 0.0081 0.0764 0.1061 0.9157
## L(d(ly), 1:4)2 -0.0615 0.0753 -0.8163 0.4155
## L(d(ly), 1:4)3 0.0828 0.0753 1.1005 0.2728
## L(d(ly), 1:4)4 -0.2104 0.0737 -2.8558 0.0049
The t-statistic of L(lc) is \(-2.392 > -3.45\) and the t-statistic of L(ly) is \(-1.222 > -2.89\). In both cases we do not reject the null hypothesis that a unit root is present in each of these time series.
Recall that these models can also be created 'automatically' (using selectlags = "AIC") with the ur.df function from the urca package:
summary(ur.df(lc, type = "trend", lags = 13,selectlags = "AIC"))
##
## ###############################################
## # Augmented Dickey-Fuller Test Unit Root Test #
## ###############################################
##
## Test regression trend
##
##
## Call:
## lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.0917 -0.3565 0.0356 0.4626 2.7845
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 32.93396 14.24322 2.312 0.02211 *
## z.lag.1 -0.04993 0.02203 -2.266 0.02483 *
## tt 0.04185 0.01869 2.239 0.02658 *
## z.diff.lag1 0.05440 0.07822 0.695 0.48781
## z.diff.lag2 0.25265 0.07819 3.231 0.00151 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7771 on 152 degrees of freedom
## Multiple R-squared: 0.08556, Adjusted R-squared: 0.06149
## F-statistic: 3.555 on 4 and 152 DF, p-value: 0.008376
##
##
## Value of test-statistic is: -2.2665 12.4227 2.6088
##
## Critical values for test statistics:
## 1pct 5pct 10pct
## tau3 -3.99 -3.43 -3.13
## phi2 6.22 4.75 4.07
## phi3 8.43 6.49 5.47
We want to test whether the errors of the model: \[lc_t = \alpha + \beta ly_t + \gamma t + \epsilon_t\]
form a stationary process, i.e. do not have a unit root. The arguments of the function dynlm must be time series objects. We will try two different cointegration models - one with a trend and one without - and convert the residuals of each to time series objects:
cy.res.no.trend <- ts(lm(lc ~ ly)$res, start = 1947, frequency = 4)
cy.res.trend <- ts(lm(lc ~ ly + time(lc))$res, start = 1947, frequency = 4)
We want to test whether \(|\phi| < 1\) in \(cy.res_t = \phi cy.res_{t-1} + \epsilon_t\). However, if the errors of this model do not constitute WN, the standard OLS procedure gives an erroneous estimate of \(\phi\). One approach is to replace the AR(1) model by an AR(p) model. Another approach is to use the AR(1) model but take into account the fact that the errors are not WN (this can be done with the Phillips-Ouliaris test).
This method is also called the Engle-Granger method; it tests the hypothesis of a unit root in the errors. However, the OLS procedure chooses the coefficients of the cointegration equation so that the variance of the residuals is minimized, thus the residuals are 'too stationary'. In other words, the null hypothesis of a unit root will be rejected too often.
This is why we use different critical values to test the null hypothesis \(H_0: \rho = 0\) in \(\Delta e_t = \rho e_{t-1} + \gamma_1 \Delta e_{t-1} + ... + \gamma_{p-1} \Delta e_{t-p+1} + w_t\).
The asymptotic 5% critical values for residual unit root tests for cointegration in \[Y_t = \alpha + \beta X_t + \gamma t + \epsilon_t\] are:
| No. of X's in the right-hand side of eq. | No deterministic terms in eq. | Only constant in eq. | Constant and trend in eq. |
|---|---|---|---|
| 1 | -2.76 | -3.37 | -3.80 |
| 2 | -3.27 | -3.77 | -4.16 |
| 3 | -3.74 | -4.11 | -4.49 |
| 4 | -4.12 | -4.45 | -4.74 |
| 5 | -4.40 | -4.71 | -5.03 |
We shall perform the test 'manually'. Here are the final models of the residuals:
#Cointegration equation (without a trend)
round(
summary(dynlm(d(cy.res.no.trend)~-1+L(cy.res.no.trend)+L(d(cy.res.no.trend),1:4)))$coefficients,
4)
## Estimate Std. Error t value Pr(>|t|)
## L(cy.res.no.trend) -0.1450 0.0527 -2.7520 0.0066
## L(d(cy.res.no.trend), 1:4)1 -0.2725 0.0798 -3.4143 0.0008
## L(d(cy.res.no.trend), 1:4)2 -0.0806 0.0817 -0.9868 0.3252
## L(d(cy.res.no.trend), 1:4)3 -0.0378 0.0808 -0.4678 0.6406
## L(d(cy.res.no.trend), 1:4)4 -0.2296 0.0732 -3.1377 0.0020
In this case, since -2.752 is closer to zero than -3.37, we do not reject the null hypothesis of a unit root in the errors; thus the processes lc and ly are not cointegrated.
#Cointegration equation (with a trend)
round(
summary(dynlm(d(cy.res.trend)~-1+L(cy.res.trend)+L(d(cy.res.trend),1)))$coefficients,
4)
## Estimate Std. Error t value Pr(>|t|)
## L(cy.res.trend) -0.2415 0.0532 -4.5368 0.0000
## L(d(cy.res.trend), 1) -0.2208 0.0716 -3.0845 0.0024
On the other hand, if we include a trend component, then our t-statistic -4.5368 < -3.80. In this case, we reject the null hypothesis, i.e. we would conclude that lc and ly are cointegrated!
This highlights one drawback of the Engle-Granger procedure - the test does not consistently indicate cointegration. One possible explanation is that the Engle-Granger and Dickey-Fuller tests are known to have low power.
The Phillips-Ouliaris cointegration test of \[H_0: \text{x and y are not cointegrated}\] can be performed with the ca.po function from the urca package or the po.test function from the tseries package:
cy.data=cbind(lc,ly)
cy.po <- ca.po(cy.data, demean = "trend", type="Pz")
summary(cy.po)
##
## ########################################
## # Phillips and Ouliaris Unit Root Test #
## ########################################
##
## Test of type Pz
## detrending of series with constant and linear trend
##
## Response lc :
##
## Call:
## lm(formula = lc ~ zr + trd)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.85014 -0.37853 -0.00368 0.50769 3.08677
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 33.85240 14.79603 2.288 0.0234 *
## zrlc 0.88800 0.04841 18.345 <2e-16 ***
## zrly 0.06024 0.03552 1.696 0.0918 .
## trd 0.04291 0.01940 2.212 0.0283 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7874 on 166 degrees of freedom
## Multiple R-squared: 0.9997, Adjusted R-squared: 0.9996
## F-statistic: 1.582e+05 on 3 and 166 DF, p-value: < 2.2e-16
##
##
## Response ly :
##
## Call:
## lm(formula = ly ~ zr + trd)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.4872 -0.4648 0.0132 0.5494 5.3991
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -28.47975 19.97038 -1.426 0.155717
## zrlc 0.25511 0.06533 3.905 0.000137 ***
## zrly 0.79301 0.04795 16.539 < 2e-16 ***
## trd -0.03992 0.02619 -1.524 0.129349
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.063 on 166 degrees of freedom
## Multiple R-squared: 0.9994, Adjusted R-squared: 0.9994
## F-statistic: 8.897e+04 on 3 and 166 DF, p-value: < 2.2e-16
##
##
##
## Value of test-statistic is: 49.341
##
## Critical values of Pz are:
## 10pct 5pct 1pct
## critical values 71.9586 81.3812 102.0167
The test statistic 49.341 is closer to 0 than 81.3812 (alternatively, since 49.341 < 81.3812), therefore we do not reject the null hypothesis and conclude that lc and ly are not cointegrated.
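As a cross-check, the same null hypothesis could also be examined with po.test from the tseries package. Note that po.test allows demeaning but not detrending and uses a residual-based statistic, so its critical values differ from the Pz values above. A minimal sketch (output omitted):
suppressPackageStartupMessages({library(tseries)})
#Phillips-Ouliaris test via tseries (demeaning only, no trend)
po.test(cy.data, demean = TRUE, lshort = TRUE)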
If \(X\) and \(Y\) are integrated but not cointegrated, their ties (in levels) can only be spurious. In order to get sensible results, the model may be augmented with new variables, or another ADL model for the (stationary) differences may be created: \[\Delta Y_t = \phi + \gamma_1 \Delta Y_{t-1} + ... + \gamma_p \Delta Y_{t-p} + \omega_0 \Delta X_t + ... + \omega_q \Delta X_{t-q} + \epsilon_t\]
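A minimal sketch of such a model in differences, using the lc and ly series from above purely as an illustration (one lag of each difference, chosen arbitrarily; output omitted):
#Sketch only: an ADL model in differences for the (non-cointegrated) lc and ly
summary(dynlm(d(lc) ~ L(d(lc), 1) + L(d(ly), 0:1)))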
Similar model can also be analyzed in the case of cointegration, but then one more term, the so-called error correction term \(\hat{u}_{t-1}\) (here \(Y_t = \alpha + \beta X_t + u_t\) is a long-run equilibrium or cointegration equation) should be included: \[\Delta Y_t = \phi + \delta t + \lambda \hat{u}_{t-1} + \gamma_1 \Delta Y_{t-1} + ... + \gamma_p \Delta Y_{t-p} + \omega_0 \Delta X_t + ... + \omega_q \Delta X_{t-q} + \epsilon_t\]
This is the error correction model (ECM), where the coefficient \(\lambda\) must be negative (e.g., if \(\lambda = -0.25\), then a unit deviation from equilibrium will be compensated in \(1/0.25 = 4\) time units).
The ECM is usually created in two steps:
1. estimate the long-run (cointegration) equation \(Y_t = \alpha + \beta X_t + u_t\) by OLS and save its residuals \(\hat{u}_t\);
2. estimate the short-run dynamics \[\Delta Y_t = \phi + \delta t + \lambda \hat{u}_{t-1} + \gamma_1 \Delta Y_{t-1} + ... + \gamma_p \Delta Y_{t-p} + \omega_0 \Delta X_t + ... + \omega_q \Delta X_{t-q} + \epsilon_t\]
Let us say that we found that lnQ and ln(X/G) are cointegrated (this should be tested as a separate step!) in the following dataset, where the variables are constructed in the code below:
txt1 <- "http://uosis.mif.vu.lt/~rlapinskas/(data%20R&GRETL/"
txt2 <- "data1.txt"
#Read it as a text file
d1 <- ts(read.table(paste0(txt1, txt2), header = TRUE), start=1963)
X=d1[,"NTE"]
Q=d1[,"RF"]
P=d1[,"NF"]/d1[,"RF"]
G=d1[,"NTE"]/d1[,"RTE"]
c1=lm(log(Q)~log(X/G))
summary(c1) # cointegration (equilibrium) equation
##
## Call:
## lm(formula = log(Q) ~ log(X/G))
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.060861 -0.013353 -0.006115 0.015492 0.047468
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 7.11378 0.25406 28.00 <2e-16 ***
## log(X/G) 0.35537 0.01778 19.99 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.02478 on 28 degrees of freedom
## Multiple R-squared: 0.9345, Adjusted R-squared: 0.9322
## F-statistic: 399.5 on 1 and 28 DF, p-value: < 2.2e-16
c1res=ts(c1$res,start=1963)
We use the residuals of the cointegration equation c1 and create a regression model: \[\Delta lnQ_t = \lambda \cdot c1res_{t-1} + \gamma_1 \Delta lnQ_{t-1} + \omega_0 \Delta ln(X/G)_t + \omega_1 \Delta ln(X/G)_{t-1} + \epsilon_t\]
mod1 = dynlm(d(log(Q))~ -1 + L(c1res) + L(d(log(Q))) + L(d(log(X/G)),0:1))
summary(mod1)
##
## Time series regression with "ts" data:
## Start = 1965, End = 1992
##
## Call:
## dynlm(formula = d(log(Q)) ~ -1 + L(c1res) + L(d(log(Q))) + L(d(log(X/G)),
## 0:1))
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.023810 -0.007176 -0.001476 0.006199 0.028039
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## L(c1res) -0.4233 0.1208 -3.504 0.001823 **
## L(d(log(Q))) 0.4051 0.1592 2.545 0.017794 *
## L(d(log(X/G)), 0:1)0 0.5368 0.1347 3.984 0.000548 ***
## L(d(log(X/G)), 0:1)1 -0.2532 0.1387 -1.825 0.080523 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.01307 on 24 degrees of freedom
## Multiple R-squared: 0.7338, Adjusted R-squared: 0.6894
## F-statistic: 16.54 on 4 and 24 DF, p-value: 1.242e-06
The error correction term has the right sign and is significant (the coefficient of L(c1res) has a p-value of 0.001823 < 0.05), and the model itself is quite acceptable.
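Following the earlier rule that a unit deviation from equilibrium is compensated in roughly \(1/|\lambda|\) time units, the implied adjustment speed can be read off the fitted model (a small sketch):
#Approximate number of years needed to eliminate a unit deviation from equilibrium
1 / abs(coef(mod1)["L(c1res)"])   # roughly 1/0.4233, i.e. about 2.4 years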
We can still improve the model by including other differences of \(I(1)\) variables, e.g., log-differences of relative food prices P/G: \[\Delta lnQ_t = \lambda \cdot c1res_{t-1} + \gamma_1 \Delta lnQ_{t-1} + \omega_0 \Delta ln(X/G)_t + \omega_1 \Delta ln(X/G)_{t-1} + \tilde{\omega}_0 \Delta ln(P/G)_t + \tilde{\omega}_1 \Delta ln(P/G)_{t-1} + \epsilon_t\] After deleting insignificant terms, we get the following ECM:
mod2 = dynlm(d(log(Q))~-1+L(c1res)+L(d(log(Q)))+d(log(X/G))+d(log(P/G)))
summary(mod2)
##
## Time series regression with "ts" data:
## Start = 1965, End = 1992
##
## Call:
## dynlm(formula = d(log(Q)) ~ -1 + L(c1res) + L(d(log(Q))) + d(log(X/G)) +
## d(log(P/G)))
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.017798 -0.011171 -0.002318 0.005203 0.027164
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## L(c1res) -0.40437 0.10795 -3.746 0.000999 ***
## L(d(log(Q))) 0.24010 0.13413 1.790 0.086080 .
## d(log(X/G)) 0.36579 0.08985 4.071 0.000440 ***
## d(log(P/G)) -0.32361 0.10135 -3.193 0.003906 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.01169 on 24 degrees of freedom
## Multiple R-squared: 0.7872, Adjusted R-squared: 0.7518
## F-statistic: 22.2 on 4 and 24 DF, p-value: 8.984e-08
Here the short-run demand elasticities with respect to total real expenditure and relative price are 0.36579 and -0.32361, respectively. In the long run, however, demand is independent of the relative price, while the expenditure elasticity falls slightly to 0.35537 in the equilibrium relationship (see the model c1 above).
We also note that the residuals are WN:
suppressPackageStartupMessages({library(forecast)})
tsdisplay(mod2$residuals)
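The visual inspection can be complemented by a formal check, for example a Ljung-Box test on the ECM residuals (a sketch; the lag choice of 8 is an arbitrary assumption):
#Ljung-Box test for autocorrelation in the ECM residuals
Box.test(mod2$residuals, lag = 8, type = "Ljung-Box")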