We begin by setting the default output plot sizes for this notebook:
options(repr.plot.width = 10)
options(repr.plot.height = 8)
Dataset

cat(paste0("Last updated: ", Sys.Date()))
As in the univariate case, we begin by loading the dataset:
dt4 <- read.csv(file = "http://www.principlesofeconometrics.com/poe5/data/csv/cps5_small.csv", sep = ",", dec = ".", header = TRUE)
It is always a good idea to get a general look at the data - to make sure that everything loaded correctly:
head(dt4)
Make sure that the data types assigned to each column are correct:
str(dt4)
We can also get some summary statistics:
print(summary(dt4))
Everything appears to be in order - we can move on to modelling.
In this example data, we have the following variables:

- `wage` - earnings per hour;
- `black` - =1 if black;
- `educ` - years of education;
- `exper` - potential experience (= age - educ - 6);
- `faminc` - other family income;
- `female` - =1 if female;
- `metro` - =1 if in a metropolitan area;
- `south` - =1 if south region;
- `midwest` - =1 if midwest region;
- `west` - =1 if west region.

We will begin by plotting pairwise scatter-plots for the non-indicator variables:
#From: https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/pairs.html
panel.hist <- function(x, ...){
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE, breaks = 30)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y / max(y)
  rect(breaks[-nB], 0, breaks[-1], y, ...)
}
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...){
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  r <- abs(cor(x, y))
  txt <- format(c(r, 0.123456789), digits = digits)[1]
  txt <- paste0(prefix, txt)
  if(missing(cex.cor)) cex.cor <- 0.8 / strwidth(txt)
  text(0.5, 0.5, txt, cex = 2)
}
#
pairs(dt4[, c('educ','exper', 'faminc', 'wage')],
diag.panel = panel.hist, lower.panel = panel.smooth, #upper.panel = panel.cor,
col = "dodgerblue4", pch = 21, bg = adjustcolor("dodgerblue3", alpha = 0.2))
Note that the diagonal elements are the histograms of the variables, while the upper and lower triangles of the plot matrix contain the scatter-plots of the same variable pairs. So, we will examine the diagonal plots and the plots in either the upper or the lower triangle.
From the plots we can say that:

- the `wage` and `faminc` data appear to be scattered more for larger values of `wage`;
- some relationship is visible between:
  - `educ` and `exper`;
  - `educ` and `faminc`;
  - `educ` and `wage`;
- the relationship between `exper` and `faminc` is not as clear from the plots.

We also see that the correlation between the explanatory variables themselves is weaker, compared to the correlation between `educ`
and the remaining variables.
print(cor(dt4[, c('educ', 'exper', 'faminc', 'wage')]))
We can also plot the scatter-matrix of the whole dataset:
pairs(dt4, diag.panel = panel.hist, lower.panel = panel.smooth, upper.panel = panel.cor,
col = "dodgerblue4", pch = 21, bg = adjustcolor("dodgerblue3", alpha = 0.2))
Though for indicator variables these plots do not show much.
We will quickly check whether the provided regional indicator variables cover all of the regions:
head(dt4[, c("south", "west", "midwest", "metro")])
min(dt4[, "south"] + dt4[, "west"] + dt4[, "midwest"])
Since they do not always sum to one, we can include all of these variables in our model without falling into the dummy variable trap (otherwise we would need to exclude one regional indicator variable from the model and treat it as the base region).
We can also look at their frequency table:
table(dt4[, "south"] + dt4[, "west"] + dt4[, "midwest"])
Note that the maximum value is 1. If the maximum were 2, this would show that some of the variables indicate something other than the region.
For example, if we also include the `metro` indicator variable:
table(dt4[, "south"] + dt4[, "west"] + dt4[, "midwest"] + dt4[, "metro"])
We see that a number of rows have a sum of 2, which means that `metro` indicates something other than the region.
In other words, the `south`, `west` and `midwest` regions will be compared to a base `OTHER` region.
We will begin by specifying the following model: $$ \begin{aligned} \log(wage) &= \beta_0 + \beta_1 educ + \beta_2 educ^2 \\ &+ \beta_3 exper + \beta_4 exper^2 + \beta_5 metro + \beta_6 south + \beta_7 west + \beta_8 midwest + \beta_9 female + \beta_{10} black \end{aligned} $$
We expect the following signs for the non-intercept coefficients:

- $\beta_1 > 0$: an additional year of education should increase `wage`;
- $\beta_2$: the effect of an additional year of education on `wage` may be lessened; however, if the additional year is for PhD-level education, then the additional year of education may have an increased effect on `wage`. For now, we will assume that is the case, i.e. $\beta_2 > 0$;
- $\beta_3 > 0$: an additional year of experience should increase `wage`;
- $\beta_4 < 0$: an additional year of experience should increase `wage`, but at a decreasing rate. Note: `exper^2` is alongside `exper`, so we do not know (as of right now) if there is a number of years of experience that results in a decrease in `wage`. We are assuming that it would still result in an increase, but at a lower rate ($\beta_4 < 0$), compared to someone with fewer initial years of experience.

We assume that other family income, `faminc`, should not affect the wage of a person, i.e. we treat it as an insignificant variable, whose correlation may be spurious, and as such we do not include it in the model.
Note: it is possible to also include interaction variables in the model. As we already have 10 variables, we will skip this for now; later on, we will examine some of these interactions.
mdl_0_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + west + midwest + female + black", data = dt4)
print(summary(mdl_0_fit))
P.S. If we specify our own power function, we can estimate the same model:
poly_var <- function(x, p){
return(x^p)
}
print(summary(lm(formula = "log(wage) ~ educ + poly_var(educ, 2) + exper + poly_var(exper, 2) + metro + south + west + midwest + female + black", data = dt4)))
Going back to our case, we see that:

- the signs of the coefficients of `educ`, `exper` and their squared values are as we expected;
- the sign of `metro` is also as we would expect;
- people in the `south` region appear to earn significantly less than people in all of the remaining regions (other + west + midwest); however, we can only check this if we remove the remaining regional variables and leave only the `south` variable;
- the coefficient of `female` is negative and significant, indicating possible discrimination in the work force (again, this is only the initial model, so we cannot be sure yet);
- the coefficient of `black` is negative but insignificant, indicating that there is no racial discrimination.

We want to separately test the hypothesis that a coefficient is significant: $$ H_0: \beta_j = 0\\ H_1: \beta_j \neq 0 $$ The test is automatically carried out and the $t$-statistic and $p$-values are presented in the model summary output.
Insignificant variables are those whose $p$-value is greater than the 0.05 significance level:
print(round(coef(summary(mdl_0_fit)), 4))
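For reference, these $t$-statistics and two-sided $p$-values can be reproduced directly from the estimates and their standard errors (a small sketch based on the summary table above):
# Manually reproduce the t-statistics and two-sided p-values:
smry   <- coef(summary(mdl_0_fit))
t_stat <- smry[, "Estimate"] / smry[, "Std. Error"]
p_val  <- 2 * pt(abs(t_stat), df = mdl_0_fit$df.residual, lower.tail = FALSE)
print(round(cbind(t_stat, p_val), 4))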
We will begin by removing the insignificant variable (where the $p$-value is greater than 0.05) with the largest $p$-value - in this case it is the indicator variable `black`.
mdl_1_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + west + midwest + female", data = dt4)
print(round(coef(summary(mdl_1_fit)), 4))
Next up, we will remove the indicator variable `west`. Then we will remove the indicator variable `midwest`. We note that after doing so, the base group would be `other` + `west` + `midwest`.
mdl_2_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + midwest + female", data = dt4)
print(round(coef(summary(mdl_2_fit)), 4))
mdl_3_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + female", data = dt4)
print(round(coef(summary(mdl_3_fit)), 4))
The coefficient of the indicator variable `midwest` was also insignificant, so we removed it as well, arriving at our final specification.
From the model output we can write down the estimated model as:
$$ \begin{aligned} \underset{(se)}{\widehat{\log(wage)}} &= \underset{(0.167)}{1.5288} + \underset{(0.023)}{0.0452} \cdot educ + \underset{(0.001)}{0.0022} \cdot educ^2 \\ &+ \underset{(0.004)}{0.0287} \cdot exper - \underset{(0.0001)}{0.0004} \cdot exper^2 \\ &+ \underset{(0.034)}{0.1219} \cdot metro - \underset{(0.028)}{0.0640} \cdot south - \underset{(0.027)}{0.1788} \cdot female \end{aligned} $$
Furthermore, we can interpret the variables in the following way:

- An increase in `educ` (education) by 1 year results in an increasing effect on `wage` (i.e. the positive effect on `wage` is further increased depending on the initial level of `educ`), ceteris paribus:
  - if `educ = 0` and then increases by $1$ year, then `wage` increases by $100 \cdot (0.0452 + 0.0022) = 4.74\%$;
  - if `educ > 0` and then increases by $1$ year, then the difference in the change of `educ` results in:
$$
\begin{aligned}
\underset{(se)}{\widehat{\log(wage)}} \bigg|_{educ + 1} - \underset{(se)}{\widehat{\log(wage)}} \bigg|_{educ} &= 0.0452 + 0.0022 \cdot (educ + 1)^2 - 0.0022 \cdot educ^2\\
&= 0.0452 + 0.0022 \cdot (2 \cdot educ + 1)
\end{aligned}
$$
then `wage` increases by $100 \cdot \left[ 0.0452 + 0.0022 \cdot (2 \cdot educ + 1) \right] \%$.

- An increase in `exper` (years of experience) by 1 year results in a decreasing effect on `wage` (i.e. the positive effect on `wage` is decreased depending on the initial level of `exper`), ceteris paribus:
  - if `exper = 0` increases by $1$ year, then `wage` increases by $100 \cdot (0.0287 - 0.0004) = 2.83\%$;
  - if `exper > 0` increases by $1$ year, then the difference in the change of `exper` results in:
$$
\begin{aligned}
\underset{(se)}{\widehat{\log(wage)}} \bigg|_{exper + 1} - \underset{(se)}{\widehat{\log(wage)}} \bigg|_{exper} &= 0.0287 - 0.0004 \cdot (exper + 1)^2 + 0.0004 \cdot exper^2\\
&= 0.0287 - 0.0004 \cdot (2 \cdot exper + 1)
\end{aligned}
$$
then `wage` changes by $100 \cdot \left[ 0.0287 - 0.0004 \cdot (2 \cdot exper + 1) \right] \%$.

We may be interested in the value of `exper` that results in a maximum (or minimum) value of `wage`, ceteris paribus. Taking the partial derivative (calculating the marginal effect) and equating it to zero yields:
$$
\begin{aligned}
\dfrac{\partial {\widehat{\log(wage)}}}{\partial exper} &= 0.0287 - 0.0004 \cdot 2 \cdot exper = 0.0287 - 0.0008 \cdot exper = 0
\end{aligned}
$$

print(cbind(coef(mdl_3_fit)))
print(paste0("Maximum Wage when exper = ", coef(mdl_3_fit)[4] / (-coef(mdl_3_fit)[5] * 2)))
So, when $exper = 33.95$, `wage` will be at its maximum, since $\dfrac{\partial^2 {\widehat{\log(wage)}}}{\partial exper^2} < 0$.
Note that we used the exact values instead of the rounded values from the formulas, since the rounding error would give us a different value:
0.0287 / 0.0008
This would not be the same once we try to verify the results with the actual data.
Similarly, we may want to find the initial value of `exper` at which a further unit increase in `exper` results in no change in `wage`:
$$
\begin{aligned}
0.0287 - 0.0004 \cdot (2 \cdot exper + 1) &= 0
\end{aligned}
$$

zero_inc <- (coef(mdl_3_fit)[4] / (- coef(mdl_3_fit)[5]) - 1) / 2
print(zero_inc)
So, if the initial value of $exper = 33.45$, then `wage` would not change with an additional unit increase in `exper`. Remember that `exper` can only be integer-valued.
Furthermore, for $exper > 33.45$, `wage` would decrease from a unit increase in `exper`. We can verify this by taking the initial value `exper = 36`:
# Repeat the first row twice:
tst_dt <- dt4[c(1, 1), ]
# Reset row index numbering to avoid duplicate row index numbers
rownames(tst_dt) <- NULL
# Set `exper` column values:
tst_dt[, "exper"] = c(36, 37)
# Print the sample data:
print(tst_dt)
Note the ceteris paribus condition - there is only a unit change in `exper`, while the remaining values do not change.
Now, we can calculate and compare the predicted values:
tst_pred <- predict(mdl_3_fit, newdata = tst_dt)
print(tst_pred)
print(diff(tst_pred))
which would be a $0.215\%$ decrease in wage for someone with 36 years of experience gaining an additional year of experience.
This can be verified manually by taking the exponents of the predicted values (since these predicted values are from the log-linear model) and calculating their percentage change: $$ \dfrac{WAGE_{NEW} - WAGE_{OLD}}{WAGE_{OLD}} \cdot 100 $$
print(diff(exp(tst_pred)) / exp(tst_pred[1]) * 100)
We note that this is an approximation only, and not a true equality between logarithm differences and percentage change.
It is very close when the percentage change is small, but for larger percentage changes, it may differ greatly.
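To illustrate the quality of this approximation, a small sketch with made-up numbers:
# Illustration: log-differences (x100) versus exact percentage changes
small_change <- c(100, 102)   # a 2% increase
large_change <- c(100, 150)   # a 50% increase
print(cbind(log_diff_x100 = c(diff(log(small_change)), diff(log(large_change))) * 100,
            exact_pct     = c(diff(small_change) / small_change[1],
                              diff(large_change) / large_change[1]) * 100))
For the $2\%$ change the two numbers nearly coincide, while for the $50\%$ change they differ noticeably.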
For comparison, we can look at a unit increase in experience below the zero-change point (`exper = 33.45`), e.g. from `exper = 33` to `exper = 34`:
tst_dt[, "exper"] <- c(33, 34)
print(tst_dt)
tst_pred <- predict(mdl_3_fit, newdata = tst_dt)
print(tst_pred)
print(diff(tst_pred))
Although `exper` is only integer-valued, we can also verify the point where `wage` does not change from a unit increase in `exper`:
tst_dt[, "exper"] <- c(zero_inc, zero_inc + 1)
print(tst_dt)
tst_pred <- predict(mdl_3_fit, newdata = tst_dt)
print(tst_pred)
diff(tst_pred)
Regarding the indicator variables in our model:

- someone from a metropolitan area (`metro = 1`) earns around $100 \cdot 0.1219 = 12.19\%$ more than someone not from a metro area, ceteris paribus;
- someone from the south (`south = 1`) earns around $100 \cdot 0.0640 = 6.4\%$ less (since the coefficient is $-0.0640 < 0$) than someone not from the south;
- if the person is female (`female = 1`), then they earn around $100 \cdot 0.1788 = 17.88\%$ less than someone who is not female.

Next, we will plot the residuals of our model:
# Define the plot layout matrix:
layout_mat <- matrix(c(1, 1, 2, 2, 3, 3, 3, 3), nrow = 2, byrow = TRUE)
print(layout_mat)
layout(layout_mat)
plot(mdl_3_fit$fitted.values, mdl_3_fit$residuals, type = "p", pch = 21, bg = "cornflowerblue", main = "Residuals vs Fitted", ylab = "residuals", xlab = "fitted values", cex = 1.5)
hist(mdl_3_fit$residuals, col = "cornflowerblue", breaks = 30, main = "Residual Histogram")
qqnorm(mdl_3_fit$residuals, main = "Q-Q plot of residuals", pch = 21, bg = "cornflowerblue", cex = 1.5)
qqline(mdl_3_fit$residuals, col = "red", lwd = 2)
These plots give us a rough idea of the residual distribution and variance, but visual inspection alone is not conclusive, so we move on to testing a few hypotheses.
The hypothesis that we want to test is: $$ \begin{cases} H_0&: \gamma_1 = 0 \text{ (residuals are homoskedastic)}\\ H_1&: \gamma_1 \neq 0 \text{ (residuals are heteroskedastic)} \end{cases} $$
We will begin with the Breusch-Pagan Test:
# Breusch–Pagan Test
print(lmtest::bptest(mdl_3_fit))
We have that the $p$-value < 0.05, so we reject the null hypothesis that the residuals are homoskedastic. This means that the residuals are heteroskedastic.
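For intuition, a minimal sketch of the idea behind the (studentized) BP test: regress the squared residuals on the model regressors and compute the LM statistic as $N \cdot R^2$ of that auxiliary regression. This should reproduce the statistic reported above.
# Auxiliary regression of squared residuals on the model regressors:
e2  <- resid(mdl_3_fit)^2
Z   <- model.matrix(mdl_3_fit)[, -1]   # regressors without the intercept column
aux <- lm(e2 ~ Z)
LM  <- nrow(Z) * summary(aux)$r.squared
print(c(LM_statistic = LM, p_value = pchisq(LM, df = ncol(Z), lower.tail = FALSE)))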
Next, we look at the Goldfeld-Quandt Test results:
# Goldfeld–Quandt Test
print(lmtest::gqtest(mdl_3_fit, alternative = "two.sided"))
The $p$-value > 0.05, so we have no grounds to reject the null hypothesis and conclude that the residuals are homoskedastic.
Finally, we look at the White Test results:
# White Test
print(lmtest::bptest(mdl_3_fit, ~ metro*south*midwest*female*educ*exper + I(educ^2) + I(exper^2), data = dt4))
The White test returns the results in the same order as the BP test. The $p$-value < 0.05, so we reject the null hypothesis and conclude that the residuals are heteroskedastic.
The hypothesis that we want to test is: $$ \begin{cases} H_0&:\text{the errors are serially uncorrelated}\\ H_1&:\text{the errors are autocorrelated (the exact order of the autocorrelation depends on the test carried out)} \end{cases} $$
We will begin with the Durbin-Watson Test, where the alternative hypothesis is that the autocorrelation is of order 1:
# Durbin–Watson Test
print(lmtest::dwtest(mdl_3_fit, alternative = "two.sided"))
The DW statistic is close to 2, so we do not reject the null hypothesis that there is no serial correlation.
Next up is the Breusch-Godfrey Test, where we can select the autocorrelation order ourselves. We have selected a 2nd order autocorrelation:
# Breusch-Godfrey Test
print(lmtest::bgtest(mdl_3_fit, order = 2))
The BG test returns the values in the same order as the BP test. The $p$-value = 0.901018 > 0.05, so we have no grounds to reject the null hypothesis of no autocorrelation.
We could also test with higher autocorrelation orders and examine the results - let's try with orders up to 20:
for(i in 2:20){
print(paste0("BG Test for autocorrelation order = ", i, "; p-value = ", round(lmtest::bgtest(mdl_3_fit, order = i)$p.value, 5)))
}
As we can see, we have no grounds to reject the null hypothesis of no autocorrelation in any of the cases.
The hypothesis that we want to test is: $$ \begin{cases} H_0&:\text{residuals follow a normal distribution}\\ H_1&:\text{residuals do not follow a normal distribution} \end{cases} $$
We will carry out the following tests and combine their $p$-values into a single output:
norm_tests = c("Anderson-Darling",
"Shapiro-Wilk",
"Kolmogorov-Smirnov",
"Cramer–von Mises",
"Jarque–Bera")
norm_test <- data.frame(
p_value = c(nortest::ad.test(mdl_3_fit$residuals)$p.value,
shapiro.test(mdl_3_fit$residuals)$p.value,
ks.test(mdl_3_fit$residuals, y = "pnorm", alternative = "two.sided")$p.value,
nortest::cvm.test(mdl_3_fit$residuals)$p.value,
tseries::jarque.bera.test(mdl_3_fit$residuals)$p.value),
Test = norm_tests)
print(norm_test)
We see that the $p$-value is less than the $5\%$ significance level for the Anderson-Darling, Shapiro-Wilk and Kolmogorov-Smirnov tests, where we would reject the null hypothesis of normality. On the other hand the $p$-value is greater than 0.05 for Cramer-von Mises and Jarque-Bera tests, where we would not reject the null hypothesis of normality.
As indicated in the lecture notes, the Shapiro–Wilk test has the best power for a given significance level. Furthermore, 3 out of 5 tests suggest non-normal residuals, so we will go with their results and conclude that the residuals are not normally distributed.
OVERALL CONCLUSIONS:
Assumption (MR.5) is related to multicollinearity and will be examined in a later TASK. But from what we have seen so far, almost all of the coefficients are statistically significant, with correct signs. Furthermore, since we were able to estimate the model via OLS, there is no exact collinearity (i.e. no exact linear dependence) between the regressors. So, there may be no collinear variables in our model.
If we look back at our final univariate regression model - the log-linear model: $$ \underset{(se)}{\widehat{\log(\text{wage})}} = \underset{(0.0702)}{1.5968} + \underset{(0.0048)}{0.0988} \cdot \text{educ} $$ We can estimate it here as well, and re-examine its residuals:
lm_univar_fit <- lm(formula = "log(wage) ~ educ", data = dt4)
print(coef(summary(lm_univar_fit)))
layout(layout_mat)
plot(lm_univar_fit$fitted.values, lm_univar_fit$residuals, type = "p", pch = 21, bg = "cornflowerblue", main = "Residuals vs Fitted", ylab = "residuals", xlab = "fitted values", cex = 1.5)
hist(lm_univar_fit$residuals, col = "cornflowerblue", breaks = 30, main = "Residual Histogram")
qqnorm(lm_univar_fit$residuals, main = "Q-Q plot of residuals", pch = 21, bg = "cornflowerblue", cex = 1.5)
qqline(lm_univar_fit$residuals, col = "red", lwd = 2)
Comparing with the multivariate model: again note that the fitted values are on the horizontal axis, which also highlights another interesting point regarding the range of attainable fitted values in these models.
Looking at the residual vs fitted value plot, the number of fitted values greater than 3.2 but less than 3.35 is:
length(lm_univar_fit$fitted.values[lm_univar_fit$fitted.values > 3.2 & lm_univar_fit$fitted.values < 3.35])
length(mdl_3_fit$fitted.values[mdl_3_fit$fitted.values > 3.2 & mdl_3_fit$fitted.values < 3.35])
By using the multivariate regression model specification we now get fitted values which are more evenly scattered across their interval, whereas in the univariate case we had fitted values clustered along a limited range.
To make everything easier to follow, we will examine the interaction terms one-by-one, so as not to overwhelm the model with too many variables.
mdl_4_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + female*black", data = dt4)
print(summary(mdl_4_fit))
Note that we may want to have a more readable output format, which can be achieved with the stargazer package.
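For example, a plain-text comparison of the previous and current specifications could be produced as follows (a sketch, assuming the stargazer package is installed):
# Side-by-side model summary via stargazer:
stargazer::stargazer(mdl_3_fit, mdl_4_fit, type = "text")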
Both `black` and `female:black` are insignificant, so we can remove them from the regression.
Alternatively, we may want to carry out an $F$-test to test the joint hypothesis that: $$ \begin{cases} H_0&: \beta_{female} = 0, \beta_{black} = 0, \beta_{female \times black} = 0\\ H_1&: \text{at least one of the tested parameters is not zero} \end{cases} $$
If we fail to reject the null hypothesis, then both race and gender have no significant effect in the model.
print(car::linearHypothesis(mdl_4_fit, c("female=0", "black=0", "female:black=0")))
Since the $p$-value < 0.05, we reject the null hypothesis and conclude that at least one of the variables is statistically significant.
If we only look at the joint hypothesis for `black` and `female:black`:
print(car::linearHypothesis(mdl_4_fit, c("black=0", "female:black=0")))
Then we do not reject the null hypothesis that both `black` and `female:black` are not statistically significant, and thus we can remove them both from our model.
We can also do this with an ANOVA test by specifying the restricted model under the null:
mdl_4_restricted_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south", data = dt4)
print(anova(mdl_4_restricted_fit, mdl_4_fit))
mdl_4_restricted_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + female", data = dt4)
print(anova(mdl_4_restricted_fit, mdl_4_fit))
We see that we get the exact same $F$-statistic and the exact same $p$-value. So, we can use either method to carry out the $F$-test for multiple coefficient significance (i.e. multiple restrictions).
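For completeness, a minimal sketch of how this $F$-statistic can be computed by hand from the residual sums of squares of the two models (using the most recently defined `mdl_4_restricted_fit`):
# Manual F-test for the restrictions, using residual sums of squares:
rss_r  <- sum(residuals(mdl_4_restricted_fit)^2)  # restricted model
rss_u  <- sum(residuals(mdl_4_fit)^2)             # unrestricted model
q      <- length(coef(mdl_4_fit)) - length(coef(mdl_4_restricted_fit))  # number of restrictions
df_u   <- mdl_4_fit$df.residual
F_stat <- ((rss_r - rss_u) / q) / (rss_u / df_u)
print(c(F = F_stat, p_value = pf(F_stat, df1 = q, df2 = df_u, lower.tail = FALSE)))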
Note: we may omit `print()` to get a more concise output:
car::linearHypothesis(mdl_4_fit, c("black=0", "female:black=0"))
anova(mdl_4_restricted_fit, mdl_4_fit)
Do note though, when working with `.R` scripts or via a command terminal (e.g. when executing the scripts on a remote server), the `print()` command is essential for printing output, whereas simply calling the function would not print the result by default.
Since the coefficient of `female` is negative, it would be interesting to see whether higher education has a different effect depending on a person's gender.
mdl_4_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + female*educ", data = dt4)
print(summary(mdl_4_fit))
We note that the $p$-value of `educ` is close to 0.05. On the other hand, the interaction variable between gender and education is significant (as is the squared education, $educ^2$), so we will leave the variables included. We will further motivate this decision via the $F$-test.
Looking at the $F$-test for the hypothesis: $$ \begin{cases} H_0&: \beta_{educ} = 0, \beta_{female \times educ} = 0\\ H_1&: \text{at least one of the tested parameters is not zero} \end{cases} $$
car::linearHypothesis(mdl_4_fit, c("educ=0", "educ:female=0"))
Note: THE NAMES OF THE INTERACTION VARIABLE MUST MATCH EXACTLY THE ONES IN THE OUTPUT. The coefficient names can be viewed by:
t(names(coef(mdl_4_fit)))
If we use `female:educ` instead of `educ:female`, we would get an error:
tryCatch({
car::linearHypothesis(mdl_4_fit, c("educ=0", "female:educ=0"))
}, error = function(e) {
print(e)
})
Note: in order to avoid our code breaking, we wrap the code in a `tryCatch({...})` and have a function specifically to deal with errors. In this case, it simply prints the error. In practice, we may need to save the output to a separate file, or have additional methods to handle any possible errors.
The $p$-value is less than 0.05, so we reject the null hypothesis and conclude that at least one variable is statistically significant.
However, removing only `educ` but leaving the interaction term would further complicate interpretation, especially since its $p$-value is so close to the $5\%$ significance level. If we relax the significance level, then all the variables are statistically significant at the 0.1 ($10\%$) significance level.
INTERPRETATION:
Looking at the model coefficients: $$ \begin{aligned} \underset{(se)}{\widehat{\log(wage)}} &= \underset{(0.1688)}{1.5876} + \underset{(0.0227)}{0.0446} \cdot educ + \underset{(0.0008)}{0.0019} \cdot educ^2 \\ &+ \underset{(0.0036)}{0.0289} \cdot exper - \underset{(0.0001)}{0.0004} \cdot exper^2 \\ &+ \underset{(0.0345)}{0.1254} \cdot metro - \underset{(0.0280)}{0.0653} \cdot south \\ &- \underset{(0.1391)}{0.4903} \cdot female + \underset{(0.0095)}{0.0217} \cdot \left(female \times educ\right) \end{aligned} $$
or, with a little bit of rearranging, to highlight the effect of gender, we get: $$ \begin{aligned} \underset{(se)}{\widehat{\log(wage)}} &= \underset{(0.1688)}{1.5876} + \underset{(0.0227)}{0.0446} \cdot educ + \underset{(0.0008)}{0.0019} \cdot educ^2 \\ &+ \underset{(0.0036)}{0.0289} \cdot exper - \underset{(0.0001)}{0.0004} \cdot exper^2 \\ &+ \underset{(0.0345)}{0.1254} \cdot metro - \underset{(0.0280)}{0.0653} \cdot south \\ &+ \left(\underset{(0.0095)}{0.0217} \cdot educ - \underset{(0.1391)}{0.4903}\right) \cdot female \end{aligned} $$
a possible interpretation could be as follows: if the person is female, then their $\log(wage)$ differs by $\left(\underset{(0.0095)}{0.0217} \cdot educ - \underset{(0.1391)}{0.4903}\right)$, compared to males (or the base non-female group), ceteris paribus.
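A small sketch of the education level at which this estimated gender effect would reach zero, using the fitted coefficients of `mdl_4_fit` (taking the point estimates at face value, this is around 22.6 years of education):
# educ level at which the estimated gender effect (0.0217*educ - 0.4903) reaches zero:
b <- coef(mdl_4_fit)
print(-b["female"] / b["educ:female"])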
By specifying this model we can see how much education offsets discrimination based on gender. Notice that in this case, if `educ = 0`, then there is a large difference in wage - the wage is lower by around $100 \cdot 0.4903 = 49.03 \%$ for females.
HOWEVER, if we look at the sample data:
dt4[dt4[, "educ"] == 0, ]
We only have two cases when `educ = 0` - ONE FOR FEMALES and ONE FOR MALES. Looking at the difference:
(12.50 - 9.19)/9.19
This is around $36\%$; however, other factors, like `metro`, `south` and `exper`, are also different between the two observations, while the coefficient in our model holds these values constant (i.e. the same), with only gender being different (this explains the $49.03\%$ value in our model).
Having so few data points means the sample poorly reflects the case when `educ = 0`, hence we should be careful when interpreting this coefficient.
mdl_4_fit = lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south*educ + female*educ", data = dt4)
print(round(coef(summary(mdl_4_fit)), 5))
We see that the interaction variable between `south` and `educ` is insignificant, so we will not include it in our model.
mdl_4_fit = lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + south + metro*female*educ", data = dt4)
print(round(coef(summary(mdl_4_fit)), 5))
The $F$-test for the joint significance of education: $$ \begin{cases} H_0&: \beta_{educ} = \beta_{educ^2} = \beta_{female \times educ} = \beta_{metro \times educ} = \beta_{metro \times female \times educ}= 0\\ H_1&: \text{at least one of the tested parameters is not zero} \end{cases} $$
t(names(coef(mdl_4_fit)))
car::linearHypothesis(mdl_4_fit, c("educ=0", "I(educ^2)=0", "educ:female=0", "educ:metro=0", "educ:metro:female=0"))
With $p$-value < 0.05, we reject the null hypothesis and conclude that `educ` is statistically significant in our model.
On the other hand, we could remove the squared value of `educ`. We will examine this in more detail in the collinearity task.
Furthermore, testing the significance of only the $educ$ and its polynomial $educ^2$: $$ \begin{cases} H_0&: \beta_{educ} = \beta_{educ^2} = 0\\ H_1&: \text{at least one of the tested parameters is not zero} \end{cases} $$ yields:
car::linearHypothesis(mdl_4_fit, c("educ=0", "I(educ^2)=0"))
a $p$-value < 0.05, which means that we still reject the null hypothesis and conclude that education has a significant effect on `wage`.
Finally, the $R^2_{adj}$ is:
print(summary(mdl_4_fit)$adj.r.squared)
Interaction terms are not restricted to indicator variables - we can include interactions where BOTH variables are non-indicators.
Consequently, let us look at yet another interaction variable, but this time between `educ` and `exper`.
The motivation for including this interaction variable can be formulated as a question: does the effect of education on wage depend on the amount of experience (and vice versa)?
In other words, we want to include an additional variable, $educ \times exper$, in our model:
mdl_4_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + south + metro*female*educ + educ:exper", data = dt4)
print(round(coef(summary(mdl_4_fit)), 5))
The coefficient of the interaction term `educ:exper` is statistically significant ($p$-value < 0.05).
INTERPRETATION:
This means that we can write our model as (note, we will keep a general notation to make it easier to see what we want to explain): $$ \begin{aligned} \log(wage) &= \beta_0 + \beta_1 educ + \beta_2 educ^2 + \beta_3 exper + \beta_4 exper^2 \\ &+ \beta_5 metro + \beta_6 south + \beta_7 west + \beta_8 midwest + \beta_9 female + \beta_{10} black \\ &+ \beta_{11} \left( educ \times exper \right) + \epsilon \end{aligned} $$
We can re-write this equation as: $$ \begin{aligned} \log(wage) &= \beta_0 + \left(\beta_1 + \beta_{11} exper \right)educ + \beta_2 educ^2 + \beta_3 exper + \beta_4 exper^2 \\ &+ \beta_5 metro + \beta_6 south + \beta_7 west + \beta_8 midwest + \beta_9 female + \beta_{10} black \\ &+ \epsilon \end{aligned} $$
So, the coefficient $\beta_{11}$ can be interpreted as the change in effectiveness of education for a one unit increase in experience.
Alternatively, rewriting the equation as: $$ \begin{aligned} \log(wage) &= \beta_0 + \beta_1 educ + \beta_2 educ^2 + \left( \beta_3 + \beta_{11} educ \right) exper + \beta_4 exper^2 \\ &+ \beta_5 metro + \beta_6 south + \beta_7 west + \beta_8 midwest + \beta_9 female + \beta_{10} black \\ &+ \epsilon \end{aligned} $$
In this case, the coefficient $\beta_{11}$ can be interpreted as the change in effectiveness of experience for a one unit increase in education.
We also point out that the adjusted $R^2$ of the new model is slightly larger than before:
print(summary(mdl_4_fit)$adj.r.squared)
We do note one more result: the square of `educ` is now insignificant - `I(educ^2)` has a $p$-value of 0.635, in which case we would not reject the null hypothesis that its coefficient is zero.
We will drop this squared variable and compare the $R_{adj}^2$, AIC and BIC values.
The unrestricted model:
print(summary(mdl_4_fit))
AIC(mdl_4_fit)
BIC(mdl_4_fit)
The restricted model:
mdl_4_R_fit <- lm(formula = "log(wage) ~ educ + exper + I(exper^2) + south + metro*female*educ + educ:exper", data = dt4)
print(summary(mdl_4_R_fit))
AIC(mdl_4_R_fit)
BIC(mdl_4_R_fit)
While the coefficient of `educ` is now significant, we see that the adjusted $R^2$ is unchanged, and the AIC and BIC are slightly lower (indicating a slightly better model).
All in all dropping the variable appears to not yield any noticeable improvement.
In such a case it is useful to compare how the estimated coefficients change between the unrestricted and restricted models.
The relevant coefficients, which we want to compare, are:
coef_mat <- data.frame(
COEFS = names(coef(mdl_4_fit)),
UNRESTRICTED = coef(mdl_4_fit),
RESTRICTED = append(x = coef(mdl_4_R_fit), values = NA, after = 2)
)
coef_mat <- data.frame(coef_mat, 'CHANGE (%)' = (coef_mat[, "RESTRICTED"] / coef_mat[, "UNRESTRICTED"] - 1) * 100, check.names = FALSE)
print(coef_mat)
We see that the `educ` coefficient value is affected the most - increasing by around $17\%$ - while the remaining parameters (excluding the intercept) increased by between $0.17\%$ and $6.5\%$.
Generally, we may want to remove the insignificant variables. However, before deciding on the removal of this variable, let us examine, whether any linear restrictions can be applied.
Maybe re-estimating the coefficients via RLS would improve the significance of the squared `educ` variable in our model?
On the other hand, looking at the coefficient signs and magnitudes for `educ` and `exper`, we may want to verify the following hypothesis:
$$ \text{Each additional year of education has the same effect as each additional year of experience on }\log(wage) $$
Note that this concerns not only `educ` and `exper`, but their polynomial terms as well!
This restriction can be formulated as the following hypothesis: $$ \begin{cases} H_0&: \beta_{educ} = \beta_{exper},\text{ and } \beta_{educ^2} = \beta_{exper^2}\\\\ H_1&: \beta_{educ} \neq \beta_{exper}\text{ or } \beta_{educ^2} \neq \beta_{exper^2} \text{ or both} \end{cases} $$
Note that in TASK 8 we have already carried out a number of multiple restriction tests, but we simply tested whether multiple parameters are jointly significant or not; we did not test whether some parameters are statistically significantly identical to one another.
car::linearHypothesis(mdl_4_fit, c("educ-exper=0", "I(educ^2)-I(exper^2)=0"))
So, we reject the null hypothesis and conclude that education and experience have different effects on $\log(wage)$.
Nevertheless, we may still be interested to test if the non-squared coefficients are equal, that is: $$ \begin{cases} H_0&: \beta_{educ} = \beta_{exper}\\\\ H_1&: \beta_{educ} \neq \beta_{exper} \end{cases} $$
Note in this case, there is less economic reasoning for this restriction, since we are ignoring their polynomial variables.
car::linearHypothesis(mdl_4_fit, c("educ-exper=0"))
In this case we do not reject the null hypothesis that the coefficients are equal.
This conclusion allows us to re-estimate the regression via restricted least squares (RLS).
Let's look again at our initial model, to which we want to apply RLS:
round(coef(summary(mdl_4_fit)), 5)
Next, we can apply the linear restriction as follows. Based on the positions of `educ` and `exper`, in order to apply the `educ-exper=0` restriction we need to specify the restriction matrix and value vector as:
L = matrix(c(0, 1, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0), nrow = 1, byrow = TRUE)
r = 0
Then, we can estimate the model via RLS with these restrictions:
mdl_4_rls_fit = lrmest::rls(formula = formula(mdl_4_fit), R = L, r = r, data = dt4, delt = rep(0, length(r)))
print(mdl_4_rls_fit[[1]])
Note that the standard errors and t-statistics are not calculated. Fortunately, we can do this manually following the lecture notes (or other literature references therein):
# Calculate OLS estimate:
X_d <- model.matrix(mdl_4_fit)
xtx_inv <- solve(t(X_d) %*% X_d)
beta_ols <- xtx_inv %*% t(X_d) %*% mdl_4_fit$model[, 1]
# Calculate the Adjustment component needed for the RLS:
RA_1 <- xtx_inv %*% t(L)
RA_2 <- solve(L %*% xtx_inv %*% t(L))
RA_3 <- L %*% beta_ols - r
RA <- RA_1 %*% RA_2 %*% RA_3
# RLS = OLS - 'Restriction Adjustment'
beta_rls <- beta_ols - RA
# Get RLS standard error:
y_fit <- X_d %*% beta_rls
resid <- mdl_4_fit$model[, 1] - y_fit
sigma2_rls <- sum(resid^2) / (nrow(dt4) - (length(beta_rls) - length(r)))
# Get the RLS parameter variance-covariance matrix:
D_mat <- diag(length(beta_rls)) - RA_1 %*% RA_2 %*% L
rls_vcov <- sigma2_rls * D_mat %*% xtx_inv
beta_rls_se <- sqrt(diag(rls_vcov))
print(round(data.frame(Estimate = beta_rls, Standard_error = beta_rls_se), 5))
We can see from the output that the coefficients are now equal.
Furthermore, a consequence of RLS is that the associated standard errors are smaller. Consequently, `I(educ^2)` is now significant.
We will calculate the Variance Inflation Factor for each parameter (note that we do not calculate the VIF for the intercept).
print(cbind(car::vif(mdl_4_fit)))
A couple of points regarding high VIF values for polynomial and indicator variables:

- high VIF values for polynomial and interaction terms are to be expected, since these terms are constructed from the other regressors; this can often be reduced by centering the variables (i.e., subtracting their means) before creating the powers or the products;
- in our case, we see that the interaction terms and indicator variables are taken for all variable combinations.

Nevertheless, we may be interested in checking whether `educ` and `exper` themselves are collinear.
To do this, we can either define a regression model without any interaction or polynomial variables, or specify the auxiliary regressions manually. We will define a new regression to save some space, but you are encouraged to verify the VIF values by calculating them manually (i.e. without the built-in VIF functions), as sketched below.
print(cbind(car::vif(lm(formula = "log(wage) ~ educ + exper + south + metro + female", data = dt4))))
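As encouraged above, a minimal sketch of the manual VIF calculation for `educ`, using its auxiliary regression on the remaining regressors:
# Manual VIF for educ: VIF = 1 / (1 - R^2) of the auxiliary regression
aux_educ <- lm(educ ~ exper + south + metro + female, data = dt4)
print(1 / (1 - summary(aux_educ)$r.squared))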
Note that, from the definition of the $VIF$, the regression for `wage` itself does not matter - we are using the design matrix to estimate a model for the exogenous regressors, but we only want to use those exogenous regressors which we intend to include in our final model.
From the results we see that these variables are NOT collinear. The collinearity only appears from the inclusion of polynomial and interaction variables and is not a cause for concern.
In this specific case, the no-collinearity result initially appears very strange, since from the variable definitions for this dataset we have that:

- `educ` - years of education;
- `exper` - potential experience = age - educ - 6.

So, we would expect `educ` and `exper` to be collinear. We will examine this in more detail right now!
We will begin by creating a copy of the dataset, since we do not want to modify the original, and adding the implied `age` variable:
dt4_new <- dt4
dt4_new$age <- dt4_new[, "exper"] + dt4_new[, "educ"] + 6
head(dt4_new)
If we were to calculate the correlation between these variables:
cor(dt4_new[, c("educ", "exper", "age")])
We would find that:

- the correlation between `educ` and `age` is very small;
- the correlation between `educ` and `exper` is around -0.2 - while small, it may be somewhat significant;
- the correlation between `exper` and `age` is very large.

So, it may very well be that `age` and `exper` are collinear, but not `exper` and `educ`. In other words:

- `exper` - the potential experience (from the definition: years spent not in education, presumably spent working) - is primarily driven by one's age;
- `exper` and `age` are highly correlated, and from the definition of `exper` we should be able to use `age` as a proxy variable (i.e. a substitute, or, as we will later see, an instrumental variable) for `exper`;
- consequently, we should include only one of the `exper` or `age` variables in our model.

We can very easily verify this by replacing `exper` with `age`:
mdl_4_age_fit <- lm(formula = "log(wage) ~ educ + I(educ^2) + age + I(age^2) + south + metro*female*educ + educ:age", data = dt4_new)
print(round(coef(summary(mdl_4_age_fit)), 5))
Comparing the coefficients with the previous model:
print(round(coef(summary(mdl_4_fit)), 5))
We see that, because `exper` and `age` are highly correlated, the coefficients of `age` and `age^2` are very similar to those of `exper` and `exper^2` in the previous model in terms of value, sign and significance ($t$ and $p$ values).
On the other hand, because `educ` and `age` have a very small correlation, the coefficient of the interaction term `educ:age` is insignificant.
Furthermore, if we were to replace `educ` with `exper`, then, since `exper` and `age` are highly correlated, we should run into a collinearity problem:
mdl_4_collin <- lm(formula = "log(wage) ~ exper + I(exper^2) + age + I(age^2) + south + metro*female*exper + exper:age", data = dt4_new)
print(round(coef(summary(mdl_4_collin)), 5))
What we immediately notice (compared with `mdl_4_age_fit`) is that, by replacing this one variable in the model:

- the coefficient of `exper` is negative (more experience leads to a smaller wage, which is questionable);
- the significance of `metro`, `age^2` and `metro:female` changed.

Furthermore, if we were to carry out an $F$-test to check the overall significance (it is immediately available in one of the model output tables):
summary(mdl_4_collin)
We can look at the last 3 lines of this table to focus our attention on the $F$-value:
cat(paste0(tail(capture.output(summary(mdl_4_collin)), 4), collapse = "\n"))
We can also calculate the exact $p$-value with:
pf(summary(mdl_4_collin)$fstatistic[1], summary(mdl_4_collin)$fstatistic[2], summary(mdl_4_collin)$fstatistic[3], lower.tail = FALSE)
Though from the summary output, `p-value: < 2.2e-16` is enough information for most cases.
With $p$-value = 4.44e-106 < 0.05, we reject the null hypothesis that all of the coefficients (except the intercept) are insignificant, while the individual $t$-statistics and their associated $p$-values indicate that almost all of the coefficients are insignificant.
If we were to examine the VIF of the parameters from this model:
print(cbind(car::vif(lm(formula = "log(wage) ~ exper + age + south + metro + female", data = dt4_new))))
We see that `exper` and `age` are highly collinear. If we were to further include `educ`, then we would have perfect multicollinearity, which would result in an error:
tryCatch({
print(cbind(car::vif(lm(formula = "log(wage) ~ educ + exper + age + south + metro + female", data = dt4_new))))
}, error = function(e) {
print(e)
})
As before, in order to avoid our code breaking, we wrap the code in a `tryCatch({...})` so that it does not stop at the error. Since we know for a fact that this code produces an error because of multicollinearity, and since this is an additional example which is not used in any later tasks, we can safely capture the error output, print it and have our code continue running further blocks.
In practice, blindly using `tryCatch` in such a way that the error is simply ignored would most surely result in a significant oversight in your analysis. Generally, error handling is important, but it is not the core focus of this course.
The error reads "there are aliased coefficients in the model". Since $R_j^2$ would be 1 for $j \in \{educ,\ exper,\ age\}$, we get $VIF = \dfrac{1}{1 - R_j^2} = \dfrac{1}{0} = \infty$.
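A quick check of this exact linear dependence, using the `dt4_new` data with the constructed `age` variable:
# The auxiliary R^2 of age on educ and exper equals 1, since age = exper + educ + 6:
print(summary(lm(age ~ educ + exper, data = dt4_new))$r.squared)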
Note that in `R`, "aliased" refers to variables that are linearly dependent (i.e. cause perfect multicollinearity).
Fortunately, `R` has a function to test for linear dependence between terms - `alias()`:
print(alias(lm(formula = "log(wage) ~ educ + exper + age + south + metro + female", data = dt4_new)))
Looking at the table, the results show:

- the row corresponds to `age` - the variable which showed a linear dependence with the other regressors;
- the non-zero columns indicate the terms involved in that linear dependence with `age`. This means they are highly correlated (note that terms can be highly correlated without being linearly dependent);
- `educ` and `exper` are highly correlated with `age`. The `(Intercept)` also shows a larger value, but we will not touch the intercept term, as it changes the interpretation of the variables as well as acts as a sort of garbage collector in our model.

So, we have determined the following:
- `educ` and `exper` are correlated, but the correlation is not high enough to warrant collinearity - `educ` has additional information which is not included in `exper`;
- `exper` and `age` are highly correlated, which results in collinearity between them - when estimating a regression model, we need to choose one of them to include in our model.

Possible explanations for the fact that the correlation between `educ` and `exper` is smaller, even though `educ` directly enters the formula used to calculate `exper`:

- `exper` increases with `age`, while `educ` tends to level off (i.e. stop changing) after a certain number of years gained. For example, once someone gains a Master's degree, or a PhD, it may be very likely that they stop pursuing additional degrees. As a result, their 'years spent in education' stops increasing, while they continue to age and gain additional years of potential experience;
- `educ` is more like a categorical variable, with categories corresponding to years in education; these range from 0 (the minimum) to 21 (the maximum), but since the numerical values assigned usually coincide with the number of years, it is treated like a non-categorical variable.

We will now load the full dataset:
dt4_full <- read.csv(file = "http://www.principlesofeconometrics.com/poe5/data/csv/cps5.csv", sep = ",", dec = ".", header = TRUE)
head(dt4_full)
print(paste0("Sample size: N = ", nrow(dt4_full)))
Not only does the full dataset contain more observations, but it also contains additional variables. The full variable list is as follows:

- `age` - age;
- `asian` - =1 if asian;
- `black` - =1 if black;
- `divorced` - =1 if divorced;
- `educ` - years of education;
- `exper` - potential experience = age - educ - 6;
- `faminc` - other family income, $\$$;
- `female` - =1 if female;
- `hrswork` - hours worked last week;
- `insure` - covered by private health insurance;
- `married` - =1 if married;
- `mcaid` - =1 if covered by Medicaid last year;
- `mcare` - =1 if covered by Medicare last year;
- `metro` - =1 if in metropolitan area;
- `midwest` - =1 if midwest region;
- `nchild` - number of own children in household;
- `northeast` - =1 if northeast region;
- `single` - =1 if single;
- `south` - =1 if south region;
- `union` - =1 if a union member;
- `wage` - earnings per hour, $\$$;
- `west` - =1 if west region;
- `white` - =1 if white.

In fact, if we look at the regional indicator variables:
table(rowSums(dt4_full[, c("south", "west", "midwest")]))
table(rowSums(dt4_full[, c("south", "west", "midwest", "northeast")]))
We see that using all four indicator variables always sums up to one:
$$
\text{south}_i + \text{west}_i + \text{midwest}_i + \text{northeast}_i = 1,\quad \forall i = 1,...,N
$$
In other words, including all four of the regional variables would result in a dummy variable trap, as they are collinear. So, in our smaller dataset, the $other$ base region is actually the excluded `northeast` column of the full dataset.
On the other hand, if we were to also examine the `metro` variable instead of `northeast`:
table(rowSums(dt4_full[, c("south", "west", "midwest", "metro")]))
We see that not only do some rows sum to zero - some even sum up to $2$. This clearly shows that the `metro` variable indicates something completely different from the regional variables.
If we were to look back at our initial model:
mdl_fulldt <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + west + midwest + female + black", data = dt4_full)
print(round(coef(summary(mdl_fulldt)), 4))
We see a completely different result regarding race. Furthermore, the regional indicator variables are also significant in most cases, except for the `west` indicator.
As was mentioned during lectures, a larger sample leads to smaller standard errors and more precise estimates. If we want to account for complex interaction effects and a large number of variables, we need a large dataset, which would cover many possible combinations of these values (i.e. a larger variable value range).
Further looking at the interaction terms:
mdl_fulldt <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + west + midwest + female*black", data = dt4_full)
print(round(coef(summary(mdl_fulldt)), 4))
We now see that the interaction term $female \times black$ is statistically significant.
We can further create even more complex models by including even more interaction terms.
mdl_fulldt <- lm(formula = "log(wage) ~ educ + I(educ^2) + exper + I(exper^2) + metro + south + west + midwest + female*black + metro*female*educ + educ:exper", data = dt4_full)
print(round(coef(summary(mdl_fulldt)), 4))
Our finalized model is the following:
print(round(coef(summary(mdl_4_fit)), 4))
We begin by testing the model residuals for autocorrelation via Breusch-Godfrey test: $$ \begin{cases} H_0&:\text{the errors are serially uncorrelated}\\ H_1&:\text{the errors are autocorrelated at lag order 2} \end{cases} $$
# Breusch-Godfrey Test
print(lmtest::bgtest(mdl_4_fit, order = 2))
Since the $p$-value of the $LM$ statistic is greater than the $5\%$ significance level, we have no grounds to reject the null hypothesis, and we conclude that the residuals are not serially correlated.
Next up, we will test for heteroskedasticity in the errors: $$ \begin{cases} H_0&: \gamma_1 = 0 \text{ (residuals are homoskedastic)}\\ H_1&: \gamma_1 \neq 0 \text{ (residuals are heteroskedastic)} \end{cases} $$ For simplicity, we will carry out the Breusch-Pagan Test:
# Breusch–Pagan Test
print(lmtest::bptest(mdl_4_fit))
Because the $p$-value < 0.05, we reject the null hypothesis and conclude that the residuals are heteroskedastic.
Our test results indicated that the residuals are heteroskedastic, but not serially correlated.
As a result, we need to correct the OLS standard errors for heteroskedasticity - we can use the $HC0$, $HC1$, $HC2$ or $HC3$ estimators to consistently estimate the coefficient variance.
We have no need to correct for autocorrelation, since the residuals are not serially correlated, so HAC is not strictly required; however, since it is a robust method that also accounts for heteroskedasticity, we will use it as well as an example.
For comparison, our current model and its coefficient standard errors:
print(round(coef(summary(mdl_4_fit)), 4))
Then, the standard errors corrected via the different HCE methods, as well as the biased OLS standard errors (biased because the errors are heteroskedastic), can be summarised as follows:
dt_hce_se <- NULL
for(hc_type in c("HC0", "HC1", "HC2", "HC3")){
dt_hce_se <- cbind(dt_hce_se, lmtest::coeftest(mdl_4_fit, vcov. = sandwich::vcovHC(mdl_4_fit, type = hc_type))[, 2])
}
dt_hce_se <- cbind(dt_hce_se, coef(summary(mdl_4_fit))[, 2])
colnames(dt_hce_se) <- c("HC0", "HC1", "HC2", "HC3", "OLS")
round(dt_hce_se, 6)
We see that the difference between the four HCE methods is not incredibly large; nevertheless, we will select `HC3` and examine the coefficient summary output:
tmp_out <- lmtest::coeftest(mdl_4_fit, vcov. = sandwich::vcovHC(mdl_4_fit, type = "HC3"))
print(round(tmp_out, 4))
Note the results - $\text{educ}^2$ is still insignificant, the $p$-value of `metro` decreased, and the $p$-value of `south` increased slightly. All in all, no significant changes.
If we wanted to also calculate the HAC correction standard errors:
print(round(data.frame(HAC = sqrt(diag(sandwich::NeweyWest(mdl_4_fit, lag = 2)))), 6))
Following the documentation, `NeweyWest()` is a convenience interface to `vcovHAC()` using Bartlett kernel weights.
In comparison, `vcovHAC()` allows choosing the weights as either `weightsAndrews`, or `weightsLumley`, or a custom function to calculate the weights (which is where `NeweyWest()` is a useful wrapper function, since we do not need to calculate the weights separately).
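For illustration, a small sketch (relying on the sandwich package defaults) comparing HAC standard errors from `vcovHAC()` with Andrews weights against the `NeweyWest()` results used here:
# HAC standard errors: vcovHAC() with Andrews weights vs. NeweyWest()
se_andrews <- sqrt(diag(sandwich::vcovHAC(mdl_4_fit, weights = sandwich::weightsAndrews)))
se_nw      <- sqrt(diag(sandwich::NeweyWest(mdl_4_fit, lag = 2)))
print(round(cbind(HAC_Andrews = se_andrews, NeweyWest = se_nw), 6))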
And the full model output:
print(round(lmtest::coeftest(mdl_4_fit, sandwich::NeweyWest(mdl_4_fit, lag = 2))[, ], 4))
Note that we used `[, ]` so that we could treat the output as a plain numeric matrix and round the results.
While the $p$-values slightly decreased, there are still no significant changes.
Since we have determined that the residuals are heteroskedastic, but not autocorrelated, we can use WLS with a generic weight function $\widehat{h}_i = \exp\left(\widehat{\log(\epsilon_i^2)}\right)$, where $\widehat{\log(\epsilon_i^2)}$ are the fitted values from the following residual regression: $\log(\epsilon_i^2) = \alpha_0 + \alpha_1 Z_{1,i} + ... + \alpha_m Z_{m,i} + v_i$.
log_resid_sq_ols <- lm.fit(y = log(mdl_4_fit$residuals^2), x = model.matrix(mdl_4_fit))
Note that `lm.fit` returns a different output from `lm`, but we can pass the endogenous variable as `y` and the exogenous variable design matrix (including the intercept and interaction/polynomial terms) as `x`.
h_est = exp(log_resid_sq_ols$fitted.values)
Next, we can use the diagonal elements of $\widehat{\mathbf{\Omega}}^{-1} = \text{diag} \left(\widehat{h}_1^{-1},...,\widehat{h}_N^{-1} \right)$ as the weights:
mdl_4_wls_fit <- lm(formula = formula(mdl_4_fit), data = dt4, weights = 1 / h_est)
print(round(coef(summary(mdl_4_wls_fit)), 4))
Compared to our OLS results:
print(round(coef(summary(mdl_4_fit)), 4))
we see that the WLS parameters are significant.
Regarding the $R^2$ - in the WLS it is larger:
summary(mdl_4_wls_fit)$adj.r.squared
But do note that it is calculated on the weighted (i.e. transformed) data, so it is not directly comparable to the OLS $R^2$.
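One rough way to compare the two fits on a common scale is the squared correlation between each model's fitted values and log(wage) on the original (untransformed) data - a sketch, not a formal $R^2$:
# Squared correlation between fitted values and log(wage), on the original scale:
print(c(OLS = cor(fitted(mdl_4_fit), log(dt4$wage))^2,
        WLS = cor(fitted(mdl_4_wls_fit), log(dt4$wage))^2))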
In general, the coefficients themselves are not largely different, which would indicate that our model is likely correctly specified.
On the other hand, we may be more interested in comparing the model residuals of the WLS. It would make sense to compare the WLS residuals which are from the transformed data, since the model was fitted on the transformed values:
e_star <- 1.0 / sqrt(h_est) * mdl_4_wls_fit$residuals
options(repr.plot.height = 5)
par(mfrow = c(1, 2))
plot(mdl_4_wls_fit$fitted.values, e_star, pch = 21, col = "black", bg = "cornflowerblue")
points(mdl_4_fit$fitted.values, mdl_4_fit$residuals, pch = 21, col = "black", bg = "red")
legend("topright", legend = c("WLS", "OLS"), pch = c(21, 21), pt.cex = 1.2, cex = 0.8, pt.bg = c("cornflowerblue", "red"))
#
hist(e_star, breaks = 30, col = "cornflowerblue", main = NULL)
hist(mdl_4_fit$residuals, breaks = 30, col = "red", add = TRUE)
legend("topright", legend = c("WLS", "OLS"), pch = c(22, 22), pt.cex = 1.2, cex = 0.8, pt.bg = c("cornflowerblue", "red"))
We do note that the residual variance is larger in the transformed data. Generally, we would hope that WLS (and (F)GLS) would reduce the variance of the residuals. This may indicate that we need different weights. Nevertheless, for now, we will use the WLS model.
Looking at it in a bit more detail:
options(repr.plot.height = 8)
layout(layout_mat)
plot(mdl_4_wls_fit$fitted.values, e_star, type = "p", pch = 21, bg = "cornflowerblue", main = "Residuals vs Fitted", ylab = "residuals", xlab = "fitted values", cex = 1.5)
hist(e_star, col = "cornflowerblue", breaks = 30, main = "Residual Histogram")
qqnorm(e_star, main = "Q-Q plot of residuals", pch = 21, bg = "cornflowerblue", cex = 1.5)
qqline(e_star, col = "red", lwd = 2)
Visually, the scatterplot of the residuals may be better, but we are not sure. Thankfully, we know some tests, which can help us out.
# Breusch-Godfrey Test
print(lmtest::bgtest(mdl_4_wls_fit, order = 2))
# Breusch-Pagan test for WLS
print(car::ncvTest(mdl_4_wls_fit))
There are a few subtleties when comparing `bptest()` and `ncvTest()`. Note that `ncvTest()` supports either a weighted or unweighted linear model produced by `lm`. The two tests differ in the alternative they consider:

- `lmtest::bptest()` - the error variance is a function of a linear combination of the predictors in the model;
- `car::ncvTest()` - the error variance is a function of the expectation of Y (variance formula `~ fitted.values`). For the weighted model, `ncvTest()` uses the Pearson residuals and hence takes the weights into account.

Unfortunately, even when specifying `varformula` to use the fitted values, the `bptest()` result does not match that of `ncvTest()`:
# The BP test does not give the same results for the WLS case:
lmtest::bptest(mdl_4_wls_fit, varformula = ~ fitted.values(mdl_4_wls_fit), studentize = FALSE)
This is something to keep in mind in case you ever need to carry out a WLS and check its residuals.
While the $p$-value is larger - though $p = 0.09$ is cutting it close to the $5\%$ significance level - we would not reject the null hypothesis that the residuals are homoskedastic.
So, our WLS procedure did take into account all of the heteroskedasticity.
Note that since we calculated the weights using the same exogenous variables as in the main model, it may very well be that the residual variance depends on some additional exogenous variables which we did not include in our main model.
Though in this case, it appears that we have accounted for most of the significant heteroskedasticity.
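As an illustration of that point (purely a sketch - `faminc` is our own arbitrary choice here, not something suggested by the tests above), we could add a variable from outside the main model to the auxiliary weight regression and re-check for heteroskedasticity:
# Illustrative alternative weights: add faminc to the auxiliary residual regression
Z_alt   <- cbind(model.matrix(mdl_4_fit), faminc = dt4$faminc)
aux_alt <- lm.fit(x = Z_alt, y = log(mdl_4_fit$residuals^2))
h_alt   <- exp(aux_alt$fitted.values)
mdl_4_wls_alt <- lm(formula = formula(mdl_4_fit), data = dt4, weights = 1 / h_alt)
print(car::ncvTest(mdl_4_wls_alt))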
On the other hand, if there were still some heteroskedasticity - we would need to correct our WLS standard errors. We can do this quite easily with:
tmp_out <- lmtest::coeftest(mdl_4_wls_fit, vcov. = sandwich::vcovHC(mdl_4_wls_fit, type = "HC3"))
print(round(tmp_out, 4))
We would again return to the conclusion that we should remove $educ^2$, as it is insignificant (though we would get different results with `HC0`, `HC1` and `HC2`).
We can conclude that the WLS procedure appears to have accounted for most of the heteroskedasticity, while leaving the coefficient estimates close to their OLS values.
If, even after having carried out all of these tests and different estimation methods, we would still like to account for any remaining heteroskedasticity, we could look at:

- a different specification of the weights in the WLS procedure;
- a larger dataset (for example, `black` is significant in the full dataset).

For interest's sake, if we were to compare the residuals on the original data, they would have minor differences:
options(repr.plot.height = 5)
par(mfrow = c(1, 2))
plot(mdl_4_wls_fit$fitted.values, mdl_4_wls_fit$residuals, pch = 21, col = "black", bg = "cornflowerblue")
points(mdl_4_fit$fitted.values, mdl_4_fit$residuals, pch = 21, col = "black", bg = "red")
legend("topright", legend = c("WLS", "OLS"), pch = c(21, 21), pt.cex = 1.2, cex = 0.8, pt.bg = c("cornflowerblue", "red"))
#
hist(mdl_4_fit$residuals, breaks = 30, col = "red", main = NULL)
hist(mdl_4_wls_fit$residuals, breaks = 30, col = "cornflowerblue", add = TRUE)
legend("topright", legend = c("WLS", "OLS"), pch = c(22, 22), pt.cex = 1.2, cex = 0.8, pt.bg = c("cornflowerblue", "red"))
Again, since WLS fits a model on the transformed data, we are interested in whether the residuals from the fitted transformed data adhere to our (MR.3) - (MR.6) assumptions.