## 3.7 OLS Prediction and Prediction Intervals

We have examined model specification, parameter estimation and interpretation techniques. However, usually we are not only interested in identifying and quantifying the independent variable effects on the dependent variable, but we also want to predict the (unknown) value of $$Y$$ for any value of $$X$$. Prediction plays an important role in financial analysis (forecasting sales, revenue, etc.), government policies (prediction of growth rates for income, inflation, tax revenue, etc.) and so on.

Let our univariate regression be defined by the linear model: $Y = \beta_0 + \beta_1 X + \epsilon$ and let assumptions (UR.1)-(UR.4) hold. Let $$\widetilde{X}$$ be a given value of the explanatory variable.

### 3.7.1 OLS Prediction

We want to predict the value $$\widetilde{Y}$$ for this given value $$\widetilde{X}$$. To do so, we assume that the true DGP remains the same for $$\widetilde{Y}$$. The difference from the mean response is that a prediction concerns the regression outcome itself, which is composed of two parts: $\widetilde{\mathbf{Y}}= \mathbb{E}\left(\widetilde{\mathbf{Y}} | \widetilde{\mathbf{X}} \right) + \widetilde{\boldsymbol{\varepsilon}}$ where:

• $$\mathbb{E}\left(\widetilde{Y} | \widetilde{X} \right) = \beta_0 + \beta_1 \widetilde{X}$$ is the systematic component;
• $$\widetilde{\epsilon}$$ is the random component.

The expected value of the random component is zero. We can estimate the systematic component using the OLS parameter estimates: $\widehat{\mathbf{Y}} = \widehat{\mathbb{E}}\left(\widetilde{\mathbf{Y}} | \widetilde{\mathbf{X}} \right)= \widetilde{\mathbf{X}} \widehat{\boldsymbol{\beta}}.$ $$\widehat{\mathbf{Y}}$$ is called the prediction.

#### 3.7.1.1 The Conditional Expectation is The Best Predictor

We begin by outlining the main properties of the conditional moments, which will be useful (assume that $$X$$ and $$Y$$ are random variables):

• Law of total expectation: $$\mathbb{E}\left[ \mathbb{E}\left(h(Y) | X \right) \right] = \mathbb{E}\left[h(Y)\right]$$;
• Conditional variance: $$\mathbb{V}{\rm ar} ( Y | X ) := \mathbb{E}\left( (Y - \mathbb{E}\left[ Y | X \right])^2| X\right) = \mathbb{E}( Y^2 | X) - \left(\mathbb{E}\left[ Y | X \right]\right)^2$$;
• Variance of conditional expectation: $$\mathbb{V}{\rm ar} (\mathbb{E}\left[ Y | X \right]) = \mathbb{E}\left[(\mathbb{E}\left[ Y | X \right])^2\right] - (\mathbb{E}\left[\mathbb{E}\left[ Y | X \right]\right])^2 = \mathbb{E}\left[(\mathbb{E}\left[ Y | X \right])^2\right] - (\mathbb{E}\left[Y\right])^2$$;
• Expectation of conditional variance: $$\mathbb{E}\left[ \mathbb{V}{\rm ar} (Y | X) \right] = \mathbb{E}\left[ (Y - \mathbb{E}\left[ Y | X \right])^2 \right] = \mathbb{E}\left[\mathbb{E}\left[ Y^2 | X \right]\right] - \mathbb{E}\left[(\mathbb{E}\left[ Y | X \right])^2\right] = \mathbb{E}\left[ Y^2 \right] - \mathbb{E}\left[(\mathbb{E}\left[ Y | X \right])^2\right]$$;
• Adding the third and fourth properties together gives us: $$\mathbb{V}{\rm ar}(Y) = \mathbb{E}\left[ Y^2 \right] - (\mathbb{E}\left[ Y \right])^2 = \mathbb{V}{\rm ar} (\mathbb{E}\left[ Y | X \right]) + \mathbb{E}\left[ \mathbb{V}{\rm ar} (Y | X) \right]$$.
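The last identity (the law of total variance) can be checked numerically. The following sketch (not from the text; all variable names are illustrative) simulates a discrete $$X$$ so that the conditional moments can be estimated group by group; under the empirical distribution the identity holds exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
# X takes a few discrete values, so conditional moments are simple group statistics
x = rng.integers(low=0, high=4, size=100_000)
y = 2.0 + 0.5 * x + rng.normal(scale=1.5, size=x.size)

values = np.arange(4)
p = np.array([(x == v).mean() for v in values])           # P(X = v)
cond_mean = np.array([y[x == v].mean() for v in values])  # E[Y | X = v]
cond_var = np.array([y[x == v].var() for v in values])    # Var(Y | X = v)

# Var(E[Y|X]) and E[Var(Y|X)] under the empirical distribution of X
var_of_cond_mean = p @ cond_mean**2 - (p @ cond_mean)**2
mean_of_cond_var = p @ cond_var

# law of total variance: Var(Y) = Var(E[Y|X]) + E[Var(Y|X)]
print(np.var(y), var_of_cond_mean + mean_of_cond_var)
```

The two printed numbers agree up to floating-point error, since the decomposition is an algebraic identity for the empirical measure as well.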

For simplicity, assume that we predict $$\mathbf{Y}$$ via the conditional expectation: $\widehat{\mathbf{Y}} = \mathbb{E}\left(\mathbf{Y} | \mathbf{X} \right).$ We will show that, in general, the conditional expectation is the best predictor of $$\mathbf{Y}$$.

Assume that the best predictor of $$Y$$ (a single value), given $$\mathbf{X}$$, is some function $$g(\cdot)$$, which minimizes the expected squared error: $\text{argmin}_{g(\mathbf{X})} \mathbb{E} \left[ (Y - g(\mathbf{X}))^2 \right].$ Using the conditional moment properties, we can rewrite $$\mathbb{E} \left[ (Y - g(\mathbf{X}))^2 \right]$$ as: \begin{aligned} \mathbb{E} \left[ (Y - g(\mathbf{X}))^2 \right] &= \mathbb{E} \left[ (Y + \mathbb{E} [Y|\mathbf{X}] - \mathbb{E} [Y|\mathbf{X}] - g(\mathbf{X}))^2 \right] \\ &= \mathbb{E} \left[ (Y - \mathbb{E} [Y|\mathbf{X}])^2 + 2(Y - \mathbb{E} [Y|\mathbf{X}])(\mathbb{E} [Y|\mathbf{X}] - g(\mathbf{X})) + (\mathbb{E} [Y|\mathbf{X}] - g(\mathbf{X}))^2 \right] \\ &=\mathbb{E} \left[ \mathbb{E}\left((Y - \mathbb{E} [Y|\mathbf{X}])^2 | \mathbf{X}\right)\right] + \mathbb{E} \left[ 2(\mathbb{E} [Y|\mathbf{X}] - g(\mathbf{X}))\mathbb{E}\left[Y - \mathbb{E} [Y|\mathbf{X}] |\mathbf{X}\right] + \mathbb{E} \left[ (\mathbb{E} [Y|\mathbf{X}] - g(\mathbf{X}))^2 | \mathbf{X}\right] \right] \\ &= \mathbb{E}\left[ \mathbb{V}{\rm ar} (Y | \mathbf{X}) \right] + \mathbb{E} \left[ (\mathbb{E} [Y|\mathbf{X}] - g(\mathbf{X}))^2\right], \end{aligned} where the middle term vanishes because $$\mathbb{E}\left[Y - \mathbb{E} [Y|\mathbf{X}] |\mathbf{X}\right] = 0$$. Taking $$g(\mathbf{X}) = \mathbb{E} [Y|\mathbf{X}]$$ reduces the above expression to the expectation of the conditional variance of $$Y$$ given $$\mathbf{X}$$: $\mathbb{E} \left[ (Y - \mathbb{E} [Y|\mathbf{X}])^2 \right] = \mathbb{E}\left[ \mathbb{V}{\rm ar} (Y | \mathbf{X}) \right].$ Thus, $$g(\mathbf{X}) = \mathbb{E} [Y|\mathbf{X}]$$ is the best predictor of $$Y$$.
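This optimality can be illustrated with a small simulation (a sketch, not from the text; names are illustrative). With a discrete $$X$$, the empirical conditional mean is a group average, and its mean squared error is smaller than that of any competing predictor, such as the unconditional mean of $$Y$$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(low=0, high=4, size=100_000)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=x.size)

# empirical conditional expectation E[Y | X = v], mapped back to each observation
cond_mean = np.array([y[x == v].mean() for v in range(4)])
g_cond = cond_mean[x]

# a competing predictor: the unconditional mean of Y (ignores X entirely)
g_uncond = np.full_like(y, y.mean())

mse_cond = np.mean((y - g_cond)**2)
mse_uncond = np.mean((y - g_uncond)**2)
print(mse_cond < mse_uncond)  # → True
```

The unconditional mean leaves the systematic variation $$\beta_1 X$$ in the error, so its MSE is larger by roughly $$\mathbb{V}{\rm ar}(\mathbb{E}[Y|X])$$.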

### 3.7.2 Prediction Intervals

We can define the forecast error as $\widetilde{\boldsymbol{e}} = \widetilde{\mathbf{Y}} - \widehat{\mathbf{Y}} = \widetilde{\mathbf{X}} \boldsymbol{\beta} + \widetilde{\boldsymbol{\varepsilon}} - \widetilde{\mathbf{X}} \widehat{\boldsymbol{\beta}}.$

From the distribution of the dependent variable, $\mathbf{Y} | \mathbf{X} \sim \mathcal{N} \left(\mathbf{X} \boldsymbol{\beta},\ \sigma^2 \mathbf{I} \right),$ we know that the true observation $$\widetilde{\mathbf{Y}}$$ will vary around the mean $$\widetilde{\mathbf{X}} \boldsymbol{\beta}$$ with variance $$\sigma^2 \mathbf{I}$$.

Furthermore, since $$\widetilde{\boldsymbol{\varepsilon}}$$ are independent of $$\mathbf{Y}$$, it holds that: \begin{aligned} \mathbb{C}{\rm ov} (\widetilde{\mathbf{Y}}, \widehat{\mathbf{Y}}) &= \mathbb{C}{\rm ov} (\widetilde{\mathbf{X}} \boldsymbol{\beta} + \widetilde{\boldsymbol{\varepsilon}}, \widetilde{\mathbf{X}} \widehat{\boldsymbol{\beta}})\\ &= \mathbb{C}{\rm ov} (\widetilde{\boldsymbol{\varepsilon}}, \widetilde{\mathbf{X}} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{Y})\\ &= 0 \end{aligned} We again highlight that $$\widetilde{\boldsymbol{\varepsilon}}$$ are shocks in $$\widetilde{\mathbf{Y}}$$, which is some other realization from the DGP that is different from $$\mathbf{Y}$$ (which has shocks $$\boldsymbol{\varepsilon}$$, and was used when estimating parameters via OLS).

Because of this, the variance of the forecast error is (assuming that $$\mathbf{X}$$ and $$\widetilde{\mathbf{X}}$$ are fixed): \begin{aligned} \mathbb{V}{\rm ar}\left( \widetilde{\boldsymbol{e}} \right) &= \mathbb{V}{\rm ar}\left( \widetilde{\mathbf{Y}} - \widehat{\mathbf{Y}} \right) \\ &= \mathbb{V}{\rm ar}\left( \widetilde{\mathbf{Y}} \right) - \mathbb{C}{\rm ov} (\widetilde{\mathbf{Y}}, \widehat{\mathbf{Y}}) - \mathbb{C}{\rm ov} ( \widehat{\mathbf{Y}}, \widetilde{\mathbf{Y}})+ \mathbb{V}{\rm ar}\left( \widehat{\mathbf{Y}} \right) \\ &= \mathbb{V}{\rm ar}\left( \widetilde{\mathbf{Y}} \right) + \mathbb{V}{\rm ar}\left( \widehat{\mathbf{Y}} \right)\\ &= \sigma^2 \mathbf{I} + \widetilde{\mathbf{X}} \sigma^2 \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \widetilde{\mathbf{X}}^\top \\ &= \sigma^2 \left( \mathbf{I} + \widetilde{\mathbf{X}} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \widetilde{\mathbf{X}}^\top\right) \end{aligned}

Note that our prediction interval is affected not only by the variance of the true $$\widetilde{\mathbf{Y}}$$ (due to random shocks), but also by the variance of $$\widehat{\mathbf{Y}}$$ (since coefficient estimates, $$\widehat{\boldsymbol{\beta}}$$, are generally imprecise and have a non-zero variance), i.e. it combines the uncertainty coming from the parameter estimates and the uncertainty coming from the randomness in a new observation.

Hence, a prediction interval will be wider than a confidence interval. In practice, we replace $$\sigma^2$$ with its estimator $$\widehat{\sigma}^2 = \dfrac{1}{N-2} \sum_{i = 1}^N \widehat{\epsilon}_i^2$$.

Let $$\text{se}(\widetilde{e}_i) = \sqrt{\widehat{\mathbb{V}{\rm ar}} (\widetilde{e}_i)}$$ be the square root of the corresponding $$i$$-th diagonal element of $$\widehat{\mathbb{V}{\rm ar}} (\widetilde{\boldsymbol{e}})$$. This is also known as the standard error of the forecast. Then, the $$100 \cdot (1 - \alpha) \%$$ prediction interval can be calculated as: $\widehat{Y}_i \pm t_{(1 - \alpha/2, N-2)} \cdot \text{se}(\widetilde{e}_i)$

Example 3.27 We will generate a univariate linear regression with $$\beta_0 = 2$$, $$\beta_1 = 0.4$$, $$N = 100$$ and $$X$$ an equally spaced sequence on the interval $$\left[0, 20 \right]$$.
#
#
set.seed(123)
#
N <- 100
beta_0 = 2
beta_1 = 0.4
#
x <- seq(from = 0, to = 20, length.out = N)
e <- rnorm(mean = 0, sd = 2, n = N)
y <- beta_0 + beta_1 * x + e
import numpy as np
#
np.random.seed(123)
#
N = 100
beta_0 = 2
beta_1 = 0.4
#
x = np.linspace(start = 0, stop = 20, num = N)
e = np.random.normal(loc = 0, scale = 2, size = N)
y = beta_0 + beta_1 * x + e

Next, we will estimate the coefficients and their standard errors:

# Manual Calculation
#  Estimate the parameters:
x_mat <- cbind(1, x)
xtx <- t(x_mat) %*% x_mat
xty <- t(x_mat) %*% y
beta_mat_est <- solve(xtx) %*% xty
#  Calculate model fit:
y_fit <- beta_mat_est[1] + beta_mat_est[2] * x
#  Calculate Residuals:
resid <- y - y_fit
#  Estimate the standard errors:
sigma2_est <- sum(resid^2) / (length(x) - 2)
var_beta   <- sigma2_est * solve(t(x_mat) %*% x_mat)
std_err    <- sqrt(diag(var_beta))
# Manual Calculation
#  Estimate the parameters:
x_mat = np.column_stack((np.ones(len(x)), x))
xtx = np.dot(np.transpose(x_mat), x_mat)
xty = np.dot(np.transpose(x_mat), y)
beta_mat_est = np.dot(np.linalg.inv(xtx), xty)
#  Calculate model fit:
y_fit = beta_mat_est[0] + beta_mat_est[1] * x
#  Calculate the residuals:
resid = y - y_fit
#  Estimate the standard errors:
sigma2_est = sum(resid**2) / (len(x) - 2)
var_beta = sigma2_est * np.linalg.inv(np.dot(np.transpose(x_mat), x_mat))
std_err = np.sqrt(np.diag(var_beta))

For simplicity, assume that we will predict $$Y$$ for the existing values of $$X$$:

#
#
#
# Let's calculate the mean response (i.e. fitted) values again:
x_new  <- x_mat
y_pred <- x_new %*% beta_mat_est
# Calculate the prediction SE:
y_pred_se <- solve(t(x_mat) %*% x_mat)
y_pred_se <- x_new %*% y_pred_se %*% t(x_new)
y_pred_se <- diag(nrow(x_new)) + y_pred_se
y_pred_se <- sigma2_est * y_pred_se
y_pred_se <- sqrt(diag(y_pred_se))
# Prediction intervals for the predicted Y:
y_pred_lower <- y_pred - qt(1 - 0.05 / 2, N-2) * y_pred_se
y_pred_upper <- y_pred + qt(1 - 0.05 / 2, N-2) * y_pred_se
print(head(cbind(y_pred, y_pred_lower, y_pred_upper)))
##          [,1]      [,2]     [,3]
## [1,] 1.932214 -1.768340 5.632769
## [2,] 2.018045 -1.680416 5.716505
## [3,] 2.103875 -1.592533 5.800283
## [4,] 2.189705 -1.504691 5.884102
## [5,] 2.275535 -1.416892 5.967963
## [6,] 2.361366 -1.329134 6.051866
import scipy.stats as stats
import pandas as pd
#
# Let's calculate the predicted values:
x_new  = x_mat
y_pred = np.dot(x_new,  beta_mat_est)
# Calculate the prediction SE:
y_pred_se = np.linalg.inv(np.dot(np.transpose(x_mat), x_mat))
y_pred_se = np.dot(np.dot(x_new, y_pred_se), np.transpose(x_new))
y_pred_se = np.identity(len(x_new)) + y_pred_se
y_pred_se = sigma2_est * y_pred_se
y_pred_se = np.sqrt(np.diag(y_pred_se))
# Prediction intervals for the predicted Y:
y_pred_lower = y_pred - stats.t.ppf(q = 1 - 0.05 / 2, df = N-2) * y_pred_se
y_pred_upper = y_pred + stats.t.ppf(q = 1 - 0.05 / 2, df = N-2) * y_pred_se
print(pd.DataFrame(np.column_stack([y_pred, y_pred_lower, y_pred_upper])).head())
##           0         1         2
## 0  2.026150 -2.585366  6.637667
## 1  2.107526 -2.501381  6.716433
## 2  2.188901 -2.417448  6.795250
## 3  2.270276 -2.333567  6.874119
## 4  2.351651 -2.249738  6.953040
#
#
#
#
plot(x, y)
#
lines(x, y_pred, col = "red")
lines(x, y_pred_lower, col = "blue", lty = 2)
lines(x, y_pred_upper, col = "blue", lty = 2)
import matplotlib.pyplot as plt
#
_ = plt.figure(num = 0, figsize=(10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o",
markerfacecolor = "None", color = "black")
_ = plt.plot(x, y_pred, color = "red")
_ = plt.plot(x, y_pred_lower, color = "blue", linestyle = "--")
_ = plt.plot(x, y_pred_upper, color = "blue", linestyle = "--")
plt.show()

Just like for the confidence intervals, we can get the prediction intervals from the built-in functions:

#
#
#
# Automatically:
lm_fit <- lm(y ~ x)
#
#
#
#
#
#
y_prd <- predict(lm_fit,
newdata = data.frame(x = x),
interval = c("prediction"),
level = 0.95)
print(head(y_prd))
##        fit       lwr      upr
## 1 1.932214 -1.768340 5.632769
## 2 2.018045 -1.680416 5.716505
## 3 2.103875 -1.592533 5.800283
## 4 2.189705 -1.504691 5.884102
## 5 2.275535 -1.416892 5.967963
## 6 2.361366 -1.329134 6.051866
import statsmodels.api as sm
#
# Automatically:
lm_fit = sm.OLS(y, x_mat).fit()
# Old way:
#from statsmodels.stats.outliers_influence import summary_table
#dt = summary_table(lm_fit, alpha = 0.05)
#y_prd = dt[:, 2]
#yprd_ci_lower, yprd_ci_upper = dt[:, 6:8].T
# New way:
dt = lm_fit.get_prediction(x_mat).summary_frame(alpha = 0.05)
y_prd = dt['mean']
yprd_ci_lower = dt['obs_ci_lower']
yprd_ci_upper = dt['obs_ci_upper']
print(pd.DataFrame(np.column_stack([y_prd, yprd_ci_lower, yprd_ci_upper])).head())
##           0         1         2
## 0  2.026150 -2.585366  6.637667
## 1  2.107526 -2.501381  6.716433
## 2  2.188901 -2.417448  6.795250
## 3  2.270276 -2.333567  6.874119
## 4  2.351651 -2.249738  6.953040
#
#
plot(x, y)
#
lines(x, y_prd[, "fit"], type = "l", col = "red")
lines(x, y_prd[, "lwr"], type = "l", col = "blue", lty = 2)
lines(x, y_prd[, "upr"], type = "l", col = "blue", lty = 2)
_ = plt.figure(num = 1, figsize=(10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o",
markerfacecolor = "None", color = "black")
_ = plt.plot(x, y_prd, color = "red")
_ = plt.plot(x, yprd_ci_lower, color = "blue", linestyle = "--")
_ = plt.plot(x, yprd_ci_upper, color = "blue", linestyle = "--")
plt.show()

### 3.7.3 Confidence Intervals vs Prediction Intervals

Confidence intervals tell you about how well you have determined the mean. Assume that the data really are randomly sampled from a Gaussian distribution. If you sample the data many times, and calculate a confidence interval of the mean from each sample, you’d expect about $$95\%$$ of those intervals to include the true value of the population mean. The key point is that the confidence interval tells you about the likely location of the true population parameter.

Prediction intervals tell you where you can expect to see the next data point sampled. Assume that the data really are randomly sampled from a Gaussian distribution. Collect a sample of data and calculate a prediction interval. Then sample one more value from the population. If you do this many times, you’d expect that next value to lie within that prediction interval in $$95\%$$ of the samples. The key point is that the prediction interval tells you about the distribution of values, not the uncertainty in determining the population mean.

Prediction intervals are conceptually related to confidence intervals, but they are not the same. A prediction interval relates to a realization (which has not yet been observed, but will be observed in the future), whereas a confidence interval pertains to a parameter (which is in principle not observable, e.g., the population mean).

Another way to look at it is that a prediction interval is the confidence interval for an observation (as opposed to the mean), which includes an estimate of the error. A confidence interval gives a range for $$\mathbb{E} (\boldsymbol{Y}|\boldsymbol{X})$$, whereas a prediction interval gives a range for $$\boldsymbol{Y}$$ itself. Since our best guess for predicting $$\boldsymbol{Y}$$ is $$\widehat{\mathbf{Y}} = \widehat{\mathbb{E}} (\boldsymbol{Y}|\boldsymbol{X})$$, both the confidence interval and the prediction interval will be centered around $$\widetilde{\mathbf{X}} \widehat{\boldsymbol{\beta}}$$, but the prediction interval will be wider than the confidence interval.

Source: 1, 2, 3, 4.

Example 3.28 We will continue our previous example and calculate the confidence interval for the mean response value for the same values of $$X$$ that we estimated the regression model on.
ym_fit <- predict(lm_fit, newdata = data.frame(x = x),
interval = c("confidence"), level = 0.95)
ym_ci_lower = dt['mean_ci_lower']
ym_ci_upper = dt['mean_ci_upper']
plot(x, y)
#
lines(x, y_prd[, "fit"], type = "l", col = "red")
lines(x, ym_fit[, "lwr"], type = "l", col = "darkgreen", lty = 2)
lines(x, ym_fit[, "upr"], type = "l", col = "darkgreen", lty = 2)
lines(x, y_prd[, "lwr"], type = "l", col = "blue", lty = 2)
lines(x, y_prd[, "upr"], type = "l", col = "blue", lty = 2)
legend(x = 0, y = 12,
legend = c("actual", "OLS",
"Confidence Interval", "Prediction Interval"),
lty = c(NA, 1, 2, 2), col = c("black", "red", "darkgreen", "blue"),
pch = c(1, NA, NA, NA))
_ = plt.figure(num = 2, figsize=(10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o", label = "actual",
markerfacecolor = "None", color = "black")
_ = plt.plot(x, y_fit, color = "red", label = "OLS")
_ = plt.plot(x, ym_ci_lower, color = "darkgreen", linestyle = "--",
label = "Confidence Interval")
_ = plt.plot(x, ym_ci_upper, color = "darkgreen", linestyle = "--")
_ = plt.plot(x, yprd_ci_lower, color = "blue", linestyle = "--",
label = "Prediction Interval")
_ = plt.plot(x, yprd_ci_upper, color = "blue", linestyle = "--")
_ = plt.legend()
plt.show()

Prediction intervals must account for both: (i) the uncertainty of the population mean; (ii) the randomness (i.e. scatter) of the data. So, a prediction interval is always wider than a confidence interval.

In the time series context, prediction intervals are known as forecast intervals.

### 3.7.4 Prediction intervals when $$Y$$ is transformed

We will examine the following exponential model: $Y = \exp(\beta_0 + \beta_1 X + \epsilon)$ which we can rewrite as a log-linear model: $\log(Y) = \beta_0 + \beta_1 X + \epsilon$

Having estimated the log-linear model we are interested in the predicted value $$\widehat{Y}$$. Unfortunately, our specification allows us to calculate the prediction of the log of $$Y$$, $$\widehat{\log(Y)}$$. Nevertheless, we can obtain the predicted values by taking the exponent of the prediction, namely: $\widehat{Y} = \exp \left(\widehat{\log(Y)} \right) = \exp \left(\widehat{\beta}_0 + \widehat{\beta}_1 X\right)$ Having obtained the point predictor $$\widehat{Y}$$, we may be further interested in calculating the prediction (or, forecast) intervals of $$\widehat{Y}$$. In order to do so, we apply the same technique that we did for the point predictor - we estimate the prediction intervals for $$\widehat{\log(Y)}$$ and take their exponent.

Then, a $$100 \cdot (1 - \alpha)\%$$ prediction interval for $$Y$$ is: $\left[ \exp\left(\widehat{\log(Y)} - t_c \cdot \text{se}(\widetilde{e}_i) \right);\quad \exp\left(\widehat{\log(Y)} + t_c \cdot \text{se}(\widetilde{e}_i) \right)\right]$ or more compactly, $$\left[ \exp\left(\widehat{\log(Y)} \pm t_c \cdot \text{se}(\widetilde{e}_i) \right)\right]$$.

Example 3.29 Let $$\beta_0 = 0.2$$, $$\beta_1 = -1.8$$, $$N = 1000$$ and $$\epsilon \sim \mathcal{N}(0,(0.2)^2)$$.
set.seed(123)
#
N <- 1000
beta_0 <- 0.2
beta_1 <- -1.8
#
x <- sample(seq(from = 0, to = 1, length.out = N), replace = TRUE)
#
e <- rnorm(mean = 0, sd = 0.2, n = length(x))
y <- exp(beta_0 + beta_1 * x + e)
#
#
#
#
plot(x, y)
np.random.seed(123)
#
N = 1000
beta_0 = 0.2
beta_1 = -1.8
#
x = np.random.choice(np.linspace(start = 0, stop = 1, num = N),
size = N, replace = True)
e = np.random.normal(loc = 0, scale = 0.2, size = N)
y = np.exp(beta_0 + beta_1 * x + e)
#
_ = plt.figure(num = 3, figsize = (10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o",
markerfacecolor = "None", color = "black")
plt.show()

We estimate the model via OLS and calculate the predicted values $$\widehat{\log(Y)}$$:

lm_fit <- lm(log(y) ~ x)
#
log_y_prd <- predict(lm_fit,
newdata = data.frame(x = x),
interval = c("prediction"),
level = 0.95)
lm_fit = sm.OLS(np.log(y), sm.add_constant(x)).fit()
dt = lm_fit.get_prediction(sm.add_constant(x)).summary_frame(alpha = 0.05)
log_y_prd = dt['mean']
log_y_prd_lower = dt['obs_ci_lower']
log_y_prd_upper = dt['obs_ci_upper']

We can plot $$\widehat{\log(Y)}$$ along with their prediction intervals:

plot(x, log(y))
#
lines(x[order(x)], log_y_prd[order(x), "fit"],
type = "l", col = "red", lwd = 3)
#
lines(x[order(x)], log_y_prd[order(x), "lwr"],
type = "l", col = "blue", lwd = 3, lty = 2)
#
lines(x[order(x)], log_y_prd[order(x), "upr"],
type = "l", col = "blue", lwd = 3, lty = 2)
_ = plt.figure(num = 4, figsize=(10, 8))
_ = plt.plot(x, np.log(y), linestyle = "None", marker = "o",
markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], log_y_prd[np.argsort(x)],
color = "red", linewidth = 3)
_ = plt.plot(x[np.argsort(x)], log_y_prd_lower[np.argsort(x)],
color = "blue", linestyle = "--", linewidth = 3)
_ = plt.plot(x[np.argsort(x)], log_y_prd_upper[np.argsort(x)],
color = "blue", linestyle = "--", linewidth = 3)
plt.show()

Finally, we take the exponent of $$\widehat{\log(Y)}$$ and the prediction interval to get the predicted value and $$95\%$$ prediction interval for $$\widehat{Y}$$:

y_prd <- exp(log_y_prd)
#
#
#
plot(x, y)
#
lines(x[order(x)], y_prd[order(x), "fit"],
type = "l", col = "red", lwd = 3)
#
lines(x[order(x)], y_prd[order(x), "lwr"],
type = "l", col = "blue", lwd = 3, lty = 2)
#
lines(x[order(x)], y_prd[order(x), "upr"],
type = "l", col = "blue", lwd = 3, lty = 2)
y_prd = np.exp(log_y_prd)
y_prd_lower = np.exp(log_y_prd_lower)
y_prd_upper = np.exp(log_y_prd_upper)
#
_ = plt.figure(num = 5, figsize=(10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o",
markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], y_prd[np.argsort(x)], color = "red", linewidth = 3)
_ = plt.plot(x[np.argsort(x)], y_prd_lower[np.argsort(x)],
color = "blue", linestyle = "--", linewidth = 3)
_ = plt.plot(x[np.argsort(x)], y_prd_upper[np.argsort(x)],
color = "blue", linestyle = "--", linewidth = 3)
plt.show()

Alternatively, notice that for the log-linear (and similarly for the log-log) model: \begin{aligned} Y &= \exp(\beta_0 + \beta_1 X + \epsilon) \\ &= \exp(\beta_0 + \beta_1 X) \cdot \exp(\epsilon)\\ &= \mathbb{E}(Y|X)\cdot \exp(\epsilon) \end{aligned} the prediction is comprised of the systematic and the random components, but they are multiplicative, rather than additive.

Therefore, we can use the properties of the log-normal distribution to derive an alternative, corrected prediction for the log-linear model: $\widehat{Y}_{c} = \widehat{\mathbb{E}}(Y|X) \cdot \exp(\widehat{\sigma}^2/2) = \widehat{Y}\cdot \exp(\widehat{\sigma}^2/2)$ because, if $$\epsilon \sim \mathcal{N}(\mu, \sigma^2)$$, then $$\mathbb{E}(\exp(\epsilon)) = \exp(\mu + \sigma^2/2)$$ and $$\mathbb{V}{\rm ar}(\exp(\epsilon)) = \left[ \exp(\sigma^2) - 1 \right] \exp(2 \mu + \sigma^2)$$.
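The log-normal mean formula behind this correction can be verified by simulation (a sketch, not from the text; names are illustrative): for mean-zero normal errors, the sample mean of $$\exp(\epsilon)$$ approaches $$\exp(\sigma^2/2)$$, not $$1$$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.2
eps = rng.normal(loc=0.0, scale=sigma, size=1_000_000)

# sample mean of exp(eps) versus the log-normal formula exp(mu + sigma^2/2), with mu = 0
empirical = np.exp(eps).mean()
theoretical = np.exp(sigma**2 / 2)
print(empirical, theoretical)
```

With $$\sigma = 0.2$$ the factor $$\exp(\sigma^2/2) \approx 1.02$$, which is why the naive predictor $$\widehat{Y}$$ slightly underestimates the conditional mean.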

For larger sample sizes $$\widehat{Y}_{c}$$ is closer to the true mean than $$\widehat{Y}$$. On the other hand, in smaller samples $$\widehat{Y}$$ performs better than $$\widehat{Y}_{c}$$. Finally, it also depends on the scale of $$X$$. In our case:

sigma2_est <- sum(lm_fit$residuals^2) / (length(y) - 2)
y_prd_c <- y_prd[, "fit"] * exp(sigma2_est / 2)
#
plot(x, y)
lines(x[order(x)], y_prd[order(x), "fit"],
type = "l", col = "red", lwd = 3)
lines(x[order(x)], y_prd_c[order(x)],
type = "l", col = "darkgreen", lwd = 3)
legend(x = 0.8, y = 1.8,
legend = c(expression(widehat(Y)), expression(widehat(Y)[c])),
col = c("red", "darkgreen"), lty = c(1, 1), lwd = c(3, 3))
sigma2_est = sum(lm_fit.resid**2) / (len(y) - 2)
y_prd_c = y_prd * np.exp(sigma2_est / 2)
#
_ = plt.figure(num = 6, figsize=(10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o",
markerfacecolor = "None", color = "black")
_ = plt.plot(x[np.argsort(x)], y_prd[np.argsort(x)],
color = "red", linewidth = 3, label = "$\\widehat{Y}$")
_ = plt.plot(x[np.argsort(x)], y_prd_c[np.argsort(x)],
color = "darkgreen", linewidth = 3, label = "$\\widehat{Y}_c$")
#
_ = plt.legend()
plt.show()

There is a slight difference between the corrected and the natural predictor, which grows as the variance of the sample, $$Y$$, increases. Because $$\exp(0) = 1 \leq \exp(\widehat{\sigma}^2/2)$$, the corrected predictor will always be at least as large as the natural predictor: $$\widehat{Y}_c \geq \widehat{Y}$$. Furthermore, this correction assumes that the errors are normally distributed (i.e. that (UR.4) holds).

The same ideas apply when we examine a log-log model.