In [1]:
#Import the required modules for vectors and matrix operations, data generation
import numpy as np
#Import the required modules for plot creation:
import matplotlib.pyplot as plt
#import the required modules for TimeSeries data generation:
import statsmodels.api as sm
#Import the required modules for test statistic calculation:
import statsmodels.stats as sm_stat
#Import the required modules for model estimation:
import statsmodels.tsa as smt

Stationary Time Series Models

We will simulate the following time series processes

  • (A) $Y_t = 1 + (1 + 0.5L)\epsilon_t$
  • (B) $Y_t = 1 + (1 + 1.3L - 0.4 L^2)\epsilon_t$
  • (C) $(1 - 1.1L) Y_t = 1 + \epsilon_t$
  • (D) $(1 - 1.1L + 0.2L^2) Y_t = 1 + \epsilon_t$
  • (E) $(1 - 1.1L + 0.2L^2) Y_t = (1 + 1.3L)\epsilon_t$

We will assume that if $t \leq 0$, then $Y_t = \epsilon_t = 0$ and $\epsilon_t \sim \mathcal{N}(0, 0.5^2)$


1.1 Determining the process

We will begin by rewriting the processes in their expanded forms

  • (A) $Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}$
  • (B) $Y_t = 1 + \epsilon_t + 1.3 \epsilon_{t-1} - 0.4 \epsilon_{t-2}$
  • (C) $Y_t = 1 + 1.1 Y_{t-1} + \epsilon_t$
  • (D) $Y_t = 1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t$
  • (E) $Y_t = 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t + 1.3 \epsilon_{t-1}$

From their functional forms, we see that we have the following processes:

  • (A) $\rm MA(1)$
  • (B) $\rm MA(2)$
  • (C) $\rm AR(1)$
  • (D) $\rm AR(2)$
  • (E) $\rm ARMA(2, 1)$

To recap the theory:

  • Pure MA models are always stationary (since they contain no AR terms).
  • Pure MA models may or may not be invertible.
  • Pure AR models are always invertible (since they contain no MA terms).
  • Pure AR models may or may not be stationary.

This translates to checking the following:

  • If all the roots of $\Phi(z) = 0$ are outside the unit circle, the process is stationary;
  • If all the roots of $\Theta(x) = 0$ are outside the unit circle, the process is invertible;

We begin by checking the stationarity of (C), (D) and (E).

  • (C) $1 - 1.1 z = 0 \Rightarrow z = 1/1.1 < 1$, i.e. the root is inside the unit circle, so the process in (C) is not stationary;
  • (D) $1 - 1.1 z + 0.2 z^2 = 0$ with $a = 0.2$, $b = -1.1$, $c = 1$: $\mathcal{D} = b^2 - 4\cdot a \cdot c = (-1.1)^2 - 4 \cdot 0.2 \cdot 1 = 0.41$, $z_{1,2} = \dfrac{-b \pm \sqrt{\mathcal{D}}}{2 \cdot a}$, which gives $z_1 = 1.149$ and $z_2 = 4.351$; both roots are outside the unit circle, so the process is stationary;
  • (E) $1 - 1.1 z + 0.2 z^2 = 0$ - the same as in (D), so the process is stationary.

We can also verify this in Python using np.poly1d.

From the documentation, np.poly1d([1, 2, 3]) represents $x^2 + 2x + 3$ (the coefficients are listed from the highest power down), so:

In [2]:
print("(C)" + str(np.poly1d([-1.1, 1], variable = 'z')))
print("Roots: " + str(np.poly1d([-1.1, 1], variable = 'z').roots))
(C) 
-1.1 z + 1
Roots: [0.90909091]
In [3]:
print("(D) and (E)\n" + str(np.poly1d([0.2, -1.1, 1], variable = 'z')))
print("Roots: " + str(np.poly1d([0.2, -1.1, 1], variable = 'z').roots))
(D) and (E)
     2
0.2 z - 1.1 z + 1
Roots: [4.35078106 1.14921894]

The invertibility concept concerns only moving-average and ARMA processes. It is checked in a similar way as before, only now we need to find the roots of the moving-average lag polynomial $\Theta(x)$:

  • (A) $1 + 0.5x = 0$
  • (B) $1 + 1.3x - 0.4 x^2 = 0$
  • (E) $1 + 1.3x = 0$

We will not repeat the manual calculations and instead immediately look at the results via Python:

In [4]:
print("(A)" + str(np.poly1d([0.5, 1], variable = 'x')))
print("Roots: " + str(np.poly1d([0.5, 1], variable = 'x').roots))
(A) 
0.5 x + 1
Roots: [-2.]
In [5]:
print("(B)\n" + str(np.poly1d([-0.4, 1.3, 1], variable = 'x')))
print("Roots: " + str(np.poly1d([-0.4, 1.3, 1], variable = 'x').roots))
(B)
      2
-0.4 x + 1.3 x + 1
Roots: [ 3.89229464 -0.64229464]
In [6]:
print("(E)\n" + str(np.poly1d([1.3, 1], variable = 'x')))
print("Roots: " + str(np.poly1d([1.3, 1], variable = 'x').roots))
(E)
 
1.3 x + 1
Roots: [-0.76923077]

We see that the process in (A) is invertible (its root is outside the unit circle), while the one in (B) is not, since one of its roots is inside the unit circle. Finally, (E) is also non-invertible, since its root is inside the unit circle.

So, to recap:

  • (A) is stationary and invertible;
  • (B) is stationary and non-invertible;
  • (C) is non-stationary and invertible;
  • (D) is stationary and invertible;
  • (E) is stationary and non-invertible;

Recall that an MA model is invertible if it is equivalent to a converging infinite-order AR model. By converging, we mean that the AR coefficients decay to 0 as we move back in time.
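As a quick illustration (a minimal sketch, not part of the original notebook - the helper ar_inf_coefs is introduced here only for this check), we can compute the implied $AR(\infty)$ coefficients for the invertible MA polynomial of (A), $1 + 0.5x$, and the non-invertible one of (E), $1 + 1.3x$, and verify that only the former decay towards zero:

def ar_inf_coefs(theta, n_coef = 10):
    # theta = [theta_1, ..., theta_q] are the MA coefficients of Theta(x) = 1 + theta_1 x + ... + theta_q x^q;
    # the coefficients c_j of 1 / Theta(x) satisfy c_0 = 1 and c_j = -sum_k theta_k * c_{j-k}
    c = [1.0]
    for j in range(1, n_coef):
        c.append(-sum(theta[k - 1] * c[j - k] for k in range(1, min(j, len(theta)) + 1)))
    return np.array(c)

print(ar_inf_coefs([0.5]))  # magnitudes decay to zero -> invertible
print(ar_inf_coefs([1.3]))  # magnitudes grow without bound -> non-invertible

(The $AR(\infty)$ representation then applies these coefficients, with the sign flipped, to the lagged values of $Y_t$.)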


1.2 Process Simulation

To make it easier to follow, we will simulate each process separately:

(A) $Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}$

The logic behind generating this process is as follows:

  • when $t = 0$, we have $\epsilon_0 = 0$, $Y_0 = 0$
  • when $t = 1$, we have $Y_1 = 1 + \epsilon_1 + 0.5 \cdot \epsilon_0 = 1 + \epsilon_1$
  • when $t = 2$, we have $Y_2 = 1 + \epsilon_2 + 0.5 \cdot \epsilon_1$
  • when $t = 3$, we have $Y_3 = 1 + \epsilon_3 + 0.5 \cdot \epsilon_2$
  • etc.

We can carry out these calculations in Python as follows:

In [7]:
np.random.seed(123)
#
N = 150
epsilon = np.random.normal(loc = 0, scale = 0.5, size = N)
#
A_data = 1 + epsilon[0]
for j in range(1, N):
    Y_temp = 1 + epsilon[j] + 0.5 * epsilon[j - 1]
    A_data = np.append(A_data, Y_temp)

(B) $Y_t = 1 + \epsilon_t + 1.3 \epsilon_{t-1} - 0.4 \epsilon_{t-2}$

The logic behind generating this process is as follows:

  • when $t = 0$, we have $\epsilon_0 = 0$, $Y_0 = 0$
  • when $t = 1$, we have $Y_1 = 1 + \epsilon_1 + 1.3 \cdot \epsilon_0 - 0.4 \cdot \epsilon_{-1}= 1 + \epsilon_1$
  • when $t = 2$, we have $Y_2 = 1 + \epsilon_2 + 1.3 \cdot \epsilon_1 - 0.4 \cdot \epsilon_{0} = 1 + \epsilon_2 + 1.3 \epsilon_1$
  • when $t = 3$, we have $Y_3 = 1 + \epsilon_3 + 1.3 \cdot \epsilon_2 - 0.4 \cdot \epsilon_1$
  • when $t = 4$, we have $Y_4 = 1 + \epsilon_4 + 1.3 \cdot \epsilon_3 - 0.4 \cdot \epsilon_2$
  • etc.

We can carry out these calculations in Python as follows:

In [8]:
np.random.seed(123)
#
N = 150
epsilon = np.random.normal(loc = 0, scale = 0.5, size = N)
#
B_data = 1 + epsilon[0]
B_data = np.append(B_data, 1 + epsilon[1] + 1.3 * epsilon[0])
for j in range(2, N):
    Y_temp = 1 + epsilon[j] + 1.3 * epsilon[j - 1] - 0.4 * epsilon[j - 2]
    B_data = np.append(B_data, Y_temp)

We now start the loop from 2 instead of 1, since the first two elements include $\epsilon$ at values $t \leq 0$, so they have a different formula.

In [9]:
plt.figure(figsize = (15, 5))
plt.plot(B_data, label = "$Y_t = 1 + \\epsilon_t + 1.3 \\epsilon_{t-1} - 0.4 \\epsilon_{t-2}$ (non-invertible)")
plt.plot(A_data, label = "$Y_t = 1 + \\epsilon_t + 0.5 \\epsilon_{t-1}$ (invertible)")
plt.title("Moving-Average Processes")
plt.legend()
plt.show()

Notice that the plots do not exhibit large differences - you may not be able to tell whether a process is invertible or not from the time series plots alone.

(C) $Y_t = 1 + 1.1 Y_{t-1} + \epsilon_t$

The logic behind generating this process is as follows:

  • when $t = 0$, we have $\epsilon_0 = 0$, $Y_0 = 0$
  • when $t = 1$, we have $Y_1 = 1 + 1.1 \cdot Y_{0} + \epsilon_1 = 1 + \epsilon_1$
  • when $t = 2$, we have $Y_2 = 1 + 1.1 \cdot Y_{1} + \epsilon_2 = 1 + 1.1 \cdot (1 + \epsilon_1) + \epsilon_2$
  • when $t = 3$, we have $Y_3 = 1+ 1.1 Y_{2} + \epsilon_3 = 1 + 1.1 \cdot (1 + 1.1 \cdot (1 + \epsilon_1) + \epsilon_2) + \epsilon_3$
  • etc.

We can carry out these calculations in Python as follows:

In [10]:
np.random.seed(123)
#
N = 150
epsilon = np.random.normal(loc = 0, scale = 0.5, size = N)
#
C_data = np.array(1 + epsilon[0], ndmin = 1)
for j in range(1, N):
    Y_temp = 1 + 1.1 * C_data[j - 1] + epsilon[j]
    C_data = np.append(C_data, Y_temp)

(D) $Y_t = 1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t$

The logic behind generating this process is as follows:

  • when $t = 0$, we have $\epsilon_0 = 0$, $Y_0 = 0$
  • when $t = 1$, we have $Y_1 = 1 + 1.1 \cdot Y_{0} - 0.2 \cdot Y_{-1} + \epsilon_1 = 1 + \epsilon_1$
  • when $t = 2$, we have $Y_2 = 1 + 1.1 \cdot Y_{1} - 0.2 \cdot Y_0 + \epsilon_2 = 1 + 1.1 \cdot (1 + \epsilon_1) + \epsilon_2$
  • when $t = 3$, we have $Y_3 = 1+ 1.1 Y_{2} - 0.2 Y_{1} + \epsilon_3 = 1 + 1.1 \cdot (1 + 1.1 \cdot (1 + \epsilon_1) + \epsilon_2) - 0.2 \cdot (1 + \epsilon_1)+ \epsilon_3$
  • etc.

We can carry out these calculations in Python as follows:

In [11]:
np.random.seed(123)
#
N = 150
epsilon = np.random.normal(loc = 0, scale = 0.5, size = N)
#
D_data = np.array(1 + epsilon[0], ndmin = 1)
D_data = np.append(D_data, 1 + 1.1 * D_data[0] + epsilon[1])
for j in range(2, N):
    Y_temp = 1 + 1.1 * D_data[j - 1] - 0.2 * D_data[j - 2] + epsilon[j]
    D_data = np.append(D_data, Y_temp)

We start the loop from 2 instead of 1, since the first two elements include past values of $Y$ at $t \leq 0$, so they have a different formula.

In [12]:
fig = plt.figure(figsize = (15, 5))
fig.add_subplot(1, 2, 1).plot(C_data)
plt.title("$Y_t = 1 + 1.1 Y_{t-1} + \\epsilon_t$ (non-stationary)")
fig.add_subplot(1, 2, 2).plot(D_data)
plt.title("$Y_t = 1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \\epsilon_t$ (stationary)")
plt.show()

(E) $Y_t = 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t + 1.3 \epsilon_{t-1}$

The logic behind generating this process is as follows:

  • when $t = 0$, we have $\epsilon_0 = 0$, $Y_0 = 0$
  • when $t = 1$, we have $Y_1 = 1.1 \cdot Y_{0} - 0.2 \cdot Y_{-1} + \epsilon_1 + 1.3 \cdot \epsilon_0 = \epsilon_1$
  • when $t = 2$, we have $Y_2 = 1.1 \cdot Y_{1} - 0.2 \cdot Y_0 + \epsilon_2 + 1.3 \cdot \epsilon_1 = 1.1 \cdot \epsilon_1 + \epsilon_2 + 1.3 \cdot \epsilon_1$
  • when $t = 3$, we have $Y_3 = 1.1 Y_{2} - 0.2 Y_{1} + \epsilon_3 + 1.3 \cdot \epsilon_2 = 1.1 \cdot (1.1 \cdot \epsilon_1 + \epsilon_2 + 1.3 \cdot \epsilon_1) - 0.2 \cdot \epsilon_1 + \epsilon_3 + 1.3 \cdot \epsilon_2$
  • etc.

We can carry out these calculations in Python as follows:

In [13]:
np.random.seed(123)
#
N = 150
epsilon = np.random.normal(loc = 0, scale = 0.5, size = N)
#
E_data = np.array(epsilon[0], ndmin = 1)
E_data = np.append(E_data, 1.1 * E_data[0] + epsilon[1] + 1.3 * epsilon[0])
for j in range(2, N):
    Y_temp = 1.1 * E_data[j - 1] - 0.2 * E_data[j - 2] + epsilon[j] + 1.3 * epsilon[j - 1]
    E_data = np.append(E_data, Y_temp)
In [14]:
plt.figure(figsize = (15, 5))
plt.plot(E_data, label = "$Y_t = 1.1 Y_{t-1} - 0.2 Y_{t-2} + \\epsilon_t + 1.3 \\epsilon_{t-1}$ (stationary non-invertible)")
plt.plot(D_data, label = "$Y_t = 1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \\epsilon_t$ (stationary)")
plt.title("ARMA(2, 1) Process vs AR(2) process")
plt.legend()
plt.show()

Note the difference in the mean - our $ARMA$ process does not have an intercept term.


Alternatively, we can use the built-in functions to simulate the models. We need to specify the $\Phi(z)$ and $\Theta(x)$ functions.

In [15]:
np.random.seed(123)
E_auto_spec = smt.arima_process.ArmaProcess(ar = np.array([1, -1.1, 0.2]), ma = [1, 1.3])
E_autogenr =  E_auto_spec.generate_sample(N, scale = 0.5)

Note that for pure $\rm AR$ models $\Theta(x) = 1$ and for pure $\rm MA$ models $\Phi(z) = 1$.
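For example (a hypothetical specification in the same style as above, not part of the original notebook), the pure MA(1) process (A) and the pure AR(2) process (D) could be specified as follows. Note that ArmaProcess has no constant term, so any generated sample corresponds to the zero-mean version of the process:

A_auto_spec = smt.arima_process.ArmaProcess(ar = np.array([1]), ma = np.array([1, 0.5]))
D_auto_spec = smt.arima_process.ArmaProcess(ar = np.array([1, -1.1, 0.2]), ma = np.array([1]))
# A pure MA process is always stationary; a pure AR process is always invertible:
print("(A): stationary = ", A_auto_spec.isstationary, ", invertible = ", A_auto_spec.isinvertible)
print("(D): stationary = ", D_auto_spec.isstationary, ", invertible = ", D_auto_spec.isinvertible)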

The nice thing about this approach is that we can examine the lag polynomials:

In [16]:
E_auto_spec.arpoly
Out[16]:
$x \mapsto \text{1.0} - \text{1.1}\,x + \text{0.2}\,x^{2}$
In [17]:
E_auto_spec.mapoly
Out[17]:
$x \mapsto \text{1.0} + \text{1.3}\,x$

as well as get their calculated roots

In [18]:
E_auto_spec.arroots
Out[18]:
array([1.14921894, 4.35078106])
In [19]:
E_auto_spec.maroots
Out[19]:
array([-0.76923077])

and check the stationarity/invertibility automatically:

In [20]:
print("Stationarity: " + str(E_auto_spec.isstationary))
print("Invertibility: " + str(E_auto_spec.isinvertible))
Stationarity: True
Invertibility: False
In [21]:
plt.figure(figsize = (20, 8))
plt.plot(E_data, label = "$Y_t = 1.1 Y_{t-1} - 0.2 Y_{t-2} + \\epsilon_t + 1.3 \\epsilon_{t-1}$ (stationary non-invertible)", color = "blue")
plt.plot(E_autogenr, label = "autogenerated ARMA(2, 1) process with the same parameters", linestyle = "-.", color = "red")
plt.title("ARMA(2, 1) Process vs AR(2) process")
plt.legend()
plt.show()

Note: we used the same innovation series to generate processes (A) - (E), while the autogenerated series drew its own innovations (even though the same seed was specified), so the two paths differ.

In [22]:
fig = plt.figure(figsize=(15, 5))
sm.tsa.graphics.plot_acf(E_autogenr, lags = 20, ax = fig.add_subplot(121))
plt.ylabel('ACF')
sm.tsa.graphics.plot_pacf(E_autogenr, lags = 20, ax = fig.add_subplot(122), method = "ywm")
plt.ylabel('PACF')
plt.tight_layout()

Note: depending on the statsmodels version, plot_pacf may raise an error or warning unless a method is specified explicitly, e.g. method = "ywm" or method = "ols".


1.3 The Mean of the processes

We remember that $\epsilon_t \sim \mathcal{N}(0, 0.5^2)$.

  • (A) $\mathbb{E} (Y_t) = \mathbb{E} (1 + \epsilon_t + 0.5 \epsilon_{t-1}) = 1$
In [23]:
print(A_data.mean())
1.043106649424549
  • (B) $\mathbb{E} (Y_t) = \mathbb{E} (1 + \epsilon_t + 1.3 \epsilon_{t-1} - 0.4 \epsilon_{t-2}) = 1$
In [24]:
print(B_data.mean())
1.05415481291134
  • (C) $\mathbb{E} (Y_t) = \mathbb{E} (1 + 1.1 Y_{t-1} + \epsilon_t) = 1 + 1.1 \mathbb{E} (Y_{t-1}) = 1 + 1.1 + (1.1)^2 \mathbb{E} (Y_{t-2}) = 1 + 1.1 + (1.1)^2 + (1.1)^3 \mathbb{E} (Y_{t-3}) = ... = \sum_{j = 0}^\infty (1.1)^j = \infty$.

Assuming that $Y_t = 0$ for $t \leq 0$, this can be expressed as $\mathbb{E} (Y_t) =\sum_{j = 0}^{t-1} (1.1)^j$ for an $AR(1)$ process. So, for $t = 150$ this is roughly

In [25]:
tmp_sum = 0
for i in range(0, N):
    tmp_sum = tmp_sum + (1.1)**i
print(tmp_sum)
16177168.357762082
In [26]:
print(C_data.mean())
1129986.8195605408

This is an example of a non-stationary process - its mean is not constant, but grows with $t$.

  • (D) $\mathbb{E} (Y_t) = \mathbb{E} (1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t) = 1 + 1.1 \mathbb{E} (Y_{t-1}) - 0.2 \mathbb{E} (Y_{t-2})$. For a stationary process $\mathbb{E} (Y_t) = \mu$, which gives $\mu = 1 + 1.1 \mu - 0.2 \mu$, which gives $\mu = \dfrac{1}{1 - 1.1 + 0.2} = \dfrac{1}{0.1} = 10$.
In [27]:
print(D_data.mean())
9.7463508691281
  • (E) $\mathbb{E} (Y_t) = \mathbb{E} (1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t + 1.3 \epsilon_{t-1}) = 1.1 \mathbb{E} (Y_{t-1}) - 0.2 \mathbb{E} (Y_{t-2})$. Since $Y_t$ is stationary, we can write it as an infinite MA process. Furthermore, for a stationary process $\mathbb{E} (Y_t) = \mu$, which gives $\mu = 1.1 \mu - 0.2 \mu \Longrightarrow \mu = 0$.
In [28]:
print(E_data.mean())
0.4754671302890322

1.4 ACF and PACF plots

  • (A) $Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}$
In [29]:
fig = plt.figure(figsize=(15, 5))
sm.tsa.graphics.plot_acf(A_data, lags = 20, zero = False, ax = fig.add_subplot(121))
plt.ylabel('ACF')
sm.tsa.graphics.plot_pacf(A_data, lags = 20, zero = False, ax = fig.add_subplot(122), method = "ywm")
plt.ylabel('PACF')
plt.tight_layout()

We see that the ACF cuts off abruptly after the first lag. The PACF appears to be somewhat declining, though it is difficult to say. We would identify this as an MA(1) process.

  • (B) $Y_t = 1 + \epsilon_t + 1.3 \epsilon_{t-1} - 0.4 \epsilon_{t-2}$
In [30]:
fig = plt.figure(figsize=(15, 5))
sm.tsa.graphics.plot_acf(B_data, lags = 20, zero = False, ax = fig.add_subplot(121))
plt.ylabel('ACF')
sm.tsa.graphics.plot_pacf(B_data, lags = 20, zero = False, ax = fig.add_subplot(122), method = "ywm")
plt.ylabel('PACF')
plt.tight_layout()

We see that the ACF cuts off abruptly after the first lag. The PACF appears to be somewhat declining, though it is difficult to say. We would identify this as an MA(1) process (though in reality we know that it is an MA(2) process).

  • (C) $Y_t = 1 + 1.1 Y_{t-1} + \epsilon_t$
In [31]:
fig = plt.figure(figsize=(15, 5))
sm.tsa.graphics.plot_acf(C_data, lags = 20, zero = False, ax = fig.add_subplot(121))
plt.ylabel('ACF')
sm.tsa.graphics.plot_pacf(C_data, lags = 20, zero = False, ax = fig.add_subplot(122), method = "ywm")
plt.ylabel('PACF')
plt.tight_layout()

The ACF appears to decline, while the PACF cuts off abruptly after the first lag. We would identify this as an AR(1) process.

  • (D) $Y_t = 1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t$
In [32]:
fig = plt.figure(figsize=(15, 5))
sm.tsa.graphics.plot_acf(D_data, lags = 20, zero = False, ax = fig.add_subplot(121))
plt.ylabel('ACF')
sm.tsa.graphics.plot_pacf(D_data, lags = 20, zero = False, ax = fig.add_subplot(122), method = "ywm")
plt.ylabel('PACF')
plt.tight_layout()

The ACF appears to decline, while the PACF cuts off abruptly after the first lag. We would identify this as an AR(1) process (though in reality it is an AR(2) process).

(Try to increase the sample size and re-examine the ACF/PACF plots - are they closer to the true theoretical cases?)

  • (E) $Y_t = 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t + 1.3 \epsilon_{t-1}$
In [33]:
fig = plt.figure(figsize=(15, 5))
sm.tsa.graphics.plot_acf(E_data, lags = 20, zero = False, ax = fig.add_subplot(121))
plt.ylabel('ACF')
sm.tsa.graphics.plot_pacf(E_data, lags = 20, zero = False, ax = fig.add_subplot(122), method = "ywm")
plt.ylabel('PACF')
plt.tight_layout()

We see that both the ACF and the PACF are declining - this indicates an ARMA process. We cannot identify its order from these plots alone.


Note that if we generated the data automatically using smt.arima_process.ArmaProcess, we can also examine the theoretical ACF/PACF:

In [34]:
fig = plt.figure(figsize = (15, 10))
ax1 = fig.add_subplot(211)
ax1.stem(E_auto_spec.acf(lags = 20), label = "Theoretical ACF", basefmt = "white", linefmt = "r-", markerfmt = "ro-")
ax1.stem(smt.stattools.acf(E_autogenr, nlags = 19), label = "Empirical ACF", basefmt = "white", linefmt = "g-", markerfmt = "go-")
ax1.axhline(y = 0, color = "black")
plt.legend()
ax2 = fig.add_subplot(212)
ax2.stem(E_auto_spec.pacf(lags = 20), label = "Theoretical PACF", basefmt = "white", linefmt = "r-", markerfmt = "ro-")
ax2.stem(smt.stattools.pacf(E_autogenr, nlags = 19, method = "ywm"), label = "Empirical PACF", basefmt = "white", linefmt = "g-", markerfmt = "go-")
ax2.axhline(y = 0, color = "black")
plt.legend()
plt.show()

As an example, we will re-generate the process but with a larger sample size of $N = 1500$

In [35]:
np.random.seed(123)
E_autogenr_large =  E_auto_spec.generate_sample(1500, scale = 0.5)
In [36]:
fig = plt.figure(figsize = (15, 10))
ax1 = fig.add_subplot(211)
ax1.stem(E_auto_spec.acf(lags = 20), label = "Theoretical ACF", basefmt = "white", linefmt = "r-", markerfmt = "ro-")
ax1.stem(smt.stattools.acf(E_autogenr_large, nlags = 19), label = "Empirical ACF, sample size N = 1500", basefmt = "white", linefmt = "g-", markerfmt = "go-")
ax1.axhline(y = 0, color = "black")
plt.legend()
ax2 = fig.add_subplot(212)
ax2.stem(E_auto_spec.pacf(lags = 20), label = "Theoretical PACF", basefmt = "white", linefmt = "r-", markerfmt = "ro-")
ax2.stem(smt.stattools.pacf(E_autogenr_large, nlags = 19, method = "ywm"), label = "Empirical PACF, sample size N = 1500", basefmt = "white", linefmt = "g-", markerfmt = "go-")
ax2.axhline(y = 0, color = "black")
plt.legend()
plt.show()

The following result is important to note: the larger the sample size, the closer the estimates are to the true values.


2.1 Model Estimation

We will estimate the models with the exact order that we used to generate the data:

  • (A) $Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}$
In [37]:
A_spec_1 = smt.arima_model.ARMA(A_data, order = (0, 1))
A_mdl_1 = A_spec_1.fit(trend = 'c')
In [38]:
print(A_mdl_1.summary())
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                  150
Model:                     ARMA(0, 1)   Log Likelihood                -121.404
Method:                       css-mle   S.D. of innovations              0.543
Date:                Thu, 28 Feb 2019   AIC                            248.808
Time:                        23:16:42   BIC                            257.840
Sample:                             0   HQIC                           252.477
                                                                              
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.0412      0.070     14.877      0.000       0.904       1.178
ma.L1.y        0.5830      0.084      6.934      0.000       0.418       0.748
                                    Roots                                    
=============================================================================
                  Real          Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
MA.1           -1.7153           +0.0000j            1.7153            0.5000
-----------------------------------------------------------------------------
  • (B) $Y_t = 1 + \epsilon_t + 1.3 \epsilon_{t-1} - 0.4 \epsilon_{t-2}$
In [39]:
B_spec_1 = smt.arima_model.ARMA(B_data, order = (0, 2))
B_mdl_1 = B_spec_1.fit(trend = 'c')
In [40]:
print(B_mdl_1.summary())
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                  150
Model:                     ARMA(0, 2)   Log Likelihood                -187.015
Method:                       css-mle   S.D. of innovations              0.840
Date:                Thu, 28 Feb 2019   AIC                            382.030
Time:                        23:16:42   BIC                            394.073
Sample:                             0   HQIC                           386.923
                                                                              
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.0547      0.080     13.113      0.000       0.897       1.212
ma.L1.y        0.4276      0.075      5.718      0.000       0.281       0.574
ma.L2.y       -0.2559      0.071     -3.588      0.000      -0.396      -0.116
                                    Roots                                    
=============================================================================
                  Real          Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
MA.1           -1.3106           +0.0000j            1.3106            0.5000
MA.2            2.9812           +0.0000j            2.9812            0.0000
-----------------------------------------------------------------------------

Note that the coefficients are not close to the true values (except for the constant and the overall coefficient signs). This is because our true process is not invertible. Hence, the software attempts to fit an invertible model, which results in different coefficients.

  • (C) $Y_t = 1 + 1.1 Y_{t-1} + \epsilon_t$
In [41]:
C_spec_1 = smt.arima_model.ARMA(C_data, order = (1, 0))
C_mdl_1 = C_spec_1.fit(trend = 'c')
In [42]:
print(C_mdl_1.summary())
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                  150
Model:                     ARMA(1, 0)   Log Likelihood               -2095.353
Method:                       css-mle   S.D. of innovations         275467.476
Date:                Thu, 28 Feb 2019   AIC                           4196.706
Time:                        23:16:42   BIC                           4205.738
Sample:                             0   HQIC                          4200.375
                                                                              
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const        1.13e+06   2.59e+07      0.044      0.965   -4.96e+07    5.18e+07
ar.L1.y        0.9996      0.001    682.766      0.000       0.997       1.002
                                    Roots                                    
=============================================================================
                  Real          Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
AR.1            1.0004           +0.0000j            1.0004            0.0000
-----------------------------------------------------------------------------

Note the coefficient values for the non-stationary process.

  • (D) $Y_t = 1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t$
In [43]:
D_spec_1 = smt.arima_model.ARMA(D_data, order = (2, 0))
D_mdl_1 = D_spec_1.fit(trend = 'c')
In [44]:
print(D_mdl_1.summary())
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                  150
Model:                     ARMA(2, 0)   Log Likelihood                -134.551
Method:                       css-mle   S.D. of innovations              0.586
Date:                Thu, 28 Feb 2019   AIC                            277.103
Time:                        23:16:42   BIC                            289.145
Sample:                             0   HQIC                           281.995
                                                                              
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          8.2087      2.598      3.160      0.002       3.117      13.300
ar.L1.y        1.2268      0.081     15.165      0.000       1.068       1.385
ar.L2.y       -0.2424      0.083     -2.937      0.004      -0.404      -0.081
                                    Roots                                    
=============================================================================
                  Real          Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
AR.1            1.0211           +0.0000j            1.0211            0.0000
AR.2            4.0398           +0.0000j            4.0398            0.0000
-----------------------------------------------------------------------------

Note that the const coefficient of the model is close to the process mean. It is not the intercept!
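To relate the two (a small worked step using the true parameter values of the data generating process, not notebook output): for a stationary $\rm AR(2)$ the mean $\mu$ and the intercept $\alpha$ are linked by $\mu = \dfrac{\alpha}{1 - \phi_1 - \phi_2}$, so $\alpha = \mu \cdot (1 - \phi_1 - \phi_2) = 10 \cdot (1 - 1.1 + 0.2) = 1$ - a const estimate near the mean of 10 is thus consistent with an intercept of 1 in the underlying equation.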

  • (E) $Y_t = 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t + 1.3 \epsilon_{t-1}$
In [45]:
E_spec_1 = smt.arima_model.ARMA(E_data, order = (2, 1))
E_mdl_1 = E_spec_1.fit(trend = 'nc')
In [46]:
print(E_mdl_1.summary())
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                  150
Model:                     ARMA(2, 1)   Log Likelihood                -162.225
Method:                       css-mle   S.D. of innovations              0.703
Date:                Thu, 28 Feb 2019   AIC                            332.449
Time:                        23:16:42   BIC                            344.492
Sample:                             0   HQIC                           337.342
                                                                              
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1.y        1.0358      0.088     11.829      0.000       0.864       1.207
ar.L2.y       -0.1487      0.088     -1.691      0.093      -0.321       0.024
ma.L1.y        0.8466      0.041     20.776      0.000       0.767       0.927
                                    Roots                                    
=============================================================================
                  Real          Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
AR.1            1.1579           +0.0000j            1.1579            0.0000
AR.2            5.8066           +0.0000j            5.8066            0.0000
MA.1           -1.1811           +0.0000j            1.1811            0.5000
-----------------------------------------------------------------------------

Note that the trend argument decides whether a constant is included: trend = 'c' includes a constant, trend = 'nc' does not.

The summary also reports the roots, which we can check for stationarity and invertibility. Again, the software attempts to fit a stationary and invertible model, hence the MA coefficient differs from the true value for this process (which is generated from a non-invertible data generating process).

We can also extract specific information about the model:

In [47]:
print("Included trend (this is actually the constant): ", E_mdl_1.k_trend)
print("Included constat: ", E_mdl_1.k_constant)
print("Number of AR parameters: ", E_mdl_1.k_ar)
print("Number of MA parameters" , E_mdl_1.k_ma)
print("Included exogenous variables: ", E_mdl_1.k_exog)
Included trend (this is actually the constant):  0
Included constant:  0
Number of AR parameters:  2
Number of MA parameters:  1
Included exogenous variables:  0

as well as specific parameters

In [48]:
print("All of the parameters: ", E_mdl_1.params)
print("All of the p-values: ", E_mdl_1.pvalues)
print("Only AR parameters: ", E_mdl_1.arparams)
print("Only MA parameters: ", E_mdl_1.maparams)
All of the parameters:  [ 1.03584648 -0.14873211  0.84664303]
All of the p-values:  [4.20333223e-23 9.29576875e-02 1.38602032e-45]
Only AR parameters:  [ 1.03584648 -0.14873211]
Only MA parameters:  [0.84664303]

Interestingly, on the larger sample size the autoregressive parameters are indeed close to the true values

In [49]:
print(smt.arima_model.ARMA(E_autogenr_large, order = (2, 1)).fit(trend = 'nc').summary())
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                 1500
Model:                     ARMA(2, 1)   Log Likelihood               -1443.342
Method:                       css-mle   S.D. of innovations              0.632
Date:                Thu, 28 Feb 2019   AIC                           2894.684
Time:                        23:16:43   BIC                           2915.937
Sample:                             0   HQIC                          2902.601
                                                                              
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1.y        1.1010      0.030     36.920      0.000       1.043       1.159
ar.L2.y       -0.2045      0.030     -6.881      0.000      -0.263      -0.146
ma.L1.y        0.7826      0.019     40.846      0.000       0.745       0.820
                                    Roots                                    
=============================================================================
                  Real          Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
AR.1            1.1569           +0.0000j            1.1569            0.0000
AR.2            4.2260           +0.0000j            4.2260            0.0000
MA.1           -1.2777           +0.0000j            1.2777            0.5000
-----------------------------------------------------------------------------

However, the moving-average parameters are not, since the process is non-invertible.
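This is expected from theory: a non-invertible $\rm MA(1)$ polynomial can be replaced by an observationally equivalent invertible one by flipping its root to its reciprocal (and rescaling the innovation variance), i.e. $1 + 1.3x$ becomes $1 + \dfrac{1}{1.3}x \approx 1 + 0.769x$. The estimated ma.L1 coefficient of $0.7826$ is therefore close to $1/1.3 \approx 0.769$, the moving-average parameter of the invertible representation, rather than to $1.3$.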

The estimated model is:

$ \widehat{Y}_t = 1.1010 \cdot \widehat{Y}_{t-1} - 0.2045 \cdot \widehat{Y}_{t-2} + e_t + 0.7826 \cdot e_{t-1} $

where $e_t = Y_t - 1.1010 \cdot \widehat{Y}_{t-1} + 0.2045 \cdot \widehat{Y}_{t-2} - 0.7826 \cdot e_{t-1}$ and $e_0 = 0$.
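As a sanity check (a minimal sketch, not part of the original notebook; it uses the rounded coefficients printed above, so the result will only roughly match the residuals stored in the fitted model object), the recursion for $e_t$ can be carried out directly:

phi_1, phi_2, theta_1 = 1.1010, -0.2045, 0.7826
e = np.zeros(len(E_autogenr_large))
for t in range(len(E_autogenr_large)):
    y_lag1 = E_autogenr_large[t - 1] if t >= 1 else 0.0
    y_lag2 = E_autogenr_large[t - 2] if t >= 2 else 0.0
    e_lag1 = e[t - 1] if t >= 1 else 0.0
    # e_t = Y_t - 1.1010 * Y_{t-1} + 0.2045 * Y_{t-2} - 0.7826 * e_{t-1}, with zero pre-sample values
    e[t] = E_autogenr_large[t] - phi_1 * y_lag1 - phi_2 * y_lag2 - theta_1 * e_lag1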

We will see more on model fitting in the last task.


2.2 Automatic order selection

We will now automatically estimate the unknown parameters as well as the lag order. Note the documentation: "This method can be used to tentatively identify the order of an ARMA process, provided that the time series is stationary and invertible."

  • (A) $Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}$
In [50]:
A_est_auto = sm.tsa.stattools.arma_order_select_ic(A_data, ic='aic', trend='c')
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:508: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:606: RuntimeWarning: overflow encountered in exp
  newparams = ((1-np.exp(-params))/
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:607: RuntimeWarning: overflow encountered in exp
  (1+np.exp(-params))).copy()
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:607: RuntimeWarning: invalid value encountered in true_divide
  (1+np.exp(-params))).copy()
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:608: RuntimeWarning: overflow encountered in exp
  tmp = ((1-np.exp(-params))/
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:609: RuntimeWarning: overflow encountered in exp
  (1+np.exp(-params))).copy()
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:609: RuntimeWarning: invalid value encountered in true_divide
  (1+np.exp(-params))).copy()
In [51]:
print("(A) process order: ", A_est_auto.aic_min_order)
(A) process order:  (1, 1)

The automatically selected model is:

In [52]:
A_mdl_auto = smt.arima_model.ARMA(A_data, order = A_est_auto.aic_min_order).fit(trend = 'c')
print(A_mdl_auto.summary().tables[1])
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.0419      0.062     16.741      0.000       0.920       1.164
ar.L1.y       -0.2248      0.123     -1.833      0.069      -0.465       0.016
ma.L1.y        0.7393      0.083      8.959      0.000       0.578       0.901
==============================================================================
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:508: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)

If we wanted, we could restrict the models by specifying the maximum order of the $MA$ or the $AR$ (or both) parts:

In [53]:
A_est_auto = sm.tsa.stattools.arma_order_select_ic(A_data, ic='aic', trend='c', max_ar = 0)
print("(A) process order: ", A_est_auto.aic_min_order)
(A) process order:  (0, 2)
  • (B) $Y_t = 1 + \epsilon_t + 1.3 \epsilon_{t-1} - 0.4 \epsilon_{t-2}$
In [54]:
B_est_auto = sm.tsa.stattools.arma_order_select_ic(B_data, ic='aic', trend='c')
In [55]:
print("(B) process order: ", B_est_auto.aic_min_order)
(B) process order:  (1, 1)

The automatically selected model is:

In [56]:
B_mdl_auto = smt.arima_model.ARMA(B_data, order = B_est_auto.aic_min_order).fit(trend = 'c')
print(B_mdl_auto.summary().tables[1])
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.0542      0.089     11.817      0.000       0.879       1.229
ar.L1.y       -0.3954      0.101     -3.904      0.000      -0.594      -0.197
ma.L1.y        0.8184      0.055     14.846      0.000       0.710       0.926
==============================================================================
  • (C) $Y_t = 1 + 1.1 Y_{t-1} + \epsilon_t$
In [57]:
C_est_auto = sm.tsa.stattools.arma_order_select_ic(C_data, ic='aic', trend='c')
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:488: HessianInversionWarning: Inverting hessian failed, no bse or cov_params available
  'available', HessianInversionWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:634: RuntimeWarning: divide by zero encountered in log
  invarcoefs = -np.log((1-params)/(1+params))
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tools\numdiff.py:243: RuntimeWarning: invalid value encountered in subtract
  **kwargs)).imag/2./hess[i, j]
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:634: RuntimeWarning: invalid value encountered in log
  invarcoefs = -np.log((1-params)/(1+params))
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:650: RuntimeWarning: overflow encountered in exp
  newparams = ((1-np.exp(-params))/(1+np.exp(-params))).copy()
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:650: RuntimeWarning: invalid value encountered in true_divide
  newparams = ((1-np.exp(-params))/(1+np.exp(-params))).copy()
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:651: RuntimeWarning: overflow encountered in exp
  tmp = ((1-np.exp(-params))/(1+np.exp(-params))).copy()
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\tsatools.py:651: RuntimeWarning: invalid value encountered in true_divide
  tmp = ((1-np.exp(-params))/(1+np.exp(-params))).copy()
In [58]:
print("(C) process order: ", C_est_auto.aic_min_order)
(C) process order:  (1, 2)

Since our process is non-stationary, we will get an error when estimating. To keep the error message compact, we will wrap the call in a try/except block:

In [59]:
try:
    C_mdl_auto = smt.arima_model.ARMA(C_data, order = C_est_auto.aic_min_order).fit(trend = 'c')
    print(C_mdl_auto.summary().tables[1])
except ValueError as err:
    print('Handling run-time error:', err)
Handling run-time error: The computed initial AR coefficients are not stationary
You should induce stationarity, choose a different model order, or you can
pass your own start_params.

So the suggestions are to either (a) induce stationarity, or (b) choose a different model order. Since we know that the process is non-stationary, we may try to specify an $\rm ARMA(2, 2)$ model and hope that the estimated $\rm AR$ coefficients are approximately stationary, or fit a pure $\rm MA$ model. We will go with a pure $\rm MA$ process, since such processes are always stationary (though they may not be the best fit for the data). We opt to choose the order automatically.

Note that for larger orders the calculations take a while, so we restrict the search to $q_\max = 6$ (the defaults are $q_\max = 2$ for the $\rm MA$ part and $p_\max = 4$ for the $\rm AR$ part).

In [60]:
C_mdl_auto = sm.tsa.stattools.arma_order_select_ic(C_data, ic='aic', trend='c', max_ar = 0, max_ma = 6)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:488: HessianInversionWarning: Inverting hessian failed, no bse or cov_params available
  'available', HessianInversionWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:508: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:488: HessianInversionWarning: Inverting hessian failed, no bse or cov_params available
  'available', HessianInversionWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:508: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:508: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:488: HessianInversionWarning: Inverting hessian failed, no bse or cov_params available
  'available', HessianInversionWarning)
In [61]:
print("(C) process order: ", C_mdl_auto.aic_min_order)
(C) process order:  (0, 5)
In [62]:
C_mdl_auto = smt.arima_model.ARMA(C_data, order = C_mdl_auto.aic_min_order).fit(trend = 'c')
print(C_mdl_auto.summary().tables[1])
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const        1.13e+06   2.86e+05      3.946      0.000    5.69e+05    1.69e+06
ma.L1.y        4.7261        nan        nan        nan         nan         nan
ma.L2.y        9.1501        nan        nan        nan         nan         nan
ma.L3.y        9.0670      0.003   3145.519      0.000       9.061       9.073
ma.L4.y        4.5973      0.002   1994.273      0.000       4.593       4.602
ma.L5.y        0.9542      0.001    665.206      0.000       0.951       0.957
==============================================================================
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\base\model.py:508: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\statsmodels\tsa\arima_model.py:1455: RuntimeWarning: invalid value encountered in sqrt
  return np.sqrt(np.diag(-inv(hess)))
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\scipy\stats\_distn_infrastructure.py:877: RuntimeWarning: invalid value encountered in greater
  return (self.a < x) & (x < self.b)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\scipy\stats\_distn_infrastructure.py:877: RuntimeWarning: invalid value encountered in less
  return (self.a < x) & (x < self.b)
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\scipy\stats\_distn_infrastructure.py:1831: RuntimeWarning: invalid value encountered in less_equal
  cond2 = cond0 & (x <= self.a)

Note again that the non-stationarity of the process causes problems even in estimation - some of the standard errors are nan.

  • (D) $Y_t = 1 + 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t$
In [63]:
D_est_auto = sm.tsa.stattools.arma_order_select_ic(D_data, ic='aic', trend='c')
In [64]:
print("(D) process order: ", D_est_auto.aic_min_order)
(D) process order:  (1, 1)

The automatically selected model is:

In [65]:
D_mdl_auto = smt.arima_model.ARMA(D_data, order = D_est_auto.aic_min_order).fit(trend = 'c')
print(D_mdl_auto.summary().tables[1])
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          8.1185      2.745      2.958      0.004       2.739      13.498
ar.L1.y        0.9820      0.019     51.786      0.000       0.945       1.019
ma.L1.y        0.2638      0.089      2.975      0.003       0.090       0.438
==============================================================================
  • (E) $Y_t = 1.1 Y_{t-1} - 0.2 Y_{t-2} + \epsilon_t + 1.3 \epsilon_{t-1}$
In [66]:
E_est_auto = sm.tsa.stattools.arma_order_select_ic(E_data, ic='aic', trend='nc')
In [67]:
print("(E) process order: ", E_est_auto.aic_min_order)
(E) process order:  (2, 2)

Note that we would expect to have problems with (B) and (C), since the function requires stationarity and invertibility.

Note that in the case of (A), specifying max_ar = 0 returns a result without any warnings. This is a current drawback of this method: it does not automatically distinguish when an order of 0 is appropriate for either the MA or the AR component. Update: after a package update, some warnings may have disappeared.

Either way, in all cases the final suggested models are ARMA models, instead of pure AR or pure MA models.

The automatically selected model is:

In [68]:
E_mdl_auto = smt.arima_model.ARMA(E_data, order = E_est_auto.aic_min_order).fit(trend = 'nc')
print(E_mdl_auto.summary().tables[1])
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1.y        1.8674      0.045     41.834      0.000       1.780       1.955
ar.L2.y       -0.8886      0.041    -21.683      0.000      -0.969      -0.808
ma.L1.y       -0.0674      0.060     -1.130      0.260      -0.184       0.050
ma.L2.y       -0.8044      0.056    -14.356      0.000      -0.914      -0.695
==============================================================================

For the larger sample size

In [69]:
print("(E) process with larger sample order: ", sm.tsa.stattools.arma_order_select_ic(E_autogenr_large, ic='aic', trend='nc').aic_min_order)
(E) process with larger sample order:  (2, 1)

The suggested process is actually the same order as the true underlying process. Note that because of the non-invertibility of the process this may not always be the case.

If the process were both stationary and invertible, then it is very likely that, for a larger sample, the automated order selection would suggest a model order close to that of the true process, as long as the true underlying process is an $\rm ARMA(p, q)$ process.


2.3 Stationarity/Invertibility

We will illustrate this using (A), since it is both stationary and invertible.

  • (A) $Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}$

Rewriting it as an $AR(\infty)$ process yields $Y_t = \alpha + \sum_{j = 1}^\infty (-1)^{j+1} \theta^j Y_{t-j} + \epsilon_t$ with coefficients:

In [70]:
theta_coef = 0.5
for j in range(2, 10):
    theta_coef = np.append(theta_coef, (-1)**(j+1)*0.5**j)
print(theta_coef)    
[ 0.5        -0.25        0.125      -0.0625      0.03125    -0.015625
  0.0078125  -0.00390625  0.00195312]

A manually specified arbitrary approximation could be, for example, an $\rm AR(10)$ process:

In [71]:
A_spec_2 = smt.arima_model.ARMA(A_data, order = (10, 0))
A_mdl_2 = A_spec_2.fit(trend = 'c')
In [72]:
print(A_mdl_2.summary().tables[0])
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                  150
Model:                    ARMA(10, 0)   Log Likelihood                -116.997
Method:                       css-mle   S.D. of innovations              0.526
Date:                Thu, 28 Feb 2019   AIC                            257.994
Time:                        23:17:05   BIC                            294.122
Sample:                             0   HQIC                           272.671
                                                                              
==============================================================================
In [73]:
print(A_mdl_2.summary().tables[1])
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.0411      0.077     13.584      0.000       0.891       1.191
ar.L1.y        0.5070      0.082      6.216      0.000       0.347       0.667
ar.L2.y       -0.3166      0.091     -3.468      0.001      -0.496      -0.138
ar.L3.y        0.3289      0.095      3.466      0.001       0.143       0.515
ar.L4.y       -0.2270      0.098     -2.308      0.022      -0.420      -0.034
ar.L5.y        0.1655      0.099      1.672      0.097      -0.029       0.360
ar.L6.y       -0.1706      0.100     -1.710      0.089      -0.366       0.025
ar.L7.y        0.0634      0.099      0.637      0.525      -0.132       0.258
ar.L8.y       -0.0907      0.097     -0.936      0.351      -0.281       0.099
ar.L9.y        0.0983      0.094      1.046      0.297      -0.086       0.283
ar.L10.y       0.0891      0.083      1.072      0.286      -0.074       0.252
==============================================================================
In [74]:
print(A_mdl_2.summary().tables[2])
                                    Roots                                     
==============================================================================
                   Real          Imaginary           Modulus         Frequency
------------------------------------------------------------------------------
AR.1             1.1914           -0.0000j            1.1914           -0.0000
AR.2             0.9694           -0.6337j            1.1581           -0.0921
AR.3             0.9694           +0.6337j            1.1581            0.0921
AR.4             0.3457           -1.1340j            1.1855           -0.2029
AR.5             0.3457           +1.1340j            1.1855            0.2029
AR.6            -0.3935           -1.1564j            1.2215           -0.3022
AR.7            -0.3935           +1.1564j            1.2215            0.3022
AR.8            -0.9955           -0.7541j            1.2489           -0.3968
AR.9            -0.9955           +0.7541j            1.2489            0.3968
AR.10           -2.1472           -0.0000j            2.1472           -0.5000
------------------------------------------------------------------------------

We can also try to automatically fit the best AR model:

In [75]:
A_est_auto = sm.tsa.stattools.arma_order_select_ic(A_data, ic='aic', trend='c', max_ma = 0)
print("(A) process order: ", A_est_auto.aic_min_order)
(A) process order:  (4, 0)
In [76]:
A_spec_2 = smt.arima_model.ARMA(A_data, order = A_est_auto.aic_min_order)
A_mdl_2 = A_spec_2.fit(trend = 'c')
In [77]:
print(A_mdl_2.summary2())
                          Results: ARMA
==================================================================
Model:              ARMA             BIC:                 271.6677
Dependent Variable: y                Log-Likelihood:      -120.80 
Date:               2019-02-28 23:17 Scale:               1.0000  
No. Observations:   150              Method:              css-mle 
Df Model:           5                Sample:              0       
Df Residuals:       145                                   0       
Converged:          1.0000           S.D. of innovations: 0.541   
No. Iterations:     10.0000          HQIC:                260.943 
AIC:                253.6039                                      
--------------------------------------------------------------------
            Coef.    Std.Err.      t      P>|t|     [0.025    0.975]
--------------------------------------------------------------------
const       1.0426     0.0662   15.7606   0.0000    0.9129    1.1722
ar.L1.y     0.4917     0.0808    6.0831   0.0000    0.3333    0.6502
ar.L2.y    -0.2719     0.0879   -3.0929   0.0024   -0.4443   -0.0996
ar.L3.y     0.2522     0.0883    2.8555   0.0049    0.0791    0.4254
ar.L4.y    -0.1385     0.0812   -1.7050   0.0903   -0.2976    0.0207
--------------------------------------------------------------------------
                 Real          Imaginary         Modulus         Frequency
--------------------------------------------------------------------------
AR.1           -0.5792           -1.4278          1.5408           -0.3113
AR.2           -0.5792            1.4278          1.5408            0.3113
AR.3            1.4901           -0.9065          1.7442           -0.0870
AR.4            1.4901            0.9065          1.7442            0.0870
==================================================================

Note the different coefficients. This is because the lower-order model is an approximation of the higher-order one.

We leave the remaining cases as exercises.


For the automatically specified process, we can also compute the coefficients of its pure AR and pure MA representations up to a given number of lags:

In [78]:
E_auto_spec.arma2ma(lags = 5)
Out[78]:
array([1.    , 2.4   , 2.44  , 2.204 , 1.9364])
In [79]:
E_auto_spec.arma2ar(lags = 5)
Out[79]:
array([ 1.    , -2.4   ,  3.32  , -4.316 ,  5.6108])

Looking at the documentation, these methods should be used with caution, as they lack detailed documentation and examples.


3 Residual Diagnostics: (3.1), (3.2) and (3.3)

Remember that our underlying assumption is that $\epsilon_t \sim WN$. In other words the residuals:

  • must be zero-mean;
  • must have a constant variance;
  • must not be autocorrelated;

The zero-mean property is guaranteed by the estimation method (the proof is similar to the OLS estimation method, which minimizes the sum of squared residuals in the univariate cross-sectional regression case). We can examine the variance by checking for ARCH effects. For the purpose of these exercises, we will examine the variance only visually.
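As an aside (a minimal sketch, not part of the original notebook; the exact keyword arguments of het_arch differ slightly between statsmodels versions), a formal check for ARCH effects could be carried out with Engle's ARCH-LM test, for example on the residuals of the MA(1) model fitted for (A):

import statsmodels.stats.diagnostic as sm_diag
# het_arch returns the LM statistic, its p-value, the F statistic and its p-value:
lm_stat, lm_pval, f_stat, f_pval = sm_diag.het_arch(A_mdl_1.resid)
print("ARCH LM test p-value: ", lm_pval)  # a small p-value would suggest remaining ARCH effects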

We note that, unlike the forecast package in R, the statsmodels package in Python does not have a tsdisplay function. Fortunately, we can create one ourselves:

In [80]:
import pandas as pd
In [81]:
def tsdisplay(y, figsize = (14, 8), title = "", lags = 20):
    tmp_data = pd.Series(y)
    fig = plt.figure(figsize = figsize)
    #Plot the time series
    tmp_data.plot(ax = fig.add_subplot(311), title = "$Time\ Series\ " + title + "$", legend = False)
    #Plot the ACF:
    sm.graphics.tsa.plot_acf(tmp_data, lags = lags, zero = False, ax = fig.add_subplot(323))
    plt.xticks(np.arange(1,  lags + 1, 1.0))
    #Plot the PACF:
    sm.graphics.tsa.plot_pacf(tmp_data, lags = lags, zero = False, ax = fig.add_subplot(324), method = "ywm")
    plt.xticks(np.arange(1,  lags + 1, 1.0))
    #Plot the QQ plot of the data:
    sm.qqplot(tmp_data, line='s', ax = fig.add_subplot(325)) 
    plt.title("QQ Plot")
    #Plot the residual histogram:
    fig.add_subplot(326).hist(tmp_data, bins = 40, normed = 1)
    plt.title("Histogram")
    #Fix the layout of the plots:
    plt.tight_layout()
    plt.show()

Furthermore, tsdiag, available in R, does not have an analog in statsmodels either. Nevertheless, we can create our own:

In [82]:
def tsdiag(y, figsize = (14, 8), title = "", lags = 30):
    tmp_data = pd.Series(y)
    tmp_data.index += 1
    tmp_acor = list(sm_stat.diagnostic.acorr_ljungbox(tmp_data, lags = lags, boxpierce = True))
    # Plot Ljung-Box and Box-Pierce statistic p-values:
    plt.plot(range(1, len(tmp_acor[0]) + 1), tmp_acor[1], 'bo', label = "Ljung-Box values")
    plt.plot(range(1, len(tmp_acor[0]) + 1), tmp_acor[3], 'go', label = "Box-Pierce values")
    plt.xticks(np.arange(1,  len(tmp_acor[0]) + 1, 1.0))
    plt.axhline(y = 0.05, color = "red", label = "5% critical value")
    plt.title("$Time\ Series\ " + title + "$")
    plt.legend()
    plt.show()
    # Return the statistics:
    col_index = ["Ljung-Box: X-squared", "Ljung-Box: p-value", "Box-Pierce: X-squared", "Box-Pierce: p-value"]
    return pd.DataFrame(tmp_acor, index = col_index, columns = range(1, len(tmp_acor[0]) + 1))
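For example (a hypothetical usage sketch, not run in the original notebook), the helper could be applied to the residuals of the MA(1) model fitted for (A); the returned DataFrame holds the Ljung-Box and Box-Pierce statistics and their p-values for each lag:

A_diag = tsdiag(A_mdl_1.resid, title = ":\ Residuals\ of\ the\ MA(1)\ model", lags = 20)
print(A_diag.round(4))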

We now examine the residuals of our models:

  • (A) $Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}$

The residuals of the true model:

In [83]:
tsdisplay(A_mdl_1.resid, title = ":\ Residuals\ Of\ Y_t = 1 + \epsilon_t + 0.5 \epsilon_{t-1}")
c:\users\andrius\appdata\local\programs\python\python37\lib\site-packages\matplotlib\axes\_axes.py:6521: MatplotlibDeprecationWarning: 
The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.
  alternative="'density'", removal="3.1")