Suppose \(\epsilon_1, \dots, \epsilon_N\) are i.i.d. \(N(0,1)\) random variables. Suppose \(Y_0 = 0\), and \(Y_i = \theta Y_{i-1} + \epsilon_i\), \(i=1, \dots, N\), and \(|\theta|<1\). Find the maximum likelihood estimate of \(\theta\).
\[ Y_0 = 0 \]
\[ Y_1 = \theta Y_0 + \epsilon_1 \implies \epsilon_1 = Y_1 - \theta Y_0 \]
\[ Y_2 = \theta Y_1 + \epsilon_2 \implies \epsilon_2 = Y_2 - \theta Y_1 \]
\[ \vdots \]
\[ Y_N = \theta Y_{N-1} + \epsilon_N \implies \epsilon_N = Y_{N} - \theta Y_{N-1} \]
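As a quick illustration of this recursion, here is a minimal simulation sketch. NumPy, the seed, the sample size \(N = 500\), and the true value \(\theta = 0.6\) are all assumptions made only for illustration, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500        # sample size; an illustrative choice
theta = 0.6    # true coefficient used for simulation (any |theta| < 1 works)

eps = rng.standard_normal(N)   # epsilon_1, ..., epsilon_N, i.i.d. N(0, 1)
Y = np.zeros(N + 1)            # Y[0] holds Y_0 = 0
for i in range(1, N + 1):
    Y[i] = theta * Y[i - 1] + eps[i - 1]   # Y_i = theta * Y_{i-1} + epsilon_i
```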
Recall that the \(\epsilon_i\) are i.i.d. \(N(0,1)\); hence the differences \(Y_{i}-\theta Y_{i-1} = \epsilon_i\) are also i.i.d. \(N(0,1)\). As such,
\[ f(\epsilon_i)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2} \epsilon_i^2\right) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2} (Y_{i}-\theta Y_{i-1})^2\right) \]
Since the map from \((\epsilon_1,\dots,\epsilon_N)\) to \((Y_1,\dots,Y_N)\) is triangular with unit Jacobian, the joint density of \(Y_1,\dots,Y_N\) is simply the product of these standard normal densities. Therefore, the likelihood function is \[\begin{equation} \begin{split} \mathcal{L}(\theta,y) &= \prod_{i=1}^N \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2} (Y_{i}-\theta Y_{i-1})^2\right) \\ &= \left( \frac{1}{\sqrt{2\pi}} \right)^N\exp\left(-\frac{1}{2} \sum_{i=1}^N (Y_{i}-\theta Y_{i-1})^2\right) \end{split} \end{equation}\]
Applying \(\ln\) to the likelihood function results in \[ \ln \mathcal{L}(\theta,y) = -\frac{N}{2} \ln(2 \pi) -\frac{1}{2} \sum_{i=1}^N (Y_{i}-\theta Y_{i-1})^2. \]
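For concreteness, the log-likelihood above can be written as a small helper function. This is a minimal sketch that assumes NumPy and a path stored as in the earlier simulation snippet, with `Y[0]` holding \(Y_0 = 0\).

```python
import numpy as np

def log_likelihood(theta_val, Y):
    """Log-likelihood of theta for the AR(1) model with N(0,1) noise.

    Y is the full path (Y_0, Y_1, ..., Y_N), so Y[0] corresponds to Y_0 = 0.
    """
    Y = np.asarray(Y, dtype=float)
    N = len(Y) - 1
    resid = Y[1:] - theta_val * Y[:-1]   # Y_i - theta * Y_{i-1}, i = 1, ..., N
    return -0.5 * N * np.log(2.0 * np.pi) - 0.5 * np.sum(resid ** 2)
```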
The MLE of \(\theta\) is \(\hat{\theta} = \arg \max_{\theta} \ln \mathcal{L}(\theta,y)\). So,
\[\begin{equation} \begin{split} \frac{\partial \ln \mathcal{L}(\theta,y)}{\partial \theta} &= -\frac{1}{2} \sum_{i=1}^N 2(Y_{i}-\theta Y_{i-1})(-Y_{i-1}) \\ &= \sum_{i=1}^N (Y_{i-1}Y_i-\theta Y_{i-1}^2) \\ &= \sum_{i=1}^N Y_{i-1}Y_i-\theta \sum_{i=1}^N Y_{i-1}^2 \stackrel{\text{set}}{=} 0. \end{split} \end{equation}\]
Since \(\frac{\partial^2 \ln \mathcal{L}(\theta,y)}{\partial \theta^2} = -\sum_{i=1}^N Y_{i-1}^2 < 0\), this critical point is indeed a maximum. Therefore, the maximum likelihood estimator for \(\theta\) is: \[\begin{equation} \hat{\theta} = \frac{\sum_{i=1}^N Y_{i-1}Y_i}{ \sum_{i=1}^N Y_{i-1}^2}. \end{equation}\]
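As a sanity check, the closed-form estimator can be compared against a direct numerical maximization of the log-likelihood. The sketch below continues the simulated path `Y` and the `log_likelihood` helper from the snippets above, and assumes SciPy is available; none of these choices are part of the original problem.

```python
from scipy.optimize import minimize_scalar

# Closed-form MLE from the derivation above, applied to the simulated path Y.
theta_hat = np.sum(Y[:-1] * Y[1:]) / np.sum(Y[:-1] ** 2)

# Cross-check: maximize the log-likelihood numerically over (-1, 1).
res = minimize_scalar(lambda t: -log_likelihood(t, Y),
                      bounds=(-0.999, 0.999), method="bounded")

print("closed-form MLE: ", theta_hat)   # both values should be close to the
print("numerical MLE:   ", res.x)       # true theta = 0.6 used in the simulation
```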