Lecture 3 Notes - Characteristics of Time Series
## Review / basic concepts

## This week - Measures of dependence

### Mean and variance

Last week and in your lab, we saw how the mean and variance differ for a white noise process, a random walk, and a random walk with drift. The mean and variance functions of a time series are useful descriptors because they help us determine the drift and the spread of the data that we should expect over time.
The mean function is defined as $\mu_{xt} = \mathbb{E}(x_t)$.
The variance function is $\sigma^2_t = \operatorname{Var}(x_t) = \mathbb{E}[(x_t-\mu_t)^2]$.
Let’s revisit some examples from before:
#### White noise

For a white noise time series, $\mu_{wt} = \mathbb{E}(w_t) = 0$ for all $t$, and $\operatorname{Var}(w_t) = \sigma^2_w$ (equal to $1$ for a standard Gaussian white noise series).
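As a quick sanity check, we can simulate Gaussian white noise and confirm the sample mean and variance are close to $0$ and $1$ (a Python/NumPy sketch; the seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
# Standard Gaussian white noise: mean 0, sigma_w^2 = 1
w = rng.normal(loc=0.0, scale=1.0, size=100_000)

print(np.mean(w))  # close to 0
print(np.var(w))   # close to 1
```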
#### Moving average

What if we apply a 3-point moving average, $v_t = \tfrac{1}{3}(w_{t-1} + w_t + w_{t+1})$? Although this induces some correlation structure, it does not change the mean function at all:
$$\mu_{vt} = \mathbb{E}(v_t) = \tfrac{1}{3}\left[\mathbb{E}(w_{t-1}) + \mathbb{E}(w_{t}) + \mathbb{E}(w_{t+1})\right] = 0$$
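We can check this empirically (a sketch; `np.convolve` with `mode="valid"` computes the 3-point average at every interior point):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100_000)

# 3-point moving average: v_t = (w_{t-1} + w_t + w_{t+1}) / 3
v = np.convolve(w, np.ones(3) / 3, mode="valid")

print(np.mean(v))  # close to 0: smoothing does not change the mean
```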
#### Random walk with drift

For the random walk with drift, $x_t = \delta t + \sum_{j=1}^t w_j$, the mean function is just the line

$$\mu_{xt} = \delta t + \sum_{j=1}^t \mathbb{E}(w_j) = \delta t$$
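To see that $\mu_{xt} = \delta t$, we can simulate many independent random walks with drift and average across realizations (a sketch; the value of $\delta$ and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
delta, n_steps, n_reps = 0.2, 100, 5000

# Each row is one realization: x_t = delta * t + cumulative sum of noise
w = rng.normal(size=(n_reps, n_steps))
t = np.arange(1, n_steps + 1)
x = delta * t + np.cumsum(w, axis=1)

# Averaging across realizations tracks the line delta * t
mean_path = x.mean(axis=0)
print(mean_path[-1], delta * n_steps)  # both close to 20
```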
#### Signal plus noise

And for a signal plus noise model (as an example, we'll use the sinusoid $x_t = A\cos(2\pi\omega t + \phi) + w_t$):
$$\begin{aligned}
\mu_{xt} = \mathbb{E}(x_t) &= \mathbb{E}[A \cos(2\pi \omega t + \phi) + w_t] \\
&= \mathbb{E}[A \cos(2\pi \omega t + \phi)] + \mathbb{E}[w_t] \\
&= \mathbb{E}[A \cos(2\pi \omega t + \phi)]
\end{aligned}$$

### Autocovariance

What if we instead want to know something about the dependence between two points $s$ and $t$ within the same time series? We'll call this the autocovariance, $\gamma$.
$$\gamma_x(s,t) = \operatorname{cov}(x_s, x_t) = \mathbb{E}[(x_s-\mu_s)(x_t-\mu_t)]$$
When $s = t$, we have:
$$\gamma_x(t,t) = \mathbb{E}[(x_t-\mu_t)^2] = \operatorname{Var}(x_t)$$
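In practice we estimate $\gamma$ from data with the sample autocovariance. A minimal sketch of the lag-$h$ estimator (using the conventional $1/n$ divisor; `sample_autocov` is our own helper name) shows that lag $0$ recovers the sample variance:

```python
import numpy as np

def sample_autocov(x, h):
    """Sample autocovariance at lag h, with the conventional 1/n divisor."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    return np.sum((x[: n - h] - xbar) * (x[h:] - xbar)) / n

rng = np.random.default_rng(2)
x = rng.normal(size=1000)

# Lag 0 is just the (biased) sample variance
print(sample_autocov(x, 0), np.var(x))
```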
For white noise, there should be no dependence between differing time points. By definition, $\mathbb{E}(w_t) = 0$ and:
$$\gamma_w(s,t) = \operatorname{cov}(w_s, w_t) =
\begin{cases}
\sigma^2_w, & \text{if } s = t, \\
0, & \text{if } s \neq t
\end{cases}$$

#### Covariance of linear combinations

If we have random variables $U = \sum_{j=1}^m a_j X_j$ and $V = \sum_{k=1}^r b_k Y_k$ that are linear combinations of (finite-variance) random variables $\{X_j\}$ and $\{Y_k\}$, then their covariance is:
$$\operatorname{cov}(U,V) = \sum_{j=1}^m \sum_{k=1}^r a_j b_k \operatorname{cov}(X_j, Y_k)$$
Also, $\operatorname{var}(U) = \operatorname{cov}(U,U)$.
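Because (sample) covariance is bilinear, this identity holds exactly even for estimated covariances, which gives a quick numeric check (a sketch with arbitrary coefficients and a made-up dependence between the $Y_k$ and $X_j$):

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([1.0, -2.0, 0.5])    # coefficients for U
b = np.array([3.0, 1.0])          # coefficients for V
X = rng.normal(size=(3, 10_000))  # rows are X_1, X_2, X_3
Y = rng.normal(size=(2, 10_000)) + 0.5 * X[:2]  # make Y correlated with X

U = a @ X
V = b @ Y

# Left side: cov(U, V) estimated directly
lhs = np.cov(U, V)[0, 1]

# Right side: sum over j, k of a_j * b_k * cov(X_j, Y_k)
C = np.cov(np.vstack([X, Y]))  # joint sample covariance matrix
rhs = a @ C[:3, 3:] @ b

print(lhs, rhs)  # equal up to floating-point error
```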
So how can we use this? Let's try it for the moving average example.
$$\begin{aligned}
\gamma_v(s,t) = \operatorname{cov}(v_s, v_t) &= \operatorname{cov}\left(\tfrac{1}{3}(w_{s-1}+w_s+w_{s+1}),\ \tfrac{1}{3}(w_{t-1}+w_t+w_{t+1})\right) \\
&= \tfrac{1}{9} \operatorname{cov}(w_{s-1}+w_s+w_{s+1},\ w_{t-1}+w_t+w_{t+1})
\end{aligned}$$

When $s = t$, we have:
$$\begin{aligned}
\gamma_v(t,t) &= \operatorname{cov}(v_t, v_t) \\
&= \operatorname{cov}\left(\tfrac{1}{3}(w_{t-1}+w_t+w_{t+1}),\ \tfrac{1}{3}(w_{t-1}+w_t+w_{t+1})\right) \\
&= \tfrac{1}{9} \big(\operatorname{cov}(w_{t-1},w_{t-1}) + \operatorname{cov}(w_{t},w_{t}) + \operatorname{cov}(w_{t+1},w_{t+1}) \\
&\quad + 2\operatorname{cov}(w_{t-1}, w_t) + 2\operatorname{cov}(w_{t}, w_{t+1}) + 2\operatorname{cov}(w_{t-1}, w_{t+1})\big)\\
&= \tfrac{1}{9} (\sigma_w^2 + \sigma_w^2 + \sigma_w^2 + 0 + 0 + 0)\\
&= \tfrac{3}{9} \sigma_w^2 = \tfrac{1}{3} \sigma_w^2
\end{aligned}$$

When $s = t+1$, we have:
$$\begin{aligned}
\gamma_v(t+1,t) &= \operatorname{cov}\left(\tfrac{1}{3}(w_{t}+w_{t+1}+w_{t+2}),\ \tfrac{1}{3}(w_{t-1}+w_t+w_{t+1})\right) \\
&= \tfrac{1}{9}\big(\operatorname{cov}(w_t,w_{t-1}) + \operatorname{cov}(w_t,w_t) + \operatorname{cov}(w_{t},w_{t+1})\\
&\quad + \operatorname{cov}(w_{t+1},w_{t-1}) + \operatorname{cov}(w_{t+1},w_{t}) + \operatorname{cov}(w_{t+1},w_{t+1})\\
&\quad + \operatorname{cov}(w_{t+2},w_{t-1}) + \operatorname{cov}(w_{t+2},w_{t}) + \operatorname{cov}(w_{t+2},w_{t+1})\big)\\
&= \tfrac{1}{9}\big(\operatorname{cov}(w_t,w_t) + \operatorname{cov}(w_{t+1},w_{t+1})\big)\\
&= \tfrac{2}{9}\sigma^2_w
\end{aligned}$$

If we then follow this for more lags, we get:
$$\gamma_v(s,t) =
\begin{cases}
\tfrac{3}{9}\sigma^2_w, & \text{if } s = t, \\
\tfrac{2}{9}\sigma^2_w, & \text{if } |s-t| = 1, \\
\tfrac{1}{9}\sigma^2_w, & \text{if } |s-t| = 2, \\
0, & \text{if } |s-t| > 2
\end{cases}$$

Why is this interesting? The autocovariance depends only on the lag between $s$ and $t$, not on the absolute location of these time points. We'll come back to this when we talk about **stationarity**.
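We can verify the whole lag pattern empirically with the sample autocovariance (a Python sketch; `sample_gamma` is our own helper name, and the sample size is chosen large enough that the estimates settle near their theoretical values):

```python
import numpy as np

def sample_gamma(x, h):
    """Sample autocovariance at lag h (1/n divisor)."""
    n, xbar = len(x), x.mean()
    return np.sum((x[: n - h] - xbar) * (x[h:] - xbar)) / n

rng = np.random.default_rng(4)
w = rng.normal(size=500_000)                      # sigma_w^2 = 1
v = np.convolve(w, np.ones(3) / 3, mode="valid")  # 3-point moving average

for h in range(5):
    print(h, sample_gamma(v, h))
# Estimates are close to 3/9, 2/9, 1/9, 0, 0
```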