
# Lecture 3 Notes - Characteristics of Time Series

## Review / basic concepts

## This week - Measures of dependence

### Mean and variance

Last week and in your lab, we saw how the mean and variance differ for a white noise process, a random walk, and a random walk with drift. The mean and variance functions of a time series are useful descriptors because they help us determine something about the drift and the spread of the data that we should expect over time.

The mean function is defined as $\mu_{xt} = \mathbb{E}(x_t)$.

The variance function is $\sigma^2_t = \operatorname{Var}(x_t) = \mathbb{E}[(x_t-\mu_t)^2]$.

Let’s revisit some examples from before:

#### White noise

For a white noise time series, $\mu_{wt} = \mathbb{E}(w_t) = 0$ for all $t$, and $\operatorname{Var}(w_t) = 1$ (for a Gaussian white noise series with $\sigma^2_w = 1$).
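We can sanity-check these two facts numerically. This is a quick simulation sketch (not from the notes), assuming numpy is available; the seed and sample size are arbitrary choices.

```python
import numpy as np

# Simulate Gaussian white noise with sigma_w^2 = 1 and check that the
# sample mean and sample variance match the theoretical values 0 and 1.
rng = np.random.default_rng(0)
w = rng.normal(loc=0.0, scale=1.0, size=100_000)

print(w.mean())  # close to the theoretical mean 0
print(w.var())   # close to the theoretical variance 1
```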

#### Moving average

What if we apply a 3-point moving average? Although this induces some correlation structure, it actually does not change the mean function at all:

$$\mu_{vt} = \mathbb{E}(v_t) = \frac{1}{3}\left[\mathbb{E}(w_{t-1})+\mathbb{E}(w_{t})+\mathbb{E}(w_{t+1})\right] = 0$$

#### Random walk with drift

For the random walk with drift, the mean function is just the line $\mu_{xt} = \delta t + \displaystyle\sum_{j=1}^t \mathbb{E}(w_j) = \delta t$.
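A Monte Carlo check of this mean function: averaging many simulated walks should recover the line $\delta t$. This is an illustrative sketch (the values of $\delta$, the length, and the number of replications are arbitrary choices, not from the notes).

```python
import numpy as np

# Average many random walks with drift; the pointwise mean should track delta * t.
rng = np.random.default_rng(1)
delta, n, reps = 0.2, 50, 20_000

w = rng.normal(0.0, 1.0, size=(reps, n))   # white noise increments w_j
steps = delta + w                           # each step adds drift plus noise
walks = steps.cumsum(axis=1)                # x_t = delta * t + sum_{j<=t} w_j

t = np.arange(1, n + 1)
empirical_mean = walks.mean(axis=0)         # average over all realizations
print(np.max(np.abs(empirical_mean - delta * t)))  # should be near 0
```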

#### Signal plus noise

And for a signal plus noise (as an example, we’ll use this sinusoid):

$$\begin{aligned} \mu_{xt} = \mathbb{E}(x_t) &= \mathbb{E}[A \cos(2\pi \omega t + \phi) + w_t] \\ &= \mathbb{E}[A \cos(2\pi \omega t + \phi)] + \mathbb{E}[w_t] \\ &= \mathbb{E}[A \cos(2\pi \omega t + \phi)] \\ &= A \cos(2\pi \omega t + \phi) \end{aligned}$$

since the cosine term is deterministic, its expectation is just itself: the mean function is the sinusoid.
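We can see the noise average out by simulating many realizations of a signal-plus-noise series; the particular values of $A$, $\omega$, and $\phi$ below are illustrative choices, not from the notes.

```python
import numpy as np

# Mean function of A*cos(2*pi*omega*t + phi) + w_t: averaging over many
# realizations recovers the sinusoid itself, since E[w_t] = 0.
rng = np.random.default_rng(2)
A, omega, phi = 2.0, 1 / 50, 0.6 * np.pi
t = np.arange(1, 201)
signal = A * np.cos(2 * np.pi * omega * t + phi)

reps = 20_000
x = signal + rng.normal(0.0, 1.0, size=(reps, t.size))  # many noisy copies
empirical_mean = x.mean(axis=0)

print(np.max(np.abs(empirical_mean - signal)))  # noise averages to ~0
```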

### Autocovariance

What if we instead want to know something about the dependence between two points $s$ and $t$ within the same time series? We'll call this the autocovariance, $\gamma$.

$$\gamma_x(s,t) = \operatorname{cov}(x_s,x_t) = \mathbb{E}[(x_s-\mu_s)(x_t-\mu_t)]$$

When $s = t$, we have:

$$\gamma_x(t,t) = \mathbb{E}[(x_t-\mu_t)^2] = \operatorname{Var}(x_t)$$

For white noise, there should be no dependence between differing time points. By definition, $\mathbb{E}(w_t)=0$ and:

$$\gamma_w(s,t) = \operatorname{cov}(w_s,w_t) = \begin{cases} \sigma^2_w, & \text{if } s=t, \\ 0, & s \neq t \end{cases}$$
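The sample autocovariance of simulated white noise shows this pattern: roughly $\sigma^2_w$ at lag 0 and roughly 0 everywhere else. A small numerical sketch (the `autocov` helper below is our own, written to match the definition above):

```python
import numpy as np

# Empirical autocovariance of white noise: sigma_w^2 at lag 0, ~0 elsewhere.
rng = np.random.default_rng(3)
w = rng.normal(0.0, 1.0, 100_000)  # sigma_w^2 = 1

def autocov(x, lag):
    # Sample autocovariance at a given lag: mean-correct, then average
    # the products of observations separated by `lag` steps.
    n = len(x)
    xc = x - x.mean()
    return np.sum(xc[: n - lag] * xc[lag:]) / n

print(autocov(w, 0))  # close to sigma_w^2 = 1
print(autocov(w, 1))  # close to 0
print(autocov(w, 5))  # close to 0
```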

#### Covariance of linear combinations

If we have random variables $U=\displaystyle\sum_{j=1}^m a_j X_j$ and $V=\displaystyle\sum_{k=1}^r b_k Y_k$ that are linear combinations of (finite variance) random variables $\{X_j\}$ and $\{Y_k\}$, then the covariance of these is:

$$\operatorname{cov}(U,V) = \displaystyle\sum_{j=1}^m \displaystyle\sum_{k=1}^r a_j b_k \operatorname{cov}(X_j, Y_k)$$

Also, $\operatorname{var}(U) = \operatorname{cov}(U,U)$.
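Since sample covariance is bilinear in exactly the same way, this identity can be checked on simulated data: computing $\operatorname{cov}(U,V)$ directly should agree with the term-by-term double sum. The particular $X_j$, $Y_k$, and coefficients below are arbitrary choices for illustration.

```python
import numpy as np

# Check cov(U, V) = sum_j sum_k a_j b_k cov(X_j, Y_k) on simulated data.
rng = np.random.default_rng(4)
n = 50_000
X = rng.normal(size=(3, n))                 # X_1..X_3, independent
Y = X[:2] + 0.5 * rng.normal(size=(2, n))   # Y_1, Y_2 correlated with X_1, X_2
a = np.array([1.0, -2.0, 3.0])
b = np.array([0.5, 1.5])

U = a @ X                                    # U = sum_j a_j X_j
V = b @ Y                                    # V = sum_k b_k Y_k

direct = np.cov(U, V)[0, 1]                  # cov(U, V) computed directly
termwise = sum(
    a[j] * b[k] * np.cov(X[j], Y[k])[0, 1]
    for j in range(3) for k in range(2)
)
print(direct, termwise)  # identical up to floating-point rounding
```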

So how can we use this? Let's apply it to the moving average example.

$$\begin{aligned} \gamma_v(s,t) = \operatorname{cov}(v_s, v_t) &= \operatorname{cov}\!\left(\tfrac{1}{3}(w_{s-1}+w_s+w_{s+1}),\ \tfrac{1}{3}(w_{t-1}+w_t+w_{t+1})\right) \\ &= \tfrac{1}{9} \operatorname{cov}(w_{s-1}+w_s+w_{s+1},\ w_{t-1}+w_t+w_{t+1}) \end{aligned}$$

When $s = t$, we have:

$$\begin{aligned} \gamma_v(t,t) &= \operatorname{cov}(v_t, v_t) \\ &= \operatorname{cov}\!\left(\tfrac{1}{3}(w_{t-1}+w_t+w_{t+1}),\ \tfrac{1}{3}(w_{t-1}+w_t+w_{t+1})\right) \\ &= \tfrac{1}{9} \big(\operatorname{cov}(w_{t-1},w_{t-1}) + \operatorname{cov}(w_{t},w_{t}) + \operatorname{cov}(w_{t+1},w_{t+1}) \\ &\quad + 2\operatorname{cov}(w_{t-1}, w_t) + 2\operatorname{cov}(w_{t}, w_{t+1}) + 2\operatorname{cov}(w_{t-1}, w_{t+1})\big)\\ &= \tfrac{1}{9} (\sigma_w^2 + \sigma_w^2 + \sigma_w^2 + 0 + 0 + 0)\\ &= \tfrac{3}{9} \sigma_w^2 \\ &= \tfrac{1}{3} \sigma_w^2 \end{aligned}$$

When $s = t+1$, we have:

$$\begin{aligned} \gamma_v(t+1,t) &= \operatorname{cov}\!\left(\tfrac{1}{3}(w_{t}+w_{t+1}+w_{t+2}),\ \tfrac{1}{3}(w_{t-1}+w_t+w_{t+1})\right) \\ &= \tfrac{1}{9}\big(\operatorname{cov}(w_t,w_{t-1}) + \operatorname{cov}(w_t,w_t) + \operatorname{cov}(w_t,w_{t+1}) \\ &\quad + \operatorname{cov}(w_{t+1},w_{t-1}) + \operatorname{cov}(w_{t+1},w_{t}) + \operatorname{cov}(w_{t+1},w_{t+1}) \\ &\quad + \operatorname{cov}(w_{t+2},w_{t-1}) + \operatorname{cov}(w_{t+2},w_{t}) + \operatorname{cov}(w_{t+2},w_{t+1})\big)\\ &= \tfrac{1}{9}\big(\operatorname{cov}(w_t,w_t) + \operatorname{cov}(w_{t+1},w_{t+1})\big)\\ &= \tfrac{2}{9}\sigma^2_w \end{aligned}$$

If we then follow this for more lags, we get:

$$\gamma_v(s,t) = \begin{cases} \tfrac{3}{9}\sigma^2_w, & \text{if } s=t, \\ \tfrac{2}{9}\sigma^2_w, & \text{if } |s-t|=1, \\ \tfrac{1}{9}\sigma^2_w, & \text{if } |s-t|=2, \\ 0, & \text{if } |s-t|>2 \end{cases}$$
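These lag-by-lag values can be verified empirically by smoothing a long white noise series and computing sample autocovariances; this sketch (not from the notes) assumes numpy and uses a hand-rolled sample autocovariance matching the definition above.

```python
import numpy as np

# Empirical autocovariance of the 3-point moving average of white noise,
# compared against the theoretical values 3/9, 2/9, 1/9, 0 (sigma_w^2 = 1).
rng = np.random.default_rng(5)
w = rng.normal(0.0, 1.0, 200_000)
v = (w[:-2] + w[1:-1] + w[2:]) / 3   # v_t = (w_{t-1} + w_t + w_{t+1}) / 3

def autocov(x, lag):
    # Sample autocovariance at a given lag (mean-corrected, divided by n).
    n = len(x)
    xc = x - x.mean()
    return np.sum(xc[: n - lag] * xc[lag:]) / n

for lag, theory in [(0, 3 / 9), (1, 2 / 9), (2, 1 / 9), (3, 0.0)]:
    print(lag, autocov(v, lag), theory)
```

Note that the empirical values depend only on the lag, not on where in the series we look, which is exactly the point made next.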

Why is this interesting? The autocovariance depends only on the lag between ss and tt and not on the absolute location of these time points. We’ll come back to this when we talk about stationarity.