Reading: Chapter 1.2 – Shumway and Stoffer
Review / basic concepts¶
Most time series are not iid
Instead, most have dependence on time
Sometimes called “autocorrelation” - the series is correlated with itself at different time lags
Some have drift
The purpose of time series analysis is to develop mathematical models to provide plausible explanations for sample data
Stochastic process¶
Systems or phenomena that evolve randomly over time
Many time series are modeled as realizations of stochastic processes, even though they contain components that are deterministic / predictable.
These are addressed in more detail in Stat150!
White noise¶
Many real-world time series are a combination of an underlying signal $s_t$ plus noise $w_t$: $x_t = s_t + w_t$. In some (nice) cases, $w_t$ is white noise.
White noise is a sequence of uncorrelated random variables with mean 0 and variance $\sigma_w^2$, written $w_t \sim \mathrm{wn}(0, \sigma_w^2)$. A stronger special case is iid noise, where the $w_t$ are independent and identically distributed - unlike most real time series. One very useful case is white noise from a Gaussian distribution, where we can write: $w_t \sim \mathrm{iid}\; N(0, \sigma_w^2)$.
If all time series could be described in this way, classical statistics would suffice.
What does white noise look like?

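As a quick sketch (not from the reading), we can simulate Gaussian white noise with NumPy and check that its sample mean and variance are close to the theoretical values of 0 and $\sigma_w^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
w = rng.normal(loc=0.0, scale=1.0, size=n)  # Gaussian white noise, sigma_w = 1

# Sample statistics should be close to the theoretical mean 0 and variance 1
print(w.mean(), w.var())
```

Plotting `w` against time shows the characteristic structureless, jagged appearance of white noise.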
Moving average¶
One way of smoothing a time series (including white noise) is to average the value at a time point with its neighbors $x_{t-1}$ and $x_{t+1}$ (or an even larger window from $t-k$ to $t+k$). For example:

$$v_t = \tfrac{1}{3}\left(x_{t-1} + x_t + x_{t+1}\right)$$

or more generally

$$v_t = \frac{1}{m} \sum_{j=-k}^{k} x_{t+j}$$

for odd window length $m = 2k+1$
This is inherently a low-pass filter (it lets low-frequency signals pass and gets rid of high frequencies). It preserves trends that vary slowly relative to the window length and suppresses oscillations with period shorter than the window.
When do we use this?
We might use this to reveal trends in noisy time series data, remove fluctuations we consider “noise”, or do simple online smoothing (especially if we choose the window to include only past data). However, this is the most basic form of smoothing and is typically replaced by more sophisticated methods such as exponential moving averages, Kalman filters, median filters, etc.
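A minimal sketch of the moving average as a smoother (the trend slope and window half-width here are illustrative choices, not from the notes): we bury a slow linear trend in white noise and recover it with a length-5 moving average via `np.convolve`.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
x = 0.02 * t + rng.normal(size=n)  # slow trend buried in white noise

k = 2  # half-width, so the window length is 2k + 1 = 5
kernel = np.ones(2 * k + 1) / (2 * k + 1)
v = np.convolve(x, kernel, mode="valid")  # v[i] smooths x around index i + k

# The smoothed series is shorter by 2k points; its fluctuations around
# the trend should have roughly 1/5 the variance of the raw noise.
print(len(v))
```

Note the `mode="valid"` choice: the smoother is only computed where the full window fits, which drops `k` points at each end.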
Autoregression¶
Another flavor of dataset we might see is data that comes from an autoregressive process. Autoregression = regression or prediction based on past values of the same time series (“auto”).
This might look something like:

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + w_t$$

You will see what this looks like in Lab 1. Because the data at time $t$ relies on $x_{t-1}$ and $x_{t-2}$ (the prior two data points), this is an AR(2) process. Generating data in this way can result in oscillatory behavior.
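A sketch of generating AR(2) data (the coefficient values below are illustrative, chosen to sit in the oscillatory regime; Lab 1 may use different ones):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
phi1, phi2 = 1.0, -0.9  # illustrative AR(2) coefficients (oscillatory regime)

w = rng.normal(size=n)
x = np.zeros(n)
for t in range(2, n):
    # each new value is a linear combination of the two previous values plus noise
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + w[t]
```

With $\phi_2$ close to $-1$, the series oscillates with a quasi-period of roughly 6 samples, which shows up as a strongly negative correlation between values 3 lags apart.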
Random Walk¶
Another simple but important / helpful extension to the idea of white noise is the random walk. The simple random walk is defined as:

$$x_t = x_{t-1} + w_t$$

with initial condition $x_0 = 0$.

Equivalently,

$$x_t = \sum_{j=1}^{t} w_j$$
The expected value of any time point is $E[x_t] = 0$, so the mean does not vary over time.
The variance of a random walk, on the other hand, is additive and grows linearly with time: $\mathrm{Var}(x_t) = t\,\sigma_w^2$.
Are random walks iid? No! Each time point is not independent, but depends on the past time point. They’re also not identically distributed, since variance is changing with time.
What are some real examples?
Stock prices (over short time scales - days, not months/years)
Brownian motion (continuous time) - diffusion of molecules
Null models for decision making (no evidence accumulation)
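We can check the two moment claims above by simulation - a sketch, with illustrative path counts and $\sigma_w = 1$: simulate many independent random walks and look at the mean and variance across paths at a fixed time.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps = 2000, 100
w = rng.normal(size=(n_paths, n_steps))
x = np.cumsum(w, axis=1)  # each row is one random walk: x_t = sum of w_j

# Across many paths, the mean at any t stays near 0 while the
# variance grows like t * sigma_w^2 (here sigma_w = 1).
print(x[:, -1].mean(), x[:, -1].var())
```

Comparing the variance at $t = 25$ and $t = 100$ shows the linear growth directly.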
Random Walk with Drift¶
More frequently, we see the concept of the random walk with drift ($\delta$). This extends the idea of the random walk:

$$x_t = \delta + x_{t-1} + w_t$$

with initial condition $x_0 = 0$.

Equivalently,

$$x_t = \delta t + \sum_{j=1}^{t} w_j$$
Here, the expected value is related directly to the drift term: $E[x_t] = \delta t$. However, if we know the drift and we can condition on a prior observation ($x_s$, with $s < t$), we can get a conditional expectation: $E[x_t \mid x_s] = x_s + \delta(t - s)$.
Again, the variance scales with the number of time points, $\mathrm{Var}(x_t) = t\,\sigma_w^2$, since noise accumulates step by step (the deterministic drift adds no variance).
What are some real examples?
Stock prices over longer time scales
Decision making (drift diffusion models)
Atmospheric concentrations of CO2 (drift represents human influences, stochastic noise reflects natural variability)
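Extending the earlier random-walk simulation (a sketch with an illustrative drift value): adding a constant $\delta$ per step shifts the mean to $\delta t$ but leaves the variance growth unchanged.

```python
import numpy as np

rng = np.random.default_rng(4)
delta = 0.2  # illustrative drift per step
n_paths, n_steps = 2000, 100
w = rng.normal(size=(n_paths, n_steps))
x = np.cumsum(delta + w, axis=1)  # x_t = delta * t + sum of w_j

# Unconditional mean at t = 100 should be near delta * t = 20,
# while the variance still grows like t * sigma_w^2 = 100.
print(x[:, -1].mean(), x[:, -1].var())
```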
Drift diffusion models¶
Drift diffusion models are cognitive models explaining how people accumulate evidence to make decisions. In these models, $x_t$ is a decision variable, and the drift $\delta$ represents the mean evidence gained per unit time. Usually we also set boundaries for a binary choice: $-1$ for an incorrect choice or $+1$ for a correct choice. We can then calculate the reaction time for the decision, which is the first time at which $x_t$ hits either of the choice boundaries, at which point the random walk stops. For no drift, we expect a long reaction time and a 50/50 probability of a correct or incorrect choice. High confidence / high information can be represented by a high value of $\delta$. For example, your $\delta$ values may be lower if you are doing a visual discrimination task under a lot of noise (uncertainty). $\delta$ values could also be increased by motivation or by higher-certainty information.
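A sketch of a single drift-diffusion trial (the function name and all parameter values here are illustrative, not a standard implementation): a random walk with drift that stops the first time it crosses either decision boundary.

```python
import numpy as np

rng = np.random.default_rng(5)

def ddm_trial(delta, sigma=0.1, bound=1.0, max_steps=10_000):
    """Simulate one drift-diffusion trial; return (choice, reaction_time).

    delta is the mean evidence gained per step; the walk stops when it
    first hits +bound (correct, choice = 1) or -bound (incorrect, -1).
    Illustrative sketch only.
    """
    x = 0.0
    for t in range(1, max_steps + 1):
        x += delta + sigma * rng.normal()
        if x >= bound:
            return 1, t
        if x <= -bound:
            return -1, t
    return 0, max_steps  # no decision reached within max_steps

trials = [ddm_trial(delta=0.01) for _ in range(500)]
accuracy = np.mean([choice == 1 for choice, _ in trials])
```

With a positive drift, accuracy should be well above chance and most trials terminate quickly; setting `delta=0` instead pushes accuracy toward 50% with longer reaction times.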
An example of neurons performing something that looks like evidence accumulation is shown from Gold & Shadlen (2007). This is from a decision making task in which a monkey watches movies of moving dots, where a certain percentage of the dots move coherently, making the task either very easy (all the dots moving the same way) or not (very few coherent dots).

Signal in noise¶
More generally, we can see other examples of periodic signals contaminated by white noise. For example:

$$x_t = A \cos(2\pi \omega t + \phi) + w_t$$

where $A$ is the amplitude of the signal, $\omega$ is the frequency of the oscillation, and $\phi$ is a phase shift.
The ratio of the amplitude of the signal to the standard deviation of the noise, $A / \sigma_w$, determines the SNR - signal-to-noise ratio. The larger the SNR, the easier it is to recover our signal.
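A sketch of generating such a signal-plus-noise series (amplitude, frequency, and phase values are illustrative): with $A = 2$ and $\sigma_w = 1$ the SNR is 2, and the noisy series remains clearly correlated with the clean sinusoid.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
t = np.arange(n)
A, omega, phi = 2.0, 1 / 50, 0.6 * np.pi  # illustrative amplitude, frequency, phase
sigma_w = 1.0

signal = A * np.cos(2 * np.pi * omega * t + phi)
x = signal + sigma_w * rng.normal(size=n)

snr = A / sigma_w  # amplitude-to-noise-sd ratio, as defined in the notes
```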
Later, we will use various forms of regression to try to recover these signals!
Next week:¶
Measures of dependence! Read SS Chapter 1, sections 1.3-1.7.