A Correlogram tale
In data analysis, we usually start with the descriptive statistical properties of the sample data (e.g. mean, standard deviation, skew, kurtosis, empirical distribution, etc.). These calculations are certainly useful, but they do not account for the order of the observations in the sample data.
Time series analysis demands that we pay attention to order, and thus requires a different type of descriptive statistics: time series descriptive statistics, or simply correlogram analysis. The correlogram analysis examines the serial (time) dependency within the sample data, and focuses on the empirical auto-covariance, auto-correlation, and related statistical tests. Finally, the correlogram is a cornerstone for identifying the model and the model order(s).
What does a plot for auto correlation (ACF) and/or partial auto-correlation (PACF) tell us about the underlying process dynamics?
This tutorial is a bit more theoretical than prior tutorials in the same series, but we will do our best to drive the intuitions home for you.
Background
First, we’ll start with a definition for the auto-correlation function, simplify it, and investigate the theoretical ACF for an ARMA-type of process.
Auto-correlation function (ACF)
By definition, the auto-correlation for lag k is expressed as follows:

$$\rho_k = \frac{\gamma_k}{\gamma_o}$$

Where:

- $\rho_k$ - auto-correlation function for lag k
- $\gamma_k = \mathrm{E}\left[(x_t-\mu)(x_{t-k}-\mu)\right]$ - auto-covariance for lag k
- $\gamma_o = \sigma^2$ - time series variance (unconditional)
- $\mu$ - stationary time series unconditional mean

Furthermore, let’s assume that $\{x_t\}$ is generated from a weak-stationary process with a zero mean (i.e. $\mu = 0$).
For finite sample data, the empirical auto-correlation is expressed as follows:

$$\hat\rho_k = \frac{\hat\gamma_k}{\hat\gamma_o}$$

And

$$\hat\gamma_k = \frac{1}{N}\sum_{t=k+1}^{N}(x_t-\bar x)(x_{t-k}-\bar x)$$

Where:

- $N$ - number of non-missing observations in the time series
- $\bar x$ - the time series sample average
NOTE: Using the sample auto-correlation estimate $\hat\rho_k$ and its standard error (approximately $1/\sqrt{N}$ under the white-noise null), we can easily perform a one-sample mean test to examine its statistical significance. But what about a joint test for a set of auto-correlation factors? For that, we use the Ljung-Box test (white-noise test).
The Ljung-Box test is discussed in great detail as part of our “White-noise tutorial.” Please refer to that paper for more details.
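The empirical estimator and the Ljung-Box statistic above can be sketched in a few lines of Python (this sketch is our own illustration, not part of the original tutorial; the function names are arbitrary):

```python
import random

def sample_acf(x, max_lag):
    """Empirical auto-correlation: rho_hat_k = gamma_hat_k / gamma_hat_0."""
    n = len(x)
    x_bar = sum(x) / n
    dev = [v - x_bar for v in x]
    gamma_0 = sum(d * d for d in dev) / n
    return [sum(dev[t] * dev[t - k] for t in range(k, n)) / n / gamma_0
            for k in range(1, max_lag + 1)]

def ljung_box_q(x, max_lag):
    """Ljung-Box Q statistic; under the white-noise null, Q ~ chi-squared(max_lag)."""
    n = len(x)
    return n * (n + 2) * sum(r * r / (n - k)
                             for k, r in enumerate(sample_acf(x, max_lag), start=1))

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(500)]
q = ljung_box_q(noise, 10)  # for white noise, Q should land near its 10 degrees of freedom
```

As a sanity check, a strictly alternating series has $\hat\rho_1$ close to $-1$, reflecting the strong negative lag-1 dependency.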
Example 1 - MA(q) model
Let’s start with a simple moving average model with order q:

$$x_t = \mu + a_t + \theta_1 a_{t-1} + \theta_2 a_{t-2} + \cdots + \theta_q a_{t-q}$$

Where:

- $a_t \sim \mathrm{i.i.d.}\;N(0,\sigma^2)$

Now, let’s compute the ACF for different lags:

$$\rho_k = \begin{cases}\dfrac{\theta_k + \theta_1\theta_{k+1} + \cdots + \theta_{q-k}\theta_q}{1+\theta_1^2+\theta_2^2+\cdots+\theta_q^2} & 1 \leq k \leq q\\[1ex] 0 & k > q\end{cases}$$

The ACF plot for an MA(q) process is non-zero for the first q lags and drops to zero thereafter.
INTUITION
- The MA has a finite memory of size q
- The ACF plot shows the memory size requirement of the model
- An ARMA model with finite memory can be fully described using an MA type of model. Roughly speaking, the MA model’s coefficient values determine the values of the auto-correlation function.
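The finite-memory cutoff can be checked numerically. The following Python sketch (our own illustration, not from the tutorial) computes the theoretical MA(q) ACF from the coefficients:

```python
def ma_acf(theta, max_lag):
    """Theoretical ACF of x_t = a_t + theta_1*a_{t-1} + ... + theta_q*a_{t-q}.
    With theta_0 = 1: rho_k = sum_i(theta_i * theta_{i+k}) / sum_i(theta_i^2),
    and rho_k = 0 for every k > q (the finite-memory cutoff)."""
    th = [1.0] + list(theta)
    q = len(theta)
    denom = sum(t * t for t in th)
    return [sum(th[i] * th[i + k] for i in range(q - k + 1)) / denom if k <= q else 0.0
            for k in range(1, max_lag + 1)]

# An MA(2) with theta = (0.5, 0.25): non-zero ACF at lags 1 and 2, exactly zero beyond
acf = ma_acf([0.5, 0.25], 5)
```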
Example 2 - AR(1) model
Next, let’s look at a simple auto-regressive (AR) model of order 1:

$$x_t = \phi_o + \phi_1 x_{t-1} + a_t$$

Where:

- $\phi_o$ - constant term; the long-run (unconditional) process mean is $\mu = \dfrac{\phi_o}{1-\phi_1}$
- $1/\phi_1$ - the inverse of the characteristic root of the AR(1) model
- $a_t \sim \mathrm{i.i.d.}\;N(0,\sigma^2)$

Let’s compute the auto-correlation function of an AR(1) process. Assuming $|\phi_1| < 1$ (stationarity), the AR(1) can be represented as an infinite MA model as below:

$$x_t - \mu = \sum_{j=0}^{\infty}\phi_1^{\,j}\,a_{t-j}$$

And the auto-correlation function:

$$\rho_k = \phi_1^{\,k}$$

The ACF plot for an AR type of process is infinite, but it decays exponentially.

INTUITION

- An AR process can be represented by an infinite MA process.
- The AR has infinite memory, but the effect diminishes over time.
- Exponential smoothing functions are special cases of an AR process, and they also possess infinite memory.
- The ACF values can be thought of (roughly speaking) as the coefficient values of the equivalent MA model.
- The conditional variance has no bearing (effect) on the auto-correlation calculations.
- The long-run mean also has no bearing (effect) on the auto-correlations.
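Simulating an AR(1) confirms the geometric decay $\rho_k = \phi_1^k$. This is a Python sketch of ours with arbitrary parameter choices, not part of the original tutorial:

```python
import random

def simulate_ar1(phi, n, seed=7, burn=200):
    """Simulate a zero-mean AR(1): x_t = phi*x_{t-1} + a_t, a_t ~ N(0,1).
    A burn-in period is discarded so the sample starts near stationarity."""
    rng = random.Random(seed)
    x, x_t = [], 0.0
    for t in range(n + burn):
        x_t = phi * x_t + rng.gauss(0.0, 1.0)
        if t >= burn:
            x.append(x_t)
    return x

def sample_acf(x, max_lag):
    """Empirical auto-correlation: rho_hat_k = gamma_hat_k / gamma_hat_0."""
    n = len(x)
    x_bar = sum(x) / n
    dev = [v - x_bar for v in x]
    gamma_0 = sum(d * d for d in dev) / n
    return [sum(dev[t] * dev[t - k] for t in range(k, n)) / n / gamma_0
            for k in range(1, max_lag + 1)]

phi = 0.7
acf = sample_acf(simulate_ar1(phi, 5000), 3)
# Empirically, acf[k-1] should sit close to phi**k: roughly 0.7, 0.49, 0.343
```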
Example 3 - AR(p) model
Now, let’s get a bit more ambitious and look at an AR model with order p:

$$x_t = \phi_o + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + a_t$$

Where:

- $a_t \sim \mathrm{i.i.d.}\;N(0,\sigma^2)$

Let’s compute the auto-correlation function of an AR(p) process. Using partial-fraction decomposition, we break an AR(p) process into a set of p AR(1) processes.

Let’s assume all characteristic roots fall outside the unit circle, and therefore the process is stationary. The ACF is then a mixture of p geometrically decaying terms, one for each characteristic root; complex roots contribute damped sine-wave patterns.

This ACF plot is also infinite, but the actual shape can follow different patterns (e.g. exponential decay, damped oscillation, or a mixture of both).
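For a concrete illustration of the different shapes an AR(p) ACF can take, here is a small Python sketch (ours, not from the tutorial) of the theoretical AR(2) ACF via the Yule-Walker recursion; complex characteristic roots produce a damped sine-wave pattern:

```python
def ar2_acf(phi1, phi2, max_lag):
    """Theoretical ACF of a stationary AR(2), via the Yule-Walker equations:
    rho_1 = phi1 / (1 - phi2), then rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}."""
    rho = [1.0, phi1 / (1.0 - phi2)]        # rho_0, rho_1
    for k in range(2, max_lag + 1):
        rho.append(phi1 * rho[k - 1] + phi2 * rho[k - 2])
    return rho[1:]                          # rho_1 .. rho_max_lag

# phi1=1.2, phi2=-0.8 gives complex characteristic roots: a damped sine-wave ACF
acf = ar2_acf(1.2, -0.8, 10)
```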
Example 4 - ARMA (p,q) model
By now, we see what the ACF plot of a pure MA and AR process looks like, but what about a mixture of the two models?
Question: why do we need to consider a mixture model like ARMA, since we can represent any model as an MA or an AR model? Answer: we are trying to reduce the memory requirement and the complexity of the process by superimposing the two models:

$$x_t = \phi_o + \sum_{i=1}^{p}\phi_i x_{t-i} + a_t + \sum_{j=1}^{q}\theta_j a_{t-j}$$

Where:

- $a_t \sim \mathrm{i.i.d.}\;N(0,\sigma^2)$

Expressing the ARMA(p,q) process in its (infinite) MA representation, we can compute its auto-correlation function using the MA auto-correlation formula.
This is getting intense! Some of you might be wondering why we haven’t used a VAR or state-space representation to simplify the notation. I made a point of staying in the time domain and avoiding new ideas or math tricks, as they would not serve our intention here: inferring the exact AR/MA order from the ACF values by themselves, which is anything but precise.
Partial Auto-correlation function (PACF)
By now, we have seen that identifying the model order (MA or AR) is non-trivial in all but the simplest cases, so we need another tool: the partial auto-correlation function (PACF).
The partial auto-correlation function (PACF) plays an important role in data analysis aimed at identifying the extent of the lag in an auto-regressive model. The use of this function was introduced as part of the Box-Jenkins approach to time series modeling, whereby one could determine the appropriate lag p in an AR(p) model, or in an extended ARIMA(p,d,q) model, by plotting the partial auto-correlation function.

Simply put, the PACF for lag k is the regression coefficient for the k-th term, as shown below:

$$x_t = \phi_{k,1}\,x_{t-1} + \phi_{k,2}\,x_{t-2} + \cdots + \phi_{k,k}\,x_{t-k} + a_t$$

$$\mathrm{PACF}(k) = \phi_{k,k}$$

The PACF assumes the underlying model is an AR(k) and uses multiple regressions to compute the last regression coefficient.

Please note that $\phi_{1,1} = \rho_1$, and that $\phi_{2,2} = \dfrac{\rho_2 - \rho_1^2}{1-\rho_1^2}$.
Quick intuition: the PACF values can be thought of (roughly speaking) as the coefficient values of the equivalent AR model.
How is the PACF helpful to us? Assuming we have an AR(p) process, then the PACF will have significant values for the first p lags, and will drop to zero afterwards.
What about the MA process? The MA process has non-zero PACF values for a (theoretically) infinite number of lags.
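The AR(p) cutoff can be verified with the Durbin-Levinson recursion, which computes each $\phi_{k,k}$ from the ACF without running k separate regressions. This Python sketch is our own illustration, not part of the original tutorial:

```python
def pacf_from_acf(rho):
    """Partial auto-correlation via the Durbin-Levinson recursion.
    `rho` holds theoretical or sample ACF values [rho_1, rho_2, ...];
    the result holds [phi_11, phi_22, ...], the last AR(k) coefficients."""
    pacf = [rho[0]]
    phi = [rho[0]]                                   # current AR(k) coefficients
    for k in range(2, len(rho) + 1):
        num = rho[k - 1] - sum(phi[j] * rho[k - 2 - j] for j in range(k - 1))
        den = 1.0 - sum(phi[j] * rho[j] for j in range(k - 1))
        phi_kk = num / den
        phi = [phi[j] - phi_kk * phi[k - 2 - j] for j in range(k - 1)] + [phi_kk]
        pacf.append(phi_kk)
    return pacf

# Theoretical ACF of a stationary AR(2) with phi1=0.5, phi2=0.3
phi1, phi2 = 0.5, 0.3
rho = [phi1 / (1 - phi2)]
for k in range(1, 6):
    rho.append(phi1 * rho[-1] + phi2 * (rho[-2] if k >= 2 else 1.0))

pacf = pacf_from_acf(rho)  # significant at lags 1-2, (numerically) zero afterwards
```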
Example 5 - MA(1)

Assuming $|\theta_1| < 1$ (invertible), the MA(1) process $x_t = a_t + \theta_1 a_{t-1}$ can be represented as an infinite AR:

$$x_t = \sum_{j=1}^{\infty}(-1)^{j+1}\theta_1^{\,j}\,x_{t-j} + a_t$$
The PACF of an MA process is expected to have significant values for a large number of lags.
How about MA(q)? The same conclusion holds here, since the MA(q) can likewise be inverted to an infinite AR sequence.
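For the MA(1), the PACF even has a closed form, which makes the slow geometric decay explicit. A Python sketch of ours, using the standard textbook result for the $x_t = a_t + \theta_1 a_{t-1}$ convention:

```python
def ma1_pacf(theta, max_lag):
    """Closed-form PACF of an invertible MA(1), x_t = a_t + theta*a_{t-1}:
    phi_kk = -(-theta)^k * (1 - theta^2) / (1 - theta^(2k+2)).
    It never cuts off; it decays geometrically (alternating in sign for theta > 0)."""
    return [-((-theta) ** k) * (1.0 - theta ** 2) / (1.0 - theta ** (2 * k + 2))
            for k in range(1, max_lag + 1)]

pacf = ma1_pacf(0.5, 8)   # pacf[0] equals rho_1 = theta/(1+theta^2) = 0.4
```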
Conclusion
In this tutorial, we discussed the auto and partial auto correlation functions and their role in identifying the order of the underlying ARMA process: ACF for the MA order and the PACF for the AR order.
Furthermore, we showed how more than one model can be used to generate the same ACF (and PACF) plots (i.e. correlogram).
With the exception of trivial cases, the process of identifying the proper model order is never clear-cut. It demands that we entertain several candidate models, fit their parameters, validate the assumptions, compare their fits, and finally select the best one.
Attachment | Size
---|---
Correlogram.xlsx | 282.47 KB
Correlogram_Analysis.pdf | 378.6 KB