Autoregressive conditional heteroskedasticity

In econometrics, autoregressive conditional heteroskedasticity (ARCH) models are used to characterize and model observed time series. They are used whenever there is reason to believe that, at any point in a series, the error terms will have a characteristic size or variance. In particular, ARCH models assume the variance of the current error term or innovation to be a function of the actual sizes of the previous time periods' error terms: often the variance is related to the squares of the previous innovations.

Such models are often called ARCH models (Engle, 1982),[1] although a variety of other acronyms are applied to particular structures of model which have a similar basis. ARCH models are employed commonly in modeling financial time series that exhibit time-varying volatility clustering, i.e. periods of swings followed by periods of relative calm. ARCH-type models are sometimes considered to be part of the family of stochastic volatility models but strictly this is incorrect since at time t the volatility is completely pre-determined (deterministic) given previous values.[2]


  • ARCH(q) model specification
  • GARCH
    • GARCH(p, q) model specification
  • NGARCH
  • IGARCH
  • EGARCH
  • GARCH-M
  • QGARCH
  • GJR-GARCH
  • TGARCH model
  • fGARCH
  • COGARCH
  • References
  • Further reading

ARCH(q) model specification

Suppose one wishes to model a time series using an ARCH process. Let \epsilon_t denote the error terms (return residuals, with respect to a mean process), i.e. the series terms. These \epsilon_t are split into a stochastic piece z_t and a time-dependent standard deviation \sigma_t characterizing the typical size of the terms, so that

\epsilon_t = \sigma_t z_t

The random variable z_t is a strong white noise process. The series \sigma_t^2 is modelled by

\sigma_t^2=\alpha_0+\alpha_1 \epsilon_{t-1}^2+\cdots+\alpha_q \epsilon_{t-q}^2 = \alpha_0 + \sum_{i=1}^q \alpha_{i} \epsilon_{t-i}^2

where \alpha_0 > 0 and \alpha_i \ge 0, i > 0.
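As a concrete illustration, the recursion above can be simulated directly. The sketch below (plain NumPy; the function name and parameter values are illustrative, not from the source) generates an ARCH(1) path:

```python
import numpy as np

# Minimal ARCH(1) simulation sketch: eps_t = sigma_t * z_t with
# sigma_t^2 = a0 + a1 * eps_{t-1}^2.  Parameter values are illustrative.
def simulate_arch1(a0=0.2, a1=0.5, n=5000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)          # strong white noise z_t
    eps = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = a0 / (1.0 - a1)         # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = a0 + a1 * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2

eps, sigma2 = simulate_arch1()
```

With \alpha_1 < 1 the process is covariance-stationary and the sample variance of \epsilon_t settles near \alpha_0/(1 - \alpha_1); large shocks are followed by large conditional variances, producing the volatility clustering described above.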

An ARCH(q) model can be estimated using ordinary least squares. A methodology to test for the lag length of ARCH errors using the Lagrange multiplier test was proposed by Engle (1982). This procedure is as follows:

  1. Estimate the best fitting autoregressive model AR(q) y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \epsilon_t .
  2. Obtain the squares of the error \hat \epsilon^2 and regress them on a constant and q lagged values:
    \hat \epsilon_t^2 = \hat \alpha_0 + \sum_{i=1}^{q} \hat \alpha_i \hat \epsilon_{t-i}^2
    where q is the length of ARCH lags.
  3. The null hypothesis is that, in the absence of ARCH components, \alpha_i = 0 for all i = 1, \ldots, q. The alternative hypothesis is that, in the presence of ARCH components, at least one of the estimated \alpha_i coefficients must be significant. In a sample of T residuals under the null hypothesis of no ARCH errors, the test statistic T'R^2 follows a \chi^2 distribution with q degrees of freedom, where T' is the number of equations in the model which fits the residuals vs the lags (i.e. T' = T - q). If T'R^2 is greater than the \chi^2 table value, we reject the null hypothesis and conclude there is an ARCH effect in the ARMA model. If T'R^2 is smaller than the \chi^2 table value, we do not reject the null hypothesis.
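The auxiliary regression in steps 2 and 3 can be sketched with ordinary least squares. This is a hedged illustration (the function name and the simulated input are mine, not a standard API):

```python
import numpy as np

# Engle's LM test sketch: regress squared residuals on q of their own lags,
# then form T' * R^2, which is chi^2(q) under the null of no ARCH effects.
def arch_lm_stat(resid, q):
    e2 = resid ** 2
    y = e2[q:]                                   # T' = T - q usable rows
    X = np.column_stack([np.ones(len(y))] +
                        [e2[q - i:len(e2) - i] for i in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_aux = y - X @ beta
    r2 = 1.0 - (resid_aux @ resid_aux) / ((y - y.mean()) @ (y - y.mean()))
    return len(y) * r2

rng = np.random.default_rng(1)
stat = arch_lm_stat(rng.standard_normal(2000), q=2)   # homoskedastic noise
```

For homoskedastic noise the statistic is an unremarkable draw from \chi^2(2); for data simulated from an ARCH process it becomes large and the null is rejected.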


GARCH

If an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model (Bollerslev, 1986).

In that case, the GARCH(p, q) model (where p is the order of the GARCH terms \sigma^2 and q is the order of the ARCH terms \epsilon^2) is given by

\sigma_t^2=\alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 + \beta_1 \sigma_{t-1}^2 + \cdots + \beta_p\sigma_{t-p}^2 = \alpha_0 + \sum_{i=1}^q \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^p \beta_i \sigma_{t-i}^2
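Given parameter values, the GARCH(1,1) special case of this recursion is a simple filter over the residuals. The sketch below uses illustrative coefficients:

```python
import numpy as np

# GARCH(1,1) variance filter sketch:
# sigma_t^2 = a0 + a1 * eps_{t-1}^2 + b1 * sigma_{t-1}^2.
def garch11_filter(eps, a0, a1, b1):
    sigma2 = np.empty(len(eps))
    sigma2[0] = a0 / (1.0 - a1 - b1)   # unconditional variance as start-up
    for t in range(1, len(eps)):
        sigma2[t] = a0 + a1 * eps[t - 1] ** 2 + b1 * sigma2[t - 1]
    return sigma2

s2 = garch11_filter(np.array([0.1, -0.3, 0.2, 0.05]), a0=0.1, a1=0.1, b1=0.8)
# Step by hand: sigma_0^2 = 0.1 / (1 - 0.9) = 1.0, then
# sigma_1^2 = 0.1 + 0.1 * 0.1^2 + 0.8 * 1.0 = 0.901, and so on.
```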

Generally, when testing for heteroskedasticity in econometric models, the best test is the White test. However, when dealing with time series data, this means testing for ARCH errors (as described above) and GARCH errors (below).

Exponentially weighted moving average (EWMA) is an alternative model in a separate class of exponential smoothing models. It can be an alternative to GARCH modelling, as it has some attractive properties such as a greater weight upon more recent observations, but also some drawbacks such as an arbitrary decay factor that introduces subjectivity into the estimation.
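For comparison, a minimal EWMA variance recursion might look as follows; the decay factor \lambda is user-chosen, which is precisely the arbitrary element mentioned above (the function and values are illustrative):

```python
# EWMA variance sketch: sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2.
# lam (the decay factor) is picked by the user; 0.94 is a common convention
# for daily data, but nothing in the data dictates it.
def ewma_variance(returns, lam=0.94):
    sigma2 = [returns[0] ** 2]          # simple start-up choice
    for t in range(1, len(returns)):
        sigma2.append(lam * sigma2[-1] + (1 - lam) * returns[t - 1] ** 2)
    return sigma2

s2 = ewma_variance([1.0, 2.0, 0.0], lam=0.5)
# By hand: [1.0, 0.5*1 + 0.5*1 = 1.0, 0.5*1 + 0.5*4 = 2.5]
```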

GARCH(p, q) model specification

The lag length p of a GARCH(p, q) process is established in three steps:

  1. Estimate the best fitting AR(q) model
    y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \epsilon_t .
  2. Compute and plot the autocorrelations of \epsilon^2 by
    \rho(i) = \frac{\sum_{t=i+1}^{T} (\hat\epsilon_t^2 - \overline{\hat\epsilon^2})(\hat\epsilon_{t-i}^2 - \overline{\hat\epsilon^2})}{\sum_{t=1}^{T} (\hat\epsilon_t^2 - \overline{\hat\epsilon^2})^2}
    where \overline{\hat\epsilon^2} is the sample mean of the squared residuals.
  3. The asymptotic (that is, for large samples) standard deviation of \rho(i) is 1/\sqrt{T}. Individual values that are larger than this indicate GARCH errors. To estimate the total number of lags, use the Ljung-Box test until their values are no longer significant at, say, the 10% level. The Ljung-Box Q-statistic follows a \chi^2 distribution with n degrees of freedom if the squared residuals \epsilon^2_t are uncorrelated. It is recommended to consider up to T/4 values of n. The null hypothesis states that there are no ARCH or GARCH errors. Rejecting the null thus means that such errors exist in the conditional variance.
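Steps 2 and 3 can be sketched directly; the implementation below (illustrative, plain NumPy) computes sample autocorrelations of the squared residuals and the Ljung-Box Q-statistic:

```python
import numpy as np

# Sample autocorrelation of x at a given lag.
def acf(x, lag):
    xc = x - x.mean()
    return float(np.sum(xc[lag:] * xc[:-lag]) / np.sum(xc * xc))

# Ljung-Box Q over n lags of the squared residuals; chi^2(n) under the
# null that the squared residuals are uncorrelated.
def ljung_box_q(resid, n):
    e2 = resid ** 2
    T = len(e2)
    lags = np.arange(1, n + 1)
    rho = np.array([acf(e2, int(k)) for k in lags])
    return T * (T + 2) * np.sum(rho ** 2 / (T - lags))

rng = np.random.default_rng(2)
q_stat = ljung_box_q(rng.standard_normal(1000), n=5)
```

For i.i.d. noise the statistic is an ordinary \chi^2(5) draw; for residuals with ARCH or GARCH structure the autocorrelations of \epsilon_t^2 are positive and Q becomes large, rejecting the null.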


NGARCH

Nonlinear GARCH (NGARCH), also known as nonlinear asymmetric GARCH(1,1) (NAGARCH), was introduced by Engle and Ng in 1993:

\sigma_{t}^2 = \omega + \alpha (\epsilon_{t-1} - \theta \sigma_{t-1})^2 + \beta \sigma_{t-1}^2

with \alpha, \beta \geq 0 and \omega > 0.

For stock returns, the parameter \theta is usually estimated to be positive; in this case, it reflects the leverage effect, signifying that negative returns increase future volatility by a larger amount than positive returns of the same magnitude.[3][4]

This model should not be confused with the NARCH model, together with the NGARCH extension, introduced by Higgins and Bera in 1992.
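One step of the NGARCH recursion makes the asymmetry visible; the sketch below uses illustrative parameter values:

```python
import numpy as np

# One NGARCH(1,1) update: sigma_t^2 = w + a*(eps - theta*sigma)^2 + b*sigma^2.
# With theta > 0, a negative eps_{t-1} lies further from theta*sigma_{t-1},
# so bad news raises next-period variance more than good news (leverage).
def ngarch_step(sigma2_prev, eps_prev, w=0.05, a=0.1, b=0.8, theta=0.5):
    s = np.sqrt(sigma2_prev)
    return w + a * (eps_prev - theta * s) ** 2 + b * sigma2_prev

up = ngarch_step(1.0, +1.0)    # 0.05 + 0.1*(0.5)^2 + 0.8 = 0.875
down = ngarch_step(1.0, -1.0)  # 0.05 + 0.1*(1.5)^2 + 0.8 = 1.075
```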


IGARCH

Integrated generalized autoregressive conditional heteroskedasticity (IGARCH) is a restricted version of the GARCH model, where the persistent parameters sum up to one, and therefore there is a unit root in the GARCH process. The condition for this is

\sum^p_{i=1} \beta_{i} + \sum_{i=1}^q \alpha_{i} = 1.
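A trivial check of this unit-root condition (illustrative coefficients); note that the EWMA model discussed earlier is, in this sense, the \alpha_0 = 0 special case of IGARCH(1,1) with \alpha_1 = 1 - \lambda and \beta_1 = \lambda:

```python
# Check whether a parameter set satisfies the IGARCH unit-root condition
# sum(alpha) + sum(beta) == 1 (coefficients below are illustrative).
def is_igarch(alphas, betas, tol=1e-12):
    return abs(sum(alphas) + sum(betas) - 1.0) < tol

a = is_igarch([0.2], [0.8])        # EWMA-style parameterization: True
b = is_igarch([0.1, 0.1], [0.7])   # persistence 0.9 < 1: False
```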


EGARCH

The exponential generalized autoregressive conditional heteroskedastic (EGARCH) model by Nelson (1991) is another form of the GARCH model. Formally, an EGARCH(p, q):

\log\sigma_{t}^{2} = \omega + \sum_{k=1}^{q} \beta_{k} g(Z_{t-k}) + \sum_{k=1}^{p} \alpha_{k} \log\sigma_{t-k}^{2}

where g(Z_{t})=\theta Z_{t}+\lambda(|Z_{t}|-E(|Z_{t}|)), \sigma_{t}^{2} is the conditional variance, \omega, \beta, \alpha, \theta and \lambda are coefficients, and Z_{t} may be a standard normal variable or come from a generalized error distribution. The formulation for g(Z_{t}) allows the sign and the magnitude of Z_{t} to have separate effects on the volatility. This is particularly useful in an asset pricing context.[5]

Since \log\sigma_{t}^{2} may be negative, there are fewer restrictions on the parameters than in the standard GARCH model.
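A single EGARCH(1,1) update illustrates this: the recursion lives in \log\sigma_t^2, so the implied variance \exp(\log\sigma_t^2) is positive whatever the coefficient signs. Parameter values below are illustrative; for standard normal Z_t, E|Z_t| = \sqrt{2/\pi}.

```python
import numpy as np

# One EGARCH(1,1) update of the log-variance:
# log sigma_t^2 = w + beta * g(z_{t-1}) + alpha * log sigma_{t-1}^2,
# with g(z) = theta*z + lam*(|z| - E|z|) separating sign and magnitude.
def egarch_step(log_s2_prev, z_prev, w=-0.1, alpha=0.9, beta=0.2,
                theta=-0.3, lam=0.25):
    g = theta * z_prev + lam * (abs(z_prev) - np.sqrt(2.0 / np.pi))
    return w + beta * g + alpha * log_s2_prev

ls2 = egarch_step(0.0, -1.0)   # negative shock; theta < 0 raises g
sigma2 = np.exp(ls2)           # positive by construction
```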


GARCH-M

The GARCH-in-mean (GARCH-M) model adds a heteroskedasticity term into the mean equation. It has the specification:

y_t = \beta x_t + \lambda \sigma_t + \epsilon_t

The residual \epsilon_t is defined as

\epsilon_t = \sigma_t z_t
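A toy illustration of the mean equation (all values illustrative; in practice \sigma_t would come from a GARCH filter and the coefficients from maximum likelihood):

```python
import numpy as np

# GARCH-M mean equation sketch: y_t = beta * x_t + lam * sigma_t + eps_t.
# The risk-premium term lam * sigma_t makes the expected value of y_t
# depend on the conditional volatility.
rng = np.random.default_rng(3)
beta_coef, lam = 0.5, 0.3
x = np.ones(4)                            # constant regressor, for illustration
sigma = np.array([1.0, 1.5, 2.0, 1.2])    # pretend output of a GARCH filter
eps = sigma * rng.standard_normal(4)      # eps_t = sigma_t * z_t
y = beta_coef * x + lam * sigma + eps
```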


QGARCH

The Quadratic GARCH (QGARCH) model by Sentana (1995) is used to model asymmetric effects of positive and negative shocks.

In the example of a GARCH(1,1) model, the residual process \epsilon_t is

\epsilon_t = \sigma_t z_t

where z_t is i.i.d. and

\sigma_t^2 = K + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2 + \phi \epsilon_{t-1}
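One update of this recursion makes the asymmetry visible (illustrative parameter values; with \phi < 0 a negative shock raises \sigma_t^2 more than a positive shock of the same size):

```python
# One QGARCH(1,1) update: sigma_t^2 = K + a*eps^2 + b*sigma^2 + phi*eps.
# The linear phi*eps term is what breaks the symmetry of plain GARCH.
def qgarch_step(sigma2_prev, eps_prev, K=0.05, a=0.1, b=0.8, phi=-0.04):
    return K + a * eps_prev ** 2 + b * sigma2_prev + phi * eps_prev

up = qgarch_step(1.0, +1.0)    # 0.05 + 0.1 + 0.8 - 0.04 = 0.91
down = qgarch_step(1.0, -1.0)  # 0.05 + 0.1 + 0.8 + 0.04 = 0.99
```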


GJR-GARCH

Similar to QGARCH, the Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model by Glosten, Jagannathan and Runkle (1993) also models asymmetry in the ARCH process. The suggestion is to model \epsilon_t = \sigma_t z_t where z_t is i.i.d., and

\sigma_t^2 = K + \delta \sigma_{t-1}^2 + \alpha \epsilon_{t-1}^2 + \phi \epsilon_{t-1}^2 I_{t-1}

where I_{t-1} = 0 if \epsilon_{t-1} \ge 0, and I_{t-1} = 1 if \epsilon_{t-1} < 0.
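A one-step sketch with illustrative coefficients shows the indicator at work:

```python
# One GJR-GARCH(1,1) update; the indicator adds phi * eps^2 only when the
# previous shock was negative (parameter values are illustrative).
def gjr_step(sigma2_prev, eps_prev, K=0.05, delta=0.8, alpha=0.05, phi=0.1):
    indicator = 1.0 if eps_prev < 0 else 0.0
    return K + delta * sigma2_prev + (alpha + phi * indicator) * eps_prev ** 2

up = gjr_step(1.0, +1.0)    # 0.05 + 0.8 + 0.05        = 0.90
down = gjr_step(1.0, -1.0)  # 0.05 + 0.8 + 0.05 + 0.10 = 1.00
```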

TGARCH model

The Threshold GARCH (TGARCH) model by Zakoian (1994) is similar to GJR-GARCH, but the specification is on the conditional standard deviation instead of the conditional variance:

\sigma_t = K + \delta \sigma_{t-1} + \alpha_1^{+} \epsilon_{t-1}^{+} + \alpha_1^{-} \epsilon_{t-1}^{-}

where \epsilon_{t-1}^{+} = \epsilon_{t-1} if \epsilon_{t-1} > 0, and \epsilon_{t-1}^{+} = 0 if \epsilon_{t-1} \le 0. Likewise, \epsilon_{t-1}^{-} = \epsilon_{t-1} if \epsilon_{t-1} \le 0, and \epsilon_{t-1}^{-} = 0 if \epsilon_{t-1} > 0.
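A one-step sketch (illustrative coefficients; under this sign convention \alpha_1^{-} is taken negative so that negative shocks, for which \epsilon_{t-1}^{-} \le 0, still increase \sigma_t):

```python
# One TGARCH(1,1) update on the conditional standard deviation itself:
# sigma_t = K + delta*sigma_{t-1} + a_plus*eps_plus + a_minus*eps_minus.
# eps_minus is <= 0 by definition, so a_minus < 0 makes negative shocks
# add to sigma_t (here by more than positive shocks: |a_minus| > a_plus).
def tgarch_step(sigma_prev, eps_prev, K=0.05, delta=0.8,
                a_plus=0.05, a_minus=-0.15):
    eps_pos = eps_prev if eps_prev > 0 else 0.0
    eps_neg = eps_prev if eps_prev <= 0 else 0.0
    return K + delta * sigma_prev + a_plus * eps_pos + a_minus * eps_neg

up = tgarch_step(1.0, +1.0)    # 0.05 + 0.8 + 0.05 = 0.90
down = tgarch_step(1.0, -1.0)  # 0.05 + 0.8 + 0.15 = 1.00
```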


fGARCH

Hentschel's fGARCH model,[6] also known as Family GARCH, is an omnibus model that nests a variety of other popular symmetric and asymmetric GARCH models, including APARCH, GJR, AVGARCH, and NGARCH.


COGARCH

In 2004, Claudia Klüppelberg, Alexander Lindner and Ross Maller proposed a continuous-time generalization of the discrete-time GARCH(1,1) process. The idea is to start with the GARCH(1,1) model equations

\epsilon_t = \sigma_t z_t,
\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon^2_{t-1} + \beta_1 \sigma^2_{t-1} = \alpha_0 + \alpha_1 \sigma_{t-1}^2 z_{t-1}^2 + \beta_1 \sigma^2_{t-1},

and then to replace the strong white noise process z_t by the infinitesimal increments \mathrm{d}L_t of a Lévy process (L_t)_{t\geq0} , and the squared noise process z^2_t by the increments \mathrm{d}[L,L]^\mathrm{d}_t , where

[L,L]^\mathrm{d}_t = \sum_{s\in[0,t]} (\Delta L_s)^2,\quad t\geq0,

is the purely discontinuous part of the quadratic variation process of L . The result is the following system of stochastic differential equations:

\mathrm{d}G_t = \sigma_{t-} \,\mathrm{d}L_t,
\mathrm{d}\sigma_t^2 = (\beta - \eta \sigma^2_t)\,\mathrm{d}t + \varphi \sigma_{t-}^2 \,\mathrm{d}[L,L]^\mathrm{d}_t,

where the positive parameters \beta , \eta and \varphi are determined by \alpha_0 , \alpha_1 and \beta_1 . Now given some initial condition (G_0,\sigma^2_0) , the system above has a pathwise unique solution (G_t,\sigma^2_t)_{t\geq0} which is then called the continuous-time GARCH (COGARCH) model.[7]
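A rough discretized sketch of this system, driving L with a compound-Poisson-style jump process (an Euler scheme with illustrative parameters, not the exact solution):

```python
import numpy as np

# Euler-type discretization of the COGARCH(1,1) system:
#   dG        = sigma_{t-} dL,
#   d sigma^2 = (beta - eta*sigma^2) dt + phi*sigma_{t-}^2 d[L,L]^d.
# L is approximated by rare normal jumps (compound Poisson with rate `rate`);
# between jumps sigma^2 decays deterministically towards beta/eta.
def simulate_cogarch(beta=0.04, eta=0.05, phi=0.02, rate=1.0,
                     T=100.0, dt=0.01, seed=4):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    G = np.zeros(n)
    sigma2 = np.full(n, beta / eta)        # start at the mean-reversion level
    for t in range(1, n):
        jump = rng.standard_normal() if rng.random() < rate * dt else 0.0
        G[t] = G[t - 1] + np.sqrt(sigma2[t - 1]) * jump
        sigma2[t] = (sigma2[t - 1]
                     + (beta - eta * sigma2[t - 1]) * dt
                     + phi * sigma2[t - 1] * jump ** 2)
    return G, sigma2

G, sigma2 = simulate_cogarch()
```

Between jumps the variance relaxes towards \beta/\eta at rate \eta; each squared jump of L kicks it up by a factor proportional to \varphi, mirroring the ARCH term of the discrete model.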


References

  1. ^ Engle, Robert F. (1982). "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation". Econometrica 50 (4): 987–1007.
  2. ^  
  3. ^ Engle, R.F.; Ng, V.K. (1993). "Measuring and Testing the Impact of News on Volatility". Journal of Finance 48 (5): 1749–1778.
  4. ^ Posedel, Petra (2006). "Analysis Of The Exchange Rate And Pricing Foreign Currency Options On The Croatian Market: The Ngarch Model As An Alternative To The Black Scholes Model" (PDF). Financial Theory and Practice 30 (4): 347–368. 
  5. ^ St. Pierre, Eilleen F. (1998). "Estimating EGARCH-M Models: Science or Art". The Quarterly Review of Economics and Finance 38 (2): 167–180.  
  6. ^ Hentschel, Ludger (1995). "All in the family: Nesting symmetric and asymmetric GARCH models". Journal of Financial Economics 39 (1): 71–104.
  7. ^ Klüppelberg, C.; Lindner, A.; Maller, R. (2004). "A continuous-time GARCH process driven by a Lévy process: stationarity and second-order behaviour". Journal of Applied Probability 41 (3): 601–622.

Further reading

  • Bollerslev, Tim (2008). "Glossary to ARCH (GARCH)" (PDF). working paper. 
  • Enders, W. (2004). "Modelling Volatility". Applied Econometrics Time Series (Second ed.). John-Wiley & Sons. pp. 108–155.  
  • Engle, Robert F. (1982). "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation". Econometrica 50 (4): 987–1007. (the paper which sparked the general interest in ARCH models)
  • Engle, Robert F. (1995). ARCH: selected readings. Oxford University Press.  
  • Engle, Robert F. (2001). "GARCH 101: The Use of ARCH/GARCH Models in Applied Econometrics".   (a short, readable introduction)
  • Gujarati, D. N. (2003). Basic Econometrics. pp. 856–862. 
  • Hacker, R. S.; Hatemi-J, A. (2005). "A Test for Multivariate ARCH Effects". Applied Economics Letters 12 (7): 411–417.  
  • Nelson, D. B. (1991). "Conditional Heteroskedasticity in Asset Returns: A New Approach". Econometrica 59 (2): 347–370.