# GARCH estimation using maximum likelihood

[This article was first published on **We think therefore we R**, and kindly contributed to R-bloggers.]

In my previous post I presented the findings from my finance project under the guidance of Dr Susan Thomas. The results in my paper suggested that there are macroeconomic variables, particularly the INR/USD exchange rate, that help us understand the dynamics of stock returns. Although the results I obtained were significant at the 5% level, the assertion needs more robust checks before we can consider the matter closed. One possible source of discrepancy we identified was that the error terms could be heteroskedastic, meaning that one of the assumptions of classical linear regression model (CLRM) estimation, viz. homoskedasticity, is violated. In the presence of heteroskedastic errors, the estimated coefficients can have underestimated standard errors, which in turn might lead to a false acceptance/rejection of the null hypothesis. There are several parametric as well as non-parametric ways to remove the effect of heteroskedasticity in the error terms on the coefficient estimates. However, the method I adopted to correct for this effect in my model was the path-breaking conditional heteroskedasticity modelling propagated by Robert Engle, for which he was awarded the Nobel prize in 2003.

The idea behind the autoregressive conditional heteroskedasticity (ARCH) model is quite simple and straightforward. One needs to model the mean equation (this is your regression model), and along with it there has to be a specification for modelling the squares of the errors. Suppose we have a simple AR regression:

z_{t} = φ z_{t-1} + e_{t}  (Mean equation)

Now, to model the conditional variance of the error terms, all we need to do is a simultaneous estimation of the squares of the error terms in the following manner:

e_{t} = ν_{t} h_{t}^{1/2}, where ν_{t} ~ NID(0, 1)

h_{t} = α_{0} + α_{1} e_{t-1}^{2}  (Variance equation)

The above specification of the mean and variance equations is termed an AR(1)-ARCH(1) specification. This simultaneous estimation takes into account the particular form of heteroskedasticity (ARCH(1) in this case) and estimates the φ coefficient accordingly. Now the above simple specification poses another problem of lag selection: what lag should be considered for the variance equation? To deal with this and several other shortcomings of the simple ARCH model, Bollerslev (1986) proposed a generalized ARCH model (GARCH). The only difference is that the variance equation now becomes:

h_{t} = α_{0} + α_{1} e_{t-1}^{2} + β h_{t-1}

This is nothing but a GARCH(1,1) model. The beauty of this specification is that a GARCH(1,1) model can be expressed as an ARCH(∞) model. For those interested in learning more about ARCH and GARCH processes and the mathematics behind them, Dr Krishnan's notes provide an in-depth treatment of the matter. The reason ARCH and GARCH models rose to such prominence is that they offered a method of not just correcting the standard errors of the estimates (like other robust correction methods) but also providing a functional form for modelling the variance. This idea of modelling the variance was heavily adopted by financial researchers. Taking the standard deviation (or variance) of asset prices as a crude estimate of volatility, financial researchers used ARCH and GARCH models to model the volatility of asset prices and other high-frequency indices.
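To see why a GARCH(1,1) is an ARCH(∞), substitute the variance recursion into itself repeatedly (assuming 0 < β < 1 so the substitution converges):

h_{t} = α_{0} + α_{1} e_{t-1}^{2} + β (α_{0} + α_{1} e_{t-2}^{2} + β h_{t-2})

     = α_{0}/(1 − β) + α_{1} Σ_{i=1}^{∞} β^{i-1} e_{t-i}^{2}

So one lag of h absorbs an infinite, geometrically decaying set of ARCH lags, which is what resolves the lag-selection problem of the plain ARCH model.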

Now that the idea behind GARCH modelling is clear, let us plunge right into writing the conditional maximum likelihood function for a GARCH process.
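The original R code from the post is not reproduced in this extract. As an illustrative stand-in for what such a routine looks like, here is a minimal conditional maximum likelihood estimator for a zero-mean GARCH(1,1), written in Python with numpy and scipy; the function names, starting values, and simulated parameters below are all my own choices, not the author's code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def simulate_garch(n, a0, a1, b, burn=500):
    """Simulate e_t = v_t * sqrt(h_t) with h_t = a0 + a1*e_{t-1}^2 + b*h_{t-1}."""
    e = np.zeros(n + burn)
    h = np.zeros(n + burn)
    h[0] = a0 / (1 - a1 - b)          # start at the unconditional variance
    for t in range(1, n + burn):
        h[t] = a0 + a1 * e[t - 1] ** 2 + b * h[t - 1]
        e[t] = rng.standard_normal() * np.sqrt(h[t])
    return e[burn:]                   # drop burn-in so the start-up fades

def neg_loglik(params, e):
    """Negative conditional Gaussian log-likelihood of a GARCH(1,1)."""
    a0, a1, b = params
    if a0 <= 0 or a1 < 0 or b < 0 or a1 + b >= 1:
        return np.inf                 # rule out non-stationary parameters
    n = len(e)
    h = np.empty(n)
    h[0] = np.var(e)                  # a common initialisation for h_1
    for t in range(1, n):
        h[t] = a0 + a1 * e[t - 1] ** 2 + b * h[t - 1]
    # sum of -log f(e_t | past) for a N(0, h_t) density
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + e ** 2 / h)

e = simulate_garch(5000, a0=0.1, a1=0.1, b=0.8)
res = minimize(neg_loglik, x0=[0.05, 0.05, 0.9], args=(e,),
               method="Nelder-Mead")
print(res.x)  # estimated (α0, α1, β); the simulated truth was (0.1, 0.1, 0.8)
```

The estimation itself is just "minimize the negative log-likelihood": the variance equation lives entirely inside `neg_loglik`, which is what makes it easy to swap in a different functional form for h_t later.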

Utkarsh was the generous one who provided me with the basic structure of the code, which I then customized to solve the problem at hand. Let me confess that similar, or in fact more accurate, coefficient estimates can be obtained using the unconditional maximum likelihood estimation offered by any of the high-priced packages like EVIEWS, RATS, SAS, etc., but my motivation (apart from learning, of course) was to figure out a way that would give me the flexibility of modelling any functional form of heteroskedasticity. The code above estimates a GARCH(1,1) model, but a simple change in the definition of h_{t} lets it fit any arbitrary functional form.

P.S. Sincere thanks to Utkarsh for letting me share the code. Any feedback/suggestions are welcome.
