Nonlinear GMM with R – Example with a logistic regression

In this post, I will explain how you can use the R gmm package to estimate a non-linear model, and more specifically a logit model. For my research, I have to estimate Euler equations using the Generalized Method of Moments. I contacted Pierre Chaussé, the creator of the gmm package, for help, since I was having some difficulties. I am very grateful for his help (without him, I'd probably still be trying to estimate my model!).

Theoretical background, motivation and data set

I will not dwell on the theory too much, because you can find everything you need here. I think it's more interesting to try to understand why someone would use the Generalized Method of Moments instead of maximizing the log-likelihood. Well, in some cases, deriving the log-likelihood can be quite complicated, as can be the case for arbitrary non-linear models (for example, if you want to estimate the parameters of a very non-linear utility function). Also, moment conditions are sometimes readily available, in which case using GMM instead of MLE is straightforward. And finally, GMM is… well, a very general method: most popular estimators (OLS, instrumental variables, and maximum likelihood, among others) can be obtained as special cases of the GMM estimator, which makes it quite useful.

Another question that I think is important to answer is: why this post? Well, because it's exactly the kind of post I would have loved to find two months ago, when I was beginning to work with GMM. Most posts I found presented the gmm package with very simple, almost trivial examples, which weren't very helpful. The example presented below is not very complicated per se, but it is much closer to a real-world problem than most of what is out there. At least, I hope you will find it useful!

For illustration purposes, I'll use data from Marno Verbeek's A Guide to Modern Econometrics, used in the illustration on page 197. You can download the data from the book's companion page here, under the section Data sets, or from the Ecdat package in R. I use the data set from Gretl, though, because the dummy variables are numeric (instead of class factor), which makes life easier when writing your own functions. You can get the data set here.

Implementation in R

I don't estimate the exact same model, but only use a subset of the variables available in the data set. Keep in mind that this post is just for illustration purposes.

First load the gmm package and load the data set:

require("gmm")
data <- read.table("path/to/data/benefits.R", header = T)

attach(data)

We can then estimate a logit model with the glm() function:

native <- glm(y ~ age + age2 + dkids + dykids + head + male + married + rr +  rr2, family = binomial(link = "logit"), na.action = na.pass)

summary(native)

## 
## Call:
## glm(formula = y ~ age + age2 + dkids + dykids + head + male + 
##     married + rr + rr2, family = binomial(link = "logit"), na.action = na.pass)
## 
## Deviance Residuals: 
##    Min      1Q  Median      3Q     Max  
## -1.889  -1.379   0.788   0.896   1.237  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)   
## (Intercept) -1.00534    0.56330   -1.78   0.0743 . 
## age          0.04909    0.02300    2.13   0.0328 * 
## age2        -0.00308    0.00293   -1.05   0.2924   
## dkids       -0.10922    0.08374   -1.30   0.1921   
## dykids       0.20355    0.09490    2.14   0.0320 * 
## head        -0.21534    0.07941   -2.71   0.0067 **
## male        -0.05988    0.08456   -0.71   0.4788   
## married      0.23354    0.07656    3.05   0.0023 **
## rr           3.48590    1.81789    1.92   0.0552 . 
## rr2         -5.00129    2.27591   -2.20   0.0280 * 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 6086.1  on 4876  degrees of freedom
## Residual deviance: 5983.9  on 4867  degrees of freedom
## AIC: 6004
## 
## Number of Fisher Scoring iterations: 4

Now comes the interesting part: how can you estimate such a non-linear model with the gmm() function from the gmm package?

For every estimation with the Generalized Method of Moments, you will need valid moment conditions. It turns out that in the case of the logit model, this moment condition is quite simple:

$$ E\left[ X' \left( Y - \Lambda(X'\theta) \right) \right] = 0 $$

where \( \Lambda() \) is the logistic function. This condition is nothing more than the first-order condition (the score) of the logit log-likelihood, as a quick sketch shows.
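Writing \( \Lambda'(z) = \Lambda(z)\left(1 - \Lambda(z)\right) \), the log-likelihood and its gradient are

$$ \ell(\theta) = \sum_{i=1}^{n} \left[ y_i \log \Lambda(x_i'\theta) + (1 - y_i) \log\left(1 - \Lambda(x_i'\theta)\right) \right], \qquad \frac{\partial \ell(\theta)}{\partial \theta} = \sum_{i=1}^{n} x_i \left( y_i - \Lambda(x_i'\theta) \right), $$

so the sample analogue of the moment condition above is exactly the score equation solved by the MLE, which is why GMM and glm() should give very similar answers here.

Let's translate this condition into code. First, we need the logistic function: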

logistic <- function(theta, data) {
    # logistic cdf evaluated at the linear index data %*% theta
    return(1/(1 + exp(-data %*% theta)))
}

and let's also define a new data matrix, to make our life easier with the moment conditions (don't forget to add a column of ones for the intercept, hence the 1 after y):

dat <- data.matrix(cbind(y, 1, age, age2, dkids, dykids, head, male, married, 
    rr, rr2))

and now the moment condition itself:

moments <- function(theta, data) {
    # first column is the dependent variable, the remaining ones are the regressors
    y <- as.numeric(data[, 1])
    x <- data.matrix(data[, 2:11])
    # one moment condition per regressor: x * (y - Lambda(x'theta))
    m <- x * as.vector((y - logistic(theta, x)))
    return(cbind(m))
}

The moment conditions are given by a function that returns a matrix with as many columns as there are moment conditions (for a just-identified model like this one, as many columns as there are parameters).
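As a quick, purely illustrative sanity check (not needed for the estimation itself), you can evaluate the sample moments at the glm() coefficients from above: since the logit score equations are exactly these moment conditions, their column means should be numerically zero:

# illustrative check: the sample moments evaluated at the glm() estimates
# should average to (numerically) zero
round(colMeans(moments(coef(native), dat)), 6)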

To use the gmm() function to estimate our model, we need to specify some initial values to get the numerical optimization routine going. One neat trick is simply to use the coefficients of a linear regression on the same variables; I have found this to work well in a lot of situations:

init <- (lm(y ~ age + age2 + dkids + dykids + head + male + married + rr + rr2))$coefficients
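If these starting values do not work well for your own model, a simple (purely illustrative) alternative to try is a vector of zeros, one per parameter:

# illustrative alternative starting values: one zero per parameter
init_zeros <- rep(0, ncol(dat) - 1)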

And finally, we have everything to use gmm():

my_gmm <- gmm(moments, x = dat, t0 = init, type = "iterative", crit = 1e-25,
    wmatrix = "optimal", method = "Nelder-Mead",
    control = list(reltol = 1e-25, maxit = 20000))

summary(my_gmm)

## 
## Call:
## gmm(g = moments, x = dat, t0 = init, type = "iterative", wmatrix = "optimal", 
##     crit = 1e-25, method = "Nelder-Mead", control = list(reltol = 1e-25, 
##         maxit = 20000))
## 
## 
## Method:  iterative 
## 
## Kernel:  Quadratic Spectral
## 
## Coefficients:
##              Estimate    Std. Error  t value     Pr(>|t|)  
## (Intercept)  -0.9090571   0.5751429  -1.5805761   0.1139750
## age           0.0394254   0.0231964   1.6996369   0.0891992
## age2         -0.0018805   0.0029500  -0.6374640   0.5238227
## dkids        -0.0994031   0.0842057  -1.1804799   0.2378094
## dykids        0.1923245   0.0950495   2.0234150   0.0430304
## head         -0.2067669   0.0801624  -2.5793498   0.0098987
## male         -0.0617586   0.0846334  -0.7297189   0.4655620
## married       0.2358055   0.0764071   3.0861736   0.0020275
## rr            3.7895781   1.8332559   2.0671300   0.0387219
## rr2          -5.2849002   2.2976075  -2.3001753   0.0214383
## 
## J-Test: degrees of freedom is 0 
##                 J-test               P-value            
## Test E(g)=0:    0.00099718345776501  *******            
## 
## #############
## Information related to the numerical optimization
## Convergence code =  10 
## Function eval. =  17767 
## Gradian eval. =  NA

Please notice the options crit = 1e-25, method = "Nelder-Mead", control = list(reltol = 1e-25, maxit = 20000): they tell gmm() to use the Nelder-Mead algorithm, and the control option passes further settings to that algorithm. This is very important, as Pierre Chaussé explained to me: non-linear optimization is an art, and most of the time the default options won't cut it and can give you wrong results. To add insult to injury, the Generalized Method of Moments itself is very capricious, so you will also have to play around with different initial values to get good results. As you can see, the convergence code equals 10, which is specific to the Nelder-Mead method and indicates "degeneracy of the Nelder-Mead simplex". I'm not sure whether this is a bad thing here, but other methods may give you better results. I'd suggest you always try different optimization routines with different starting values to see whether your estimates are robust (see the sketch below). Here, the results are very similar to what we obtained with the built-in glm() function, so we can stop here.
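To make that robustness advice concrete, here is one possible check (a sketch, not part of the original analysis): re-estimate the model with a different optimizer, for instance BFGS, and put the three sets of coefficients side by side. The object name my_gmm_bfgs is of course arbitrary.

# illustrative robustness check: same moment conditions, different optimizer
my_gmm_bfgs <- gmm(moments, x = dat, t0 = init, type = "iterative", crit = 1e-25,
    wmatrix = "optimal", method = "BFGS",
    control = list(reltol = 1e-25, maxit = 20000))

# compare the estimates from glm(), Nelder-Mead and BFGS
cbind(glm = coef(native), nelder_mead = coef(my_gmm), bfgs = coef(my_gmm_bfgs))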

Should you notice any error whatsoever, do not hesitate to tell me.
