Some Intuition About the Theory of Statistical Learning


While I was working on the theory of statistical learning, and the concept of consistency, I came across the following popular graph (e.g. from those slides, here in French).

In that graph, the lower curve is the error on the training sample, as a function of the size of the training sample, and the upper curve is the error on a validation sample. Our learning process is consistent if the two curves converge.
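To state this slightly more formally (this is the standard formulation of consistency for empirical risk minimization; the notation is mine, not from the slides): write $\widehat{f}_n$ for the model fitted on a training sample of size $n$, $R_n$ for the empirical risk (the training error) and $R$ for the true risk (which the validation error estimates). The process is consistent when both risks converge, in probability, to the best achievable risk over the class $\mathcal{F}$ of models,

$$R_n(\widehat{f}_n)\ \overset{\mathbb{P}}{\longrightarrow}\ \inf_{f\in\mathcal{F}} R(f) \quad\text{and}\quad R(\widehat{f}_n)\ \overset{\mathbb{P}}{\longrightarrow}\ \inf_{f\in\mathcal{F}} R(f), \qquad n\to\infty.$$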

I was wondering whether it was possible to generate such a graph with some data and some statistical model. It turns out to be rather simple, and it gives nice intuition about possible interpretations. Consider some (simple) classification problem; here, a logistic regression. We generate a training sample of size $n$, fit our model, and compute the misclassification rate; then we generate a validation sample, also of size $n$, use our previous model to make predictions, and compute the misclassification rate again. And we play with $n$.

misclassification <- function(n){
  # training sample: two uniform covariates, and Y ~ Bernoulli((X1+X2)/2)
  U=data.frame(X1=runif(n),X2=runif(n))
  p=(U$X1+U$X2)/2
  U$Y=rbinom(n,size=1,prob=p)
  # fit a logistic regression on the training sample
  reg=glm(Y~X1+X2,data=U,family=binomial)
  # predicted class: TRUE when the predicted probability exceeds 1/2
  pd=function(x1,x2) predict(reg,newdata=data.frame(X1=x1,X2=x2),type="response")>.5
  x=seq(0,1,length=101)
  z=outer(x,x,pd)
  cl2=c(rgb(1,0,0,.4),rgb(0,0,1,.4))
  cl1=c("red","blue")
  # plot the classification regions, side by side for the two samples
  par(mfrow=c(1,2))
  image(x,x,z,col=cl2,xlab="",ylab="",main="Training Sample")
  points(U$X1,U$X2,pch=19,col=cl1[1+U$Y])
 
  # validation sample, drawn from the same distribution
  V=data.frame(X1=runif(n),X2=runif(n))
  p=(V$X1+V$X2)/2
  V$Y=rbinom(n,size=1,prob=p)
  image(x,x,z,col=cl2,xlab="",ylab="",main="Validation Sample")
  points(V$X1,V$X2,pch=19,col=cl1[1+V$Y])
 
  # misclassification rates on the training and validation samples
  MissClassU=mean(abs(pd(U$X1,U$X2)-U$Y))
  MissClassV=mean(abs(pd(V$X1,V$X2)-V$Y))
  return(c(MissClassU,MissClassV))
}
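The function can then be called for a given sample size, for instance (the seed is arbitrary, only there to make the run reproducible):

set.seed(1)
misclassification(n=100)  # returns c(training error rate, validation error rate)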

If we plot these two error rates as a function of $n$, we get the following graph (the purple curve is the error on the training sample, the black one the error on the validation sample).
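The plot itself is not produced by the function above, which only returns the two rates. A minimal sketch of how such a graph can be generated (the grid ns of sample sizes is my choice, and the two image() calls inside the function can be commented out to speed things up):

ns=seq(25,1000,by=25)
MC=sapply(ns,misclassification)  # 2 x length(ns) matrix: row 1 training, row 2 validation
plot(ns,MC[1,],type="l",col="purple",ylim=range(MC),
     xlab="sample size n",ylab="misclassification rate")
lines(ns,MC[2,],col="black")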

The graph is not exactly the same as the theoretical one above, but that is probably due to the randomness of our samples: each point is based on a single training and a single validation sample. If we average over hundreds of samples for each $n$, it should be much smoother.

MCU=rep(NA,500)
MCV=rep(NA,500)
n=250
for(i in 1:500){
  # training sample and fitted model
  U=data.frame(X1=runif(n),X2=runif(n))
  p=(U$X1+U$X2)/2
  U$Y=rbinom(n,size=1,prob=p)
  reg=glm(Y~X1+X2,data=U,family=binomial)
  pd=function(x1,x2) predict(reg,newdata=data.frame(X1=x1,X2=x2),type="response")>.5
  MCU[i]=mean(abs(pd(U$X1,U$X2)-U$Y))
 
  # validation sample, scored with the model fitted on U
  V=data.frame(X1=runif(n),X2=runif(n))
  p=(V$X1+V$X2)/2
  V$Y=rbinom(n,size=1,prob=p)
  MCV[i]=mean(abs(pd(V$X1,V$X2)-V$Y))
}
# average misclassification rates over the 500 replications
MissClassU=mean(MCU)
MissClassV=mean(MCV)
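To get the smooth convergence graph, we can wrap this averaging loop in a function of $n$ and evaluate it on a grid of sample sizes. A possible sketch (the names avg_misclass and ns are mine, not from the original post):

avg_misclass=function(n,nsim=500){
  MC=replicate(nsim,{
    # one training sample and one validation sample of size n
    U=data.frame(X1=runif(n),X2=runif(n))
    U$Y=rbinom(n,size=1,prob=(U$X1+U$X2)/2)
    reg=glm(Y~X1+X2,data=U,family=binomial)
    pd=function(x1,x2) predict(reg,newdata=data.frame(X1=x1,X2=x2),type="response")>.5
    V=data.frame(X1=runif(n),X2=runif(n))
    V$Y=rbinom(n,size=1,prob=(V$X1+V$X2)/2)
    c(mean(abs(pd(U$X1,U$X2)-U$Y)),mean(abs(pd(V$X1,V$X2)-V$Y)))
  })
  rowMeans(MC)  # averaged (training, validation) error rates
}
ns=seq(25,500,by=25)
MC=sapply(ns,avg_misclass)
plot(ns,MC[1,],type="l",col="purple",ylim=range(MC),
     xlab="sample size n",ylab="average misclassification rate")
lines(ns,MC[2,],col="black")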
