# Classification from scratch, bagging and forests 10/8

June 8, 2018


Tenth post of our series on classification from scratch. Today, we'll look at the heuristics behind bagging techniques.

Often, bagging is associated with trees, to generate forests. But actually, it is possible to use bagging with any kind of model. Recall that bagging stands for "bootstrap aggregating". So, consider a model $m:\mathcal{X}\rightarrow \mathcal{Y}$. Let $\widehat{m}_{S}$ denote the estimator of $m$ obtained from sample $S=\{y_i,\mathbf{x}_i\}$ with $i\in\{1,\cdots,n\}$.

Consider now some bootstrap sample, $S_b=\{y_i,\mathbf{x}_i\}$, where the indices $i$ are drawn randomly from $\{1,\cdots,n\}$ (with replacement). Based on that sample, compute the estimator $\widehat{m}_{S_b}$. Then draw many such samples, and aggregate the estimators obtained, using either a majority rule, or the average of the predicted probabilities (if a probabilistic model was considered). Hence$$\widehat{m}^{bag}(\mathbf{x})=\frac{1}{B}\sum_{b=1}^B \widehat{m}_{S_b}(\mathbf{x})$$
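As a side note, since each $S_b$ is drawn with replacement, it contains on average only about $1-e^{-1}\approx 63.2\%$ of the distinct observations. A quick sketch (with a hypothetical sample size, not taken from the post) illustrates this:

```r
set.seed(123)
n <- 1000
# indices of one bootstrap sample, drawn uniformly with replacement
idx <- sample(1:n, size = n, replace = TRUE)
# proportion of distinct observations that made it into the sample
length(unique(idx)) / n   # close to 1 - exp(-1), i.e. about 0.632
```

The remaining observations (the "out-of-bag" ones) are what makes out-of-bag error estimates possible in forests.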

## Bagging logistic regression #1

Consider the case of the logistic regression. To generate a bootstrap sample, it is natural to use the technique described above, i.e. draw pairs $(y_i,\mathbf{x}_i)$ randomly and uniformly (each with probability $1/n$), with replacement. Consider here the small dataset, just to visualize. For the b part of bagging, use the following code

```r
L_logit = list()
n = nrow(df)
for(s in 1:1000){
  df_s = df[sample(1:n, size=n, replace=TRUE),]
  L_logit[[s]] = glm(y~., df_s, family=binomial)
}
```

Then we should aggregate over the 1000 models, to get the agg part of bagging,

```r
p = function(x){
  nd = data.frame(x1=x[1], x2=x[2])
  unlist(lapply(1:1000, function(z)
    predict(L_logit[[z]], newdata=nd, type="response")))
}
```

We now have a prediction for any new observation:

```r
vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels=.5,add=TRUE)
```

## Bagging logistic regression #2

Another technique that can be used to generate a bootstrap sample is to keep all the $\mathbf{x}_i$'s, but for each of them, to draw (randomly) a value for $y$, with$$Y_{i,b}\sim\mathcal{B}(\widehat{m}_{S}(\mathbf{x}_i))$$since$$\widehat{m}(\mathbf{x})=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}].$$Thus, the code for the b part of the bagging algorithm is now

```r
L_logit = list()
n = nrow(df)
reg = glm(y~x1+x2, df, family=binomial)
for(s in 1:100){
  df_s = df
  df_s$y = factor(rbinom(n,size=1,prob=predict(reg,type="response")),labels=0:1)
  L_logit[[s]] = glm(y~., df_s, family=binomial)
}
```

The agg part of bagging algorithm remains unchanged. Here we obtain

```r
vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels=.5,add=TRUE)
```

Of course, we can use that code to check the predictions obtained on the observations we have in our sample. Just for a change, consider here the myocarde data. The entire code is here

```r
L_logit = list()
n = nrow(myocarde)
reg = glm(as.factor(PRONO)~., myocarde, family=binomial)
for(s in 1:1000){
  myocarde_s = myocarde
  myocarde_s$PRONO = 1*rbinom(n,size=1,prob=predict(reg,type="response"))
  L_logit[[s]] = glm(as.factor(PRONO)~., myocarde_s, family=binomial)
}
p = function(x){
  nd = data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4],
                  PAPUL=x[5], PVENT=x[6], REPUL=x[7])
  unlist(lapply(1:1000, function(z)
    predict(L_logit[[z]], newdata=nd, type="response")))
}
```

For the first observation, with our 1000 simulated datasets, and our 1000 models, we obtained the following estimates of the probability of dying.
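With the objects defined above, that distribution can be computed along these lines (a sketch, under the assumption that the first seven columns of myocarde are the covariates, in the order used in `p`):

```r
# 1000 bagged predictions for the first patient
pred_1 <- p(as.numeric(myocarde[1, 1:7]))
# the bagged point estimate is the average of the 1000 predictions
mean(pred_1)
# and the spread of the distribution can be visualized
hist(pred_1, breaks = 20, xlab = "Predicted probability", main = "")
```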

## From bags to forests

Here, we grew a lot of trees, but it is not stricto sensu a random forest algorithm, as introduced in 1995 in *Random decision forests*. Actually, the difference lies in the creation of the decision trees. To understand what happens, get back to the previous post on classification trees. As we've seen, when we have a node, we look at possible splits: we consider all possible variables, and all possible thresholds. The strategy here will be to draw randomly $k$ variables out of $p$ (with, of course, $k<p$).
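To fix ideas, the variable-subsampling step at each node could be sketched as follows (an illustration only; the values of $k$ and $p$ are made up here, and are not from the post):

```r
p_vars <- 7   # total number of explanatory variables
k <- 3        # number of candidate variables drawn at each split (k < p)
# at each node, the best split is searched among a random subset of k variables
set.seed(1)
candidates <- sample(1:p_vars, size = k, replace = FALSE)
candidates
```

This is exactly what the `mtry` argument controls in the randomForest package, where the default for classification is $k=\sqrt{p}$ (rounded down).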

To be continued…
