# Classification from scratch, logistic with kernels 3/8

**R-english – Freakonometrics**, kindly contributed to R-bloggers.

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques with (b)-splines. Here we consider kernel-based techniques. Note that we do not use the "logistic" model here… the approach is purely non-parametric.

## Kernel-based estimation, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate [latex]\hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x})[/latex]. Heuristically, we want to compute the (conditional) expected value on a neighborhood of [latex]\mathbf{x}[/latex]. If we consider some spatial model, where [latex]\mathbf{x}[/latex] is the location, we want the expected value of some variable [latex]Y[/latex] "on the neighborhood" of [latex]\mathbf{x}[/latex]. A natural approach is to use some administrative region (county, department, region, etc.). This means that we have a partition of [latex]\mathcal{X}[/latex] (the space where the variable(s) lie). This yields the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider [latex display="true"]\hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}[/latex] or the moving regressogram [latex display="true"]\hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])}[/latex] In that case, the neighborhood is defined as the interval [latex][x\pm h][/latex]. That's nice, but clearly very simplistic. If [latex]\mathbf{x}_i=\mathbf{x}[/latex] and [latex]\mathbf{x}_j=\mathbf{x}-h+\varepsilon[/latex] (with [latex]\varepsilon>0[/latex]), both observations are used to compute the conditional expected value. But if [latex]\mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon[/latex], that observation is excluded, even though the distance between [latex]\mathbf{x}_{j}[/latex] and [latex]\mathbf{x}_{j'}[/latex] is extremely small.
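The moving regressogram above takes only a few lines of code. Here is a minimal sketch on synthetic data (the variables and sample below are made up for the illustration, not the myocarde data used later):

```r
# Moving regressogram: average the y_i whose x_i falls in [x0 - h, x0 + h]
set.seed(1)
n = 200
x = runif(n)
y = as.numeric(runif(n) < x)   # binary response with P(Y = 1 | X = x) = x
regressogram = function(x0, h = .1){
  idx = abs(x - x0) <= h       # indicator of the neighborhood [x0 - h, x0 + h]
  mean(y[idx])
}
u = seq(.1, .9, by = .1)
v = sapply(u, regressogram)
```

Each point `x0` uses only the observations inside its window, which is exactly the hard-threshold behavior discussed above.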
Thus, a natural idea is to use weights that are a function of the distance between the [latex]\mathbf{x}_{i}[/latex]'s and [latex]\mathbf{x}[/latex]. Use [latex display="true"]\tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)}[/latex] where (classically) [latex display="true"]k_h(x)=\frac{1}{h}k\left(\frac{x}{h}\right)[/latex] for some kernel [latex]k[/latex] (a non-negative function that integrates to one) and some bandwidth [latex]h[/latex]. Usually, kernels are denoted with a capital letter [latex]K[/latex], but I prefer to use [latex]k[/latex], because it can be interpreted as the density of some random noise we add to all observations (independently).

Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that [latex display="true"]\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)[/latex]

Now, use the fact that the expected value can be defined as [latex display="true"]m(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}[/latex] Consider now a bivariate (product) kernel to estimate the joint density. Using the substitution [latex]t=(y-y_i)/h[/latex] (and the fact that the kernel has zero mean in its first argument), the numerator is estimated by [latex display="true"]\frac{1}{nh}\sum_{i=1}^n\int y_i k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right)[/latex] while the denominator is estimated by [latex display="true"]\frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)[/latex] In a general setting, we still use product kernels between [latex]Y[/latex] and [latex]\mathbf{X}[/latex] and write [latex display="true"]\widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}}[/latex] for some symmetric positive definite bandwidth matrix [latex]\mathbf{H}[/latex], and [latex display="true"]k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})[/latex]
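One can check numerically that the ratio of the two kernel sums above coincides with the weighted-mean form of the estimator, since the normalizing constants cancel. A small sketch on simulated data (the variables here are illustrative):

```r
set.seed(42)
n = 100; h = .3
x = rnorm(n)
y = x + rnorm(n)
x0 = .5
# weighted-mean (Nadaraya-Watson) form of the estimator
w = dnorm((x - x0) / h)
m_nw = sum(w * y) / sum(w)
# ratio of the two kernel sums derived above (the 1/(nh) factors cancel)
num = sum(y * dnorm((x0 - x) / h)) / (n * h)
den = sum(dnorm((x0 - x) / h)) / (n * h)
m_ratio = num / den
```

Both computations return exactly the same value, as expected from the derivation.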

Now that we know what kernel estimates are, let us use them. For instance, assume that [latex]k[/latex] is the density of the [latex]\mathcal{N}(0,1)[/latex] distribution. At point [latex]x[/latex], with bandwidth [latex]h[/latex], we get the following code

```r
mean_x = function(x, bw){
  w = dnorm((myocarde$INSYS - x) / bw, mean = 0, sd = 1)
  weighted.mean(myocarde$PRONO, w)
}
u = seq(5, 55, length = 201)
v = Vectorize(function(x) mean_x(x, 3))(u)
plot(u, v, ylim = 0:1, type = "l", col = "red")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)
```

and of course, we can change the bandwidth.

```r
v = Vectorize(function(x) mean_x(x, 2))(u)
plot(u, v, ylim = 0:1, type = "l", col = "red")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)
```

We observe what we can read in any textbook: with a smaller bandwidth, we get more variance and less bias. "More variance" means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, so the estimate is more volatile), and "less bias" in the sense that the expected value is supposed to be computed at point [latex]x[/latex], so the smaller the neighborhood, the better.

## Using the ksmooth R function

Actually, there is a function in R to compute this kernel regression.

```r
reg = ksmooth(myocarde$INSYS, myocarde$PRONO, "normal", bandwidth = 2*exp(1))
plot(reg$x, reg$y, ylim = 0:1, type = "l", col = "red", lwd = 2, xlab = "INSYS", ylab = "")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)
```

This replicates our previous estimate. Nevertheless, the output is not a function, but two vectors of coordinates. That's nice to get a graph, but that's all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to relate it to the function we wrote before

```r
g = function(bk = 3){
  reg = ksmooth(myocarde$INSYS, myocarde$PRONO, "normal", bandwidth = bk)
  f = function(bm){
    v = Vectorize(function(x) mean_x(x, bm))(reg$x)
    z = reg$y - v
    sum((z[!is.na(z)])^2)
  }
  optim(bk, f)$par
}
x = seq(1, 10, by = .1)
y = Vectorize(g)(x)
plot(x, y)
abline(0, exp(-1), col = "red")
abline(0, .37, col = "blue")
```

There is a slope of [latex]0.37[/latex], close to [latex]e^{-1}\approx 0.3679[/latex]. Coincidence? Probably: the documentation of ksmooth states that the kernel is scaled so that its quartiles sit at [latex]\pm 0.25\cdot\text{bandwidth}[/latex]; for a Gaussian kernel this gives a standard deviation of [latex]0.25/\Phi^{-1}(0.75)\approx 0.3707[/latex] times the bandwidth, which is numerically close to [latex]e^{-1}[/latex] but not equal to it.

## Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

```r
u = seq(0, 1, length = 101)
p = function(x, y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1 - x) / bw1, mean = 0, sd = 1) *
      dnorm((df$x2 - y) / bw2, mean = 0, sd = 1)
  weighted.mean(df$y == "1", w)
}
v = outer(u, u, Vectorize(p))
image(u, u, v, col = clr10, breaks = (0:10)/10)
points(df$x1, df$x2, pch = 19, cex = 1.5, col = "white")
points(df$x1, df$x2, pch = c(1, 19)[1 + (df$y == "1")], cex = 1.5)
contour(u, u, v, levels = .5, add = TRUE)
```

We get the following prediction, where the different colors correspond to predicted probabilities.

## k-nearest neighbors

An alternative is to consider a neighborhood not defined using a distance to point [latex]\mathbf{x}[/latex] but the [latex]k[/latex]-neighbors, with the [latex]n[/latex] observations we got.[latex display=”true”]\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i[/latex]

where [latex]\omega_{i,k}(\mathbf{x})=n/k[/latex] if [latex]i\in\mathcal{I}_{\mathbf{x}}^k[/latex] (and [latex]0[/latex] otherwise), with

[latex display=”true”]\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}[/latex]
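Written with those weights, the estimator is just the average of the responses of the [latex]k[/latex] nearest neighbors. A quick sanity check on simulated data (the variables here are illustrative):

```r
set.seed(3)
n = 50; k = 7
x = runif(n)
y = as.numeric(runif(n) < x)
x0 = .4
idx = order(abs(x - x0))[1:k]   # indices of the k nearest observations (univariate distance)
omega = rep(0, n)
omega[idx] = n / k              # weight n/k inside the neighborhood, 0 outside
m_knn = mean(omega * y)         # (1/n) * sum_i omega_i * y_i
```

The weighted form `mean(omega * y)` returns exactly `mean(y[idx])`, the plain average over the neighborhood.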

The difficult part here is that we need a valid distance. If the units differ a lot across components, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance.

```r
Sigma = var(myocarde[, 1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x, y, Sinv){
  as.numeric(x - y) %*% Sinv %*% t(x - y)
}
k_closest = function(i, k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i, 1:7], myocarde[j, 1:7], Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(myocarde))
  which(rank(vect) <= k)
}
```

Here we have a function to find the [latex]k[/latex] closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for [latex]y_i[/latex] is the class of the majority of its neighbors.

```r
k_majority = function(k){
  Y = rep(NA, nrow(myocarde))
  # middle order statistic of the k neighbors' labels (k assumed odd)
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i, k)])[(k + 1) / 2]
  return(Y)
}
```

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability of being black (that's what was discussed at the beginning of this post, with kernels),

```r
k_mean = function(k){
  Y = rep(NA, nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i, k)])
  return(Y)
}
```
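Note that for 0/1 labels and an odd [latex]k[/latex], the two rules are directly linked: the majority vote is simply the proportion thresholded at [latex]1/2[/latex]. A small check, with made-up neighbor labels:

```r
neighbors = c(1, 0, 1, 1, 0, 1, 1)           # labels of the k = 7 closest points (illustrative)
k = length(neighbors)
majority = sort(neighbors)[(k + 1) / 2]      # middle order statistic, as in k_majority()
proportion = mean(neighbors)                 # as in k_mean()
```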

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors

```r
cbind(OBSERVED = myocarde$PRONO,
      MAJORITY = k_majority(7), PROPORTION = k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000
```

Here, we got a prediction at observed points, located at [latex]\boldsymbol{x}_i[/latex], but it is actually possible to seek the [latex]k[/latex] closest neighbors of any point [latex]\boldsymbol{x}[/latex]. Back to our univariate example (to get a graph), we have

```r
mean_x = function(x, k = 9){
  w = rank(abs(myocarde$INSYS - x), ties.method = "random")
  mean(myocarde$PRONO[which(w <= k)])
}
u = seq(5, 55, length = 201)
v = Vectorize(function(x) mean_x(x, 3))(u)
plot(u, v, ylim = 0:1, type = "l", col = "red", lwd = 2, xlab = "INSYS", ylab = "")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)
```

That’s not very smooth, but we do not have a lot of points either.

If we use that technique on our two-dimensional dataset, we obtain the following

```r
Sigma_Inv = solve(var(df[, c("x1", "x2")]))
u = seq(0, 1, length = 51)
p = function(x, y){
  k = 6
  vect_dist = function(j) d2_mahalanobis(c(x, y), df[j, c("x1", "x2")], Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df))
  idx = which(rank(vect) <= k)
  return(mean((df$y == 1)[idx]))
}
v = outer(u, u, Vectorize(p))
image(u, u, v, xlab = "Variable 1", ylab = "Variable 2", col = clr10, breaks = (0:10)/10)
points(df$x1, df$x2, pch = 19, cex = 1.5, col = "white")
points(df$x1, df$x2, pch = c(1, 19)[1 + (df$y == "1")], cex = 1.5)
contour(u, u, v, levels = .5, add = TRUE)
```

This is the idea of local inference, using either kernel on a neighborhood of [latex]\mathbf{x}[/latex] or simply using the [latex]k[/latex] nearest neighbors. Next time, we will investigate penalized logistic regressions…
