# Linear Regression, with Map-Reduce

*Originally published on **R-english – Freakonometrics** and contributed to R-bloggers.*


Sometimes, with big data, matrices are too big to handle, and tricks are needed to carry out the computation numerically. Map-Reduce is one of them: with several cores, it is possible to split the problem, run the map step on each machine, and then aggregate the results at the end.

Consider the case of the linear regression, \(\mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon}\) (with classical matrix notations). The OLS estimate of \(\mathbf{\beta}\) is \(\widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}\). To illustrate, consider a dataset that is not too big, and run a regression.

```r
lm(dist ~ speed, data = cars)$coefficients
## (Intercept)       speed
##  -17.579095    3.932409
y = cars$dist
X = cbind(1, cars$speed)
solve(crossprod(X, X)) %*% crossprod(X, y)
##            [,1]
## [1,] -17.579095
## [2,]   3.932409
```

How is this computed in R? Actually, it is based on the QR decomposition of \(\mathbf{X}\), \(\mathbf{X}=\mathbf{Q}\mathbf{R}\), where \(\mathbf{Q}\) has orthonormal columns (i.e. \(\mathbf{Q}^T\mathbf{Q}=\mathbb{I}\)) and \(\mathbf{R}\) is upper triangular. Then \(\widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}\)

```r
solve(qr.R(qr(as.matrix(X)))) %*% t(qr.Q(qr(as.matrix(X)))) %*% y
##            [,1]
## [1,] -17.579095
## [2,]   3.932409
```
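As an aside (not in the original post), base R also exposes the least-squares solve on a QR object directly, so the explicit inverse of \(\mathbf{R}\) is never needed in practice:

```r
y = cars$dist
X = cbind(1, cars$speed)
# qr.coef() back-solves the triangular system R beta = Q'y
qr.coef(qr(X), y)   # same estimates as lm(dist ~ speed, data = cars)
```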

So far, so good, we get the same output. Now, what if we want to parallelise the computations? Actually, it is possible.

Consider \(m\) blocks

```r
m = 5
```

and split vectors and matrices

\(\mathbf{y}=\left[\begin{matrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots \\\mathbf{y}_m\end{matrix}\right]\) and \(\mathbf{X}=\left[\begin{matrix}\mathbf{X}_1\\\mathbf{X}_2\\\vdots\\\mathbf{X}_m\end{matrix}\right]=\left[\begin{matrix}\mathbf{Q}_1^{(1)}\mathbf{R}_1^{(1)}\\\mathbf{Q}_2^{(1)}\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{Q}_m^{(1)}\mathbf{R}_m^{(1)}\end{matrix}\right]\)

To split the vectors and matrices (the cars dataset has 50 observations, so each of the \(m=5\) blocks contains 10), use (e.g.)

```r
Xlist = list()
for(j in 1:m) Xlist[[j]] = X[(j-1)*10 + 1:10, ]
ylist = list()
for(j in 1:m) ylist[[j]] = y[(j-1)*10 + 1:10]
```

and get a small QR decomposition per subset

```r
QR1 = list()
for(j in 1:m) QR1[[j]] = list(Q = qr.Q(qr(as.matrix(Xlist[[j]]))),
                              R = qr.R(qr(as.matrix(Xlist[[j]]))))
```

Consider the QR decomposition of \(\mathbf{R}^{(1)}\), obtained by stacking the blocks \(\mathbf{R}_j^{(1)}\), which is the first step of the reduce part:

\(\mathbf{R}^{(1)}=\left[\begin{matrix}\mathbf{R}_1^{(1)}\\\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{R}_m^{(1)}\end{matrix}\right]=\mathbf{Q}^{(2)}\mathbf{R}^{(2)}\) where \(\mathbf{Q}^{(2)}=\left[\begin{matrix}\mathbf{Q}^{(2)}_1\\\mathbf{Q}^{(2)}_2\\\vdots\\\mathbf{Q}^{(2)}_m\end{matrix}\right]\)

```r
R1 = QR1[[1]]$R
for(j in 2:m) R1 = rbind(R1, QR1[[j]]$R)
Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list = list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2 + 1:2, ]
```

Define, as step 2 of the reduce part, \(\mathbf{Q}^{(3)}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j\) (note the order: \(\mathbf{Q}^{(1)}_j\) has 10 rows and 2 columns, \(\mathbf{Q}^{(2)}_j\) is \(2\times 2\))

and \(\mathbf{V}_j=\mathbf{Q}^{(3)T}_j\mathbf{y}_j\)

```r
Q3list = list()
for(j in 1:m) Q3list[[j]] = QR1[[j]]$Q %*% Q2list[[j]]
Vlist = list()
for(j in 1:m) Vlist[[j]] = t(Q3list[[j]]) %*% ylist[[j]]
```
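A quick sanity check (not in the original post): the stacked \(\mathbf{Q}^{(3)}\) should have orthonormal columns and satisfy \(\mathbf{X}_j=\mathbf{Q}^{(3)}_j\mathbf{R}^{(2)}\), since \(\mathbf{X}_j=\mathbf{Q}^{(1)}_j\mathbf{R}^{(1)}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j\mathbf{R}^{(2)}\). The snippet rebuilds every step so it runs on its own:

```r
y = cars$dist
X = cbind(1, cars$speed)
m = 5

# map: per-block thin QR factorisations
Xlist = lapply(1:m, function(j) X[(j-1)*10 + 1:10, ])
QR1 = lapply(Xlist, function(Xj) {
  qrj = qr(as.matrix(Xj))
  list(Q = qr.Q(qrj), R = qr.R(qrj))
})

# reduce, step 1: QR of the stacked 2x2 R's
R1 = do.call(rbind, lapply(QR1, `[[`, "R"))
Q1 = qr.Q(qr(R1))
R2 = qr.R(qr(R1))

# reduce, step 2: Q3_j = Q1_j Q2_j, stacked back into a 50x2 matrix
Q3 = do.call(rbind, lapply(1:m, function(j) QR1[[j]]$Q %*% Q1[(j-1)*2 + 1:2, ]))

max(abs(crossprod(Q3) - diag(2)))   # orthonormal columns, up to rounding
max(abs(Q3 %*% R2 - X))             # recovers X, up to rounding
```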

and finally set, as step 3 of the reduce part, \(\widehat{\mathbf{\beta}}=[\mathbf{R}^{(2)}]^{-1}\sum_{j=1}^m\mathbf{V}_j\)

```r
sumV = Vlist[[1]]
for(j in 2:m) sumV = sumV + Vlist[[j]]
solve(R2) %*% sumV
##            [,1]
## [1,] -17.579095
## [2,]   3.932409
```

It looks like we’ve been able to parallelise our linear regression…
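The map step is embarrassingly parallel, so it can actually be dispatched to several cores. Below is a minimal sketch using the base `parallel` package; this is an illustration, not the original post's code (`mclapply` forks worker processes on Unix-alikes; on Windows one would use `parLapply` with a cluster instead):

```r
library(parallel)

y = cars$dist
X = cbind(1, cars$speed)
m = 5
p = ncol(X)
idx = split(1:nrow(X), rep(1:m, each = nrow(X)/m))

# map: one thin QR per block, potentially on a different core
mapped = mclapply(idx, function(i) {
  qrj = qr(X[i, , drop = FALSE])
  list(Q = qr.Q(qrj), R = qr.R(qrj), y = y[i])
}, mc.cores = 2)

# reduce, step 1: QR of the stacked p x p blocks R_j^(1)
qr2 = qr(do.call(rbind, lapply(mapped, `[[`, "R")))
R2 = qr.R(qr2)
Q2 = qr.Q(qr2)

# reduce, steps 2 and 3: accumulate V_j = (Q_j^(1) Q_j^(2))' y_j, then solve
sumV = 0
for(j in 1:m) {
  Q2j = Q2[(j-1)*p + 1:p, , drop = FALSE]
  sumV = sumV + t(mapped[[j]]$Q %*% Q2j) %*% mapped[[j]]$y
}
beta_hat = solve(R2) %*% sumV
beta_hat   # matches lm(dist ~ speed, data = cars)
```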
