Linear Regression, with Map-Reduce

[This article was first published on R-english – Freakonometrics, and kindly contributed to R-bloggers.]

Sometimes, with big data, matrices are too big to handle, and tricks are needed to still carry out the computation numerically. Map-Reduce is one of them: with several cores (or machines), it is possible to split the problem, to map the computation onto each machine, and then to aggregate the results back at the end.
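Before turning to regression, the idea can be illustrated on a toy example: split a vector, compute partial sums on each chunk (the map step), then combine them (the reduce step). A minimal sketch (the variable names here are ours):

```r
set.seed(1)
x = runif(1e5)
# map: split the data into 4 chunks, and sum each one
# (each partial sum could run on a different core or machine)
chunks  = split(x, rep(1:4, length.out = length(x)))
partial = lapply(chunks, sum)
# reduce: aggregate the partial results
total = Reduce(`+`, partial)
all.equal(total, sum(x))  # TRUE
```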

Consider the case of the linear regression, [latex]\mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon}[/latex] (with classical matrix notations). The OLS estimate of [latex]\mathbf{\beta}[/latex] is [latex]\widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}[/latex]. To illustrate, consider a not-too-big dataset (the classic cars data, with 50 observations) and run a simple regression, first with lm, then with the matrix formula.

lm(dist ~ speed, data = cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409
X = cbind(1, cars$speed)
y = cars$dist
solve(t(X) %*% X) %*% t(X) %*% y
[1,] -17.579095
[2,]   3.932409

How is this computed in R? Actually, it is based on the QR decomposition of [latex]\mathbf{X}[/latex], [latex]\mathbf{X}=\mathbf{Q}\mathbf{R}[/latex], where [latex]\mathbf{Q}[/latex] is an orthogonal matrix (i.e. [latex]\mathbf{Q}^T\mathbf{Q}=\mathbb{I}[/latex]) and [latex]\mathbf{R}[/latex] is upper triangular. Since [latex]\mathbf{X}^T\mathbf{X}=\mathbf{R}^T\mathbf{Q}^T\mathbf{Q}\mathbf{R}=\mathbf{R}^T\mathbf{R}[/latex], we get [latex]\widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}[/latex]
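These two identities can be checked numerically on the cars design matrix (rebuilt here so the chunk stands alone):

```r
X = cbind(1, cars$speed)   # design matrix of the regression above
QR = qr(X)
Q = qr.Q(QR)
R = qr.R(QR)
# Q has orthonormal columns, and X'X = R'R
all.equal(crossprod(Q), diag(2))       # TRUE
all.equal(crossprod(X), crossprod(R))  # TRUE
```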

solve(qr.R(qr(as.matrix(X)))) %*% t(qr.Q(qr(as.matrix(X)))) %*% y
[1,] -17.579095
[2,]   3.932409

So far, so good, we get the same output. Now, what if we want to parallelise the computations? Actually, it is possible.

Consider [latex]m[/latex] blocks

m = 5

and split vectors and matrices
[latex display="true"]\mathbf{y}=\left[\begin{matrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots \\\mathbf{y}_m\end{matrix}\right][/latex] and [latex display="true"]\mathbf{X}=\left[\begin{matrix}\mathbf{X}_1\\\mathbf{X}_2\\\vdots\\\mathbf{X}_m\end{matrix}\right]=\left[\begin{matrix}\mathbf{Q}_1^{(1)}\mathbf{R}_1^{(1)}\\\mathbf{Q}_2^{(1)}\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{Q}_m^{(1)}\mathbf{R}_m^{(1)}\end{matrix}\right][/latex]
To split the vectors and matrices, use (e.g.)

Xlist = list()
for(j in 1:m) Xlist[[j]] = X[(j-1)*10+1:10,]
ylist = list()
for(j in 1:m) ylist[[j]] = y[(j-1)*10+1:10]

and get a small QR decomposition on each subset

QR1 = list()
for(j in 1:m) QR1[[j]] = list(Q=qr.Q(qr(as.matrix(Xlist[[j]]))),R=qr.R(qr(as.matrix(Xlist[[j]]))))
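Note that this map step is the embarrassingly parallel one: each block can be decomposed on a different core. A sketch using parallel::mclapply from base R's parallel package (everything is rebuilt here so the chunk stands alone; mc.cores > 1 is not supported on Windows, hence the guard):

```r
library(parallel)
X = cbind(1, cars$speed)
m = 5
Xlist = lapply(1:m, function(j) X[(j-1)*10 + 1:10, , drop = FALSE])
# map step: one QR decomposition per block, dispatched on several cores
QR1 = mclapply(Xlist, function(Xj) {
  qrj = qr(Xj)
  list(Q = qr.Q(qrj), R = qr.R(qrj))
}, mc.cores = ifelse(.Platform$OS.type == "windows", 1, 2))
sapply(QR1, function(b) dim(b$R))  # each R_j is 2 x 2
```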

Consider the QR decomposition of [latex]\mathbf{R}^{(1)}[/latex], which is the first step of the reduce part:[latex display="true"]\mathbf{R}^{(1)}=\left[\begin{matrix}\mathbf{R}_1^{(1)}\\\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{R}_m^{(1)}\end{matrix}\right]=\mathbf{Q}^{(2)}\mathbf{R}^{(2)}[/latex] where [latex display="true"]\mathbf{Q}^{(2)}=\left[\begin{matrix}\mathbf{Q}^{(2)}_1\\\mathbf{Q}^{(2)}_2\\\vdots\\\mathbf{Q}^{(2)}_m\end{matrix}\right][/latex]

R1 = QR1[[1]]$R
for(j in 2:m) R1 = rbind(R1,QR1[[j]]$R)
Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list = list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2+1:2,]

Define, as step 2 of the reduce part,[latex display="true"]\mathbf{Q}^{(3)}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j[/latex]
and[latex display="true"]\mathbf{V}_j=\mathbf{Q}^{(3)T}_j\mathbf{y}_j[/latex]

Q3list = list()
for(j in 1:m) Q3list[[j]] = QR1[[j]]$Q %*% Q2list[[j]]
Vlist = list()
for(j in 1:m) Vlist[[j]] = t(Q3list[[j]]) %*% ylist[[j]] 

and finally set, as step 3 of the reduce part,[latex display="true"]\widehat{\mathbf{\beta}}=[\mathbf{R}^{(2)}]^{-1}\sum_{j=1}^m\mathbf{V}_j[/latex]

sumV = Vlist[[1]]
for(j in 2:m) sumV = sumV+Vlist[[j]]
solve(R2) %*% sumV
[1,] -17.579095
[2,]   3.932409

It looks like we’ve been able to parallelise our linear regression…
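To sum up, the whole pipeline can be wrapped in a single function (a sketch; the name mapreduce_ols and its signature are ours, not from the post, and n is assumed divisible by m), and the output checked against lm:

```r
mapreduce_ols = function(X, y, m) {
  n = nrow(X); p = ncol(X)
  idx = split(1:n, rep(1:m, each = n/m))
  # map: one QR decomposition per block
  QR1 = lapply(idx, function(i) {
    q = qr(X[i, , drop = FALSE]); list(Q = qr.Q(q), R = qr.R(q))
  })
  # reduce, step 1: QR decomposition of the stacked R_j's
  R1 = do.call(rbind, lapply(QR1, `[[`, "R"))
  q2 = qr(R1); Q2 = qr.Q(q2); R2 = qr.R(q2)
  # reduce, steps 2 and 3: V_j = (Q_j^(1) Q_j^(2))' y_j, then solve
  V = lapply(seq_len(m), function(j)
    t(QR1[[j]]$Q %*% Q2[(j-1)*p + 1:p, ]) %*% y[idx[[j]]])
  solve(R2, Reduce(`+`, V))
}

X = cbind(1, cars$speed)
y = cars$dist
all.equal(as.numeric(mapreduce_ols(X, y, 5)),
          as.numeric(coef(lm(dist ~ speed, data = cars))))  # TRUE
```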
