Deep Learning at Stanford

by Joseph Rickert

Last week, I had the opportunity to participate in the Second Academy of Science and Engineering (ASE) Conference on Big Data Science and Computing at Stanford University. Because the conference was held simultaneously with two other conferences, one on Social Computing and the other on Cyber Security, it was definitely not an R crowd, and not even a typical Big Data crowd. Talks from the three programs were intermixed throughout the day, so at any given moment you could find yourself looking for common ground in a conversation with mostly R-aware but language-impartial fellow attendees. I don’t know whether this method of organization was the desperate result of necessity or a stroke of genius, but I thought it worked out very well and made for a stimulating interaction dynamic. The ASE conference must have been a difficult program to set up; the organizers, however, did a wonderful job mashing talks and themes together to make for an excellent experience.

There were several very good talks at the conference; the tutorial on Deep Learning and Natural Language Processing given by Richard Socher, however, was truly outstanding. Richard is a PhD student in Stanford’s Computer Science Department studying under Chris Manning and Andrew Ng. Very rarely do you come across such a polished speaker with complete and casual command of complex material. And while the delivery was impressive, the content was jaw-dropping. Richard walked through the Deep Learning methodology and tools being developed in Stanford’s AI lab and showed a number of areas where Deep Learning techniques are yielding notable results; for example, a system for single-sentence sentiment detection that improved positive/negative sentence classification by 5.4%. Have a look at Andrew Ng’s or Christopher Manning’s lists of publications to get a good idea of the outstanding work being done in this area.

A key concept covered in the tutorial is the ability to represent natural language structures, parse trees for example, in a finite-dimensional vector space, and to build the theoretical and software tools in such a way that the same methods can be used to deconstruct and represent other hierarchies. The following slide indicates how structures built for Natural Language Processing (NLP) can also be used to represent images.

[Slide: parsing images with the same recursive structure used to parse sentences]
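
The composition step at the heart of these models is surprisingly compact. Here is a minimal R sketch of the idea, not the Stanford code: the matrix W and bias b would normally be learned from data and are random here purely for illustration. The key property is that two child vectors map to a parent vector of the same dimension, so the same function applies at every node of the hierarchy.

```r
d <- 4                                             # dimension of the vector space
set.seed(42)
W <- matrix(rnorm(d * 2 * d, sd = 0.1), d, 2 * d)  # composition matrix (learned in practice)
b <- rep(0, d)                                     # bias term

compose <- function(c1, c2) {
  # p = tanh(W [c1; c2] + b): the parent lives in the same space as the children
  as.vector(tanh(W %*% c(c1, c2) + b))
}

# Toy word vectors for the phrase "very good movie"
very <- rnorm(d); good <- rnorm(d); movie <- rnorm(d)

# Compose following the parse tree ((very good) movie)
p1 <- compose(very, good)    # "very good"
p2 <- compose(p1, movie)     # "very good movie"
p2                           # one d-dimensional vector for the whole phrase
```

Because the parent representation lives in the same space as its children, nothing in compose() is specific to words; the same step could just as well merge vectors for image segments, which is exactly the point the slide makes.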

This ability to bring a powerful, integrated set of tools to many different areas seems to be a key reason why neural nets and Deep Learning are suddenly getting so much attention. In a tutorial similar to the one Richard gave on Saturday, Richard and Chris Manning attribute the recent resurgence of Deep Learning to three factors:

  • New methods for unsupervised pre-training: Restricted Boltzmann Machines (RBMs), autoencoders, and contrastive estimation (a minimal autoencoder sketch follows this list)
  • More efficient parameter estimation methods
  • Better understanding of model regularization
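
To make the first of these factors concrete, here is a minimal base-R sketch of unsupervised pre-training with a single autoencoder layer. Everything here is illustrative, with names and sizes invented for the example: a hidden representation is learned by reconstructing unlabeled inputs, and the encoder weights can then initialize the first layer of a supervised network.

```r
set.seed(1)
n <- 200; d <- 8; h <- 3                    # observations, input dim, hidden dim
X <- matrix(rnorm(n * d), n, d)             # unlabeled data

sigmoid <- function(z) 1 / (1 + exp(-z))
W1 <- matrix(rnorm(d * h, sd = 0.1), d, h)  # encoder weights
W2 <- matrix(rnorm(h * d, sd = 0.1), h, d)  # decoder weights
lr <- 0.1                                   # learning rate

for (epoch in 1:500) {
  H    <- sigmoid(X %*% W1)                 # encode
  Xhat <- H %*% W2                          # decode (reconstruct the input)
  E    <- Xhat - X                          # reconstruction error
  gW2  <- t(H) %*% E / n                    # gradient for the decoder
  dH   <- (E %*% t(W2)) * H * (1 - H)       # backpropagate through the sigmoid
  gW1  <- t(X) %*% dH / n                   # gradient for the encoder
  W2   <- W2 - lr * gW2
  W1   <- W1 - lr * gW1
}

mean((sigmoid(X %*% W1) %*% W2 - X)^2)      # mean squared reconstruction error
```

No labels are used anywhere in the loop; after pre-training, W1 would serve as the initialization for the first layer of a supervised network.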

The software used in the NLP and Deep Learning work at Stanford seems to be mostly based on Python and C. (See Theano and SENNA, for example.) So far, it does not appear that much Deep Learning work is being done with R. However, things are looking up. 0xdata’s H2O Deep Learning implementation is showing impressive results, and this algorithm is available through the h2o R package. Also, the R package darch and the very recent deepnet package, both of which offer implementations of Restricted Boltzmann Machines, indicate that Deep Learning researchers are working in R.
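
As a taste of what is already possible, here is a hedged sketch of training an RBM with the deepnet package; the function and argument names follow the package’s CRAN documentation, so check ?rbm.train against your installed version.

```r
# install.packages("deepnet")
library(deepnet)

set.seed(123)
# Toy binary data: 500 observations of 20 binary features
X <- matrix(rbinom(500 * 20, 1, 0.3), 500, 20)

# Train a Restricted Boltzmann Machine with 8 hidden units,
# using one step of contrastive divergence (cd = 1)
r <- rbm.train(X, hidden = 8, numepochs = 5, batchsize = 50,
               learningrate = 0.1, cd = 1)

# Propagate the data up through the trained RBM to obtain the
# learned hidden representation -- the "pre-trained" features
H <- rbm.up(r, X)
dim(H)   # 500 x 8
```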

Finally, for a quick overview of the area, have a look at the book Deep Learning: Methods and Applications by Li Deng and Dong Yu of Microsoft Research, which is available online.
