Training Neural Networks with Backpropagation: The Original Publication

May 17, 2017
(This article was first published on Data R Value, and kindly contributed to R-bloggers)

Neural networks have been an important area of scientific study, shaped by contributions from disciplines such as mathematics, biology, psychology, and computer science.
The study of neural networks leapt from theory to practice with the emergence of computers.
Training a neural network by adjusting the weights of its connections is computationally very expensive, so its application to practical problems had to wait until the mid-1980s, when a more efficient algorithm was discovered.

That algorithm is now known as the backpropagation of errors, or simply backpropagation.
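To make the idea concrete, here is a minimal sketch of backpropagation in base R: a single hidden layer of sigmoid units trained on XOR by gradient descent on the squared error, i.e. the update w <- w - epsilon * dE/dw. The network size, learning rate, number of epochs, and all variable names are illustrative assumptions, not taken from the paper.

sigmoid <- function(x) 1 / (1 + exp(-x))

set.seed(42)
X <- matrix(c(0, 0, 1, 1,
              0, 1, 0, 1), ncol = 2)   # XOR inputs, one row per example
y <- matrix(c(0, 1, 1, 0), ncol = 1)   # XOR targets

n_hidden <- 4                          # assumed hidden-layer size
lr       <- 0.5                        # assumed learning rate (epsilon)
W1 <- matrix(rnorm(2 * n_hidden, sd = 0.5), 2, n_hidden)
b1 <- rep(0, n_hidden)
W2 <- matrix(rnorm(n_hidden, sd = 0.5), n_hidden, 1)
b2 <- 0

for (epoch in 1:10000) {
  ## Forward pass
  h <- sigmoid(sweep(X %*% W1, 2, b1, "+"))   # hidden activations
  o <- sigmoid(h %*% W2 + b2)                 # output activations

  ## Backward pass: propagate error derivatives layer by layer
  d_o <- (o - y) * o * (1 - o)                # dE/d(net) at the output
  d_h <- (d_o %*% t(W2)) * h * (1 - h)        # dE/d(net) at the hidden layer

  ## Gradient-descent weight updates
  W2 <- W2 - lr * t(h) %*% d_o
  b2 <- b2 - lr * sum(d_o)
  W1 <- W1 - lr * t(X) %*% d_h
  b1 <- b1 - lr * colSums(d_h)
}

## Final predictions: should approach the XOR targets 0, 1, 1, 0
h <- sigmoid(sweep(X %*% W1, 2, b1, "+"))
round(sigmoid(h %*% W2 + b2), 3)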

One of the most cited articles on this algorithm is:

Learning representations by back-propagating errors
David E. Rumelhart, Geoffrey E. Hinton & Ronald J. Williams
Nature 323, 533–536 (9 October 1986)

Although it is a very technical article, anyone who wants to study and understand neural networks should work through this material.

I share the entire article at:
https://github.com/pakinja/Data-R-Value
