Training Neural Networks with Backpropagation: Original Publication


Neural networks have long been an important area of scientific study, shaped by several disciplines, including mathematics, biology, psychology, and computer science.
The study of neural networks leapt from theory to practice with the emergence of computers.
Training a neural network by adjusting the weights of its connections is computationally expensive, so its application to practical problems had to wait until the mid-1980s, when a more efficient training algorithm was popularized.

That algorithm is now known as the backpropagation of errors, or simply backpropagation.
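
To make the idea concrete, here is a minimal sketch of backpropagation in base R. The network size, learning rate, sigmoid activations, squared-error loss, and XOR data are all illustrative choices of mine, not taken from the original paper: a small two-layer network is trained by propagating the output error backwards through the layers and updating the weights by gradient descent.

# Minimal backpropagation sketch in base R (illustrative assumptions:
# one hidden layer, sigmoid units, squared-error loss, XOR data).
sigmoid <- function(x) 1 / (1 + exp(-x))

set.seed(1)
X <- matrix(c(0,0, 0,1, 1,0, 1,1), ncol = 2, byrow = TRUE)  # inputs
y <- matrix(c(0, 1, 1, 0), ncol = 1)                        # XOR targets

n_hidden <- 3
W1 <- matrix(rnorm(2 * n_hidden, sd = 0.5), 2, n_hidden)  # input -> hidden
b1 <- rep(0, n_hidden)
W2 <- matrix(rnorm(n_hidden, sd = 0.5), n_hidden, 1)      # hidden -> output
b2 <- 0
lr <- 0.5                                                 # learning rate

for (epoch in 1:10000) {
  # Forward pass: compute the activations layer by layer
  H   <- sigmoid(sweep(X %*% W1, 2, b1, `+`))  # hidden activations
  out <- sigmoid(H %*% W2 + b2)                # network output

  # Backward pass: chain rule sends the error derivative backwards
  d_out <- (out - y) * out * (1 - out)         # delta at the output layer
  d_hid <- (d_out %*% t(W2)) * H * (1 - H)     # delta at the hidden layer

  # Gradient-descent updates for weights and biases
  W2 <- W2 - lr * t(H) %*% d_out
  b2 <- b2 - lr * sum(d_out)
  W1 <- W1 - lr * t(X) %*% d_hid
  b1 <- b1 - lr * colSums(d_hid)
}

round(out, 3)  # should approach the XOR targets 0, 1, 1, 0

Each iteration performs the two phases described in the paper: a forward pass that computes the activations, and a backward pass that applies the chain rule to carry the error derivatives from the output layer toward the input, yielding the gradients used for the weight updates.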

One of the most cited articles on this algorithm is:

Learning representations by back-propagating errors
David E. Rumelhart, Geoffrey E. Hinton & Ronald J. Williams
Nature 323, 533–536 (9 October 1986)

Although it is a very technical article, anyone who wants to study and understand neural networks should work through this material.

I share the entire article at:
https://github.com/pakinja/Data-R-Value

