Fall 1999
Instructor: Genevieve Orr
Lecture Notes prepared by Genevieve Orr, Nici Schraudolph, and Fred Cummins
Our goal is to introduce students to a powerful class of models: the Neural Network. In fact, this is a broad term covering many diverse models and approaches. We first motivate networks by analogy to the brain. The analogy is loose, but it serves to introduce the idea of parallel and distributed computation.
We then introduce one kind of network in detail: the feedforward network trained by backpropagation of error. We discuss model architectures, training methods, and data representation issues. We hope to cover everything you need to know to get backpropagation working for you. A range of applications and extensions to the basic model will be presented in the final section of the module.
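To give a flavor of what is to come, here is a minimal sketch of a feedforward network trained by backpropagation of error. The architecture (2 inputs, 4 sigmoid hidden units, 1 sigmoid output), the XOR task, and the learning rate are illustrative assumptions, not taken from the lectures themselves:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR training data (an assumed toy task for illustration)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 1.0  # learning rate (assumed value for this toy problem)
for _ in range(10000):
    # Forward pass: compute hidden and output activations
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error derivative layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)
```

After training, the network's outputs should approach the XOR targets; the lectures below develop each piece of this loop (architecture, error derivatives, and the weight-update rule) in detail.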
Lecture 1: Introduction
Lecture 2: Classification
Lecture 3: Optimizing Linear Networks
Lecture 4: The Backprop Toolbox
Lecture 5: Unsupervised Learning
Lecture 6: Reinforcement Learning
Lecture 7: Advanced Topics
Review for Midterm:
Tutorials:
Simulators and code:
Data Sources:
Related stuff of interest: