
Back-Propagation Learning

  • The errors (and therefore the learning) propagate backwards from the output layer to the hidden layers.
  • Learning at the output layer is the same as for a single-layer perceptron:

        W_j ← W_j + α × Err × g'(in) × x_j
  • Hidden-layer neurons are assigned a share of the "blame" for the error (back-propagation of error), with greater responsibility given to neurons connected by stronger weights.
  • The back-propagated error is then used to update the hidden-layer weights, so the update rule keeps the same form at every layer (a minimal sketch follows this list).
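
The update can be illustrated for a network with one hidden layer as below. This is a minimal sketch, assuming a sigmoid activation g and a squared-error loss; the names (backprop_step, W_hidden, W_out, delta_out, delta_hidden) are illustrative and not taken from the slide.

import numpy as np

def g(z):
    """Sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

def g_prime(z):
    """Derivative of the sigmoid, i.e. g'(in)."""
    s = g(z)
    return s * (1.0 - s)

def backprop_step(x, y, W_hidden, W_out, alpha=0.1):
    """One back-propagation weight update for a single-hidden-layer network.

    x        : input vector, shape (n_in,)
    y        : target output vector, shape (n_out,)
    W_hidden : hidden-layer weights, shape (n_hidden, n_in)
    W_out    : output-layer weights, shape (n_out, n_hidden)
    """
    # Forward pass
    in_hidden = W_hidden @ x      # weighted input to hidden units
    a_hidden = g(in_hidden)       # hidden activations
    in_out = W_out @ a_hidden     # weighted input to output units
    a_out = g(in_out)             # network output

    # Output layer: same rule as the single-layer perceptron,
    # delta_k = Err_k * g'(in_k)
    err = y - a_out
    delta_out = err * g_prime(in_out)

    # Hidden layer: each unit's "blame" is the sum of the output deltas
    # it feeds into, weighted by the connecting weights and scaled by g'(in_j)
    delta_hidden = g_prime(in_hidden) * (W_out.T @ delta_out)

    # Weight updates: W <- W + alpha * delta * input_activation
    W_out = W_out + alpha * np.outer(delta_out, a_hidden)
    W_hidden = W_hidden + alpha * np.outer(delta_hidden, x)
    return W_hidden, W_out

Note how the hidden-layer update reuses the output-layer rule: only the error term changes, being replaced by the weighted, back-propagated deltas.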
