Perceptron Learning

  • The squared error for an example with input x and desired output y is:
  • \[ E = \frac{1}{2} Err^{2} = \frac{1}{2} (y-g_{w}(x))^{2} \]
  • Perform optimization search by gradient descent:
  • \[ \frac{\partial E}{\partial W_{j}} = Err \times \frac{\partial Err}{\partial W_{j}} = Err \times \frac{\partial }{\partial W_{j}} \Big( y - g \Big( \sum_{i=0}^{n} W_{i}x_{i} \Big) \Big) = - Err \times g'(in) \times x_{j} \]
  • Simple weight update rule (see the sketch after this list):
  • \[ W_{j} \leftarrow W_{j} + \alpha \times Err \times g'(in) \times x_{j} \]
  • Positive error ⇒ increase network output
      • increase weights on positive inputs,
      • decrease weights on negative inputs.
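
A minimal sketch of this update rule, assuming a sigmoid activation for g (whose derivative is g(z)(1 − g(z))) and NumPy vectors with x[0] = 1 as the bias input; the learning rate value and the helper name `perceptron_update` are illustrative, not part of the slide:

```python
import numpy as np

def g(z):
    """Sigmoid activation; its derivative is g(z) * (1 - g(z))."""
    return 1.0 / (1.0 + np.exp(-z))

def perceptron_update(w, x, y, alpha=0.1):
    """One gradient-descent step on E = 0.5 * (y - g(w . x))**2."""
    in_ = np.dot(w, x)                    # in = sum_j W_j * x_j
    err = y - g(in_)                      # Err = y - g_w(x)
    g_prime = g(in_) * (1.0 - g(in_))    # g'(in) for the sigmoid
    return w + alpha * err * g_prime * x  # W_j <- W_j + alpha * Err * g'(in) * x_j

# Example: repeated updates drive the output toward the desired value y = 1
w = np.zeros(3)
x = np.array([1.0, 2.0, -1.0])           # x[0] = 1 is the bias term
for _ in range(100):
    w = perceptron_update(w, x, y=1.0)
print(w, g(np.dot(w, x)))                 # network output approaches 1
```

Note that a positive err scales the step by each x_j, which is exactly the behavior in the last bullet: weights on positive inputs grow, weights on negative inputs shrink.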
