
Why Is SVM Effective on High Dimensional Data?

  • The complexity of the trained classifier is characterized by the number of support vectors rather than by the dimensionality of the data
  • The support vectors are the essential or critical training examples: they lie closest to the decision boundary (the maximum marginal hyperplane, MMH)
  • If all other training examples were removed and training were repeated, the same separating hyperplane would be found
  • The number of support vectors found can be used to compute an (upper) bound on the expected error rate of the SVM classifier, which is independent of the data dimensionality
  • Thus, an SVM with a small number of support vectors can have good generalization, even when the dimensionality of the data is high
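The claims above can be checked empirically. The sketch below (a minimal illustration assuming scikit-learn; the synthetic dataset and parameters are illustrative choices, not from the slide) trains a linear SVM on 1,000-dimensional data, counts the support vectors, and then retrains on the support vectors alone to show they determine the decision boundary:

```python
# Illustrative sketch, assuming scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# High-dimensional data: 200 samples, 1,000 features.
X, y = make_classification(n_samples=200, n_features=1000,
                           n_informative=10, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
n_sv = int(clf.n_support_.sum())  # total support vectors across both classes
print(f"features: {X.shape[1]}, support vectors: {n_sv}")

# Retrain using only the support vectors: the non-support examples do not
# shape the boundary, so predictions should agree with the original model.
sv = clf.support_
clf_sv = SVC(kernel="linear", C=1.0).fit(X[sv], y[sv])
agreement = (clf.predict(X) == clf_sv.predict(X)).mean()
print(f"prediction agreement after retraining on SVs only: {agreement:.2f}")
```

Typically the support vectors number far fewer than the 1,000 input dimensions, and the retrained model agrees with the original, matching the bullets above.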
