Classification Is to Derive the Maximum A Posteriori

  • Let D be a training set of tuples and their associated class labels, where each tuple is represented by an n-D attribute vector X = (x1, x2, …, xn)
  • Suppose there are m classes C1, C2, …, Cm.
  • Classification amounts to deriving the maximum a posteriori, i.e., finding the class Ci with the maximal P(Ci|X)
  • This can be derived from Bayes’ theorem

\[P(C_{i}|X)=\frac{P(X|C_{i})P(C_{i})}{P(X)}\]

  • Since P(X) is constant for all classes, only

\[P(X|C_{i})\,P(C_{i})\]

needs to be maximized (a short code sketch of this decision rule follows below)
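To make the decision rule concrete, here is a minimal Python sketch (not part of the original slide). It picks the class Ci maximizing P(X|Ci)P(Ci), estimating the probabilities by relative frequencies and, as an illustrative assumption beyond this slide, treating the attributes as conditionally independent given the class (the naive Bayes assumption). All function names and the toy data are hypothetical.

from collections import Counter, defaultdict

def train(D):
    # D: list of (X, label) pairs, where X is an n-D tuple of categorical attribute values
    class_counts = Counter(label for _, label in D)
    # attr_counts[c][j][v] = number of class-c tuples whose j-th attribute equals v
    attr_counts = defaultdict(lambda: defaultdict(Counter))
    for X, c in D:
        for j, v in enumerate(X):
            attr_counts[c][j][v] += 1
    return class_counts, attr_counts, len(D)

def classify(X, class_counts, attr_counts, n_total):
    # Return the class Ci maximizing P(X|Ci) * P(Ci); P(X) is dropped as a common factor
    best_class, best_score = None, -1.0
    for c, n_c in class_counts.items():
        score = n_c / n_total                      # prior estimate of P(Ci)
        for j, v in enumerate(X):
            score *= attr_counts[c][j][v] / n_c    # frequency estimate of P(x_j | Ci)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Toy usage (illustrative data only)
D = [(("sunny", "hot"), "no"), (("rain", "mild"), "yes"), (("rain", "cool"), "yes")]
model = train(D)
print(classify(("rain", "mild"), *model))          # prints "yes" for this toy data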

