Multiplying many probabilities, each between 0 and 1 by definition, can result in floating-point underflow.
Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities.
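A minimal sketch of the underflow problem and the log-space fix (the numbers are illustrative, not from the slide):

```python
import math

# Hypothetical example: 1,000 small conditional probabilities.
probs = [1e-5] * 1000

# Direct multiplication underflows to 0.0 in double precision,
# since 1e-5 ** 1000 = 1e-5000 is far below the smallest double.
product = 1.0
for p in probs:
    product *= p
print(product)      # 0.0 -- underflow

# Summing logs keeps the score representable.
log_score = sum(math.log(p) for p in probs)
print(log_score)    # approx. -11512.9
```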
The class with the highest final unnormalized log-probability score is still the most probable.
Note that the model is now just a max over a sum of weights; a sketch of the resulting decision rule follows.
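As a sketch, the log-space Naive Bayes decision rule (assuming classes c_j in C and observed features x_1, …, x_n; the symbols are illustrative, not from the slide):

```latex
c_{NB} = \underset{c_j \in C}{\arg\max}
  \left[ \log P(c_j) + \sum_{i=1}^{n} \log P(x_i \mid c_j) \right]
```

Because log is monotonic, the arg max is unchanged; each log P term acts as an additive weight.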