Naive Bayes
Name | Naive Bayes
Description | The Naive Bayes Classifier method is applicable to the task of concept learning. It
- reads a set of examples in attribute-value representation,
- uses Bayes' theorem to estimate the posterior probabilities of all possible classifications, and
- for each instance of the example language predicts a classification with the highest posterior probability.
For the Naive Bayes method the following assumption is essential: let x = <x_1, ..., x_n> be an instance of the example language and c ∈ C a possible classification. Then
Prob(x | c) = ∏_{i=1,...,n} Prob(x_i | c).
This assumption is justified if the attribute values are conditionally independent of each other given the classification.
Using this assumption, and since Bayes' theorem gives Prob(c | x) = Prob(c) * Prob(x | c) / Prob(x) with Prob(x) independent of c, the classification c ∈ C with maximum posterior probability Prob(c | x) is the one that maximizes
Prob(c) * ∏_{i=1,...,n} Prob(x_i | c).
The learner estimates the required probabilities from the relative frequencies observed in the example set (see the sketch following this entry).
Dm Step | Concept Learning
Method Type | Method
Theories | Bayesian Learning
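Below is a minimal Python sketch of the procedure described in this entry (the function names and the toy example set are illustrative assumptions, not part of the original entry): the required probabilities are estimated as relative frequencies observed in the example set, and the classification maximizing Prob(c) * ∏ Prob(x_i | c) is predicted. Note that an unseen attribute value makes the product zero; a smoothing step is a common refinement not covered here.

```python
from collections import Counter, defaultdict

def train(examples):
    """Count the frequencies needed to estimate Prob(c) and Prob(x_i | c)."""
    class_counts = Counter()             # class_counts[c] = number of examples with classification c
    value_counts = defaultdict(Counter)  # value_counts[(i, c)][v] = examples of class c whose i-th attribute equals v
    for attributes, c in examples:
        class_counts[c] += 1
        for i, v in enumerate(attributes):
            value_counts[(i, c)][v] += 1
    return class_counts, value_counts

def classify(attributes, class_counts, value_counts):
    """Return the classification maximizing Prob(c) * prod_i Prob(x_i | c)."""
    total = sum(class_counts.values())
    best_c, best_score = None, -1.0
    for c, n_c in class_counts.items():
        score = n_c / total                          # prior Prob(c)
        for i, v in enumerate(attributes):
            score *= value_counts[(i, c)][v] / n_c   # likelihood Prob(x_i | c)
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# Toy example set in attribute-value representation: (outlook, windy) -> classification
examples = [
    (("sunny", "no"), "play"),
    (("sunny", "yes"), "dont_play"),
    (("rainy", "yes"), "dont_play"),
    (("overcast", "no"), "play"),
    (("rainy", "no"), "play"),
]
model = train(examples)
print(classify(("sunny", "no"), *model))  # -> "play"
```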