WEIGHTED-MAJORITY
Name: WEIGHTED-MAJORITY

Description:
The WEIGHTED-MAJORITY algorithm addresses the task of on-line
learning.
Given a set of prediction rules (or advice from several experts), the
algorithm maintains a weight for each rule (or expert), specifying
how to form the algorithm's prediction from the set of predictions.
The algorithm runs in rounds. In every round
- the predictions of all rules are calculated first,
- then these predictions are compared to the real outcome, and
- finally the weights are updated by decreasing the weights of the
rules that predicted wrong.
So the algorithm favours rules with high accuracy.
In every round the prediction is made before the real outcome can be
observed. The aim is to make as few mistakes as possible.
Let
- wi denote the weight of rule i,
- n denote the number of rules, and
- Hi(e) denote the prediction of rule number i for example e.
The algorithm for a binary prediction task, where the task is
to predict 0 or 1, e.g. whether an instance belongs to a concept or
not, or whether it will rain tomorrow:
- For i = 1 to n do: wi = 1
- Repeat
- Read an unclassified example e.
- W0 = Σ{i: Hi(e)=0} wi
- W1 = Σ{i: Hi(e)=1} wi
- If (W0 > W1) output prediction 0.
- If (W0 < W1) output prediction 1.
- If (W0 = W1) output a random prediction.
- Compare the prediction of every single rule with the true outcome,
and for each rule i that predicted wrong:
wi = β * wi, with β ∈ [0, 1[.
β is a fixed parameter, specifying how drastically to punish
wrong predictions.
By choosing β = 0 we obtain the CANDIDATE-ELIMINATION
algorithm, where each hypothesis inconsistent with an example is
permanently deleted.
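
To make the procedure concrete, here is a minimal sketch of the algorithm
in Python, assuming the rules' votes are supplied as a list of 0/1
predictions per round; the class and method names (WeightedMajority,
predict, update) are illustrative and not part of the original description.

    import random

    class WeightedMajority:
        # A minimal sketch of WEIGHTED-MAJORITY for binary prediction.

        def __init__(self, n_rules, beta=0.5):
            # beta in [0, 1[ controls how drastically wrong predictions
            # are punished.
            assert 0 <= beta < 1
            self.beta = beta
            # Initialise every weight to 1 ("For i = 1 to n do: wi = 1").
            self.weights = [1.0] * n_rules

        def predict(self, rule_predictions):
            # rule_predictions: one 0/1 vote per rule, i.e. [H1(e), ..., Hn(e)].
            w0 = sum(w for w, p in zip(self.weights, rule_predictions) if p == 0)
            w1 = sum(w for w, p in zip(self.weights, rule_predictions) if p == 1)
            if w0 > w1:
                return 0
            if w1 > w0:
                return 1
            return random.randint(0, 1)  # W0 = W1: output a random prediction

        def update(self, rule_predictions, outcome):
            # Multiply the weight of every rule that predicted wrong by beta.
            for i, p in enumerate(rule_predictions):
                if p != outcome:
                    self.weights[i] *= self.beta

    # One round: collect the rules' votes, predict, observe the outcome, update.
    wm = WeightedMajority(n_rules=3, beta=0.5)
    votes = [1, 0, 1]              # H1(e), H2(e), H3(e)
    guess = wm.predict(votes)      # weighted vote over the rules
    wm.update(votes, outcome=1)    # only rule 2 predicted wrong and is penalised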
Dm Step: On-Line Learning
Method Type: Algorithm