Loss Function
Name: Loss Function

Description:
Loss functions are used to map the prediction or classification of a
single example to a real-valued error, so that the errors of a
hypothesis, e.g. one output by a learning algorithm after reading a
set of classified examples, can be summed up.
To evaluate the quality of a hypothesis, a part of the set of all
available classified examples has to be separated from the examples
used for training. This set of separated examples is called the test
set. This general method is suited to both concept learning and
function approximation. The hypothesis chosen by the learning
algorithm, after reading the set of remaining examples, is then used
to classify the test set's examples, or to predict the target values
of these examples, depending on the learning scenario.
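
As an illustration of this procedure, a minimal Python sketch; the
function names, the representation of a classified example as a pair
(x, t(x)), and the 25% test fraction are assumptions made for this
example, not part of the source:

    import random

    def train_test_split(examples, test_fraction=0.25, seed=0):
        # `examples` is assumed to be a list of (x, t) pairs, where t
        # is the correct target value t(x) for the input x.
        shuffled = list(examples)
        random.Random(seed).shuffle(shuffled)
        n_test = int(len(shuffled) * test_fraction)
        # the test set is separated from the examples used to train
        return shuffled[n_test:], shuffled[:n_test]

    def total_loss(hypothesis, test_set, Q):
        # sum the per-example losses over the whole test set; Q is a
        # variant of the loss function Q(x, h) defined below that takes
        # the predicted value h(x) and the correct value t(x) directly
        return sum(Q(hypothesis(x), t) for x, t in test_set)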
The error of each single prediction or classification is now measured
by the loss function Q. Simple and widespread forms of Q are (both are
implemented in the sketch after the list):
- For classification:

  Q(x, h) := 1 if h(x) ≠ t(x)
             0 if h(x) = t(x)
- For function approximation:
  Q is often chosen as the squared difference between the predicted
  value h(x) and the correct value t(x):

  Q(x, h) := (t(x) - h(x))²
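
Both forms translate directly into code. A minimal sketch (the
function names are illustrative, not from the source); each takes the
predicted value h(x) and the correct value t(x):

    def zero_one_loss(prediction, target):
        # 0-1 loss for classification: 1 if h(x) ≠ t(x), 0 if h(x) = t(x)
        return 1 if prediction != target else 0

    def squared_loss(prediction, target):
        # squared loss for function approximation: (t(x) - h(x))^2
        return (target - prediction) ** 2

With the earlier sketch, total_loss(h, test_set, zero_one_loss) would
count the misclassifications of a hypothesis h on the test set, while
total_loss(h, test_set, squared_loss) would give its summed squared
error.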