Description |
Most hypothesis languages can be regarded as a specific subset
of all functions.
In concept learning, each hypothesis maps the example
language to a set of two possible values, "belongs to the
concept" and "does not belong to the concept".
Formally this can be stated as
h: LE → {0, 1},
where h is the hypothesis, LE denotes the example language
(or the universe), and "1" and "0" encode
"belongs to the concept" and "does not belong to the
concept".
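The mapping h: LE → {0, 1} can be sketched as an ordinary function. The example language (integers) and the concept ("divisible by 3") used here are illustrative assumptions, not part of the original text.

```python
# A hypothesis h maps instances of the example language LE to {0, 1}.
# Here LE is the integers and the concept is "divisible by 3"
# (both chosen only for illustration).

def h(x: int) -> int:
    """Return 1 if x belongs to the concept, 0 otherwise."""
    return 1 if x % 3 == 0 else 0

print([h(x) for x in range(1, 7)])  # [0, 0, 1, 0, 0, 1]
```

Any other concept over the same example language is simply a different function with the same signature.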
In special cases, such as SVMs or certain neural networks,
linear functions are used as hypotheses for concept learning.
They describe a hyperplane in a usually high-dimensional space
that separates positive from negative examples.
To determine whether an instance belongs to the concept
according to such a hypothesis, it suffices to find out on which
side of the hyperplane the instance is located.
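The side-of-hyperplane test amounts to checking the sign of w·x + b. A minimal sketch, where the weight vector w and bias b defining the hyperplane are assumed to be given (e.g. by a trained SVM):

```python
# A linear hypothesis: the hyperplane is the set of points with
# w·x + b = 0; the sign of w·x + b tells us on which side x lies.

def linear_h(x, w, b):
    """Return 1 ("belongs to the concept") if x lies on the
    non-negative side of the hyperplane, 0 otherwise."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0

w, b = [1.0, -1.0], 0.0        # illustrative hyperplane x1 = x2
print(linear_h([2.0, 1.0], w, b))  # 1: positive side
print(linear_h([0.0, 3.0], w, b))  # 0: negative side
```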
In function approximation, hypotheses map the example language
to continuous domains, e.g.
h: LE → ℝ.
These continuous values may, of course, represent probabilities
or confidences for an example belonging to a learned concept.
In this sense, concept learning is a special case of
the task of function approximation.
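This reduction can be made concrete: a real-valued hypothesis whose output is read as a confidence becomes a concept-learning hypothesis once it is thresholded. The logistic function and the threshold of 0.5 below are illustrative choices, not prescribed by the text.

```python
import math

def h_real(x, w, b):
    """A real-valued hypothesis h: LE -> R. Here the linear score
    w·x + b is squashed by the logistic function, so the output in
    (0, 1) can be read as a confidence (illustrative choice)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-score))

def h_concept(x, w, b, threshold=0.5):
    """Thresholding the continuous output recovers a {0, 1}-valued
    concept-learning hypothesis."""
    return 1 if h_real(x, w, b) >= threshold else 0
```

With this view, any learner for the function-approximation task can be turned into a concept learner by fixing a threshold.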
|