Lazy Learning

Publication Aha/97a: Lazy Learning
Name Lazy Learning
Description

The notion of Lazy Learning subsumes a family of algorithms that

  • store the complete set of given (classified) examples of an underlying example language, and
  • delay all further computation until requests to classify yet unseen instances are received.

Of course, the time required to classify an unseen instance is higher if all computation has to be done when (and each time) a request occurs. The advantage of such algorithms is that they do not have to commit to a single hypothesis assigning a fixed class to each instance of the example language; instead, they can use different approximations of the target concept/function, each constructed to be locally good. In this way, examples similar to a requested instance receive higher attention during the classification process.
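As a minimal sketch of this store-and-defer idea (assuming Python, real-valued example vectors, and plain Euclidean distance as the similarity measure; class and method names are illustrative, not from the publication), a lazy learner does nothing at training time and performs all computation when a classification request arrives:

```python
import math

class LazyNearestNeighbor:
    """Stores all classified examples; no work is done until a query arrives."""

    def __init__(self):
        self.examples = []  # list of (vector, class_label) pairs

    def store(self, vector, label):
        # "Training" is nothing more than remembering the example.
        self.examples.append((vector, label))

    def classify(self, query):
        # All computation happens here, at request time: find the stored
        # example most similar to the query and return its class.
        def distance(x):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, query)))
        _, label = min(self.examples, key=lambda ex: distance(ex[0]))
        return label
```

Because the hypothesis is never made explicit, each query is answered by a locally constructed decision based on the most similar stored examples.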

This method requires a similarity measure to evaluate the relevance of stored examples for classifying an unseen instance. If examples are described by real-valued vectors, similarity can, for example, be measured by the usual vector distance, which raises the question of whether all attributes should have the same impact. Reducing the harmful impact of irrelevant attributes, and assigning each attribute its proper degree of impact, are key issues when using this method.
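One common way to give attributes different degrees of impact (a sketch of the general idea, not a scheme prescribed by the publication) is a weighted Euclidean distance, where a weight of zero removes an irrelevant attribute's influence entirely:

```python
import math

def weighted_distance(x, y, weights):
    """Weighted Euclidean distance between vectors x and y.

    A weight of 0 eliminates an attribute's impact; larger weights
    increase an attribute's influence on the similarity judgment.
    """
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)))
```

For instance, with weights `(1, 0)` the second attribute is ignored, so two examples differing only in that attribute are judged maximally similar.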
Specialization k-NEAREST NEIGHBOR
Dm Step Concept Learning
Function Approximation
Method Type Method