
Concept Formation and Knowledge Revision: A Demand-Driven Approach to Representation Change (PhD Thesis by Stefan Wrobel)

A fundamental assumption of work in Artificial Intelligence and Machine Learning is that knowledge is expressed in a computer with the help of knowledge representations. Since the proper choice of such representations is a difficult task that fundamentally affects the capabilities of a system, the problem of automatic representation change is an important topic in current research in Machine Learning.

In this thesis, our particular perspective on this topic is to examine representation change as a concept formation task. Regarding AI as an interdisciplinary field, our work draws on existing psychological results about the nature of human concepts and concept formation to determine the scope of concept formation phenomena and to identify potential components of computational concept formation models. On this view, computational concept formation can usefully be understood as a process that is triggered in a demand-driven fashion by the representational needs of the learning system, and constrained by the particular context in which the demand for a new concept arises.

As the basis for our computational approach, we have selected a first-order logical representation formalism. We formally examine the properties of our representation, which includes selected higher-order logical statements and handles inconsistencies gracefully, and show that it is a sound basis for our concept formation approach. As the relevant context for concept formation, we identify the knowledge revision activities of the learning system. We present a detailed analysis of the revision problem for first-order theories, including a base revision operator which we prove to be minimal. We then show how concept formation can be triggered in a demand-driven fashion from within the knowledge revision process whenever the existing representation does not permit a plausible reformulation of an exception set, and we demonstrate the usefulness of the approach both theoretically and empirically.

The work concludes with a discussion of some fundamental criticisms of AI work, dealing with the inability of symbolic systems to form "truly new" features for concepts. We show that groundedness does not capture the essential properties required of a system's connection to the world, and propose the concept of embeddedness instead.
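As a very loose illustration of the triggering idea summarized above, the following Python sketch restricts a faulty rule by its exception set and introduces a new concept only when no existing predicate characterizes the exceptions. All names (covers, revise, the toy predicates and instances) are invented for this example; it is a propositional-style toy and does not reproduce the first-order representation or the minimal base revision operator developed in the thesis.

# Illustrative sketch only (not from the thesis): a toy, propositional-style
# rendering of demand-driven concept formation inside knowledge revision.
# All names below are invented for this example.

def covers(extension, exceptions, instances):
    """Here, an existing predicate 'plausibly reformulates' the exception set
    only if its extension coincides with the exceptions among known instances."""
    return (extension & instances) == exceptions

def revise(rule, exceptions, instances, predicates):
    """Restrict a rule with exceptions; form a new concept if nothing fits."""
    head, body = rule
    for name, extension in predicates.items():
        if covers(extension, exceptions, instances):
            # Reuse an existing concept as an exception clause.
            return head, body + [f"not {name}(X)"]
    # Demand-driven concept formation: no existing predicate characterizes the
    # exceptions, so introduce a fresh one whose extension is the exception set.
    new_concept = f"concept_{len(predicates) + 1}"
    predicates[new_concept] = set(exceptions)
    return head, body + [f"not {new_concept}(X)"]

if __name__ == "__main__":
    instances = {"tweety", "opus", "polly", "rex"}      # known objects
    exceptions = {"opus"}                               # objects contradicting the rule
    predicates = {"penguin": {"opus"}, "dog": {"rex"}}  # existing concepts
    head, body = revise(("flies(X)", ["bird(X)"]), exceptions, instances, predicates)
    print(f"{head} :- {', '.join(body)}")
    # Prints: flies(X) :- bird(X), not penguin(X)
    # With an exception set that no existing predicate matches, a new predicate
    # would have been added to 'predicates' and used instead.

In the thesis itself, the analogous decision is taken from within the knowledge revision process, and the test for a plausible reformulation of the exception set is considerably more refined than the extensional equality used in this toy example.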