
Well-Founded Semantics for Extended Logic Programs

A (propositional) extended logic program consists of rules of the form

$c \leftarrow a_1, \ldots, a_n, \mathit{not}\ b_1, \ldots, \mathit{not}\ b_m$

where the $a_i$, $b_j$ and $c$ are propositional literals, i.e., either propositional atoms or such atoms preceded by the classical negation sign. The symbol $\mathit{not}$ denotes negation by failure (weak negation), $\neg$ denotes classical (strong) negation. For convenience we will sometimes use a rule schema to represent a set of propositional rules, namely the set of all ground instances of the schema.
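To make the rule format concrete, the following minimal Python sketch (mine, not from the paper) represents literals as strings, with a leading "-" for classical negation, and stores the weakly negated preconditions of a rule separately from the ordinary ones. All identifiers are illustrative choices.

from dataclasses import dataclass
from typing import Tuple

# A literal is a string; "-a" stands for the classically negated atom a.
Literal = str

def neg(l: Literal) -> Literal:
    """Classical complement: a <-> -a."""
    return l[1:] if l.startswith("-") else "-" + l

@dataclass(frozen=True)
class Rule:
    head: Literal                       # c
    body: Tuple[Literal, ...] = ()      # a_1, ..., a_n   (ordinary preconditions)
    weak: Tuple[Literal, ...] = ()      # b_1, ..., b_m   (preconditions under "not")

# Example: the rule  -flies(tweety) <- penguin(tweety), not flies(tweety)
r = Rule("-flies_tweety", body=("penguin_tweety",), weak=("flies_tweety",))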

Extended logic programs are very useful for knowledge representation purposes, see for instance [1] for a number of illustrative examples. Two major semantics for extended logic programs have been defined: (1) answer set semantics [8], an extension of stable model semantics, and (2) a version of well-founded semantics [19]. The second approach can be viewed as an efficient approximation of the first.

Let us first introduce answer sets. We say a rule $r$ of the form above is defeated by a literal $l$ iff $l = b_i$ for some $i \in \{1, \ldots, m\}$. We say $r$ is defeated by a set of literals $X$ if $X$ contains a literal that defeats $r$. Furthermore, we call the rule obtained by deleting the weakly negated preconditions from $r$ the monotonic counterpart of $r$ and denote it with $Mon(r)$. We also apply $Mon$ to sets of rules with the obvious meaning.
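The notions of defeat and of the monotonic counterpart translate directly into this representation. The helpers below build on the Rule sketch above and are reused in the later sketches; they are, again, only an illustration.

from typing import Set

def defeated(r: Rule, X: Set[Literal]) -> bool:
    """r is defeated by X iff some weakly negated precondition b_i of r is in X."""
    return any(b in X for b in r.weak)

def mon(r: Rule) -> Rule:
    """Monotonic counterpart Mon(r): drop all weakly negated preconditions."""
    return Rule(r.head, r.body, ())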

A set of literals $X$ is an answer set of $P$ iff $X = \gamma_P(X)$, where $\gamma_P(X) = Cn(Mon(P_X))$, $P_X$ denotes the set of rules in $P$ not defeated by $X$ (so that $Mon(P_X)$ is the $X$-reduct of $P$), and $Cn(R)$ denotes the set of consequences of a set $R$ of rules without weak negation, i.e., the smallest set of literals closed under $R$ that is also logically closed, where a set containing a complementary pair of literals is identified with $Lit$, the set of all literals.

A literal $l$ is a consequence of a program $P$ under answer set semantics, denoted $P \vdash_{AS} l$, iff $l$ is contained in all answer sets of $P$.

The second major semantics for extended logic programs, well-founded semantics, is an inherently skeptical semantics that refrains from drawing conclusions whenever there is a potential conflict. The original formulation of well-founded semantics for general logic programs by Van Gelder, Ross and Schlipf (1991) is based on a certain partial model. Przymusinski reconstructed this definition in 3-valued logic [18]. The formulation using an anti-monotone operator was first given by Baral and Subrahmanian (1991) for general logic programs together with a corresponding definition for default logic. The straightforward extension of this formulation (respectively, the restriction of the default logic definition) to extended logic programs that will be introduced now was used by several authors, e.g. [1,13]. Note that in this paper we will only consider the literals that are true in the corresponding 3-valued semantics.

Like answer set semantics the well-founded semantics for extended logic programs is based on the operator $\gamma_P$. However, the operator is used in a totally different way. Since $\gamma_P$ is anti-monotone the function $\gamma_P^2 = \gamma_P \circ \gamma_P$ is monotone. According to the famous Knaster-Tarski theorem [21] every monotone operator has a least fixpoint. The set of well-founded conclusions of a program $P$, denoted $WFS(P)$, is defined to be this least fixpoint of $\gamma_P^2$. The fixpoint can be approached from below by iterating $\gamma_P^2$ on the empty set. In case $P$ is finite this iteration is guaranteed to actually reach the fixpoint.

The intuition behind this use of the operator is as follows: whenever $\gamma_P$ is applied to a set of literals $X$ known to be true it produces the set of all literals that are still potentially derivable. Applying $\gamma_P$ again to such a set of potentially derivable literals produces a set of literals known to be true, often larger than the original set $X$. Starting with the empty set and iterating until the fixpoint is reached thus produces a set of true literals. It can be shown that every well-founded conclusion is a conclusion under the answer set semantics. Well-founded semantics can thus be viewed as an approximation of answer set semantics.
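Assuming the Rule sketch above, the operator $\gamma_P$ and the iteration of $\gamma_P^2$ from the empty set can be written out as follows. Here cn implements the consequence operator $Cn$ used in this section (closure under weak-negation-free rules, collapsing to $Lit$ when inconsistent); the function names are mine.

def cn(rules, lit: Set[Literal]) -> Set[Literal]:
    """Cn(R) for weak-negation-free rules R: the least set of literals closed
    under the rules, collapsed to Lit if it contains a complementary pair."""
    closed: Set[Literal] = set()
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.head not in closed and all(a in closed for a in r.body):
                closed.add(r.head)
                changed = True
    return set(lit) if any(neg(l) in closed for l in closed) else closed

def literals_of(P) -> Set[Literal]:
    """Lit: every literal occurring in P together with its complement."""
    ls = {l for r in P for l in (r.head, *r.body, *r.weak)}
    return ls | {neg(l) for l in ls}

def gamma(P, X: Set[Literal], lit: Set[Literal]) -> Set[Literal]:
    """gamma_P(X) = Cn(Mon(P_X)): consequences of the monotonic counterparts
    of the rules not defeated by X."""
    return cn([mon(r) for r in P if not defeated(r, X)], lit)

def wfs(P) -> Set[Literal]:
    """Well-founded conclusions: least fixpoint of gamma_P o gamma_P,
    reached by iterating from the empty set (P is assumed finite)."""
    lit = literals_of(P)
    X: Set[Literal] = set()
    while True:
        Y = gamma(P, gamma(P, X, lit), lit)
        if Y == X:
            return X
        X = Y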

Unfortunately it turns out that for many programs the set of well-founded conclusions is extremely small and provides a very poor approximation of answer set semantics. Consider the following program $P_1$ which has also been discussed by Baral and Gelfond (1994):

1) $b \leftarrow \mathit{not}\ \neg b$
2) $a \leftarrow \mathit{not}\ \neg a$
3) $\neg a \leftarrow \mathit{not}\ a$

The set of well-founded conclusions is empty since $\gamma_{P_1}(\emptyset)$ equals $Lit$, the set of all literals, and the $Lit$-reduct of $P_1$ contains no rule at all. This is surprising since, intuitively, the conflict between 2) and 3) has nothing to do with $\neg b$ and $b$.
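For illustration, here is this program and the computation of its (empty) set of well-founded conclusions in the sketch from above, with literals encoded as strings and "-" for classical negation.

P1 = [
    Rule("b",  weak=("-b",)),   # 1) b  <- not -b
    Rule("a",  weak=("-a",)),   # 2) a  <- not -a
    Rule("-a", weak=("a",)),    # 3) -a <- not a
]

print(wfs(P1))   # set(): gamma_P1({}) = Lit and gamma_P1(Lit) = {}, so nothing is concluded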

This problem arises whenever the following conditions hold:

  1. a complementary pair of literals $l$, $\neg l$ is provable from the monotonic counterparts of the rules of a program $P$, and
  2. there is at least one proof for each of the complementary literals whose rules are not defeated by $Cn(P_s)$, where $P_s$ consists of the ``strict'' rules in $P$, i.e., those without negation as failure.
In this case well-founded semantics concludes $l$ iff $l \in Cn(P_s)$. It should be obvious that such a situation is not just a rare limiting case. To the contrary, it can be expected that many commonsense knowledge bases will give rise to such undesired behaviour. For instance, assume a knowledge base contains information that birds normally fly and penguins normally don't, expressed as the set of ground instances of the following rule schemata:
1) $\mathit{flies}(x) \leftarrow \mathit{bird}(x), \mathit{not}\ \neg \mathit{flies}(x)$
2) $\neg \mathit{flies}(x) \leftarrow \mathit{penguin}(x), \mathit{not}\ \mathit{flies}(x)$
Assume further that the knowledge base contains the information that Tweety is a penguin bird, i.e., the facts $\mathit{bird}(Tweety)$ and $\mathit{penguin}(Tweety)$. Now if neither $\mathit{flies}(Tweety)$ nor $\neg \mathit{flies}(Tweety)$ follows from strict rules in the knowledge base we are in the same situation as with $P_1$: well-founded semantics does not draw any ``defeasible'' conclusion, i.e. a conclusion derived from a rule with weak negation in the body, at all.
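To see this collateral damage concretely, the following sketch grounds the two schemata for Tweety and for a second, purely illustrative bird Sam who does not appear in the paper. Even though nothing speaks against Sam flying, the well-founded semantics defined above draws no defeasible conclusion at all.

birds = [
    Rule("bird_tweety"), Rule("penguin_tweety"),       # Tweety is a penguin bird
    Rule("bird_sam"),                                   # Sam is just a bird (illustrative addition)
    # ground instances of the two schemata, for Tweety and for Sam:
    Rule("flies_tweety",  body=("bird_tweety",),    weak=("-flies_tweety",)),
    Rule("-flies_tweety", body=("penguin_tweety",), weak=("flies_tweety",)),
    Rule("flies_sam",     body=("bird_sam",),       weak=("-flies_sam",)),
    Rule("-flies_sam",    body=("penguin_sam",),    weak=("flies_sam",)),
]

print(wfs(birds))   # only the facts {'bird_tweety', 'penguin_tweety', 'bird_sam'}:
                    # not even flies_sam is concluded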

We want to show that a minor reformulation of the fixpoint operator can overcome this intolerable weakness and leads to much better results. Consider the following operator:

$\Gamma_P(X) = Cl(Mon(P_X))$

where $Cl(R)$ denotes the minimal set of literals closed under the (classical) rules $R$. $\Gamma_P$ is thus like $\gamma_P$ without the requirement of logical closedness. Now define the operator

$X \mapsto \gamma_P(\Gamma_P(X))$

which is monotone since both $\gamma_P$ and $\Gamma_P$ are anti-monotone. Again we iterate this operator on the empty set to obtain the well-founded conclusions of a program $P$, which we will denote $WFS'(P)$.
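A sketch of the modified construction, reusing the helpers above: cl implements $Cl$ (the minimal closure, without the collapse to $Lit$), Gamma implements $\Gamma_P$, and wfs_new iterates $\gamma_P \circ \Gamma_P$ from the empty set. The names are mine.

def cl(rules) -> Set[Literal]:
    """Cl(R): the minimal set of literals closed under the rules R,
    without the logical-closedness requirement (no collapse to Lit)."""
    closed: Set[Literal] = set()
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.head not in closed and all(a in closed for a in r.body):
                closed.add(r.head)
                changed = True
    return closed

def Gamma(P, X: Set[Literal]) -> Set[Literal]:
    """Gamma_P(X) = Cl(Mon(P_X))."""
    return cl([mon(r) for r in P if not defeated(r, X)])

def wfs_new(P) -> Set[Literal]:
    """Modified well-founded conclusions: least fixpoint of
    gamma_P o Gamma_P, iterated from the empty set."""
    lit = literals_of(P)
    X: Set[Literal] = set()
    while True:
        Y = gamma(P, Gamma(P, X), lit)
        if Y == X:
            return X
        X = Y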

Consider the effects of this modification on our example $P_1$. $\Gamma_{P_1}(\emptyset) = \{a, \neg a, b\}$. Rule 1) is the only rule contained in the $\{a, \neg a, b\}$-reduct of $P_1$ and thus $WFS'(P_1) = \{b\}$. Since $b$ is also the only literal contained in all answer sets of $P_1$ our approximation actually coincides with answer set semantics in this case.
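Running the sketch on the two example programs ($P_1$ from above, and the illustrative bird knowledge base with the added individual Sam) shows the intended behaviour.

print(wfs_new(P1))      # {'b'}: coincides with the answer set conclusions of P1

print(wfs_new(birds))   # {'bird_tweety', 'penguin_tweety', 'bird_sam', 'flies_sam'}:
                        # the unresolved conflict about Tweety stays local, and Sam flies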

In the Tweety example both $\mathit{flies}(Tweety)$ and $\neg \mathit{flies}(Tweety)$ are provable from the $\emptyset$-reduct of the knowledge base. However, this has no influence on whether a rule not containing the weak negation of one of these two literals in the body is used to produce conclusions or not. The effect of the conflicting information about Tweety's flying ability is thus kept local and does not have the disastrous consequences it has in the original formulation of well-founded semantics.

It is not difficult to see that the new monotone operator is equivalent to the original one whenever $P$ does not contain negation as failure. In this case the $X$-reduct of $P$, for arbitrary $X$, is equivalent to $P$ and for this reason it does not make any difference whether $\gamma_P$ or $\Gamma_P$ is used as the operator to be applied first in the definition of $WFS'$. The same is obviously true for programs without classical negation: for such programs $Cn$ can never produce complementary pairs of literals and for this reason the logical closedness condition is obsolete.
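As a quick check with the sketches above, here is a hypothetical program without classical negation (not from the paper) on which both constructions yield the same conclusions.

# p <- not q,   q <- not p,   r <-    : no classical negation anywhere
P2 = [Rule("p", weak=("q",)), Rule("q", weak=("p",)), Rule("r")]
assert wfs(P2) == wfs_new(P2) == {"r"}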

In the general case the new operator produces more conclusions than the original one:

Proposition. For every extended logic program $P$, $WFS(P) \subseteq WFS'(P)$.

Proof: We have $\Gamma_P(X) \subseteq \gamma_P(X)$ for every $X$, and thus, since $\gamma_P$ is anti-monotone, $\gamma_P(\gamma_P(X)) \subseteq \gamma_P(\Gamma_P(X))$. From this the result follows immediately.

It remains to be shown that the new operator produces no unwanted results, i.e., that our new semantics can still be viewed as an approximation of answer set semantics:

Proposition. For every extended logic program $P$ and literal $l$: if $l \in WFS'(P)$ then $l$ is contained in all answer sets of $P$.

Proof: The proposition is trivially satisfied whenever P has no answer set at all, or when Lit is the single answer set of P. So assume P possesses a non-empty set of consistent answer sets, the only remaining possibility according to results in [8].

To show that iterating $\gamma_P \circ \Gamma_P$ on the empty set cannot produce a literal that is not contained in all answer sets it suffices to show that $X \subseteq A$, for every answer set $A$, implies $\gamma_P(\Gamma_P(X)) \subseteq A$.

Let $A$ be an arbitrary answer set and assume $X \subseteq A$. Since $\Gamma_P$ is anti-monotone we have $\Gamma_P(A) \subseteq \Gamma_P(X)$. Since by assumption $A$ is consistent we have $A = \gamma_P(A) = \Gamma_P(A)$. Therefore $A \subseteq \Gamma_P(X)$ and, by the anti-monotonicity of $\gamma_P$, $\gamma_P(\Gamma_P(X)) \subseteq \gamma_P(A) = A$.

For the rest of the paper a minor reformulation turns out to be convenient. Instead of using the monotonic counterparts of undefeated rules we will work with the original rules and extend the definitions of the two operators $Cn$ and $Cl$ accordingly, requiring that weakly negated preconditions be neglected, i.e., for an arbitrary set of rules $P$ with weak negation we define $Cn(P) := Cn(Mon(P))$ and $Cl(P) := Cl(Mon(P))$. We can now equivalently characterize $\gamma_P$ and $\Gamma_P$ by the equations

$\gamma_P(X) = Cn(P_X)$ and $\Gamma_P(X) = Cl(P_X)$

where $P_X$ denotes the set of rules in $P$ not defeated by $X$.
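In the sketch this reformulation simply drops the Mon step: since cn and cl above only consult the positive body of a rule, they already neglect weakly negated preconditions, so the operators can be applied to the undefeated rules directly. This is only an illustration of the equations above.

def undefeated(P, X: Set[Literal]):
    """P_X: the rules of P not defeated by X."""
    return [r for r in P if not defeated(r, X)]

def gamma_r(P, X: Set[Literal], lit: Set[Literal]) -> Set[Literal]:
    """Reformulated gamma_P(X) = Cn(P_X); equivalent to gamma above."""
    return cn(undefeated(P, X), lit)

def Gamma_r(P, X: Set[Literal]) -> Set[Literal]:
    """Reformulated Gamma_P(X) = Cl(P_X); equivalent to Gamma above."""
    return cl(undefeated(P, X))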

Before we turn to the treatment of preferences we give an alternative characterization of $WFS'$ based on the following notion:

A rule $r \in P$ is called X-safe (with respect to $P$) iff $r$ is not defeated by $\Gamma_P(X) = Cl(P_X)$, i.e., iff none of the weakly negated preconditions of $r$ is among the literals still derivable from the rules not defeated by $X$.

With this new notion we can obviously characterize $WFS'$ as follows: $WFS'(P)$ is the least fixpoint of the monotone operator that maps a set of literals $X$ to the set of consequences (under $Cn$) of the X-safe rules of $P$.

It is this last formulation that we will modify later. More precisely, the notion of X-safeness will be weakened to handle preferences adequately.
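Assuming, as above, that a rule is X-safe iff it is not defeated by $Cl(P_X)$, the alternative characterization can be sketched as follows; it computes the same fixpoint as wfs_new.

def x_safe(r: Rule, P, X: Set[Literal]) -> bool:
    """r is X-safe: r is not defeated by Cl(P_X), the literals still
    derivable from the rules not defeated by X."""
    return not defeated(r, Gamma_r(P, X))

def wfs_via_safeness(P) -> Set[Literal]:
    """Alternative characterization: iterate  X |-> Cn({r in P : r is X-safe})."""
    lit = literals_of(P)
    X: Set[Literal] = set()
    while True:
        Y = cn([r for r in P if x_safe(r, P, X)], lit)
        if Y == X:
            return X
        X = Y

# e.g. wfs_via_safeness(P1) == wfs_new(P1) == {'b'}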


