
Constructing Reasons for Retrieval Failure

Figure 9: The path failure explanation at the root of the tree is computed as $e_1^1 = d_1^{-1} ( d_2^{-1} \cdots ( d_f^{-1} ( e_1 )) \cdots ) $. [Two-panel figure: (a) the regression of leaf-node failure explanations up the failing paths to the root; (b) the regression of the explanation $e_1$ through the final decision $d_f$.]

DERSNLP+EBL constructs explanations for retrieval failures through explanation-based learning techniques, which allow the planner to explain the failures of individual plans in its search space. A leaf node plan represents an analytical failure when it contains a set of inconsistent constraints that prevents the plan from being further refined into a solution. An analytical failure is explained in terms of these constraints [24]. Leaf node failure explanations identify a minimal set of constraints in the plan which are together inconsistent. DERSNLP+EBL forms explanations for each of the analytical failures that occur in the subtree directly under the skeletal plan. These are regressed up the failing search paths and collected at the root of the tree to form a reason for the retrieval failure (see Figure 9a). The regressed explanation is stated in terms of the new problem specification: it contains a subset of interacting goals, as well as the initial state conditions relevant to those goals.
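The regression of a leaf-node explanation up its failing path can be sketched as a fold over the path's decisions, as in the formula of Figure 9. This is a minimal illustration, not the authors' implementation; the per-decision regression operator `regress` and all names are assumptions.

```python
from functools import reduce

def regress_to_root(leaf_explanation, path_decisions, regress):
    """Regress a leaf failure explanation e_1 back through the path
    decisions d_1 ... d_f, yielding e_1^1 = d_1^-1(... d_f^-1(e_1) ...).
    `regress(e, d)` is an assumed per-decision regression operator."""
    return reduce(regress, reversed(path_decisions), leaf_explanation)

# Toy regression operator: each decision merely tags the explanation, so
# the result records the order in which decisions were regressed through.
demo = regress_to_root(('e1',), ['d1', 'd2', 'df'],
                       lambda e, d: e + (d,))
print(demo)  # -> ('e1', 'df', 'd2', 'd1')
```

The fold starts at the final decision $d_f$ and works back to $d_1$, matching the nesting in the figure caption.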

Since a plan failure is explained by a subset of its constraints, failure explanations are represented in the same manner as the plan itself. Recall that DERSNLP+EBL represents its plans as a 6-tuple, $ \langle {\cal S}, {\cal O}, {\cal B}, {\cal L} , {\cal E} , {\cal C} \rangle $ (see Section 2). The explanation for the failure occurring at a leaf node contains only the constraints which contribute to an inconsistency. These inconsistencies appear when newly added constraints conflict with existing ones. As discussed in Section 2, DERSNLP+EBL makes two types of decisions, establishment and resolution, and each type may result in a plan failure. An establishment decision represents a choice of method for achieving an open condition: either through a new or existing step, or by adding a causal link from the initial state. When an attempt is made to achieve a condition by linking to an initial state effect, and this condition is not satisfied in the initial state, the plan contains a contradiction. An explanation for the failure is constructed which identifies the two conflicting constraints:

\begin{displaymath} \langle \emptyset , \emptyset , \emptyset , \{ \langle t_I , p, s \rangle \}, \{ \langle t_I, \neg p \rangle \} , \emptyset \rangle \end{displaymath}
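A small sketch of this construction, assuming a closed-world initial state (so $\neg p$ is an initial-state effect whenever $p$ does not hold initially). The class and function names are illustrative, not the authors' code.

```python
from dataclasses import dataclass

# Illustrative 6-tuple <S, O, B, L, E, C>, used here both for plans and
# for failure explanations.
@dataclass(frozen=True)
class SixTuple:
    S: frozenset = frozenset()  # steps
    O: frozenset = frozenset()  # step orderings
    B: frozenset = frozenset()  # variable bindings
    L: frozenset = frozenset()  # causal links
    E: frozenset = frozenset()  # effects
    C: frozenset = frozenset()  # open conditions

T_I = 't_I'  # the dummy initial step

def explain_failed_establishment(p, consumer, initial_state):
    """If condition p is linked to the initial state but does not hold
    there, return the two conflicting constraints: the causal link
    <t_I, p, consumer> and the initial-state effect <t_I, ~p>."""
    if p in initial_state:
        return None  # the link is consistent; no failure to explain
    return SixTuple(L=frozenset({(T_I, p, consumer)}),
                    E=frozenset({(T_I, ('not', p))}))
```

Note that the explanation carries only the two conflicting constraints, not the rest of the plan.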

The precondition of a resolution decision is a threat to a causal link. DERSNLP+EBL uses two methods of resolving a threat, promotion and demotion, each of which adds a step ordering to the plan. When either decision adds an ordering which conflicts with an existing ordering, an explanation of the failure identifies the conflict:

\begin{displaymath} \langle \emptyset ,\{ s \prec s', s'\prec s \} , \emptyset , \emptyset, \emptyset, \emptyset \rangle \end{displaymath}
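Detecting this ordering conflict amounts to a cycle check over the plan's orderings. A minimal sketch, under the assumption that orderings are stored as pairs `(a, b)` meaning "$a \prec b$"; the function name is hypothetical.

```python
def ordering_conflict(new_order, orderings):
    """Return the conflicting pair {s < s', s' < s} if adding new_order
    would create a cycle with the existing orderings, else None."""
    a, b = new_order
    # Walk everything reachable from b; reaching a means b already
    # (transitively) precedes a, so adding a < b is a contradiction.
    frontier, seen = {b}, set()
    while frontier:
        x = frontier.pop()
        if x == a:
            return {(a, b), (b, a)}  # the two conflicting constraints
        seen.add(x)
        frontier |= {y for (w, y) in orderings if w == x} - seen
    return None
```

The transitive walk matters because the conflicting ordering $s' \prec s$ may only be entailed by a chain of existing orderings rather than stored directly.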

Each of the conflicting constraints in the failure explanation is regressed through the final decision, and the results are sorted by type to form the new regressed explanation. This process is illustrated graphically in Figure 9b. In this example, a new link from the initial state results in a failure. The explanation, $e_1$, is:

\begin{displaymath} \langle \emptyset , \emptyset , \emptyset , \{ \langle t_I, (AT\!\!-\!\!OB\,\,\, OB2\,\,\, l_d), t_G \rangle \}, \{ \langle t_I, \neg (AT\!\!-\!\!OB\,\,\, OB2\,\,\, l_d) \rangle \}, \emptyset \rangle \end{displaymath}

When $e_1$ is regressed through the final decision, $d_f$, to obtain a new explanation, the initial state effect regresses to itself. However, since the link in the explanation was added by the decision $d_f$, this link regresses to the open condition which was a precondition of adding the link. The new explanation, $e_1^f$, is therefore

\begin{displaymath} \langle \emptyset , \emptyset , \emptyset , \emptyset , \{ \langle t_I, \neg (AT\!\!-\!\!OB\,\,\, OB2\,\,\, l_d) \rangle \}, \{ \langle (AT\!\!-\!\!OB\,\,\, OB2\,\,\, l_d) , t_G \rangle \} \rangle \end{displaymath}
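The per-decision regression step just described can be sketched as follows. Explanations are assumed to be dicts of sets keyed by the 6-tuple slots, and `added_links` is the set of causal links the final decision itself introduced; all names are illustrative, not the authors' code.

```python
def regress_through(explanation, added_links):
    """Regress a failure explanation through one decision: initial-state
    effects regress to themselves; a link added by this decision regresses
    to the open condition <p, consumer> that was its precondition."""
    links, opens = set(), set(explanation.get('C', set()))
    for link in explanation.get('L', set()):
        producer, p, consumer = link
        if link in added_links:
            opens.add((p, consumer))  # link -> the open condition
        else:
            links.add(link)           # link not touched by this decision
    return {'S': set(), 'O': set(), 'B': set(), 'L': links,
            'E': set(explanation.get('E', set())), 'C': opens}
```

Applied to an explanation like $e_1$ above, the decision's own link is removed and replaced by the corresponding open condition in ${\cal C}$, while the initial-state effect passes through unchanged, mirroring the shape of $e_1^f$.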

The regression process continues up the failing path until it reaches the root of the search tree. When all of the paths in the subtree underneath the skeletal plan have failed, the failure reason at the root of the tree provides the reason for the failure of the retrieved cases. It represents a combined explanation for all of the path failures. The case failure reason contains only the aspects of the new problem which were responsible for the failure: it may contain only a subset of the problem goals, and any initial state effect that is present in a leaf node explanation is also present in the reason for case failure.
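Combining the regressed path explanations at the root can be sketched as a union over the failing paths. The representation below (a goal set plus a set of relevant initial-state conditions) is an assumption for illustration, not the paper's data structure.

```python
def case_failure_reason(root_explanations):
    """Union the regressed explanations of all failing paths into a single
    case-failure reason over the new problem's goals and initial state."""
    reason = {'goals': set(), 'init': set()}
    for e in root_explanations:
        reason['goals'] |= e['goals']
        reason['init'] |= e['init']
    return reason
```

Because each contribution is a union of constraints that actually caused a failure, the combined reason mentions only the goals and initial-state conditions responsible for the retrieval failure.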



11/5/1997