
Learning from Case Failure

   
  
Figure 1: A schematic diagram illustrating the approach of DERSNLP+EBL.
\begin{figure*} \centerline{ \epsfxsize=250pt \epsfbox{/ud/ai1/laurie/figs/jair-figure1.epsf} }\end{figure*}

As stated earlier, DERSNLP+EBL is based on a complete and correct domain-independent planning strategy. Like PRIAR [22] and SPA [13], it is implemented on a partial-order planner. In this respect it differs from state-space systems such as PRODIGY/ANALOGY [42,40] and PARIS [3]. Like PRODIGY/ANALOGY, it employs the case adaptation strategy of derivational replay, which stores planning experience in the form of successful plan derivations. Decisions made in earlier planning episodes become instructions that guide the search process in solving the new problem. Derivational replay comprises the following elements, as illustrated in Figure 1 [40,42]: a facility within the underlying planner to generate a trace of the derivation of a plan; the indexing and storage of the derivation trace in a library of previous cases; the retrieval of multiple cases in preparation for solving a new problem; and finally, a replay mechanism by which the planner employs the retrieved plan derivations as a sequence of instructions to guide the new search process.
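As a rough illustration of these elements, the sketch below models a derivation trace and a goal-indexed case library. All names here (Decision, DerivationTrace, store, retrieve) are invented for this sketch, not DERSNLP+EBL's actual interfaces, and indexing by goal subset is a deliberate simplification of the paper's indexing scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One planning decision recorded in a derivation trace."""
    kind: str      # e.g. "step-addition", "ordering", "binding"
    target: str    # the open condition or flaw being resolved
    choice: str    # the refinement the planner chose

@dataclass
class DerivationTrace:
    """A successful derivation: the problem solved plus the ordered
    sequence of decisions that produced the solution."""
    goals: frozenset
    initial_state: frozenset
    decisions: list = field(default_factory=list)

    def record(self, decision: Decision):
        self.decisions.append(decision)

# A case library indexed (simplistically) by the goal set solved.
library: dict = {}

def store(trace: DerivationTrace):
    library[trace.goals] = trace

def retrieve(new_goals: set) -> list:
    """Return every stored trace whose goals are covered by the new
    problem's goals -- multiple cases may be retrieved together."""
    return [t for g, t in library.items() if g <= new_goals]
```

In use, a trace stored for one transport goal is retrieved for any new problem whose goal set includes that goal.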

  
Figure 2: Multiple derivation traces (each a sequence of decisions shown in the figure as rectangles) are used to guide the new search process. In the figure, a solution could be reached only by backtracking over the skeletal plan, which now lies outside the new plan derivation (shown as filled circles).
\begin{figure*} \centerline{ \epsfxsize=150pt \epsfbox{/ud/ai1/laurie/figs/jair-rep.epsf} }\end{figure*}

DERSNLP+EBL's methodology depends on aspects that it has in common with MOLGEN [10] and PRIAR [22]. It requires an eager case adaptation strategy, so that a skeletal plan is constructed which contains all of the constraints added on the advice of the retrieved cases, and only those constraints. This separates any failure resulting from the previous guidance from the subsequent planning effort. In eager case adaptation, the planning decisions encapsulated in the retrieved cases are greedily adopted before the plan is extended to solve any extra goals they do not cover. Multiple retrieved plan derivations are replayed in sequence to produce the skeletal plan, which then contains all of the recommended plan constraints. The planner returns to from-scratch planning only after all of the previous decisions in the retrieved cases have been visited. The skeletal plan is then further refined to achieve any goals left open. Previous work has demonstrated the effectiveness of this approach to plan-space replay as well as its advantage over state-space replay [15,16].
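The eager adoption step can be sketched as follows, assuming (purely for illustration) that a case is a dictionary holding its goal set and decision list, and that a "plan" is just the list of adopted constraints. This is a simplification: a real partial-order planner would apply each decision to a plan structure rather than concatenate them.

```python
def eager_replay(retrieved: list, new_goals: set):
    """Greedily adopt every decision from every retrieved case, in
    sequence, before doing any from-scratch work on uncovered goals.
    Returns the skeletal plan and the goals left open for refinement."""
    skeletal_plan = []
    covered = set()
    for case in retrieved:                 # replay cases in sequence
        for decision in case["decisions"]:
            skeletal_plan.append(decision) # adopt eagerly, no search
        covered |= case["goals"]
    open_goals = set(new_goals) - covered  # left for from-scratch planning
    return skeletal_plan, open_goals
```

The skeletal plan thus contains all constraints recommended by the retrieved cases, and only those; from-scratch refinement begins only on the goals in `open_goals`.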

Eager case adaptation can also be described as extension-first. The skeletal plan is first extended in the search for a solution, and only after this extension fails is the plan backtracked over, discarding the plan constraints that were added on the advice of previous episodes. The general approach to case adaptation therefore involves three distinct phases: case replay, case extension, and, if extension fails, recovery. During the search process employed in extending the skeletal plan, the planner constructs an explanation for the plan's failure, which becomes a reason for the case retrieval failure. Explanations are formed for the analytical failures that occur in the leaf nodes directly under the skeletal plan (see Figure 2). An analytical failure is explained by a set of inconsistent plan constraints. These failure explanations are regressed up the search paths as soon as they are encountered. The regressed explanations are collected at the root of the tree to form a reason for the retrieval error. DERSNLP+EBL detects that a retrieval error has occurred when all ways of refining the skeletal plan have been tried and the planner is forced to backtrack over this plan. At this point the failure reason is fully constructed. Performing skeletal plan extension as a separate process prior to recovery allows the planner to identify the retrieval error in terms of the failure of the skeletal plan, and to construct a reason for the failure. This reason is then communicated to the Storer to be used in augmenting the library with a new repairing case.
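The collection of regressed leaf explanations can be sketched as a recursive walk of the search tree under the skeletal plan. The tree encoding and the pass-through `regress` below are stand-ins: in the real system, regression rewrites the explanation's constraint set as it moves over each planning decision, which this sketch does not model.

```python
def regress(explanation: frozenset, decision) -> frozenset:
    # Stand-in for EBL regression: a real planner rewrites the
    # constraint set as it moves over `decision`; we pass it through.
    return explanation

def extend(node: dict):
    """node: {'fail': inconsistent-constraint-set or None,
              'decision': refinement that produced this node,
              'children': list of child nodes}.
    Returns (solution_found, failure_reason).  When every refinement
    under the skeletal plan fails, the regressed leaf explanations
    are unioned at the root into the retrieval-failure reason."""
    if node.get("fail"):
        return False, frozenset(node["fail"])  # analytical leaf failure
    if not node.get("children"):
        return True, None                      # consistent leaf: a solution
    collected = set()
    for child in node["children"]:
        ok, explanation = extend(child)
        if ok:
            return True, None
        collected |= regress(explanation, child["decision"])
    return False, frozenset(collected)         # all refinements failed
```

Only when the whole subtree is exhausted does a failure reason propagate to the root, mirroring the point at which DERSNLP+EBL declares a retrieval error.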


  
Figure 3:

(a) A plan to accomplish the transport of a single package, OB1, to the destination airport ld. (b) A new problem contains an extra goal which involves the additional transport to ld of a second package, OB2.

\begin{figure*} \begin{center} [Figure 3 artwork: subfigures (a) and (b); source truncated, includes plane1.epsf] \end{center}\end{figure*}

Consider the simple example illustrated in Figure 3, taken from the Logistics Transportation domain shown in Figure 4. The goal is to have package OB1 located at the destination location ld. The package is initially at location l1. There is a plane located at lp which can be used to transport the package. Figure 3a illustrates a previous plan which contains steps that determine the plane's route to the destination airport, as well as steps which accomplish the loading of the package at the right place along this route. Eagerly replaying these earlier step addition decisions for a new problem in which there is an extra package to transport produces a skeletal plan which may readily be extended to include the loading and unloading of the extra package, as long as this package lies along the same route. However, if the new package is off the old route, the planner may not be able to solve the extra goal without backtracking over some of its previous step addition decisions (see Figure 3b).

A case failure reason is shown in Figure 5. It gives the conditions under which a future replay of the case will again result in failure. These conditions refer to the presence in the new problem of a set, ${\cal C}$, of negatively interacting goals, as well as some initial state conditions, contained in ${\cal E}$. In summary, the failure reason states that there is an extra package to transport to the same destination location, and that package is not at the destination location, is not inside the plane, and is not located on the plane's route.
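The failure reason summarized above can be read as a predicate over a new problem. The sketch below checks it for the Figure 3 scenario, with goals and initial-state conditions encoded as tuples; the route set, the object names, and the tuple syntax are all illustrative, not the paper's exact representation of ${\cal C}$ and ${\cal E}$.

```python
def failure_reason_holds(new_goals: set, initial_state: set) -> bool:
    """True when replaying the old one-package case will fail again:
    there is an extra package bound for the same destination (the
    interacting-goals condition) that is not at the destination, not
    inside the plane, and not on the plane's route (the initial-state
    conditions)."""
    route = {"lp", "l1", "ld"}        # the route the old plan committed to
    for predicate, obj, dest in new_goals:
        if predicate == "at" and dest == "ld" and obj != "OB1":
            not_at_dest = ("at", obj, "ld") not in initial_state
            not_in_plane = ("in", obj, "plane") not in initial_state
            off_route = not any(("at", obj, loc) in initial_state
                                for loc in route)
            if not_at_dest and not_in_plane and off_route:
                return True
    return False
```

A second package initially at l2 (off the route) triggers the failure reason; one at l1 (on the route) does not, matching the example.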

  
Figure 4: The specification of the Logistics Transportation Domain adapted for our experiments
\begin{figure*} \begin{center} [Figure 4 table: Logistics Transportation Domain operator specifications; source truncated] \end{center}\end{figure*}

After backtracking over the skeletal plan, the planner continues its search, and will go on to find a solution to the full problem if one exists. This new solution achieves all of the negatively interacting goals identified in the failure reason. Moreover, since these goals represent a subset of the problem goals, the new derivation may be used to construct a case covering these goals alone. DERSNLP+EBL stores the new case directly beneath the failing case so as to censor its retrieval. This ensures that whenever the failure reason holds (for example, whenever there is an extra package which is off the plane's route), the retriever is directed away from the failing case and toward the case that repairs the failure.
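This censoring scheme can be sketched as a small tree of library entries, where each repairing case is filed beneath the case it repairs, guarded by the failure reason. The class and method names are invented for the sketch, and a failure reason is abstracted to a predicate over the new problem.

```python
class CaseNode:
    """A library entry: a case plus repairing cases filed beneath it,
    each guarded by the failure reason it repairs."""
    def __init__(self, case):
        self.case = case
        self.repairs = []   # list of (failure_reason_predicate, CaseNode)

    def add_repair(self, reason_holds, repairing_case):
        self.repairs.append((reason_holds, CaseNode(repairing_case)))

    def retrieve(self, problem):
        # Censor: when a stored failure reason holds for the new
        # problem, skip this case and descend to its repair instead.
        for reason_holds, repair in self.repairs:
            if reason_holds(problem):
                return repair.retrieve(problem)
        return self.case
```

Retrieval thus returns the original case only when no recorded failure reason applies, and otherwise descends to the case that covers the interacting goals.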

We are now in a position to describe in more detail DERSNLP+EBL's eager derivation replay strategy, as well as how it learns the reasons underlying a case failure.



 

11/5/1997