Table 9 shows the results for testing the conditional
versions of the domains on our planner, MBP, GPT, SGP, and YKA.
MBP: Our planner is very similar to MBP in that both use
progression search. Ours uses an AO* search, whereas the MBP
binary we used performs a depth-first And-Or search. The depth-first
search used by MBP contributes to highly sub-optimal maximum-length
branches (as much as an order of magnitude longer than ours). For
instance, the plans generated by MBP for the Rovers domain have the
rover navigating back and forth between locations several times
before doing anything useful, which would be detrimental in
actual mission use. MBP tends not to scale as well as our planner in all
of the domains we tested. A possible reason for the performance of
MBP is that the Logistics and Rovers domains have sensory actions
with execution preconditions, which prevent branching early and
finding deterministic plan segments for each branch. We
experimented with MBP using sensory actions without execution
preconditions and it was able to scale somewhat better, but the
resulting plans were much longer.
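The contrast between the two search strategies can be seen in a small sketch. The code below is illustrative only, not MBP's or our planner's actual implementation: the And-Or tree, action names, and costs are invented, and the AO*-style search is simplified to an exhaustive minimization over worst-case branch length (a real AO* would expand the graph incrementally under an admissible heuristic). It shows how a depth-first And-Or search that commits to the first action that succeeds can return a much longer branch than a cost-guided search on the same tree.

```python
# Hypothetical And-Or tree: an OR node chooses one action; an AND node
# (e.g., a sensory action) must solve every outcome. Leaves are goals.
# TREE maps a node to a list of (action, children) pairs; a goal node
# has an empty action list.
TREE = {
    "s0": [("long-detour", ["s1"]), ("direct", ["s2"])],
    "s1": [("step", ["s1a"])],
    "s1a": [("step", ["s1b"])],
    "s1b": [("step", ["goal"])],
    "s2": [("sense", ["goal", "goal"])],  # AND: both outcomes must reach goal
    "goal": [],
}

def dfs_plan(node, visited=frozenset()):
    """Depth-first And-Or search: commit to the FIRST action that works.
    Returns the maximum branch length of the plan found, or None."""
    if not TREE[node]:
        return 0  # goal leaf: branch length 0
    if node in visited:
        return None  # avoid cycles
    for action, children in TREE[node]:
        depths = [dfs_plan(c, visited | {node}) for c in children]
        if all(d is not None for d in depths):
            return 1 + max(depths)  # first success wins, however deep
    return None

def min_cost_plan(node, visited=frozenset()):
    """AO*-style search, sketched as exhaustive min-cost: pick the
    action minimizing the worst-case (maximum) branch length."""
    if not TREE[node]:
        return 0
    if node in visited:
        return None
    best = None
    for action, children in TREE[node]:
        depths = [min_cost_plan(c, visited | {node}) for c in children]
        if all(d is not None for d in depths):
            cost = 1 + max(depths)
            best = cost if best is None or cost < best else best
    return best

print(dfs_plan("s0"))       # depth-first commits to the detour: 4
print(min_cost_plan("s0"))  # minimum worst-case branch length: 2
```

On this toy tree the depth-first search happens to try the detour first and returns a branch twice as long as necessary; on larger problems the gap can grow to the order-of-magnitude difference observed with MBP.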
Optimal Planners: GPT and SGP generate better solutions but
very slowly. GPT does better on the Rovers and Logistics problems
because they exhibit some positive interaction in the plans, but
SGP does well on BT because its planning graph search is well
suited for shallow, yet broad (highly parallel) problems.
YKA: We see that YKA fares similarly to GPT on Rovers and
Logistics, but has trouble scaling for other reasons. We think
that YKA may be having trouble in regression because of sensory
actions, since it was able to scale reasonably well on the
conformant versions of the domains. Despite this, YKA does
very well on the BT and BTC problems.