We conjecture that a key advantage of the original Markov localization technique lies in its ability to recover from extreme localization failures. Re-localization after a failure is often more difficult than global localization from scratch, since the robot starts with a belief that is centered at a completely wrong position. Since the filtering techniques use the current belief to select the readings that are incorporated, it is not clear that they still maintain the ability to recover from global localization failures.
To analyze the behavior of the filters under such extreme conditions,
we carried out a series of experiments in which we manually
introduced such failures into the data to test the robustness of these
methods in the extreme. More specifically, we ``tele-ported'' the
robot at random points in time to other locations. Technically, this
was done by changing the robot's orientation by 180 degrees
and shifting it by 0 cm, without letting the robot know.
These perturbations were introduced randomly, with a probability of
0.005 per meter of robot motion. Obviously, such incidents make the
robot lose track of its position. Each method was tested on 20
differently corrupted versions of both datasets. This resulted in a
total of more than 50 position failures in each dataset. For each of
these failures we measured the time until the methods re-localized the
robot correctly. Re-localization was assumed to have succeeded if the
distance between the estimated position and the reference path was
smaller than 45 cm for more than 10 seconds.
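The perturbation and recovery-detection procedure described above can be sketched as follows. This is a minimal illustration under our own assumptions; the function names, the pose representation, and the error-trace format are hypothetical and not taken from the original system:

```python
import math
import random

PERTURB_PROB_PER_METER = 0.005  # probability of a teleport per meter of motion
RECOVERY_DIST = 0.45            # meters: estimate must be this close to the reference path
RECOVERY_HOLD = 10.0            # seconds: ...and must stay that close for this long

def maybe_teleport(pose, meters_moved, rng=random):
    """Randomly corrupt the true pose (x, y, theta): rotate it by 180
    degrees, without telling the robot, with probability 0.005 per
    meter travelled.  Returns the (possibly perturbed) pose and a flag."""
    x, y, theta = pose
    if rng.random() < PERTURB_PROB_PER_METER * meters_moved:
        return (x, y, (theta + math.pi) % (2 * math.pi)), True
    return pose, False

def recovery_time(times, est_errors, t_fail):
    """Given timestamps and the distance of the position estimate to
    the reference path, return the seconds from the failure at t_fail
    until the estimate stays within RECOVERY_DIST for at least
    RECOVERY_HOLD seconds; None if the robot never recovers."""
    hold_start = None
    for t, err in zip(times, est_errors):
        if t < t_fail:
            continue
        if err < RECOVERY_DIST:
            if hold_start is None:
                hold_start = t
            if t - hold_start >= RECOVERY_HOLD:
                return hold_start - t_fail
        else:
            hold_start = None
    return None
```

Averaging `recovery_time` over all injected failures yields the per-method recovery times reported in Table 3.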
Table 3: Summary of recovery experiments.
Table 3 provides re-localization results for the
various methods, based on the two different datasets. Here, the
recovery time denotes the average time in seconds
needed to recover from a localization error. The results are
remarkably different from the results obtained under normal
operational conditions. Both conventional Markov localization and the
technique using distance filters are relatively efficient in
recovering from extreme positioning errors in the first dataset,
whereas the entropy filter-based approach is an order of magnitude
less efficient (see first row in Table 3). The
unsatisfactory performance of the entropy filter in this experiment
arises because it disregards all sensor measurements that do not
confirm the belief of the robot. While this procedure is reasonable
when the belief is correct, it prevents the robot from detecting
localization failures. The percentage of time when the position of
the robot was lost in the entire run is given in the second row of the
table. Note that this percentage includes both failures due to the
manually introduced perturbations and tracking failures. Again,
the distance filter is slightly better than the approach without a
filter, while the entropy filter performs poorly. The average times
to recover from failures on the second
dataset are similar to those in the first dataset. The bottom row in
Table 3 provides the percentage of failures for
this more difficult dataset. Here the distance filter-based approach
performs significantly better than both other approaches, since it is
able to quickly recover from localization failures and to reliably
track the robot's position.
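The contrasting behavior of the two filters can be illustrated with a small sketch. This is our own simplified formulation; the entropy comparison and the distance threshold are schematic assumptions, not the exact acceptance tests used in the experiments:

```python
import math

def entropy(belief):
    """Entropy of a discrete belief over grid positions."""
    return -sum(p * math.log(p) for p in belief if p > 0.0)

def entropy_filter_accepts(belief, posterior):
    """Entropy filter (schematic): keep a reading only if incorporating
    it does not increase the entropy of the belief, i.e. only if it
    confirms the current estimate.  After a teleport, readings that
    would reveal the failure contradict the (wrong) belief and raise
    its entropy, so they are discarded."""
    return entropy(posterior) <= entropy(belief)

def distance_filter_accepts(measured, expected, threshold=0.3):
    """Distance filter (schematic): discard only readings that are much
    shorter than the distance expected from the map, since those are
    typically caused by unmapped obstacles.  Readings at or beyond the
    expected distance are kept, so measurements that contradict a wrong
    belief can still drive re-localization."""
    return measured >= expected - threshold
```

Because the distance filter's acceptance test depends on the map rather than on agreement with the current belief, it retains the failure-revealing readings that the entropy filter throws away, which is consistent with the recovery times in Table 3.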
These results illustrate that, although sensor readings are processed selectively, the distance filter-based technique recovers from extreme localization errors as efficiently as the conventional Markov approach.