
Overview

We describe a general approach to optimization which we term ``Squeaky Wheel'' Optimization (SWO) [Joslin & Clements, 1998]. The core of SWO is a Construct/Analyze/Prioritize cycle, illustrated in Figure 1. A solution is constructed by a greedy algorithm, making decisions in an order determined by priorities assigned to the elements of the problem. That solution is then analyzed to find the elements of the problem that are ``trouble makers.'' The priorities of the trouble makers are then increased, causing the greedy constructor to deal with them sooner on the next iteration. This cycle repeats until a termination condition occurs.

On each iteration, the analyzer determines which elements of the problem are causing the most trouble in the current solution, and the prioritizer ensures that the constructor gives more attention to those elements on the next iteration. (``The squeaky wheel gets the grease.'') The construction, analysis and prioritization are all in terms of the elements that define a problem domain. In a scheduling domain, for example, those elements might be tasks. In graph coloring, those elements might be the nodes to be colored.
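The cycle just described can be sketched as a generic loop. In this sketch the construct, analyze, and prioritize callables are placeholders for the domain-specific components discussed below; the function names and the fixed iteration count are assumptions, not part of the original formulation.

```python
def squeaky_wheel(elements, construct, analyze, prioritize, iterations=100):
    """Generic Construct/Analyze/Prioritize loop (a sketch; the three
    callables supply the domain-specific behavior)."""
    sequence = list(elements)                  # initial priority sequence
    best_solution, best_cost = None, float("inf")
    for _ in range(iterations):
        solution, cost = construct(sequence)   # greedy, no backtracking
        if cost < best_cost:
            best_solution, best_cost = solution, cost
        blame = analyze(solution)              # element -> numeric blame
        sequence = prioritize(sequence, blame) # move trouble makers forward
    return best_solution, best_cost
```

Note that the best solution seen so far is retained separately, since later iterations may construct worse solutions while exploring.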


  
Figure 1: The Construct/Analyze/Prioritize cycle

The three main components of SWO are:

Constructor.

Given a sequence of problem elements, the constructor generates a solution using a greedy algorithm, with no backtracking. The sequence determines the order in which decisions are made, and can be thought of as a ``strategy'' or ``recipe'' for constructing a new solution. (This ``solution'' may violate hard constraints.)
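For a single-production-line scheduling domain like the example later in this section, a greedy constructor of this kind might be sketched as follows; the duration and deadline tables are assumed inputs, and the lateness count stands in for a general solution-quality measure.

```python
def construct_schedule(sequence, duration, deadline):
    """Greedily place tasks on one production line, in priority order,
    each at the earliest available time; no backtracking is done."""
    t = 0
    schedule = {}                                # task -> (start, finish)
    for task in sequence:
        start, finish = t, t + duration[task]
        schedule[task] = (start, finish)
        t = finish
    late = sum(1 for task, (_, f) in schedule.items() if f > deadline[task])
    return schedule, late
```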

Analyzer.

The analyzer assigns a numeric ``blame'' factor to the problem elements that contribute to flaws in the current solution. For example, if minimizing lateness in a scheduling problem is one of the objectives, then blame would be assigned to late tasks.
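For the lateness objective used in the example below, a minimal analyzer sketch assigns one point of blame per unit of lateness; the schedule representation (task mapped to start and finish times) is an assumption carried over from the constructor sketch above.

```python
def analyze_lateness(schedule, deadline):
    """Assign one point of blame per unit of time a task finishes late;
    on-time tasks receive zero blame."""
    return {task: max(0, finish - deadline[task])
            for task, (start, finish) in schedule.items()}
```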

A key principle behind SWO is that solutions can reveal problem structure. By analyzing a solution, we can often identify elements of that solution that work well, and elements that work poorly. A resource that is used at full capacity, for example, may represent a bottleneck. This information about problem structure is local, in that it may only apply to the part of the search space currently under examination, but may be useful in determining where the search should go next.

Prioritizer.

The prioritizer uses the blame factors assigned by the analyzer to modify the previous sequence of problem elements. Elements that received blame are moved toward the front of the sequence. The higher the blame, the further the element is moved.

The priority sequence plays a key role in SWO. As a difficult problem element moves forward in the sequence it is handled sooner by the constructor. It also tends to be handled better, thus decreasing its blame factor. Difficult elements rise rapidly to a place in the sequence where they are handled well. Once there, the blame assigned to them drops, causing them to slowly sink in the sequence as other parts of the problem that are not handled as well are given increased priority. Eventually, difficult elements sink back to the point where they are no longer handled well, causing them to receive higher blame and to move forward in the sequence again. Elements that are always easy to handle sink to the end of the sequence and stay there.
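One simple way to realize this movement, sketched below, is to shift each blamed element forward a number of positions that grows with its blame; the `scale` parameter and the blame-to-distance rule are assumptions for illustration, not the authors' actual prioritizer.

```python
def prioritize(sequence, blame, scale=10):
    """Move each blamed element toward the front of the sequence; the
    higher the blame, the further forward it moves (blame // scale
    positions in this sketch)."""
    new_seq = list(sequence)
    for element in sequence:                 # process in original order
        b = blame.get(element, 0)
        if b > 0:
            j = new_seq.index(element)
            target = max(0, j - b // scale)  # clamp at the front
            new_seq.insert(target, new_seq.pop(j))
    return new_seq
```

Unblamed elements are never moved forward, so consistently easy elements drift toward the end of the sequence as others pass them, matching the behavior described above.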


  
Figure 2: Simple example

To illustrate the SWO cycle, consider a simplified scheduling example. Suppose we have a single production line, and three tasks to schedule, A, B and C. Only one task can be performed at a time. Execution starts at t=0. The duration and deadline for each task are shown in Figure 2. The objective is to minimize the number of late tasks. An optimal solution will have one late task.

Suppose our initial priority sequence is $\langle C, A, B\rangle$, and the constructor schedules tasks in order, at the earliest possible time. The resulting schedule has two late tasks (B and A). Suppose that our analyzer assigns one point of ``blame'' to each late task, for each unit of time it is late. In this case, A, B, and C receive 20, 30 and 0 units of blame, respectively. Figure 2 shows the prioritization, the schedule that the constructor builds from that prioritization, and the late tasks with the blame assigned to each.

For the next cycle, the prioritizer must take the previous priority sequence, and the blame assigned by the analyzer, and generate a new priority sequence. A simple prioritizer might just sort the tasks by their numeric blame in descending order, resulting in the new priority sequence $\langle B, A, C\rangle$.

After the second cycle, tasks A and C are late, scoring 20 and 10 points of blame, respectively. The new priority sequence is then $\langle A, C, B\rangle$.

The third solution, constructed from this priority sequence, has only one late task, B, which receives 30 points of blame. At this point we have an optimal solution. If we continue running SWO, however, as we might expect to do since we typically do not know when we have reached optimality, SWO will attempt to fix what was wrong with the current solution. Here, since task B was late, its priority would be increased, and the resulting solution would fix that problem at the expense of others. (We would also enter a short cycle, alternating between the last two schedules. We address this by introducing some randomization in the prioritizer.)
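The three cycles above can be reproduced end to end with a simple sorting prioritizer. The durations and deadlines below are hypothetical, chosen only so that the blame values match those quoted in the text; the actual values appear in Figure 2.

```python
# Hypothetical durations and deadlines, chosen to reproduce the blame
# values quoted in the text; the actual values are given in Figure 2.
duration = {"A": 15, "B": 20, "C": 20}
deadline = {"A": 15, "B": 25, "C": 45}

def run_cycle(sequence):
    """One Construct/Analyze/Prioritize cycle on one production line."""
    t, finish = 0, {}
    for task in sequence:                       # construct: greedy, earliest start
        t += duration[task]
        finish[task] = t
    blame = {task: max(0, finish[task] - deadline[task])
             for task in sequence}              # analyze: one point per unit late
    new_seq = sorted(sequence, key=lambda task: -blame[task])
    return blame, new_seq                       # prioritize: sort by blame, descending

# Reproduces the sequence of cycles from the text:
# <C,A,B> -> <B,A,C> -> <A,C,B> -> <B,A,C> ...
seq = ["C", "A", "B"]
for _ in range(3):
    blame, seq = run_cycle(seq)
```

After the third cycle only task B carries blame, and promoting it recreates the second sequence, giving the two-schedule cycle mentioned above.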


  
Figure 3: Examples of priority changes over time

Although this example is highly simplified, and there would clearly be better and more sophisticated ways to implement each of the three modules, Figure 3 shows that the behavior illustrated by the simple example is reflected in a real domain. The figure shows the changing position in the priority sequence of three tasks in the scheduling domain that is described in detail in the following section. One task (``Job 24'') starts out with a high priority, and remains at a relatively high priority level. We can see that when the task is scheduled effectively, and therefore receives little or no blame, its priority tends to drop, but it does not have to drop very far before it ceases to be scheduled well, acquires a significant level of blame, and moves quickly back to a higher priority.

The other two tasks shown in the figure behave quite differently. One task (``Job 39'') starts out with a relatively high priority, but this task is ``easy'' to schedule, with little blame, even when it is scheduled late in the sequence. Over successive iterations, the priority of such a task will tend to decrease steadily. The other task illustrated here (``Job 26'') does just the opposite, starting at a low priority and moving fairly steadily toward a high priority.

The following section discusses the characteristics of SWO that make it an effective technique for optimization. We then discuss implementations of SWO for scheduling and for graph coloring problems. The final sections discuss related work, describe directions for future research, and summarize our findings.



 
Dave Clements
1999-04-13