We applied SWO to a standard set of graph coloring problems, including random graphs and application graphs that model register allocation and class scheduling problems. These were collected for the Second DIMACS Implementation Challenge [Johnson & Trick, 1996], which includes results for several algorithms on these problems [Culberson & Luo, 1993; Glover, Parker, & Ryan, 1993; Lewandowski & Condon, 1993; Morgenstern, 1993]. The problems range from 125 nodes with 209 edges to 4,000 nodes with 4,000,268 edges.
The approach of Glover, Parker, and Ryan [1993] is the only one in the group based on a general search technique, TABU search with branch and bound, rather than a graph-coloring-specific algorithm; it had the worst reported average results in the group. Morgenstern [1993] used a distributed IMPASSE algorithm and found the best overall colorings, but also required that the target number of colors, as well as several other problem-specific parameters, be passed to the solver. Lewandowski and Condon [1993] also found good solutions for this problem set; their approach used a hybrid of parallel IMPASSE and systematic search on a 32-processor CM-5. Culberson and Luo [1993] used an Iterated Greedy (IG) algorithm that bears some similarity to SWO. IG is the simplest algorithm in the group. Its solution quality falls between that of the IMPASSE algorithms and TABU, but it solves the entire set in 1 to 2 percent of the time taken by the other methods. Both IG and IMPASSE are discussed further under related work.
Table 3 compares SWO with the results for IG [Culberson & Luo, 1993], distributed IMPASSE [Morgenstern, 1993], parallel IMPASSE [Lewandowski & Condon, 1993], and TABU [Glover, Parker, & Ryan, 1993]. For each algorithm, the table shows the number of colors required on each problem and the run time (in CPU seconds). Bold face indicates that the number of colors is within 0.5 of the best result in the table.
We used a Pentium Pro 333MHz workstation running Linux for the SWO graph coloring experiments. The times shown for the other four algorithms are based on those reported in [Johnson & Trick, 1996]. The results for IG, distributed IMPASSE, and TABU are normalized to our times using the DIMACS benchmarking program dfmax, provided for this purpose; timing comparisons are therefore only approximate. Our machine ran the dfmax r500.5 benchmark in 86.0 seconds; the corresponding times for the machines used for the other algorithms were 86.9 seconds for the TABU experiments, 192.6 seconds for IG, 189.3 seconds for distributed IMPASSE, and 2993.6 seconds for parallel IMPASSE. Because the dfmax benchmark runs on a single processor, it is unsuitable for normalizing the times for parallel IMPASSE, so we report their unnormalized times.
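To make the normalization concrete, the sketch below shows one way the dfmax scaling could be applied; the exact convention is not spelled out here, so the direction of scaling and the `normalize` helper are assumptions used purely for illustration.

```python
# Hypothetical sketch of the dfmax-based time normalization described above.
# A common convention is to multiply each reported run time by
# (our dfmax time / their dfmax time), expressing all times on our machine's scale.

DFMAX_OURS = 86.0  # seconds for dfmax on r500.5 on our machine

DFMAX_THEIRS = {   # seconds for dfmax on the machines used by the other groups
    "TABU": 86.9,
    "IG": 192.6,
    "distributed IMPASSE": 189.3,
    "parallel IMPASSE": 2993.6,  # multiprocessor; not normalized (see text)
}

def normalize(reported_seconds: float, solver: str) -> float:
    """Rescale a reported run time to our machine's dfmax speed."""
    if solver == "parallel IMPASSE":
        # dfmax is single-processor, so no meaningful scaling factor exists.
        return reported_seconds
    return reported_seconds * DFMAX_OURS / DFMAX_THEIRS[solver]

# Example: a 100-second IG run corresponds to roughly 44.7 seconds on our machine.
print(round(normalize(100.0, "IG"), 1))
```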
A variety of termination conditions were used. SWO terminated after 1000 iterations. IG terminated after 1000 iterations without improvement. Distributed IMPASSE used different termination conditions for different problems; the only element common across problems was that it stopped when the target number of colors, provided as an input parameter, was reached. The times reported for parallel IMPASSE are the times at which the best solution was found, not the time at which the algorithm terminated, which was always three hours. TABU ran until the algorithm determined that it could make no further progress or an hour had passed, whichever came first.
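The distinction between SWO's fixed iteration budget and IG's non-improvement cutoff can be stated compactly. The sketch below is illustrative only; `solve_once` is a hypothetical stand-in for a single solver iteration and does not correspond to any of the implementations compared here.

```python
# Illustrative sketch of the two simplest termination rules described above.
# solve_once() is assumed to return the number of colors in the coloring it produced.

def run_fixed_budget(solve_once, iterations=1000):
    """SWO-style rule: stop after a fixed number of iterations."""
    best = float("inf")
    for _ in range(iterations):
        best = min(best, solve_once())
    return best

def run_until_stagnant(solve_once, patience=1000):
    """IG-style rule: stop after `patience` consecutive iterations without improvement."""
    best = float("inf")
    stale = 0
    while stale < patience:
        colors = solve_once()
        if colors < best:
            best, stale = colors, 0
        else:
            stale += 1
    return best
```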
The TABU numbers are for a single run on each problem. The numbers for the other algorithms are averages over 4 runs (parallel IMPASSE), 5 runs (distributed IMPASSE), or 10 runs (SWO, IG) on each problem.
Figure 6 summarizes the performance of each technique on the set of 27 problems that all of the algorithms solved. For each solver, the graph indicates the average solution quality and the average amount of time needed to solve the set. The ideal location on the graph is the origin, which would correspond to producing high-quality solutions in very little time. The points shown for the other techniques are those reported in each of the papers. The curve for SWO shows how it performs when given varying amounts of time to solve the set. As the graph shows, SWO clearly outperforms TABU, the only other general-purpose technique, in both quality and speed, and it also outperforms IG, a graph-coloring-specific algorithm, on both measures. The IMPASSE solvers clearly produce the best solutions in the group. However, IMPASSE is a domain-specific method, and both solvers represent many months of programming effort. The SWO solver uses a general-purpose search technique and was implemented in less than a month by a single programmer.