Shellsort: A Sorting Enigma

For the first 15 years or so of computing, if you wanted an in-place, general-purpose sorting algorithm you were stuck with O(n^2) methods. Insertion sort, selection sort, and bubble sort were about as far as the art of sorting had progressed at the time. That's not to say faster sorting methods were unavailable: radix sort and merge sort, two of the fastest known sorting algorithms, both predate computers themselves. But both have undesirable traits: radix sort relies on the data conforming to certain properties, and mergesort requires additional working space, which in those early days of computing was at a premium, to say the least.

In the late 1950's, neither quicksort nor heapsort had yet been discovered, though that would soon change. It was in this atmosphere that Donald Shell came up with his "Diminishing Increment Sort", an adaptation of insertion sort that would come to be known as Shellsort. It was the first in-place sorting algorithm to break the O(n^2) performance barrier.

It remains to this day the simplest "fast" sorting method. Sedgewick gives it an endorsement that is hard to beat when he says: "In general, if you have a sorting problem: use shellsort, and then see if implementing something more complicated is worthwhile." There's definite wisdom in those words, especially when you consider how easily one can write a bad quicksort with a worse worst-case complexity than shellsort!

Interleaved Insertion Sorts

At the time of its discovery, the best-performing in-place sort was insertion sort, an O(n^2) algorithm itself, but measurably faster than both selection sort and bubble sort. The major bottleneck in insertion sort is that it only ever compares values that are directly adjacent to each other, and as such it can never remove more than one inversion per swap. On the other hand, insertion sort is well known to run very quickly (approaching O(n)) on nearly sorted data.

void insertionSort(int a[], int l, int r) {
    for (int i = l + 1; i <= r; i++) {
        int j = i; int v = a[i];
        // shift larger elements one position to the right
        while (j > l && a[j - 1] > v) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = v;
    }
}

Shell's idea was to use the insertion sort algorithm, but to compare array positions that are not directly adjacent. A list where every subsequence of elements h positions apart is in sorted order is called h-sorted. If you run the gap-insertion sort below over the same array at a series of decreasing h values, interleaving a number of h-sorted lists, the array ends up in nearly sorted order.

void gapInsertionSort(int a[], int l, int r, int gap) {
    // insertion sort over the subsequences of elements 'gap' apart
    for (int i = l + gap; i <= r; i++) {
        int j = i; int v = a[i];
        // shift larger elements right in gap-sized steps
        while (j >= l + gap && a[j - gap] > v) {
            a[j] = a[j - gap];
            j -= gap;
        }
        a[j] = v;
    }
}

A list of interleaved h-sorted lists is said to be k-sorted, and regular insertion sort can finish a k-sorted list very quickly. The gapInsertionSort algorithm with a gap value of one is, conveniently, straight insertion sort. Any decreasing sequence of gap values that ends in 1 will therefore leave the list fully sorted, and this is the basis of shellsort.

void shellSort(int a[], int l, int r) {
    // Shell's original gaps: halve the range until the gap reaches 1
    for (int gap = (r - l) / 2; gap > 0; gap /= 2) {
        gapInsertionSort(a, l, r, gap);
    }
}

Shell's original sequence starts with half the length of the list and halves the gap at each iteration until it reaches one (for a 100-element array: 50, 25, 12, 6, 3, 1). It is straightforward to implement, and it is found in countless programming books of the past 50 years. But as I mentioned previously, different gap sequences result in different performance, and other gap sequences have since been found to be much better.

Analyzing Shellsort

Shellsort holds an interesting place among sorting algorithms in that by changing the gap sequence, you can change the computational complexity of the algorithm, in some cases profoundly. So what constitutes a good gap sequence?

Shell's original sequence is actually among the weaker ones: when the list size is a power of two, every gap before the last is even, so even positions are only ever compared against even positions and odd positions against odd positions until the final insertion sort. Ideally, we want a good mixture of positions from across the array being compared. The sketch after this paragraph builds an input that exploits exactly this weakness.
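Here is a small illustration (my own construction, not something from Shell's paper). With the power-of-two gaps 4, 2, 1, an array whose even positions hold the smaller half of the values and whose odd positions hold the larger half is already 4-sorted and 2-sorted, so the early passes move nothing and all the work falls on the final O(n^2) insertion sort pass:

void buildBadInput(int a[], int n) {
    // small values at even indices, large values at odd indices;
    // for n = 8 this produces: 1 5 2 6 3 7 4 8
    for (int i = 0; i < n; i++)
        a[i] = (i % 2 == 0) ? i / 2 + 1 : n / 2 + i / 2 + 1;
}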

Some researchers, such as Hibbard, Knuth, and Sedgewick, have found gap sequences with worst-case bounds of O(n^3/2) and O(n^4/3). Pratt found a sequence that leads to O(n log^2 n) worst-case complexity, though it is not of practical use because of the sheer number of gaps in the sequence itself.
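For the curious, Pratt's gaps are the "3-smooth" numbers, those of the form 2^p * 3^q. Here is a minimal sketch of enumerating them (the helper name and interface are my own choices):

#include <algorithm>

int prattGaps(int gaps[], int limit) {
    int n = 0;
    for (int p = 1; p <= limit; p *= 2)
        for (int q = p; q <= limit; q *= 3)
            gaps[n++] = q;
    std::sort(gaps, gaps + n);  // e.g. limit 20: 1 2 3 4 6 8 9 12 16 18
    return n;
}

A million-element array already calls for well over a hundred gaps, and thus over a hundred passes, which is where the practical cost lies.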

It has not been proven whether or not there is a gap sequence that leads to a worst-case bound of O(n log n), so this remains an open (though no longer actively pursued) question in computer science. I say no longer pursued because, while the question is of theoretical interest, demonstrably better-performing algorithms already exist that fill the same niche.

Proven Gap Sequences

Several gap sequences have been well studied, with worst-case and average-case analyses conducted by prominent computer scientists such as Donald Knuth, Robert Sedgewick, and Vaughan Pratt, among others.

One benefit of decoupling the gap-stepping portion of the algorithm from the gap insertion sort, as done above, is that it lets us easily try different gap sequences and gauge their relative performance.
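As an example, here is one possible harness (the function name and calling convention are my own choices): it runs gapInsertionSort once per gap, taking the gaps in decreasing order and skipping any that are too large for the range. The sequence must end in 1 for the result to be fully sorted.

void shellSortWithGaps(int a[], int l, int r, const int gaps[], int n) {
    // gaps[] is assumed to be in decreasing order and to end with 1
    for (int i = 0; i < n; i++)
        if (gaps[i] <= r - l)
            gapInsertionSort(a, l, r, gaps[i]);
}

With a table like int knuth[] = {40, 13, 4, 1}; swapping in a different sequence becomes a one-line change, which makes timing experiments painless.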

In his The Art of Computer Programming, Knuth suggested the sequence 2^k - 1 (1, 3, 7, 15, ...), generated by the recurrence h = 2h + 1 and originally found by Hibbard (the same Hibbard known for early work on binary search trees), resulting in an O(n^3/2) worst-case performance.

void hibbardShellSort(int a[], int l, int r) {
    int h;
    // build the largest Hibbard gap (2^k - 1) that fits the range
    for (h = 1; h < (r - l) / 4; h = 2 * h + 1);
    // integer halving steps back down the sequence: 15, 7, 3, 1
    for (int gap = h; gap > 0; gap /= 2) {
        gapInsertionSort(a, l, r, gap);
    }
}

Interestingly, Sedgewick recommends the sequence 3h+1 (1, 4, 13, 40, ...), which Knuth derived from earlier work by Pratt. Sedgewick found a worst case of O(n^3/2) for it, and it is the sequence used in every edition of his Algorithms series of books.

void sedgewickShellSort(int a[], int l, int r) {
    int h;
    // build the largest 3h+1 gap (1, 4, 13, 40, ...) that fits the range
    for (h = 1; h < (r - l) / 9; h = 3 * h + 1);
    // integer division by 3 steps back down: 40, 13, 4, 1
    for (int gap = h; gap > 0; gap /= 3) {
        gapInsertionSort(a, l, r, gap);
    }
}

Other similar sequences include those derived from 2h+1 and related progressions. Pratt's work with the "3-smooth" numbers and similar mathematical lines of inquiry have led to a multitude of sequences. Still others, such as Ciura's sequence, were found through trial and error and are not derived from a known equation.
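As an example from the empirical school, here is a sketch using Ciura's published gaps (the wrapper function is mine; the numbers are Ciura's, and larger arrays are conventionally handled by extending the table, often by a factor of roughly 2.25 per step):

void ciuraShellSort(int a[], int l, int r) {
    static const int gaps[] = {701, 301, 132, 57, 23, 10, 4, 1};
    for (int i = 0; i < 8; i++)
        if (gaps[i] <= r - l)
            gapInsertionSort(a, l, r, gaps[i]);
}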

There has also been work applying genetic algorithms, neural networks, and other varied, interesting, sometimes surprising approaches to the problem of generating gap sequences, published across various journals. New research is still taking place into this old enigma of a sorting algorithm, which both refuses to be classified and refuses to die. That's all I've got for today, so until next time, happy hacking!

Further Reading

[1] Sedgewick, Robert. Algorithms in C++, Parts 1-4 (3rd ed.). Addison-Wesley.

[2] Knuth, Donald. The Art of Computer Programming, Volume 3: Sorting and Searching (2nd ed.). Addison-Wesley.

