Explaining Mergesort with FIFO Queues
Mergesort is a beautiful algorithm. I make no secret of it being my favorite sorting algorithm (everybody has one of those, right?), and so I like to spend time playing around with it, thinking of different ways of implementing the concept. In this post we're going to talk about mergesort. Again.
Mergesort has some interesting properties in this regard. Depending on the underlying representation of the list, the performance characteristics of how you can sort that list will change. As an example, to merge sort an array you can either merge using O(1) space but sort in O(n^2 log n) time (naive in-place merge sort), or sort in O(n log n) time but use O(n) space, as with traditional top-down/bottom-up mergesort. A linked list, on the other hand, can be merge sorted in O(n log n) time and O(1) extra space, but a linked list node is larger than an integer in an array cell, as well as being located on the heap.
In this post I will introduce a mergesort variant that sorts a dynamic array of integers with O(n log n) comparisons and amortized O(1) extra space. While of novel interest, it more than likely falls into that odd category of "slow O(n log n)" sorts, because of the size of the constants hidden by Big O notation. It's more an interesting aside than the main focus of this post: the main motivation here is to highlight the data movement involved during mergesort. So without further ado, Yet Another Merge Sort!
Mergesort, a Quick Review.
Mergesort is one of the oldest sorting algorithms, first put in algorithmic form by John von Neumann sometime in the 1940s. In The Art of Computer Programming, Donald Knuth speculates, on credible word, that von Neumann was also the first person to implement merge sort in code on a computer. Could just be the super geek in me, but I think that's pretty rad.
Mergesort typifies the divide and conquer algorithmic technique. Divide and conquer can be applied to a problem in which the task can be broken down into smaller instances of the same problem, which are more easily solved. In addition to this property, the answers to those smaller instances must be combinable to solve the larger instance. Sorting a collection is one such problem. The other prototypical divide and conquer algorithm - binary search - just so happens to require the data being searched to be sorted to work.
In the case of (top down) merge sort, a list of size N is recursively split in half until you have turned the N-element list into N one-element lists. How is turning one list into a whole bunch of lists going to help you sort a list? Well, that's only the divide phase of the algorithm, but it is a very important step because a one-element list has a very desirable property: it's a sorted list, and merging two sorted lists into one sorted list is a very simple affair.
Merging Two Sorted Lists
To merge two sorted lists together, the lists need to have only the following two properties: the ability to read the first item on the list, and the ability to remove the first element from the list. Ideally, lists should also be able to grow from the rear, though it's not a firm requirement. If all three of those criteria are met, then the merging algorithm proceeds like the following:
List<item> merge(List<item> a, List<item> b) {
    List<item> mergedList;
    while (!a.isEmpty && !b.isEmpty) {
        if (a.first < b.first) {
            remove the first element from list a and place it in mergedList
        } else {
            remove the first element from list b and place it in mergedList
        }
    }
    while (!a.isEmpty) {
        remove the first element from list a and place it in mergedList
    }
    while (!b.isEmpty) {
        remove the first element from list b and place it in mergedList
    }
    return mergedList;
}
Recursive Mergesort
This process lends itself quite naturally to using recursion. By splitting the list on each pass before making the recursive call, and having the recursion bottom out when the list has been divided into one-element lists, we can take advantage of the unwinding of the stack to drive the merge phase.
template <class T>
List<T> sort(List<T> list) {
    if (list.size() <= 1)
        return list;
    List<T> a = <front half of list>;
    List<T> b = <back half of list>;
    return merge(sort(a), sort(b));
}
There are many different strategies for splitting the input list into two halves, and they are determined by the underlying structure of the list (array, linked list, iterators, etc).
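As a concrete illustration (not from the original post), here is one common splitting strategy for linked lists: the slow/fast iterator trick, where a "fast" iterator advances two steps for every one step of a "slow" iterator, so the slow iterator lands at the midpoint. The function name `split` is my own; the sketch assumes a `std::list`, whose `splice()` moves the back half into a second list without copying nodes.

```cpp
#include <list>
#include <utility>

// Sketch: split a std::list into front and back halves using the
// slow/fast iterator technique. splice() relinks nodes in O(1) per call.
std::pair<std::list<int>, std::list<int>> split(std::list<int> input) {
    auto slow = input.begin();
    for (auto fast = input.begin(); fast != input.end(); ) {
        ++fast;                                    // fast moves two steps...
        if (fast != input.end()) { ++fast; ++slow; } // ...slow moves one
    }
    std::list<int> back;
    // move [slow, end) into the second list; input keeps the front half
    back.splice(back.begin(), input, slow, input.end());
    return {std::move(input), std::move(back)};
}
```

For an array or vector you would instead compute the midpoint index directly; for iterators, `std::distance` and `std::advance` do the same job.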
Iterative Mergesort
You can, of course, avoid recursion altogether: decompose the list iteratively, and then merge the pieces into larger and larger lists. This is called bottom-up merge sort, and, sticking with a theme, the way it is accomplished varies widely based on the underlying types of the lists being sorted.
List<int> bottomUp(List<int> list) {
    List<List<int>> lists;
    while (!list.empty()) {
        List<int> tmp;
        int n = list.removeFirst();
        tmp.addToBack(n);
        lists.addToBack(tmp);
    }
    while (lists.size() > 1) {
        List<int> a = lists.removeFirst();
        List<int> b = lists.removeFirst();
        list = merge(a, b);
        lists.addToBack(list);
    }
    return lists.removeFirst();
}
Not as pretty as the recursive version, but it gets the job done. If what you're sorting is an array, it can be done with nested for loops and no list of queues. Bottom-up mergesort may or may not sort faster than the recursive version, depending on the computer, as it does have better cache locality.
A Simple Implementation
At its most simple, mergesort works by doing a few basic list operations: taking items from the front of one list or the front of another list, and adding them to the back of yet another list, based on the value of the first item on the respective lists. In other words, the lists are behaving as queues.
Queues have the desirable property that they shrink and grow to accommodate the number of items placed on them. More importantly, the three fundamental operations of queues are the exact three data movement operations I just specified as being used for merge sorting.
Under normal circumstances it would be strange to sort a queue. But we're not technically sorting queues - we're using queues to hold the data being sorted, a subtle but significant difference. The one major drawback to merge sort is that it requires scratch space for moving the data around - and those data movements happen to be done just like a queue. There is no rule that when you're sorting an array you must then use an array as the additional memory. Likewise for a linked list, or vector - it's just common practice to do so. So if that data moves like a queue, then why not use a queue? It turns out it is delightfully easy to do so. To make it even easier, I make a slight modification to std::queue so that pop() returns the value being removed instead of being a void method.
#include <queue>

namespace mgc {
    template <class T>
    struct queue : public std::queue<T> {
        T pop() {
            T ret = std::queue<T>::front();
            std::queue<T>::pop();
            return ret;
        }
    };
}
Let's begin with the list-splitting portion of mergesort. Splitting a queue into two halves is a very straightforward affair: you simply create a second queue, and pop the first queue into the second queue until the two are (roughly) equal in size.
void sortutil(mgc::queue<int>& q) {
    if (q.size() <= 1) {
        return;
    }
    // split the queue in half
    mgc::queue<int> front;
    while (front.size() < q.size()) {
        front.push(q.pop());
    }
    // sort each half, then merge
    sortutil(front);
    sortutil(q);
    q = merge(front, q);
}
The merge phase is almost line for line the same as the pseudocode I laid out above.
mgc::queue<int> merge(mgc::queue<int>& a, mgc::queue<int>& b) {
    mgc::queue<int> merged;
    while (!a.empty() && !b.empty()) {
        if (b.front() > a.front()) {
            merged.push(a.pop());
        } else {
            merged.push(b.pop());
        }
    }
    while (!a.empty()) merged.push(a.pop());
    while (!b.empty()) merged.push(b.pop());
    return merged;
}
We can use this method to sort an array, a linked list, a vector, heck, anything: copy its contents into a queue, call the above sort, and then pop the resulting sorted queue back into whatever collection you were sorting. The two additional O(n) loops to move the data into the queue at the beginning and out of it at the end do not change the O(n log n) complexity, as O(2n + n log n) is the same as O(n log n).
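To show how little wrapper code that takes, here's a self-contained sketch of sorting a std::list this way (the wrapper name `listSort` is my own; the queue machinery from above is repeated so the example compiles on its own):

```cpp
#include <list>
#include <queue>

namespace mgc {
    template <class T>
    struct queue : public std::queue<T> {
        T pop() {
            T ret = std::queue<T>::front();
            std::queue<T>::pop();
            return ret;
        }
    };
}

mgc::queue<int> merge(mgc::queue<int>& a, mgc::queue<int>& b) {
    mgc::queue<int> merged;
    while (!a.empty() && !b.empty()) {
        if (b.front() > a.front()) merged.push(a.pop());
        else                       merged.push(b.pop());
    }
    while (!a.empty()) merged.push(a.pop());
    while (!b.empty()) merged.push(b.pop());
    return merged;
}

void sortutil(mgc::queue<int>& q) {
    if (q.size() <= 1) return;
    mgc::queue<int> front;
    while (front.size() < q.size()) front.push(q.pop());
    sortutil(front);
    sortutil(q);
    q = merge(front, q);
}

// Drain the list into a queue, sort the queue, drain it back.
// The same wrapper works for any container with front()/pop_front()/push_back().
void listSort(std::list<int>& l) {
    mgc::queue<int> mem;
    while (!l.empty()) { mem.push(l.front()); l.pop_front(); }
    sortutil(mem);
    while (!mem.empty()) l.push_back(mem.pop());
}
```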
This is where my claim of amortized constant space and O(n log n) comparisons comes into play. If we have a std::vector that we need to sort, we can do it in amortized constant space by first transferring its contents to a queue instead of copying them, as in the following example. We can then apply the above mergesort algorithm to the queue, and finish by transferring the now-sorted queue back into the vector:
void sort(std::vector<int>& a) {
    mgc::queue<int> mem;
    while (!a.empty()) {
        std::swap(a.front(), a.back());
        mem.push(a.back());
        a.pop_back();
    }
    sortutil(mem);
    while (!mem.empty())
        a.push_back(mem.pop());
}
So there you have it: a FIFO-queue-based mergesort algorithm that can sort dynamically sized collections in amortized constant space with O(n log n) comparisons.
What do you think, is that cheating? Let me know! and until next time, Happy Hacking!