Breaking Data Down, Again
The Quick Sort Strategy
Operationally, the quick sort is much like the merge sort: part of the process divides the data (roughly) in two, and part iterates across the data to set up the sorting action. In one important way, however, it is exactly the opposite of the merge sort. Where the merge sort breaks all the data down to individual values and sorts them as the recursion unwinds, the quick sort sorts the data as the recursion descends, so that when the recursion ends for all the data, the data is sorted; each method call then simply completes and returns, all the way back up to where the sort was originally called. Both sorts are interesting and excellent opportunities to learn more about data management as well as to practice your programming skills. First, start by viewing the video to see the quick sort in action.
How it gets done
Again, much like the merge sort, the operational part of the quick sort is not complicated. There is a test to make sure the lower index is less than the upper index, the partition process is called, and then the quick sort is called for the parts of the list to the left and right of the returned partition index. Consider the following code.
public void runQuickSort( char[] array, int lowIndex, int highIndex )
// first, make sure the lower index of the list
// is less than the upper index of the list
// call the partition process to set the pivot value
// and get its index
// call the quick sort process for the left side of the list
// to the left of the pivot index
// call the quick sort process for the right side of the list
// to the right of the pivot index
If the lower index is not less than the upper index, there is no reason to continue; this ends the recursion as you observed in the video.
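For reference, here is a minimal sketch of how the skeleton above might be filled in. It assumes the runPartition method described in the next section, which returns the final index of the pivot value; your own implementation may differ in the details.

public void runQuickSort( char[] array, int lowIndex, int highIndex )
   {
    // continue only while the segment holds at least two values
    if( lowIndex < highIndex )
       {
        // partition the segment and capture the pivot's final index
        int pivotIndex = runPartition( array, lowIndex, highIndex );

        // quick sort the part of the list to the left of the pivot index
        runQuickSort( array, lowIndex, pivotIndex - 1 );

        // quick sort the part of the list to the right of the pivot index
        runQuickSort( array, pivotIndex + 1, highIndex );
       }
   }

A call such as runQuickSort( data, 0, data.length - 1 ), where data is a char array, would then sort the entire array.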
The Partitioning Process
Like the merge sort, the actual quick sort operation is pretty simple. The interesting part of the process is the partitioning action. The pseudo code for that is provided here.
public int runPartition( char[] array, int lowIndex, int highIndex )
// Identify the partition value at the beginning
// of the array segment (at lowIndex)
// set the working index and the pivot index
// to the low index parameter
// Start a loop across the array segment
// from the low index to the high index, inclusive
// This will use the working index
// check if the value at the current working index
// is less than the original pivot value
// increment the pivot index
// swap the value at the working index
// with the value at the pivot index
// end working loop
// Swap the original pivot value (at the low index)
// with the value at the current pivot index
// return the pivot index
In truth, this is not terribly complicated, but it helps a lot to draw a picture or use manipulatives (e.g., pieces of paper, blocks, etc.) to make sure you can follow the process.
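For reference, here is one way the partition pseudo code above might be filled in. This is only a sketch of the approach described, with the pivot taken from the low end of the segment; it is not the only correct implementation.

public int runPartition( char[] array, int lowIndex, int highIndex )
   {
    // identify the partition (pivot) value at the beginning of the segment
    char pivotValue = array[ lowIndex ];

    // set the pivot index to the low index parameter
    int pivotIndex = lowIndex;

    // loop across the array segment from the low index to the high index, inclusive
    for( int workingIndex = lowIndex; workingIndex <= highIndex; workingIndex++ )
       {
        // check if the value at the working index is less than the original pivot value
        if( array[ workingIndex ] < pivotValue )
           {
            // increment the pivot index, then swap the values
            // at the working index and the pivot index
            pivotIndex++;

            char swapValue = array[ workingIndex ];
            array[ workingIndex ] = array[ pivotIndex ];
            array[ pivotIndex ] = swapValue;
           }
       }

    // swap the original pivot value (at the low index)
    // with the value at the current pivot index
    char temp = array[ lowIndex ];
    array[ lowIndex ] = array[ pivotIndex ];
    array[ pivotIndex ] = temp;

    // return the pivot index
    return pivotIndex;
   }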
Complexity
As mentioned, this method uses a divide by two strategy, so half of the operation qualifies as a Log2(N) strategy. However, the partitioning operation requires iterating across the data, so the other half of the strategy takes roughly N operations. For that reason, the complexity of the quick sort is considered to be N Log2(N), again, very comparable to the merge sort. However, if the quick sort is given an already sorted list -- sorted either forward or backward -- you should recognize that the divide by two process breaks down. In fact, for already sorted lists, the quick sort falls back to N^2 complexity. You should be able to understand and explain this.
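One way to see this, assuming the pivot is taken from the low end of the segment as in the sketch above: when the segment is already sorted, the pivot ends up at one end of the segment, so each recursive call removes only one value from the problem instead of half. The partition passes then iterate across roughly N, then N - 1, then N - 2, and so on down to 1 values, and N + (N - 1) + ... + 1 = N(N + 1)/2, which grows on the order of N^2.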
Summary
The quick sort is a handy tool that takes advantage of the divide and conquer strategy. Like the merge sort, it is not really complicated, and like the merge sort, it has a complexity of N Log2(N) in most circumstances. Learning either of these sorting actions is not difficult, but what is far more important is that you understand the algorithm and can describe the process to someone else. That is the point at which you are learning what you need to know from your data structures course.