A Little Follow Up
The section in Chapter 11 that discusses time complexity is pretty thorough. However, every time a student learns a new algorithm, it is worth checking how that algorithm compares to others, and why.
The merge and quick sorts both use divide and conquer strategies AND loops, in different ways, but that is how they work. Consider the following.
The Merge Sort
The merge sort divides the items down until they are singles, and then merges them back together before ending each recursive call. Here is the pseudocode.
void runMergeSort( char array[], int lowIndex, int highIndex )
1 // first, make sure the lower index of the list
// is less than the upper index of the list
2 // find the mid point of the local list
3 // call the merge sort process for the left side of the list
4 // call the merge sort process for the right side of the list
5 // then merge the two lists back together as sorted data
The initial observations for this:
- line 1 is just an if test, just one line
- line 2 is a calculation, just one line
- line 3 is a call to sort the data to the left of the mid point (roughly dividing the list by two); repeatedly halving the list means log2N levels of recursion
- line 4 is a call to sort the data to the right of the mid point (roughly dividing the list by two); again, log2N levels of recursion
- line 5 is the merging process. Depending on how this is conducted, there are almost always at least two iterations across the list: one to merge the data into a working array, and one to copy the working array back into the local array. At worst, that is 2N operations inside the function, times R for the number of recursive calls that invoke the merge operation. The R*2*N work will be conducted log2N times in each direction
Running the analysis:
- add them up: 1 + 1 + log2N * R*2*N + log2N * R*2*N
- combine terms: 2*R*2*N*log2N + 2
- removing the constants: N*log2N
- finding the largest power: done
- time complexity is N*log2N or Big O( Nlog2N )
The Quick Sort
The difference between the quick sort and the merge sort is that the quick sort conducts its sorting operations as it recurses downward (in the partition operation), while the merge sort recurses all the way to the bottom of the list, breaking the items apart, and then conducts the sorting (in the merge operation) as the recursion is completing. Other than that, these sorts are functionally the same. Take a look at the pseudocode.
void runQuickSort( char array[], int lowIndex, int highIndex )
1 // first, make sure the lower index of the list
// is less than the upper index of the list
2 // call the partition process to set the pivot value
// and get its index
3 // call the quick sort process for the left side of the list
// to the left of the pivot index
4 // call the quick sort process for the right side of the list
// to the right of the pivot index
The initial observations for this:
- line 1 is an if test, just once
- line 2 is a call to the partition operation, almost always one pass across the list, N, but . . .
- line 3 is a call to sort the elements on the lower side of the partition value (dividing the list by roughly 2), which will drive the number of partition functions called, log2N
- line 4 is a call to sort the elements on the upper side of the partition value (dividing the list by roughly 2), which will drive the number of partition functions called, log2N
Run the analysis:
- add them up: 1 + log2N * N + log2N * N
- combine them: 2*log2N * N + 1
- drop the constants: log2N * N
- show the results: the time complexity is Nlog2N, or Big O( Nlog2N )
Again, This is Not Difficult
The analysis process is just thinking through how many times something will be repeated, from 1 to log2N to N to N^2 and beyond. The tricky part in some cases is tracking what the code is doing, but that is the cool part of Computing Science. People in this discipline are almost always asking, "How does that work?" With a good CS education, those folks can answer that question.
As noted in the Chapter 11 section, it is still handy to check out the BigOCheatSheet web site.