Friday 6 December 2013

Sorts and Big O

If you had a deck of 20 cards numbered from 1-20 laid out in front of you and you were asked to sort them in increasing order as efficiently as possible, how would you do it? Surprisingly, there are many acceptable approaches to this problem, which leads to the topic of sorting algorithms. For my last post, I'll be discussing three basic sorts: selection sort, quick sort, and merge sort, as well as their efficiency. If any of my explanations are unclear, I really recommend checking out the Wikipedia articles on each of the sorts, since they explain everything in more detail and have GIFs which I found really helpful.

       Selection

Selection sort is probably one of the simplest and most intuitive approaches to sorting - you look through the list (or in this case the deck of cards) to find the smallest element, and set that aside. Then you go through the rest of the deck and repeat the process as many times as there were cards to start with, and you end up with a sorted deck. In terms of code, a possible implementation starts at the first (0th) element of an array and switches it with the smallest value in the rest of the array. The second time through, you start at the second (1st) element, and so on until you have gone through the whole array.
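Here is what that implementation could look like in Python (a minimal sketch; the function name and details are my own):

    def selection_sort(lst):
        """Sort lst in place in increasing order."""
        for i in range(len(lst)):
            # find the index of the smallest value in the rest of the list
            smallest = i
            for j in range(i + 1, len(lst)):
                if lst[j] < lst[smallest]:
                    smallest = j
            # switch it with the element at position i
            lst[i], lst[smallest] = lst[smallest], lst[i]

For example, selection_sort([13, 2, 20, 7, 1]) leaves the list as [1, 2, 7, 13, 20].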

If the size of the input (the number of elements) is n, then the first time through the list you have to check n - 1 elements. The second time, you ignore one element, so you only need to check n - 2 elements, and so on until you have 0 elements left to check. So in relation to n, the number of operations you have to do is (n - 1) + (n - 2) + ... + 1 = (n^2 - n) / 2, which means selection sort is O(n^2).
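To sanity-check that arithmetic, you can count the comparisons directly (a quick sketch; count_comparisons is a made-up helper, not part of any library):

    def count_comparisons(n):
        """Count the comparisons selection sort makes on n elements."""
        comparisons = 0
        for i in range(n):
            comparisons += n - i - 1   # (n - 1) + (n - 2) + ... + 1 + 0
        return comparisons

    print(count_comparisons(20))   # 190
    print((20 ** 2 - 20) // 2)     # also 190, matching (n^2 - n) / 2

For the 20-card deck, that works out to 190 comparisons either way.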

    Quick

Quick sort is a naturally recursive algorithm that goes as follows: choose an element of the list that is easily accessible (for example, the first, last or middle one) - this element is called the pivot. Then, move all of the elements that are smaller than the pivot to its left (or right if you want decreasing order) and all of the elements that are bigger than the pivot to its right. Then you perform this procedure again on the left half and the right half of the array (excluding your original pivot, since it is already in the correct position) and repeat until you reach the base case - a slice with 1 or 0 elements, which doesn't need to be sorted. The recursive nature of quick sort lends itself to a very simple implementation in code: if the array has more than 1 element, return ([quick sort of all the elements less than the pivot] + [the pivot] + [quick sort of all of the elements greater than the pivot]); otherwise, just return the array.
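That description translates almost line for line into Python (a sketch; I picked the first element as the pivot, and the names are my own):

    def quick_sort(lst):
        """Return a new list with the elements of lst in increasing order."""
        if len(lst) <= 1:   # base case: 0 or 1 elements are already sorted
            return lst
        pivot = lst[0]      # an easily accessible element
        smaller = [x for x in lst[1:] if x < pivot]
        bigger = [x for x in lst[1:] if x >= pivot]
        return quick_sort(smaller) + [pivot] + quick_sort(bigger)

So quick_sort([13, 2, 20, 7, 1]) returns [1, 2, 7, 13, 20]. Note that using >= for the bigger list keeps duplicates of the pivot from being lost.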

Assuming that for each pivot you pick there are roughly the same number of elements less than the pivot as there are greater than it, you'll be splitting the list in half each time, which takes about log n (base 2) levels of splitting. At each level, you check (at most) every element against a pivot. Since there are (almost) n elements checked log n times, you get O(n log n).

However, if by chance every pivot you pick is either the smallest or the biggest element in the list you are examining, you'll have to split the list n - 1 times. Since you still check at most every element against a pivot at each level, the number of operations is roughly n(n - 1), which is O(n^2).
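You can see both behaviours by measuring the recursion depth, i.e. the number of levels of splitting (a rough sketch, using the same first-element pivot as above):

    import random

    def depth(lst):
        """Return the recursion depth of quick sort (first-element pivot) on lst."""
        if len(lst) <= 1:
            return 0
        pivot = lst[0]
        smaller = [x for x in lst[1:] if x < pivot]
        bigger = [x for x in lst[1:] if x >= pivot]
        return 1 + max(depth(smaller), depth(bigger))

    print(depth(list(range(20))))   # already sorted input: 19 levels, i.e. n - 1

    shuffled = list(range(20))
    random.shuffle(shuffled)
    print(depth(shuffled))          # usually far fewer levels, on the order of log n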

      Merge

Merge sort is another recursive algorithm. It first splits the array until each slice has 1 or 0 elements - in other words, until each slice is sorted. Then it merges pairs of subarrays into sorted lists by comparing the first element of each subarray and appending the smaller (or bigger, if you want decreasing order) one to the merged array until both subarrays are empty (since the subarrays are already sorted, the merged array built this way will also be sorted). This process continues until only one array remains.
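A Python sketch of merge sort might look like this (again, the names and details are my own):

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        merged = []
        while left and right:
            # append the smaller first element to keep the result sorted
            if left[0] <= right[0]:
                merged.append(left.pop(0))
            else:
                merged.append(right.pop(0))
        # one of the lists is empty; whatever remains is already sorted
        return merged + left + right

    def merge_sort(lst):
        """Return a new list with the elements of lst in increasing order."""
        if len(lst) <= 1:     # base case: already sorted
            return lst
        mid = len(lst) // 2   # split the list in half
        return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

Since merge_sort always passes fresh slices to merge, the pop(0) calls never touch the original list.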

Similarly to the reasoning used for quick sort, since you are performing n operations log n times (once for each level of splitting), you end up with O(n log n).


In the end, it is clear that merge sort will always beat selection sort, and quick sort will almost always beat it too. But which of merge and quick is faster? In merge sort you must check every element at every level, whereas in quick sort you check every element except the pivots that you already know are in the correct position from previous levels. In other words, in the best case, quick sort performs fewer operations per level. Since a balanced quick sort splits its input the same number of times as merge sort, it will be slightly faster. Put another way, both of these sorts are composed of two parts: the splitting (log n levels) and the operations at every level (n). Unless the quick sort is perfectly balanced (split exactly in half each time), its number of levels will be somewhere between log n and n - 1, but the operations per level will be fewer. So the faster algorithm depends on the relationship between those two parts - whether the time lost by quick sort in splitting more times is offset by the time gained in doing fewer operations per level. In practice, as we've seen in class, quick sort ends up being slightly faster.
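If you want to check this yourself, you can time the three sketches above on the same shuffled deck (a rough benchmark, not a rigorous one; exact numbers will vary by machine):

    import random
    import timeit

    deck = list(range(1, 1001))
    random.shuffle(deck)

    # deck[:] copies the list so every run sorts the same shuffled input
    print(timeit.timeit(lambda: selection_sort(deck[:]), number=10))
    print(timeit.timeit(lambda: quick_sort(deck[:]), number=10))
    print(timeit.timeit(lambda: merge_sort(deck[:]), number=10))

On a list this size, the two O(n log n) sorts should come out well ahead of selection sort.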
