One of the oldest algorithms for sorting numbers is also one of the fastest: Quicksort was invented by Tony Hoare in 1959. This page animates the operation of Quicksort step by step, but first I should introduce the graphical elements we will be using. We start off with an array of random numbers between 1 and 200. Each number is displayed below a column indicating its magnitude, as shown above.
A central element of Quicksort is the use of pivots. Here we have chosen the first element of the array to be the pivot, which is a valid choice; we will discuss the use of random pivots later. The pivot is marked in red, and its most important aspect is its value, indicated among the column heights by the red line. The purpose of the pivot is to place all elements smaller than or equal to it on the left side of the array and all elements larger than it on the right side. The pivot is then placed between these two sides.
The panel above partitions the data about a pivot as just described. The green boxes contain numbers smaller than or equal to the pivot (red) and the blue boxes contain larger numbers. As the algorithm moves from left to right, entries are swapped to separate green from blue. Once the pivot is placed between the two sides, it occupies its final position in the array! So we have already sorted one entry of the array into its correct position.
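For readers who prefer code, here is a minimal Python sketch of one way to implement this partitioning step (a Lomuto-style scan with the first element as pivot; the animation's exact swap order may differ):

```python
def partition(a, lo, hi):
    """Partition a[lo:hi+1] about a[lo] (the pivot, shown in red).

    Afterwards every element <= pivot sits left of the pivot's final
    index and every larger element sits right of it.  Returns the
    pivot's final index.
    """
    pivot = a[lo]
    boundary = lo                 # last index of the "green" (<= pivot) region
    for i in range(lo + 1, hi + 1):
        if a[i] <= pivot:         # green: move it next to the other greens
            boundary += 1
            a[i], a[boundary] = a[boundary], a[i]
    # Drop the pivot between the green and blue regions.
    a[lo], a[boundary] = a[boundary], a[lo]
    return boundary
```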
Quicksort is a ‘divide and conquer’ algorithm, a common strategy in algorithm design. Now that the numbers are partitioned, we divide the array into the green and blue parts and partition each of them separately in turn. We repeat this recursively on both sides until the subarrays contain at most one element, at which point the entire array has been sorted. As the two subarrays are in general not equal in size, and often far from it, this recursive descent can look fairly random.
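Reusing the partition sketch above, the recursive structure might look like this (again only an illustrative sketch, not the code driving the animation):

```python
import random

def quicksort(a, lo=0, hi=None):
    """Sort a[lo:hi+1] in place by partitioning recursively."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:                       # subarrays of size <= 1 are already sorted
        p = partition(a, lo, hi)      # pivot lands at its final index p
        quicksort(a, lo, p - 1)       # sort the "green" side
        quicksort(a, p + 1, hi)       # sort the "blue" side

# Small demo in the spirit of the panels: random numbers between 1 and 200.
data = [random.randint(1, 200) for _ in range(15)]
quicksort(data)
print(data)
```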
So far we have sorted into ascending order, with the largest values at the end. Quicksort is of course just as capable of sorting into descending order, as shown in the panel above: by moving the large values to the front of the array during partitioning, we obtain a descending sort. For the rest of this page we return to ascending order.
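In terms of the partition sketch from before, one way to obtain the descending variant is simply to flip the comparison, so that the larger values end up on the left:

```python
def partition_desc(a, lo, hi):
    """Same scan as before, but entries *larger* than the pivot go left."""
    pivot = a[lo]
    boundary = lo
    for i in range(lo + 1, hi + 1):
        if a[i] >= pivot:             # flipped comparison: >= instead of <=
            boundary += 1
            a[i], a[boundary] = a[boundary], a[i]
    a[lo], a[boundary] = a[boundary], a[lo]
    return boundary
```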
While Quicksort is fast on the random data we have seen so far, it performs poorly on worst-case input. Above we start with a strictly descending array of numbers. The easiest way to sort it would be to simply reverse it, but with the first element as pivot Quicksort does not partition this data well at all: each pivot is the largest remaining element, so one side of every partition is empty, and the sort slows to a crawl. Just compare the sorting speed of this panel to the previous one.
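To make the slowdown concrete, here is a rough, self-contained sketch that counts element comparisons for the first-element-pivot version. The exact numbers will vary, but a strictly descending input costs about n*(n-1)/2 comparisons, while random input stays far lower:

```python
import random

def count_comparisons(values):
    """Run first-element-pivot quicksort on a copy and count comparisons."""
    a = list(values)
    count = 0

    def part(lo, hi):
        nonlocal count
        pivot = a[lo]
        boundary = lo
        for i in range(lo + 1, hi + 1):
            count += 1                        # one comparison per scanned entry
            if a[i] <= pivot:
                boundary += 1
                a[i], a[boundary] = a[boundary], a[i]
        a[lo], a[boundary] = a[boundary], a[lo]
        return boundary

    def sort(lo, hi):
        if lo < hi:
            p = part(lo, hi)
            sort(lo, p - 1)
            sort(p + 1, hi)

    sort(0, len(a) - 1)
    return count

n = 200
print(count_comparisons([random.randint(1, 200) for _ in range(n)]))  # grows roughly like n log n
print(count_comparisons(list(range(n, 0, -1))))                       # about n*(n-1)/2
```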
The solution to the problem above is to choose a random element of the subarray in question as the pivot. Once chosen, the pivot is swapped to the first position of the subarray, and the partitioning is done as before. The sort now proceeds much faster on our worst-case input, as can be seen in this panel.
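In the partition sketch from earlier, this random choice amounts to one extra swap at the start, something like the following (again an illustrative sketch):

```python
import random

def partition_random(a, lo, hi):
    """Pick a random pivot, swap it to the front, then partition as before."""
    r = random.randint(lo, hi)
    a[lo], a[r] = a[r], a[lo]         # the random pivot now sits at a[lo]
    pivot = a[lo]
    boundary = lo
    for i in range(lo + 1, hi + 1):
        if a[i] <= pivot:
            boundary += 1
            a[i], a[boundary] = a[boundary], a[i]
    a[lo], a[boundary] = a[boundary], a[lo]
    return boundary
```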
Just for fun, let's sort a slightly larger array, one that still fits on the screen without losing all the detail. Here we have 101 random values, again numbers between 1 and 200. The animation speed is turned up only by a factor of 2.5, which is not that much, and yet the array is again sorted fairly quickly. This is because Quicksort scales well to larger arrays.
If you enjoyed this page, there are more algorithms and data structures to be found on the main page. In particular, the page about the Heap includes a panel on Heapsort, a different but similarly efficient sorting algorithm.