1,615 views

3 Answers

Best answer
4 votes

Quicksort is usually faster than sorts that are slower than O(n log n) (say, Insertion sort with its O(n²) running time), simply because for large n their running times explode.

A good reason why Quicksort is so fast in practice compared to most other O(n log n) algorithms such as Heapsort is that it is relatively cache-efficient. Its running time is actually O((n/B) log(n/B)), where B is the cache block size. Heapsort, on the other hand, gets no such speedup: it does not access memory cache-efficiently at all.

The reason for this cache efficiency is that Quicksort linearly scans and linearly partitions the input. This means we make the most of every cache load: every number in a loaded cache line is read before that line is swapped out for another. In particular, the algorithm is cache-oblivious, which gives good cache performance at every cache level, which is another win.

Source: https://cs.stackexchange.com/questions/3/why-is-quicksort-better-than-other-sorting-algorithms-in-practice
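To make that linear scan concrete, here is a minimal sketch of a Lomuto-style partition and quicksort in C. This particular scheme and the last-element pivot are illustrative assumptions, not something the quoted answer prescribes. Note how the partition loop reads the segment strictly left to right, so every cache line it loads is fully consumed before eviction:

```c
#include <stdio.h>

/* Swap two array elements in place. */
static void swap(int *a, int *b) {
    int t = *a; *a = *b; *b = t;
}

/* Lomuto-style partition: a single left-to-right scan of arr[lo..hi].
 * Every element is touched in order, so each cache line loaded is
 * fully used before it is evicted. Returns the pivot's final index. */
static int partition(int arr[], int lo, int hi) {
    int pivot = arr[hi];          /* last element as pivot (illustrative) */
    int i = lo - 1;
    for (int j = lo; j < hi; j++) {
        if (arr[j] <= pivot)
            swap(&arr[++i], &arr[j]);
    }
    swap(&arr[i + 1], &arr[hi]);
    return i + 1;
}

static void quicksort(int arr[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(arr, lo, hi);
        quicksort(arr, lo, p - 1);
        quicksort(arr, p + 1, hi);
    }
}

int main(void) {
    int a[] = {2, 8, 7, 1, 3, 5, 6, 4, 9};
    int n = (int)(sizeof a / sizeof a[0]);
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```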

In addition, it is very easy to avoid Quicksort's worst-case running time of O(n²) almost entirely by using an appropriate choice of the pivot, such as picking it at random (this is an excellent strategy).
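As a hedged sketch of that strategy, the following C fragment (reusing the swap and partition helpers from the sketch above) swaps a uniformly random element into the pivot slot before partitioning; seed the generator once with srand before sorting:

```c
#include <stdlib.h>
#include <time.h>

/* Randomized pivot: move a uniformly chosen element into the last
 * slot, then partition as usual. This makes the O(n^2) worst case
 * vanishingly unlikely on any fixed input. */
static int randomized_partition(int arr[], int lo, int hi) {
    int r = lo + rand() % (hi - lo + 1);  /* uniform index in [lo, hi] */
    swap(&arr[r], &arr[hi]);              /* reuses swap/partition from above */
    return partition(arr, lo, hi);
}
```

Call srand((unsigned)time(NULL)) once at program start so the pivot sequence differs between runs.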

2 votes

Quicksort sorts the array in place, working within the array it was given (unlike merge sort, for instance, which allocates a separate array). It therefore benefits from the principle of locality of reference.

A cache benefits from multiple accesses to the same region of memory: only the first access actually has to go to main memory, while the rest are served from the cache, which is much faster than a memory access.

Merge sort, for instance, needs many more memory (RAM) accesses, since every auxiliary array it creates means going to RAM again.
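For contrast, here is a minimal sketch of merge sort's merge step in C (the names and layout are illustrative assumptions): the tmp buffer is the auxiliary array this answer refers to, and every element makes a round trip through it, extra memory traffic that an in-place partition never pays:

```c
#include <string.h>

/* Merge arr[lo..mid] and arr[mid+1..hi] through an auxiliary buffer.
 * Each element is written to tmp and then copied back, so every
 * merge pass touches extra memory that quicksort never needs. */
static void merge(int arr[], int tmp[], int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];
    while (j <= hi)  tmp[k++] = arr[j++];
    memcpy(&arr[lo], &tmp[lo], (size_t)(hi - lo + 1) * sizeof arr[0]);
}
```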

Trees are even worse, since two sequential accesses in a tree are unlikely to land close to each other in memory. (A cache is filled in blocks, so for sequential accesses only the first access to a block is a "miss" and the rest are "hits".)

1 vote
I think one of the main reasons is that Quicksort is cache-friendly. When it processes a segment of an array, it accesses elements at the beginning and end of the segment and moves toward the center.

So when you start, you access the first element of the array, and a block of memory (a cache line) is loaded into the cache. When you then access the second element, it is (most likely) already in the cache, so the access is very fast.

Other algorithms such as heapsort don't work like this; they jump around in the array a lot, which makes them slower.
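What this answer describes, scanning from both ends toward the center, is essentially Hoare's partition scheme. A minimal C sketch (the middle-element pivot is an illustrative assumption):

```c
/* Hoare-style partition: two indices scan inward from both ends of
 * arr[lo..hi]. Both scans are sequential, so each stream of accesses
 * stays cache-friendly, exactly as described above. */
static int hoare_partition(int arr[], int lo, int hi) {
    int pivot = arr[lo + (hi - lo) / 2];  /* middle element as pivot */
    int i = lo - 1, j = hi + 1;
    for (;;) {
        do { i++; } while (arr[i] < pivot);
        do { j--; } while (arr[j] > pivot);
        if (i >= j) return j;             /* split into [lo..j], [j+1..hi] */
        int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
    }
}
```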

Related questions

3 votes
1 answer
iarnav asked Jan 14, 2018
2,461 views
You have an array of n elements. Suppose you implement quick sort by always choosing the central element of the array as the pivot. Then the tightest lower bound for the...
3 votes
1 answer
SHALINI PORWAL asked Aug 10, 2017
1,357 views
When the array is already sorted in reverse order, what will be the recurrence relation for the number of swaps on an array of n elements using quicksort?
2 votes
3 answers
LavTheRawkstar asked Apr 15, 2017
1,191 views
With quicksort, show the results after the first partitioning of the given array A = (2,8,7,1,3,5,6,4,9). Analyze the time complexity of quicksort in the best case.