(I'm going to fix this in the quoted post.)

Jplus wrote:[...] and you could be more efficient if you moved indices from both ends of the list. Let the left index go up until it meets a value that is ~~bigger~~ not smaller than the pivot, let the right index go down until it meets a value that is ~~smaller~~ not larger, break the loop if the left index and the right index have met, or swap and repeat otherwise.
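To make that concrete, here's a minimal sketch of the two-index partitioning scheme in Python (this is the classic Hoare-style partition; the function names and the middle-element pivot choice are mine, not from the quoted post):

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around a pivot with two inward-moving indices.

    Returns an index j such that everything in a[lo..j] is <= everything
    in a[j+1..hi].
    """
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:      # left index goes up to a value not smaller
            i += 1
        j -= 1
        while a[j] > pivot:      # right index goes down to a value not larger
            j -= 1
        if i >= j:               # the indices have met: done
            return j
        a[i], a[j] = a[j], a[i]  # otherwise swap and repeat

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p)
        quicksort(a, p + 1, hi)
```

Note that the inner loops stop on values *equal* to the pivot as well, which is exactly what keeps the two indices from running off the ends of the range.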

I read the dual-pivot paper, but I don't think it's faster than the normal single-pivot method. If you partition the array into three parts you need to recurse less deeply on average, but you make more recursive function calls. You also have to perform more index calculations, and you perform two comparisons per element per partitioning pass instead of one (the paper suggests that single-pivot methods need two comparisons per element too, but this is incorrect). Choosing a sensible pivot also becomes slightly harder. The author of the paper clearly didn't have a very strong background in algorithmics. At best it's going to be not much slower, but you can test how it works for you anyway.
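For reference, the three-part partitioning goes roughly like this (a sketch in the style of Yaroslavskiy's scheme, not the paper's exact code; all names are mine). You can see the two comparisons per element in the main loop:

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """Quicksort with two pivots p <= q, splitting into three parts:
    values < p, values between p and q, and values > q."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    if a[lo] > a[hi]:                 # make sure p <= q
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    lt, gt, k = lo + 1, hi - 1, lo + 1
    while k <= gt:
        if a[k] < p:                  # comparison 1
            a[k], a[lt] = a[lt], a[k]
            lt += 1
        elif a[k] > q:                # comparison 2, for the same element
            while a[gt] > q and k < gt:
                gt -= 1
            a[k], a[gt] = a[gt], a[k]
            gt -= 1
            if a[k] < p:
                a[k], a[lt] = a[lt], a[k]
                lt += 1
        k += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]       # move the pivots between the parts
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
```

Three recursive calls per pass instead of two, and the extra bookkeeping for `lt` and `gt`, are the index calculations I mean.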

PM 2Ring wrote:
Jplus wrote:Finally, you can improve a little by delaying the insertion sort until the end. If a subrange is shorter than your insertion sort threshold, just return directly from the quicksort function. When the topmost quicksort call has returned, do a single insertion sort pass over the entire array.

Nice trick, Julian!

Thanks. It wasn't my invention though.

Read about introsort, the algorithm used in the C++ standard library. (The introsort paper states that the final insertion pass doesn't make sorting faster, but that's in C++, where all those small function calls can be automatically inlined by the compiler.)