It is well known that comparison-based sorting algorithms cannot run faster than O(n log n). Therefore, the running times of Mergesort, Heapsort, and (in the average case) Quicksort are asymptotically optimal. Beyond asymptotic running times, the practical speed of sorting algorithms has gained importance in recent years: owing to the rapid growth in data generation, one algorithm is preferred over another even if it is only slightly more efficient. In this study, we revisit the Mergesort algorithm with a different approach. Unlike classical Mergesort, we do not divide the input unconditionally; instead, we divide the input array into ascending and descending sub-arrays. We then merge the resulting sub-arrays using a slightly modified version of the classical Merge function. From these two operations we obtain three new sorting algorithms, depending on how the ascending and descending sub-arrays are formed. Among these three, the third, which is our main proposed algorithm, is the most significant both theoretically and practically. In this algorithm, we divide the input array into alternately ascending and descending sub-arrays; moreover, the terms of these sub-arrays do not have to be consecutive terms of the input array. The asymptotic running time of the proposed algorithm is O(n log n) in the worst case and O(n) in the best case. In practice, the proposed algorithm outperforms classical Mergesort on arrays of various sizes drawn from Gaussian and uniform distributions. On an array of four million elements, it achieved a 17% improvement over classical Mergesort for the Gaussian distribution and a 29% improvement for the uniform distribution of the same size. The proposed algorithm also performs considerably better than traditional sorting algorithms on real data collected from various sources.
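
To make the general idea concrete, the following is a minimal Python sketch of a run-based merge sort: it splits the input into maximal ascending and descending runs of consecutive elements, reverses the descending runs, and merges the runs pairwise. This is an illustration of the splitting-and-merging principle only, not the authors' exact method; in particular, the paper's main algorithm also allows sub-array terms that are not consecutive in the input, which this sketch omits.

# Hedged sketch: natural-run mergesort (ascending/descending runs, pairwise merging).
# Not the paper's exact algorithm; the non-consecutive sub-array refinement is omitted.

def split_into_runs(a):
    """Split a list into maximal monotone runs; descending runs are reversed."""
    runs, n, i = [], len(a), 0
    while i < n:
        j = i + 1
        if j < n and a[j] < a[i]:
            # Descending run: extend while strictly decreasing, then reverse it.
            while j + 1 < n and a[j + 1] < a[j]:
                j += 1
            run = a[i:j + 1]
            run.reverse()
        else:
            # Ascending (or single-element) run: extend while non-decreasing.
            while j < n and a[j] >= a[j - 1]:
                j += 1
            run = a[i:j]
            j -= 1  # index of the last element included in the run
        runs.append(run)
        i = j + 1
    return runs

def merge(left, right):
    """Standard two-way merge of two ascending lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def run_mergesort(a):
    """Sort by splitting into monotone runs and merging them pairwise."""
    runs = split_into_runs(list(a))
    if not runs:
        return []
    while len(runs) > 1:
        merged = [merge(runs[k], runs[k + 1]) for k in range(0, len(runs) - 1, 2)]
        if len(runs) % 2:          # carry an unpaired run forward
            merged.append(runs[-1])
        runs = merged
    return runs[0]

# Example: an already sorted or reverse-sorted input yields a single run,
# so the best case is linear; random inputs produce many runs and give
# the usual O(n log n) behavior.
print(run_mergesort([5, 3, 1, 2, 4, 6, 0]))  # [0, 1, 2, 3, 4, 5, 6]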