Abstract

Problem statement: Researchers have focused their attention on optimally adaptive sorting algorithms and illustrated the need to develop tools for constructing adaptive algorithms for large classes of measures. In an adaptive sorting algorithm the run time for n input items varies smoothly from O(n) to O(nlogn) with respect to several measures of disorder. Questions were raised as to whether any approach or technique could reduce the run time of adaptive sorting algorithms and provide an easier implementation for practical applications. Approach: The objective of this study is to present a new natural sorting algorithm with a run time for n input items between O(n) and O(nlogm), where m is a positive value bounded by 50% of n. In our method, a single pass over the input data creates blocks of data, or buffers, according to their natural sequential order, which may be ascending or descending. Afterward, a bottom-up approach is applied to merge the naturally sorted subsequences or buffers. Additionally, a parallel merging technique is aggregated into the proposed algorithm. Results: Experiments are provided to establish the best, worst and average case runtime behavior of the proposed method. The simulation statistics agree with the theoretical calculations and demonstrate the method's efficiency. Conclusion: The results indicated that our method uses less time, as well as acceptable memory, to sort a data sequence by exploiting its natural order, and that it is applicable to realistic research. A parallel implementation can make the algorithm more time-efficient and will be a topic of future research.
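The single pass that partitions the input into naturally ordered blocks can be sketched as follows. This is a minimal illustration under our own assumptions: the function name `find_runs` and the list-of-lists run representation are ours, not the paper's, and descending runs are simply reversed so that every block comes out ascending.

```python
def find_runs(data):
    """Partition data into maximal runs that are already sorted.

    A single left-to-right pass detects each maximal ascending or
    descending run; descending runs are reversed so every returned
    run is ascending. Total work is O(n) for n input items.
    """
    if not data:
        return []
    runs = []
    start = 0
    i = 1
    while i < len(data):
        if data[i] >= data[i - 1]:
            # extend a (non-strictly) ascending run as far as possible
            while i < len(data) and data[i] >= data[i - 1]:
                i += 1
            runs.append(data[start:i])
        else:
            # extend a strictly descending run, then reverse it
            while i < len(data) and data[i] < data[i - 1]:
                i += 1
            runs.append(data[start:i][::-1])
        start = i
        i += 1
    if start < len(data):
        # a trailing single element forms its own run
        runs.append(data[start:])
    return runs
```

For example, `[3, 5, 7, 2, 1, 9]` yields the three ascending blocks `[3, 5, 7]`, `[1, 2]` and `[9]`; the closer the input is to sorted, the fewer blocks (m) are produced, which is what drives the run time toward O(n).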

Highlights

  • Sorting a huge data set in minimal time is a persistent demand across almost all fields of computer science

  • Among sorting techniques, the natural order of the input is given particular consideration

  • Parallelism is present as the approach is inherited from Mergesort (JaJa, 1992)


Summary

Introduction

Sorting a huge data set in minimal time is a persistent demand across almost all fields of computer science. We consider algorithms in the usual comparison-based model of computation. Divide and conquer proceeds by a top-down partitioning followed by a bottom-up combining traverse. Measurement of disorder has been studied as a universal method for the development of adaptive sorting algorithms (Chen and Carlsson, 1991); once the disorder has been measured, a bottom-up traverse alone suffices. The design of generic sorting algorithms offers several advantages (Estivill-Castro and Wood, 1992a). In the proposed technique, the disorder of the data is first checked and the data is partitioned in a single pass over the data set. Thereafter, the partitions are merged according to their order. It has been ensured that the approach achieves the optimal time when the bottom-up merging tree is balanced.
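The bottom-up merging of the partitions can be sketched as below. This is an illustrative sketch, not the paper's exact procedure: the name `merge_runs`, the pairwise merge strategy, and the use of Python's `heapq.merge` are our assumptions. Each pass halves the number of runs, so m initial runs are combined in about log2(m) passes of O(n) work each, matching the O(nlogm) bound.

```python
from heapq import merge  # merges already-sorted iterables

def merge_runs(runs):
    """Bottom-up merge: repeatedly merge adjacent pairs of sorted runs.

    With m initial runs holding n elements in total, each pass halves
    the run count, so ceil(log2 m) passes of O(n) work give O(n log m).
    """
    while len(runs) > 1:
        paired = []
        for i in range(0, len(runs) - 1, 2):
            # merge each adjacent pair of ascending runs
            paired.append(list(merge(runs[i], runs[i + 1])))
        if len(runs) % 2:
            # an odd run out is carried unchanged to the next pass
            paired.append(runs[-1])
        runs = paired
    return runs[0] if runs else []
```

Keeping the pairs adjacent and of similar size keeps the merging tree balanced, which is the condition under which the introduction claims the optimal running time.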

