The CPU scheduling algorithm strongly influences the performance and efficiency of an operating system. The round-robin (RR) algorithm is well suited to time-shared systems, but it is suboptimal for real-time operating systems because it incurs more context switches, longer waiting times, and higher turnaround times. The performance of the algorithm is predominantly determined by the designated time quantum; however, choosing a suitable time quantum is extremely challenging. This paper presents a CPU scheduling algorithm that achieves a better tradeoff between waiting time, turnaround time, response time, and the number of context switches by using a hypothesis-based quantum-generation approach. It combines the CPU burst requirements of the actual processes with noisy data and plots them against presumed CPU quanta to obtain quantum densities, to which a polynomial regression model is fitted so as to maximize the adjusted R-squared. The required quantum is then derived through inferential statistics. The scheduling is dynamic because the next CPU quantum is generated from the quantum used in the previous cycle together with the remaining CPU burst requirements of the processes, and it is adaptive because, at each cycle, it uses d (5, 5, 4, 3, 2) degrees of freedom to compute the Jarque-Bera statistic for accepting or rejecting the hypothesis. The algorithm is implemented in R, and its performance has been evaluated on a sample of five processes with noisy data; it outperforms conventional RR and substantially reduces waiting time, turnaround time, response time, and the number of context switches. Deploying this algorithm in a time-sharing or distributed environment should improve system performance and help avoid issues such as thrashing and starvation while accommodating aging and CPU affinity. Since the proposed algorithm is work-conserving, it can also be applied to network packet switching, statistical multiplexing, and real-time systems.
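To make the quantum-generation step concrete, the following is a minimal sketch in R (the paper's stated implementation language) of what one scheduling cycle might look like under the assumptions above. All names here (`next_quantum`, `jarque_bera`, the candidate degree range 1 to 4, and the fallback rule) are illustrative assumptions, not the authors' code: the sketch fits polynomial models of increasing degree to bursts-plus-noise against presumed quanta, keeps the model with the highest adjusted R-squared, and accepts it only if the Jarque-Bera statistic of its residuals stays below the chi-squared critical value at the chosen degrees of freedom.

```r
## Illustrative sketch only, not the paper's implementation.
## Assumed inputs: `bursts` (remaining CPU burst requirements),
## `noise` (noisy observations), `quanta` (presumed candidate quanta).

jarque_bera <- function(x) {
  n <- length(x)
  m2 <- sum((x - mean(x))^2) / n
  s  <- (sum((x - mean(x))^3) / n) / m2^(3 / 2)   # skewness
  k  <- (sum((x - mean(x))^4) / n) / m2^2         # kurtosis
  n / 6 * (s^2 + (k - 3)^2 / 4)                   # JB statistic
}

next_quantum <- function(bursts, noise, quanta, df = 2, alpha = 0.05) {
  y <- c(bursts, noise)              # actual bursts combined with noisy data
  x <- quanta[seq_along(y)]          # presumed quanta (assumed same length)

  # Fit polynomial regressions of increasing degree and keep the one
  # with the highest adjusted R-squared, as the abstract describes.
  fits <- lapply(1:4, function(d) lm(y ~ poly(x, d)))
  adj  <- sapply(fits, function(m) summary(m)$adj.r.squared)
  best <- fits[[which.max(adj)]]

  # Hypothesis test on the residuals: accept the fit only if the JB
  # statistic clears the chi-squared threshold at the given df (the
  # paper varies df as 5, 5, 4, 3, 2 across cycles).
  if (jarque_bera(residuals(best)) < qchisq(1 - alpha, df)) {
    as.numeric(predict(best, newdata = data.frame(x = median(x))))
  } else {
    median(y)                        # assumed fallback if hypothesis rejected
  }
}

## Example call with five processes, mirroring the paper's sample size:
bursts <- c(14, 8, 21, 6, 11)
noise  <- rnorm(5, mean = 12, sd = 3)
quanta <- seq(2, 20, length.out = 10)
q <- next_quantum(bursts, noise, quanta, df = 5)
```

The fallback rule and the choice of evaluation point for the fitted model are placeholders; the paper's actual inferential step for extracting the quantum from the accepted regression is not specified in the abstract.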