Abstract

Collecting data and checking its suitability is the first step in any statistical analysis, and in such analyses the presence of outliers is an unavoidable and important problem. Outliers are unexpected values in a dataset; they can alter statistical conclusions and violate the assumptions of the analysis. Outliers must therefore be identified and treated before the data can be managed properly, and every statistician who confronts them is forced to make a decision. The analyst is left with one of two extreme choices: reject the outlier, at the risk of losing genuine information, or include it, at the risk of drawing an erroneous conclusion. This study therefore summarizes the potential causes of extreme scores in a dataset (e.g., data recording or entry errors, sampling errors, and legitimate sampling), how to detect them, and whether they should be removed. A further objective was to explore how strongly even a small proportion of outliers can affect simple analyses. Suitable examples were analysed both including and excluding the outlying values, and they show a strong benefit in repeating the analysis with and without the extreme scores. A one-way ANOVA test was performed and the influence of an extreme outlier on its result is described.
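The effect the abstract describes can be illustrated with a minimal sketch. The data below are hypothetical (not the paper's actual examples): two small groups with a genuine mean difference, analysed with a one-way ANOVA F statistic computed from its textbook definition, first on the clean data and then with a single extreme score appended to one group.

```python
# Hypothetical illustration: how one extreme score can mask a real
# group difference in a one-way ANOVA (values are invented, not the
# paper's data).

def one_way_anova_f(*groups):
    """Return the one-way ANOVA F statistic for the given groups."""
    k = len(groups)                          # number of groups
    n_total = sum(len(g) for g in groups)    # total sample size
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares (weighted by group size)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

group_a = [12, 14, 13, 15, 14, 13]
group_b = [18, 19, 20, 18, 19, 20]

f_clean = one_way_anova_f(group_a, group_b)
# One extreme score (e.g. a data-entry error) appended to group_a
f_outlier = one_way_anova_f(group_a + [95], group_b)

print(f"F without outlier: {f_clean:.2f}")   # large F: clear group difference
print(f"F with outlier:    {f_outlier:.2f}")  # F collapses toward zero
```

Because the outlier inflates the within-group variance far more than it shifts the group mean, the F statistic drops sharply, which is why the paper recommends repeating the analysis with and without the extreme scores before drawing a conclusion.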

