Abstract
Big data requires new technologies and tools to process, analyze, and interpret vast amounts of high-speed, heterogeneous information. A simple mistake in the processing software, an error in the data, or a hardware malfunction can result in inaccurate analysis, compromised results, and inadequate performance. Reliability measures therefore play an important role in determining the quality of Big data systems. This paper critically examines the literature on Big data software reliability to investigate: the type of mathematical model developed, the influence of external factors, the type of data sets used, and the methods employed to evaluate model parameters while determining the system or component reliability of the software. Since environmental conditions and input variables differ for each model owing to the varied platforms, it is difficult to determine which method yields the better prediction on the same set of data. This paper therefore summarizes some of the Big data techniques and common reliability models, and compares them on the basis of interdependencies, estimation function, parameter evaluation method, mean value function, and related criteria. Visualization is also included in the study to represent the Big data reliability distribution, classification, analysis, and technical comparison. This study helps in choosing and developing an appropriate model for the reliability prediction of Big data software.
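To make the comparison criteria concrete, the sketch below illustrates how a mean value function and its parameters are typically evaluated for one widely surveyed software reliability growth model. The Goel-Okumoto NHPP model is used here only as a representative example, and the failure data and fitted values are hypothetical; the paper does not single out this model or data set.

```python
# A minimal sketch, assuming the Goel-Okumoto NHPP model as a
# representative example of the reliability models compared in surveys
# of this kind. Data and initial guesses below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b*t)),
    where a is the expected total number of faults and b is the
    per-fault detection rate."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts observed at test times t.
t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
failures = np.array([12, 21, 27, 32, 35, 38, 39, 40], dtype=float)

# Parameter evaluation by nonlinear least squares (one common method;
# maximum likelihood estimation is another frequently used alternative).
(a_hat, b_hat), _ = curve_fit(mean_value, t, failures, p0=[50.0, 0.1])
print(f"a = {a_hat:.2f}, b = {b_hat:.3f}")

# Predicted reliability over a mission of length x after test time t,
# using the standard NHPP relation R(x | t) = exp(-(m(t+x) - m(t))).
x, t_now = 1.0, 8.0
reliability = np.exp(-(mean_value(t_now + x, a_hat, b_hat)
                       - mean_value(t_now, a_hat, b_hat)))
print(f"R({x} | t={t_now}) = {reliability:.3f}")
```

A fitted mean value function of this form is one of the quantities on which the surveyed models are compared, alongside the estimation function and the parameter evaluation method.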