Abstract

This article analyzes the properties of unknown faults in knowledge management and Big Data systems that process Big Data in real time. These faults introduce risks and threaten the knowledge pyramid and the decisions based on knowledge gleaned from volumes of complex data. The authors hypothesize that faults not yet encountered may require dedicated fault handling, an analytic model, and an architectural framework to assess and manage the faults, mitigate the risks of correlating or integrating otherwise uncorrelated Big Data, and ensure the source pedigree, quality, set integrity, freshness, and validity of the data. New architectures, methods, and tools for handling and analyzing Big Data in real-time systems will contribute to organizational knowledge and performance. System designs must mitigate faults arising from real-time streaming processes while addressing variables such as synchronization, redundancy, and latency. This article concludes that, with improved designs, real-time Big Data systems may continuously deliver the value of streaming Big Data.
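To illustrate the kind of fault handling the abstract proposes, the following is a minimal sketch in Python of a validation gate that screens streaming records for source pedigree, freshness, and payload integrity before they are correlated downstream. The record type, the `detect_faults` and `gate` functions, the trusted-source list, and the freshness threshold are all hypothetical names and values introduced for illustration; they are not drawn from the article itself.

```python
# Illustrative sketch only: the abstract proposes fault handling that checks
# source pedigree, quality, set integrity, freshness, and validity before
# otherwise uncorrelated streams are correlated. All names, thresholds, and
# fault labels below are assumptions, not the authors' design.
import time
from dataclasses import dataclass, field

MAX_AGE_SECONDS = 5.0                        # assumed freshness bound for real-time data
TRUSTED_SOURCES = {"sensor-a", "sensor-b"}   # assumed pedigree whitelist

@dataclass
class StreamRecord:
    source: str                 # where the record originated (pedigree)
    timestamp: float            # event time, seconds since the epoch
    payload: dict = field(default_factory=dict)

def detect_faults(record: StreamRecord, now: float) -> list[str]:
    """Return the fault labels raised by this record; empty means valid."""
    faults = []
    if record.source not in TRUSTED_SOURCES:
        faults.append("unknown-pedigree")    # source cannot be vouched for
    if now - record.timestamp > MAX_AGE_SECONDS:
        faults.append("stale")               # freshness bound exceeded
    if not record.payload:
        faults.append("empty-payload")       # set integrity / validity failure
    return faults

def gate(records, now=None):
    """Pass valid records downstream; quarantine faulty ones for analysis."""
    now = time.time() if now is None else now
    passed, quarantined = [], []
    for rec in records:
        faults = detect_faults(rec, now)
        (quarantined if faults else passed).append((rec, faults))
    return passed, quarantined

if __name__ == "__main__":
    now = time.time()
    stream = [
        StreamRecord("sensor-a", now - 1.0, {"temp": 21.5}),   # valid
        StreamRecord("sensor-x", now - 1.0, {"temp": 19.0}),   # unknown pedigree
        StreamRecord("sensor-b", now - 60.0, {"temp": 22.1}),  # stale
    ]
    passed, quarantined = gate(stream, now)
    print(len(passed), "passed;", [f for _, f in quarantined], "quarantined")
```

Quarantining faulty records rather than silently dropping them matches the abstract's emphasis on an analytic model: the quarantined set and its fault labels become input for assessing not-yet-encountered fault classes.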
