Abstract

In this paper, a statistical approach to locating and correcting errors in data stored in secondary memories is developed. The approach is based on the observation that data records in secondary storage carry some inherent redundancy of information. Unlike the artificial redundancy introduced by conventional error-correction schemes, this inherent redundancy cannot be predicted precisely. Nevertheless, it can be exploited to provide error correction with some degree of confidence. We use simple and weighted checksum schemes for error detection and present algorithms for single- and multiple-error correction using statistical error location and correction (SELAC). An implementation of SELAC is described, together with a detailed study of its error-correction capabilities. A notable property of SELAC is that, unlike classical schemes based on single-error-correcting, double-error-detecting (SEC-DED) and double-error-correcting, triple-error-detecting (DEC-TED) codes, it incurs no processor-time or storage overhead until an error is actually encountered.
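To make the checksum machinery referred to above concrete, the following is a minimal sketch of how a simple checksum combined with a weighted checksum can locate and correct a single corrupted byte in a record. It is not the paper's SELAC algorithm; the modulus, the weighting, and the record handling are illustrative assumptions, and the sketch assumes records shorter than the modulus.

```python
M = 257  # prime modulus > 255, so byte values and positions are invertible (assumption)


def checksums(record: bytes) -> tuple[int, int]:
    """Return (simple, weighted) checksums of a record; assumes len(record) < M."""
    simple = sum(record) % M
    weighted = sum((i + 1) * b for i, b in enumerate(record)) % M  # 1-based position weights
    return simple, weighted


def correct_single_error(record: bytes, stored_simple: int, stored_weighted: int) -> bytes:
    """Attempt to correct a single corrupted byte using the checksum discrepancies."""
    simple, weighted = checksums(record)
    d1 = (stored_simple - simple) % M      # error magnitude (mod M)
    d2 = (stored_weighted - weighted) % M  # position * magnitude (mod M)
    if d1 == 0 and d2 == 0:
        return record                      # no discrepancy: no error detected
    if d1 == 0:
        raise ValueError("inconsistent checksums: likely more than one error")
    pos = (d2 * pow(d1, -1, M)) % M        # recover the 1-based error position
    if not 1 <= pos <= len(record):
        raise ValueError("located position out of range: likely more than one error")
    fixed = bytearray(record)
    value = (fixed[pos - 1] + d1) % M      # restore the original byte value
    if value > 255:
        raise ValueError("corrected value is not a byte: likely more than one error")
    fixed[pos - 1] = value
    return bytes(fixed)


# Example: inject a single-byte error and recover the original record.
rec = bytearray(b"hello world")
s, w = checksums(bytes(rec))
rec[4] ^= 0x20                             # flip a bit in one byte
print(correct_single_error(bytes(rec), s, w))  # b'hello world'
```

The simple checksum yields the error magnitude and the weighted checksum yields position times magnitude, so dividing the two (modulo M) locates the faulty byte; multiple errors typically show up as the inconsistencies checked above and are only detected, not corrected, by this sketch.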
