Abstract

We examine the principal systems of Error Detection and Correction (EDAC) recently proposed as U.S. standards for optical disks. We discuss the two principal methodologies employed, Reed-Solomon codes and product codes, and describe the variations in their operating characteristics and their overhead in disk space. We then present current knowledge of the nature of defect distributions on optical media, including bit error rates, the incidence and extent of clustered errors and burst errors, and the controversial question of correlation between these forms of error; we show that if such forms are correlated, stronger EDAC systems are needed than if they are not. We discuss the nature of defect growth over time and its likely causes, and present the differing views on the growth of burst errors, including nucleation and incubation effects that are not detectable in new media. We exhibit a mathematical model of a currently proposed end-of-life defect distribution for write-once media and discuss its implications for EDAC selection. We show that standardization of an EDAC system unifies the data recording process and facilitates data interchange, but that enhancements to EDAC computation during reading can achieve higher-than-normal EDAC performance, though sometimes at the expense of decoding time. Finally, we examine vendor estimates of disk longevity and possible means of life extension where archival recording is desired.
