Abstract

Nanoscale CMOS technology has encountered severe reliability issues, especially in on-chip memory. Conventional word-level error resilience techniques such as Error Correcting Codes (ECC) suffer from high physical overhead and an inability to correct the increasingly reported multiple-bit-flip errors. On the other hand, state-of-the-art applications such as image processing and machine learning relax the required level of data protection, giving rise to dedicated approximate fault-tolerance techniques. In this work, we introduce a novel error protection scheme for memory, based on feature extraction through Principal Component Analysis (PCA) and a block-wise (modular) technique that segments the data before PCA. The extracted features can be protected by replacing a faulty vector with averaged confinement vectors. This approach confines both single- and multi-bit-flip errors for generic data blocks, while achieving significant savings in execution time and memory usage compared to traditional ECC techniques. Experimental results on image processing demonstrate that the proposed technique reconstructs images with PSNR over 30 dB, remains robust against both single- and multiple-bit-flip errors, and reduces memory storage to just 22.4% of that required by the conventional ECC-based technique.

Highlights

  • During the last few decades, the semiconductor industry has experienced continuous scaling of CMOS technology, guided by Moore’s Law [1], to design and fabricate devices with higher speed, lower area, and lower power consumption

  • We explore the tradeoff between contribution rate (CR) and physical overhead in the experiment section and derive general empirical guidelines

  • According to [30,31], partitioning the image before applying principal component analysis (PCA) to the resulting sub-blocks can effectively extract the local characteristics of the image and speed up its dimensionality reduction and reconstruction. In this experiment, we partition 512 × 512 images into several sub-blocks, then extract principal components and reduce dimensionality on each sub-block with PCA
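The block-wise PCA described in this highlight can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation; the block size, component count, and all function names are illustrative assumptions.

```python
import numpy as np

def blockwise_pca(image, block=8, n_components=4):
    """Partition a square image into block x block sub-blocks and run PCA
    over the flattened sub-blocks (one sample per sub-block).
    Illustrative sketch; parameters are not taken from the paper."""
    h, w = image.shape
    # Rearrange the image so each row is one flattened sub-block.
    samples = (image.reshape(h // block, block, w // block, block)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, block * block))
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Principal directions via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # (k, block*block), orthonormal rows
    features = centered @ components.T        # projected coefficients per sub-block
    return features, components, mean

def reconstruct(features, components, mean, shape, block=8):
    """Invert the projection and reassemble the image from sub-blocks."""
    samples = features @ components + mean
    h, w = shape
    return (samples.reshape(h // block, w // block, block, block)
                   .transpose(0, 2, 1, 3)
                   .reshape(h, w))

img = np.random.rand(512, 512)
feats, comps, mu = blockwise_pca(img)
rec = reconstruct(feats, comps, mu, img.shape)
```

Keeping only `n_components` coefficients per sub-block is what yields the storage savings the paper reports; the reconstruction quality then depends on how much variance those components capture.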



Introduction

During the last few decades, the semiconductor industry has experienced continuous scaling of CMOS technology, guided by Moore’s Law [1], to design and fabricate devices with higher speed, lower area, and lower power consumption. Such scaling is inarguably challenged by the constraints of quantum physics, as state-of-the-art technology nodes already approach the thickness of a single atom. Given a set of samples, PCA yields a set of orthonormal vectors that can be used to linearly project the samples into a new space. This space maximizes the variance of the projected samples and minimizes their least mean square error, that is, it minimizes the difference between each projected sample and its reconstruction back in the original space [17]. The data can come from many kinds of sources, such as images [18]
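The two PCA properties stated above (variance maximization and minimum reconstruction error) can be checked numerically. The sketch below, a hedged illustration with synthetic data rather than anything from the paper, projects correlated 2-D samples onto the top principal direction and measures the reconstruction error, which equals the variance left in the discarded direction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic correlated samples: signal along direction (2, 1) plus small noise.
x = rng.normal(size=(500, 1)) @ np.array([[2.0, 1.0]]) \
    + 0.1 * rng.normal(size=(500, 2))

mean = x.mean(axis=0)
centered = x - mean
# Orthonormal principal directions = eigenvectors of the sample covariance.
cov = centered.T @ centered / len(x)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
w = eigvecs[:, -1:]                      # top principal direction, shape (2, 1)

projected = centered @ w                 # coordinates in the new space
reconstructed = projected @ w.T + mean   # back in the original space
mse = np.mean((x - reconstructed) ** 2)  # equals half the smallest eigenvalue here
```

Because the dominant direction carries almost all the variance, the reconstruction error stays near the noise floor; this is the property the proposed scheme exploits when it stores only the leading PCA features of each data block.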

