Abstract

Thompson’s model of very large scale integration (VLSI) computation relates the energy of a computation to the product of the circuit area and the number of clock cycles needed to carry out the computation. It is shown that for any sequence of increasing block-length decoder circuits implemented according to this model, if the probability of block error is asymptotically less than 1/2 then the energy of the computation scales at least as $\Omega(n(\log n)^{1/2})$, and so the energy of decoding per bit must scale at least as $\Omega((\log n)^{1/2})$. This implies that the average energy per decoded bit must approach infinity for any sequence of decoders that approaches capacity. The analysis techniques used are then extended to show that for any sequence of increasing block-length serial decoders, if the asymptotic block error probability is less than 1/2 then the energy scales at least as fast as $\Omega(n\log n)$. In a very general case that allows the number of output pins to vary with block length, it is shown that the energy must scale as $\Omega(n(\log n)^{1/5})$. A simple example is provided of a class of circuits performing low-density parity-check decoding whose energy complexity scales as $O(n^{2}\log\log n)$.
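As an illustration only, the growth rates of the stated lower bounds can be compared numerically. The sketch below evaluates each bound for a range of block lengths $n$; the constant factors are arbitrary placeholders, since the asymptotic bounds constrain only the growth rate, and the function names are assumptions introduced here, not from the paper.

```python
import math

def total_energy_lower_bound(n: int) -> float:
    # Omega(n (log n)^{1/2}): total energy for parallel decoder circuits
    # (constant factor chosen arbitrarily as 1 for illustration)
    return n * math.log(n) ** 0.5

def per_bit_energy_lower_bound(n: int) -> float:
    # Omega((log n)^{1/2}): energy per decoded bit; note this grows
    # without bound, matching the abstract's capacity-approaching claim
    return math.log(n) ** 0.5

def serial_energy_lower_bound(n: int) -> float:
    # Omega(n log n): the stronger bound for serial decoders
    return n * math.log(n)

for n in (10**3, 10**6, 10**9):
    print(f"n={n:>10}: per-bit >= {per_bit_energy_lower_bound(n):.2f}, "
          f"total >= {total_energy_lower_bound(n):.2e}, "
          f"serial >= {serial_energy_lower_bound(n):.2e}")
```

The printout shows the per-bit bound increasing with $n$, which is the abstract's key consequence: no capacity-approaching decoder sequence can keep energy per bit bounded under this model.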
