Cryptography based on error correcting codes

Abstract

The idea to use error-correcting codes in order to construct public key cryptosystems was published in 1978 by McEliece [ME1978]. In his original construction, McEliece used Goppa codes, but various later publications suggested the use of different families of error-correcting codes. The choice of the code has a crucial impact on the security of this type of cryptosystem. Some codes have a structure that can be recovered in polynomial time, thus breaking the cryptosystem completely, while other codes have resisted all attempts at cryptanalysis to date. In this thesis, we examine different derivatives of the McEliece cryptosystem and study their structural weaknesses. The main results are the following: In chapter 3 we devise an effective structural attack against the McEliece cryptosystem based on algebraic geometry codes defined over elliptic curves. This attack is inspired by an algorithm due to Sidelnikov and Shestakov [SS1992] which solves the corresponding problem for Reed-Solomon codes. The presented algorithm runs in heuristic polynomial time and thus inverts trapdoors even for very large codes. In chapter 4, we show that the Sidelnikov cryptosystem [S1994], which is based on binary Reed-Muller codes, is insecure. The basic idea of our attack is to exploit the fact that minimum weight words in a Reed-Muller code have very particular properties. The attack relies on the ability to find minimum weight words in the code, a problem that is, in this specific instance, much easier than general decoding, and feasible for interesting parameters in a modest amount of time. The attack has subexponential running time if the order of the code is kept fixed, and it breaks keys as large as those proposed by Sidelnikov in under an hour on a stock PC. In chapter 5, we finally discuss some of the problems to be solved if one attempts to generalize these algorithms.
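The trapdoor structure that these attacks recover can be made concrete with a deliberately tiny, insecure sketch of a McEliece-style scheme. Here the [7,4] Hamming code stands in for the Goppa codes used in practice, and all names and parameters are illustrative, not taken from the thesis: the public key is a disguised generator matrix G_pub = S·G·P, and decryption undoes the disguise and decodes.

```python
import random

def mat_mul(A, B):
    """Matrix product over GF(2)."""
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def vec_mat(v, M):
    """Row vector times matrix over GF(2)."""
    return [sum(x & m for x, m in zip(v, col)) % 2 for col in zip(*M)]

def gf2_inv(M):
    """Invert a square matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])  # raises if singular
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [x ^ y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

# Systematic [7,4] Hamming code: corrects any single-bit error.
P4 = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + P4[i] for i in range(4)]        # 4x7
H = [[P4[r][c] for r in range(4)] + [int(c == j) for j in range(3)]    # 3x7
     for c in range(3)]

def hamming_decode(word):
    """Correct at most one flipped bit, return the 4 message bits."""
    s = [sum(H[r][i] & word[i] for i in range(7)) % 2 for r in range(3)]
    if any(s):
        i = next(i for i in range(7) if [H[r][i] for r in range(3)] == s)
        word = word[:]
        word[i] ^= 1
    return word[:4]                     # systematic: message = first 4 bits

def keygen(rng):
    while True:                         # random invertible scrambler S
        S = [[rng.randint(0, 1) for _ in range(4)] for _ in range(4)]
        try:
            S_inv = gf2_inv(S)
            break
        except StopIteration:
            pass
    perm = list(range(7))
    rng.shuffle(perm)
    P = [[int(perm[i] == j) for j in range(7)] for i in range(7)]
    G_pub = mat_mul(mat_mul(S, G), P)   # public key: disguised generator
    return G_pub, (S_inv, P)

def encrypt(m, G_pub, err_pos):
    c = vec_mat(m, G_pub)
    c[err_pos] ^= 1                     # add one intentional error
    return c

def decrypt(c, secret):
    S_inv, P = secret
    P_t = [list(col) for col in zip(*P)]
    c1 = vec_mat(c, P_t)                # undo the permutation (P^-1 = P^T)
    return vec_mat(hamming_decode(c1), S_inv)   # decode, then undo S

rng = random.Random(42)
pub_key, secret_key = keygen(rng)
m = [1, 0, 1, 1]
c = encrypt(m, pub_key, err_pos=3)      # a real scheme picks the error randomly
assert decrypt(c, secret_key) == m
```

Recovering the hidden structure behind G_pub, rather than decoding the public code directly, is exactly the kind of structural attack the thesis mounts against specific code families.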

Similar Papers
  • Research Article
  • 10.5555/2691986.2691990
An Efficient Decoding of Goppa Codes for the McEliece Cryptosystem
  • Oct 1, 2014
  • Fundamenta Informaticae
  • Lim Seongan + 2 more

The McEliece cryptosystem is defined using a Goppa code, and decoding the Goppa code is a crucial step of its decryption. Patterson's decoding algorithm is the best known algorithm for decoding Gop...

  • Research Article
  • Cited by 30
  • 10.1109/tit.2021.3074526
Hulls of Generalized Reed-Solomon Codes via Goppa Codes and Their Applications to Quantum Codes
  • Apr 21, 2021
  • IEEE Transactions on Information Theory
  • Yanyan Gao + 3 more

A Goppa code over F_{q^m} is a well-known subclass of algebraic error-correcting codes. If m = 1, then it is a generalized Reed-Solomon (GRS) code and its dual code is called a GRS code via a Goppa code. In this paper, we give a necessary and sufficient condition under which the dual codes of GRS codes via (expurgated) Goppa codes are also GRS codes via Goppa codes. Under this condition, we show that the hulls of GRS codes via Goppa codes are still GRS codes via Goppa codes. As an application, we characterize LCD GRS codes and self-dual GRS codes under this condition. Some numerical examples are presented to illustrate our main results. Moreover, we apply our result to entanglement-assisted quantum error-correcting codes (EAQECCs) and obtain two new families of MDS EAQECCs with arbitrary parameters.

  • Research Article
  • Cited by 1
  • 10.12785/amis/080559
Hexi McEliece Public Key Cryptosystem
  • Sep 1, 2014
  • Applied Mathematics &amp; Information Sciences
  • K Ilanthenral + 1 more

This paper introduces a new class of hexi codes: hexi polynomial codes, hexi Rank Distance (hexi RD) codes, hexi Maximum Rank Distance (hexi MRD) codes, hexi Goppa codes and hexi wild Goppa codes. These codes are used to create variants of the McEliece public key cryptosystem, called the hexi McEliece public key cryptosystem and its variants; these cryptosystems are claimed to be secure against the attacks carried out on the existing variants of the McEliece public key cryptosystem, to have better error-correcting capacity and to have lower time complexity, making them more feasible to use. The security of, and possible attacks on, these variants are analysed. The McEliece public key cryptosystem, introduced by McEliece in 1978 (19) and based on binary Goppa codes, still remains unbroken. Hexi codes were developed in 2013 for error correction in AES (14); this paper develops them further. The rest of the paper is organized as follows. Section two covers the history of the McEliece public key cryptosystem and its several variants. Section three recalls hexi codes and introduces hexi polynomial codes, hexi RD codes, hexi MRD codes, hexi Goppa codes and hexi wild Goppa codes. Section four discusses the decoding, error detecting and error correcting capacity of these codes. Section five introduces the variants of the McEliece public key cryptosystem based on these new hexi codes, namely the hexi McEliece public key cryptosystem and its variants.
Section six deals with possible attacks on the hexi McEliece public key cryptosystem, its resistance against these attacks, and its security. Section seven compares the hexi McEliece public key cryptosystem with the original McEliece public key cryptosystem in terms of time complexity and error-correcting capacity. Conclusions, suggestions and future directions of research are given in section eight.

  • Single Book
  • Cited by 275
  • 10.1002/0470847824
The Art of Error Correcting Coding
  • Mar 11, 2002
  • Robert H Morelos‐Zaragoza

Preface. Foreword. The ECC web site.
1. Introduction. 1.1 Error correcting coding: Basic concepts. 1.1.1 Block codes and convolutional codes. 1.1.2 Hamming distance, Hamming spheres and error correcting capability. 1.2 Linear block codes. 1.2.1 Generator and parity-check matrices. 1.2.2 The weight is the distance. 1.3 Encoding and decoding of linear block codes. 1.3.1 Encoding with G and H. 1.3.2 Standard array decoding. 1.3.3 Hamming spheres, decoding regions and the standard array. 1.4 Weight distribution and error performance. 1.4.1 Weight distribution and undetected error probability over a BSC. 1.4.2 Performance bounds over BSC, AWGN and fading channels. 1.5 General structure of a hard-decision decoder of linear codes. Problems.
2. Hamming, Golay and Reed-Muller codes. 2.1 Hamming codes. 2.1.1 Encoding and decoding procedures. 2.2 The binary Golay code. 2.2.1 Encoding. 2.2.2 Decoding. 2.2.3 Arithmetic decoding of the extended (24, 12, 8) Golay code. 2.3 Binary Reed-Muller codes. 2.3.1 Boolean polynomials and RM codes. 2.3.2 Finite geometries and majority-logic decoding. Problems.
3. Binary cyclic codes and BCH codes. 3.1 Binary cyclic codes. 3.1.1 Generator and parity-check polynomials. 3.1.2 The generator polynomial. 3.1.3 Encoding and decoding of binary cyclic codes. 3.1.4 The parity-check polynomial. 3.1.5 Shortened cyclic codes and CRC codes. 3.1.6 Fire codes. 3.2 General decoding of cyclic codes. 3.2.1 GF(2^m) arithmetic. 3.3 Binary BCH codes. 3.3.1 BCH bound. 3.4 Polynomial codes. 3.5 Decoding of binary BCH codes. 3.5.1 General decoding algorithm for BCH codes. 3.5.2 The Berlekamp-Massey algorithm (BMA). 3.5.3 PGZ decoder. 3.5.4 Euclidean algorithm. 3.5.5 Chien search and error correction. 3.5.6 Errors-and-erasures decoding. 3.6 Weight distribution and performance bounds. 3.6.1 Error performance evaluation. Problems.
4. Nonbinary BCH codes: Reed-Solomon codes. 4.1 RS codes as polynomial codes. 4.2 From binary BCH to RS codes. 4.3 Decoding RS codes. 4.3.1 Remarks on decoding algorithms. 4.3.2 Errors-and-erasures decoding. 4.4 Weight distribution. Problems.
5. Binary convolutional codes. 5.1 Basic structure. 5.1.1 Recursive systematic convolutional codes. 5.1.2 Free distance. 5.2 Connections with block codes. 5.2.1 Zero-tail construction. 5.2.2 Direct-truncation construction. 5.2.3 Tail-biting construction. 5.2.4 Weight distributions. 5.3 Weight enumeration. 5.4 Performance bounds. 5.5 Decoding: Viterbi algorithm with Hamming metrics. 5.5.1 Maximum-likelihood decoding and metrics. 5.5.2 The Viterbi algorithm. 5.5.3 Implementation issues. 5.6 Punctured convolutional codes. 5.6.1 Implementation issues related to punctured convolutional codes. 5.6.2 RCPC codes. Problems.
6. Modifying and combining codes. 6.1 Modifying codes. 6.1.1 Shortening. 6.1.2 Extending. 6.1.3 Puncturing. 6.1.4 Augmenting, expurgating and lengthening. 6.2 Combining codes. 6.2.1 Time sharing of codes. 6.2.2 Direct sums of codes. 6.2.3 The |u|u + v|-construction and related techniques. 6.2.4 Products of codes. 6.2.5 Concatenated codes. 6.2.6 Generalized concatenated codes.
7. Soft-decision decoding. 7.1 Binary transmission over AWGN channels. 7.2 Viterbi algorithm with Euclidean metric. 7.3 Decoding binary linear block codes with a trellis. 7.4 The Chase algorithm. 7.5 Ordered statistics decoding. 7.6 Generalized minimum distance decoding. 7.6.1 Sufficient conditions for optimality. 7.7 List decoding. 7.8 Soft-output algorithms. 7.8.1 Soft-output Viterbi algorithm. 7.8.2 Maximum a posteriori (MAP) algorithm. 7.8.3 Log-MAP algorithm. 7.8.4 Max-Log-MAP algorithm. 7.8.5 Soft-output OSD algorithm. Problems.
8. Iteratively decodable codes. 8.1 Iterative decoding. 8.2 Product codes. 8.2.1 Parallel concatenation: Turbo codes. 8.2.2 Serial concatenation. 8.2.3 Block product codes. 8.3 Low-density parity-check codes. 8.3.1 Tanner graphs. 8.3.2 Iterative hard-decision decoding: The bit-flip algorithm. 8.3.3 Iterative probabilistic decoding: Belief propagation. Problems.
9. Combining codes and digital modulation. 9.1 Motivation. 9.1.1 Examples of signal sets. 9.1.2 Coded modulation. 9.1.3 Distance considerations. 9.2 Trellis-coded modulation (TCM). 9.2.1 Set partitioning and trellis mapping. 9.2.2 Maximum-likelihood. 9.2.3 Distance considerations and error performance. 9.2.4 Pragmatic TCM and two-stage decoding. 9.3 Multilevel coded modulation. 9.3.1 Constructions and multistage decoding. 9.3.2 Unequal error protection with MCM. 9.4 Bit-interleaved coded modulation. 9.4.1 Gray mapping. 9.4.2 Metric generation: De-mapping. 9.4.3 Interleaving. 9.5 Turbo trellis-coded modulation. 9.5.1 Pragmatic turbo TCM. 9.5.2 Turbo TCM with symbol interleaving. 9.5.3 Turbo TCM with bit interleaving. Problems.
Appendix A: Weight distributions of extended BCH codes. A.1 Length 8. A.2 Length 16. A.3 Length 32. A.4 Length 64. A.5 Length 128.
Bibliography. Index.

  • Research Article
  • Cited by 38
  • 10.1007/s11227-020-03144-x
Cryptosystem design based on Hermitian curves for IoT security
  • Jan 14, 2020
  • The Journal of Supercomputing
  • Omar A Alzubi + 3 more

The ultimate goal of modern cryptography is to protect information resources and make them unbreakable and beyond compromise. However, throughout the history of cryptography, thousands of cryptosystems emerged that were believed to be invincible, and yet attackers were able to break them and compromise their security. The main objective of this paper is to design a robust cryptosystem suitable for implementation in the Internet of Things. The proposed cryptosystem is based on algebraic geometry curves, more specifically on Hermitian curves. The new design is called the Hermitian-based cryptosystem (HBC). During the development of the HBC design, Kerckhoffs's desideratum was the main guiding principle, satisfied by choosing Hermitian curves as the core of the proposed design. The proposed HBC inherits the advantageous characteristics of Hermitian curves, namely the large number of points that satisfy the curve and the high genus. These characteristics play a crucial role in generating a large encryption key for HBC and determine the block size of the plaintext. Because HBC uses algebraic geometry codes over a Hermitian curve, it can perform error correction in addition to data encryption. This error correction is an advantage of HBC over many existing cryptosystems such as the McEliece cryptosystem. The number of errors that HBC can correct is larger (higher data rate) than for other algebraic geometry codes, such as those from elliptic and hyperelliptic curves. It also uses a non-binary representation, which increases its attack resistance. In this paper, the proposed HBC is mathematically compared with the elliptic curve cryptosystem. The results show that HBC has many advantages over elliptic curves in terms of the number of points and the genus of the curve.
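The "large number of points" property the abstract leans on can be checked directly for the smallest Hermitian curve, y^q + y = x^(q+1) over GF(q^2) with q = 2, which has q^3 = 8 affine rational points. This is a small illustrative computation under textbook conventions, not code from the paper:

```python
# GF(4) = {0, 1, a, a+1} with a^2 = a + 1, encoded as integers 0..3
# (bit 0 = constant term, bit 1 = coefficient of a); addition is XOR.
def gf4_mul(x, y):
    """Multiply in GF(4), reducing by a^2 = a + 1."""
    r = 0
    for i in range(2):          # carry-less product of two 2-bit polynomials
        if (y >> i) & 1:
            r ^= x << i
    if r & 0b100:               # reduce the a^2 term: a^2 -> a + 1
        r ^= 0b100 ^ 0b011
    return r

def gf4_pow(x, e):
    r = 1
    for _ in range(e):
        r = gf4_mul(r, x)
    return r

# Affine points of the Hermitian curve y^2 + y = x^3 over GF(4).
points = [(x, y) for x in range(4) for y in range(4)
          if gf4_pow(y, 2) ^ y == gf4_pow(x, 3)]
print(len(points))   # 8 = q^3 affine points for q = 2
```

Adding the single point at infinity gives q^3 + 1 = 9 rational points, which meets the Hasse-Weil bound with equality; this abundance of points is what yields long codes over a small alphabet.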

  • Dissertation
  • 10.5167/uzh-127105
A study of cryptographic systems based on Rank metric codes
  • Jan 1, 2016
  • Kyle Marshall

The ubiquity, dependability, and extensiveness of internet access has seen a migration of local services to cloud services where the advantages of scalability can be efficiently exploited. In doing so, the exposure of sensitive data to eavesdropping is a principal concern. Asymmetric cryptosystems attempt to solve this problem by basing access on the knowledge of a solution to mathematically difficult problems. Shor demonstrated that on a quantum computer, cryptosystems based on the difficulty of factoring integers or solving discrete logarithms were efficiently solvable. As the most ubiquitous asymmetric cryptosystems in modern use are based on these problems, new cryptosystems had to be considered for post-quantum cryptography. In 1978, McEliece proposed a cryptosystem based on the difficulty of decoding random linear codes but the key sizes were too large for practical consideration. These systems, though, do appear to resist Shor’s algorithm and other quantum attacks. More recently, Gabidulin proposed using codes in the rank metric to design secure cryptosystems because they could be designed with smaller parameters. In this direction, many proposals for cryptosystems based on rank metric codes were designed. Overbeck managed to cryptanalyze many of these systems, but there remain several which resist all known structural attacks. In this work, we investigate the use of rank metric codes for cryptographic purposes. Firstly, we investigate the construction of MRD codes and propose some new constructions based on combinatorial methods. We then generalize Overbeck’s attack and show how our generalized attack can be used to cryptanalyze some of the cryptosystems which were designed to resist the attack of Overbeck. Our attack is based on a new approach of exploiting the structure of low weight elements in the code. Our approach also allows us to extend a result of Gaborit to obtain a polynomial time decoding algorithm for codes with certain parameters. 
Lastly, we consider the use of codes in the subspace metric, which are based on rank metric codes, in order to create an alternative instance of Juels and Sudan's fuzzy vault primitive.

  • Research Article
  • Cited by 1
  • 10.1145/2768577.2768606
Error-correcting pairs
  • Jun 10, 2015
  • ACM Communications in Computer Algebra
  • Irene Márquez-Corbella + 1 more

The McEliece cryptosystem is the first public-key cryptosystem based on linear error-correcting codes. Although a code with an efficient bounded distance decoding algorithm is chosen as the secret key, an attacker who does not know the secret code and its decoding algorithm faces the problem of decoding a random-looking linear code. Moreover, it is well known that the efficient bounded distance decoding algorithms of the code families proposed for code-based cryptography (such as Reed-Solomon codes, Goppa codes, alternant codes and algebraic geometry codes) can be described using error-correcting pairs (ECP). This means that the McEliece cryptosystem is not based on the intractability of bounded distance decoding but on the problem of retrieving an error-correcting pair from a random linear code. The aim of this article is to propose classes of codes with a t-ECP whose error-correcting pair cannot easily be reconstructed from knowledge of a generator matrix alone.

  • Research Article
  • Cited by 15
  • 10.1109/tc.2022.3174587
RISC-V Galois Field ISA Extension for Non-Binary Error-Correction Codes and Classical and Post-Quantum Cryptography
  • Jan 1, 2022
  • IEEE Transactions on Computers
  • Yao-Ming Kuo + 3 more

Due to the recent advances in new communication standards, such as 5G New Radio and beyond 5G, and in quantum computing and communications, new requirements for integrating processors into nodes have appeared. These requirements are meant to provide flexibility in the network to reduce operational costs and support diversity in services and load balancing. They are also designed to integrate both new and classical algorithms into efficient and universal platforms, execute specific operations, and attend to tasks with lower latency. Furthermore, some cryptographic algorithms (classical and post-quantum), which are essential to portable devices, share the same arithmetic with error-correction codes. For example, Advanced Encryption Standard (AES), elliptic curve cryptography, Classic McEliece, Hamming Quasi-Cyclic, and Reed-Solomon codes use GF(2^m) arithmetic. As this arithmetic is the basis of many algorithms, a versatile RISC-V Galois field ISA extension is proposed in this work. The RISC-V instruction set extension is implemented and validated using SweRV-EL2 1.3 on a Nexys A7 FPGA. In addition, a five-times acceleration is achieved for AES, Reed-Solomon codes, and Classic McEliece (post-quantum cryptography) at the expense of increasing the logic utilization by 1.27%.
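The shared GF(2^m) arithmetic that the abstract refers to reduces, in software, to carry-less multiplication followed by reduction modulo an irreducible polynomial. A plain-Python sketch for the AES field GF(2^8) with modulus x^8 + x^4 + x^3 + x + 1 (0x11B) looks like this; it is generic textbook arithmetic, not the paper's ISA extension:

```python
def gf256_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (the AES field)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a            # conditionally add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a & 0x100:         # reduce: x^8 = x^4 + x^3 + x + 1
            a ^= 0x11B
    return r

print(hex(gf256_mul(0x57, 0x83)))   # the FIPS-197 worked example: 0xc1
```

An ISA extension along the lines the paper describes replaces this bit-serial loop with a single instruction, which is why AES, Reed-Solomon codes and Classic McEliece can all benefit from the same hardware.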

  • Single Book
  • Cited by 94
  • 10.1007/b104335
List Decoding of Error-Correcting Codes
  • Jan 1, 2005
  • Venkatesan Guruswami

Error-correcting codes are combinatorial objects designed to cope with the problem of reliable transmission of information on a noisy channel. A fundamental algorithmic challenge in coding theory and practice is to efficiently decode the original transmitted message even when a few symbols of the received word are in error. The naive search algorithm runs in exponential time, and several classical polynomial time decoding algorithms are known for specific code families. Traditionally, however, these algorithms have been constrained to output a unique codeword. Thus they faced a “combinatorial barrier” and could only correct up to d/2 errors, where d is the minimum distance of the code. An alternate notion of decoding called list decoding, proposed independently by Elias and Wozencraft in the late 50s, allows the decoder to output a list of all codewords that differ from the received word in a certain number of positions. Even when constrained to output a relatively small number of answers, list decoding permits recovery from errors well beyond the d/2 barrier, and opens up the possibility of meaningful error-correction from large amounts of noise. However, for nearly four decades after its conception, this potential of list decoding was largely untapped due to the lack of efficient algorithms to list decode beyond d/2 errors for useful families of codes. This thesis presents a detailed investigation of list decoding, and proves its potential, feasibility, and importance as a combinatorial and algorithmic concept. We prove several combinatorial results that sharpen our understanding of the potential and limits of list decoding, and its relation to more classical parameters like the rate and minimum distance. The crux of the thesis is its algorithmic results, which were lacking in the early works on list decoding.
Our algorithmic results include: (1) Efficient list decoding algorithms for classically studied codes such as Reed-Solomon codes and algebraic-geometric codes. In particular, building upon an earlier algorithm due to Sudan, we present the first polynomial time algorithm to decode Reed-Solomon codes beyond d/2 errors for every value of the rate. (2) A new soft list decoding algorithm for Reed-Solomon and algebraic-geometric codes, and novel decoding algorithms for concatenated codes based on it. (3) New code constructions using concatenation and/or expander graphs that have good (and sometimes near-optimal) rate and are efficiently list decodable from extremely large amounts of noise. (4) Expander-based constructions of linear time encodable and decodable codes that can correct up to the maximum possible fraction of errors, using unique (not list) decoding.
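The definition of list decoding, a list of all codewords within a given radius, can be seen in brute-force form: enumerate a small code and return every codeword close enough to the received word. The sketch below uses the [7,4] Hamming code (minimum distance d = 3), where a decoding radius of 2 already exceeds d/2 and can return several codewords. This is only an illustration of the notion, not one of the thesis's algorithms:

```python
from itertools import product

# Generator matrix of the systematic [7,4] Hamming code (minimum distance 3).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(m):
    return tuple(sum(m[i] & G[i][j] for i in range(4)) % 2 for j in range(7))

CODEWORDS = [encode(m) for m in product((0, 1), repeat=4)]

def list_decode(r, radius):
    """Return every codeword within Hamming distance `radius` of r."""
    return [c for c in CODEWORDS
            if sum(x != y for x, y in zip(c, r)) <= radius]

c1 = encode((0, 0, 0, 0))       # the all-zero codeword
c2 = encode((1, 0, 0, 0))       # a weight-3 codeword: 1000110
r = (1, 0, 0, 0, 1, 0, 0)       # distance 1 from c2, distance 2 from c1
print(list_decode(r, 1))        # unique decoding regime: only c2
print(list_decode(r, 2))        # beyond d/2: the list contains c1, c2 and more
```

For codes of cryptographic size this enumeration is hopeless, which is exactly why the thesis's polynomial time list decoders for Reed-Solomon and algebraic-geometric codes matter.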

  • Conference Article
  • Cited by 251
  • 10.1109/sfcs.1998.743426
Improved decoding of Reed-Solomon and algebraic-geometric codes
  • Nov 8, 1998
  • V Guruswami + 1 more

Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following curve-fitting problem over a field F: given n points {(x_i, y_i)}_{i=1}^n with x_i, y_i ∈ F, a degree parameter k and an error parameter e, find all univariate polynomials p of degree at most k such that y_i = p(x_i) for all but at most e values of i ∈ {1, ..., n}. We give an algorithm that solves this problem for e < n − √(kn), which improves over previous results for every choice of k and n; for rates k/n > 1/3, the result yields the first asymptotic improvement in four decades. The algorithm generalizes to solve the list decoding problem for other algebraic codes, specifically alternant codes (a class of codes including BCH codes) and algebraic-geometric codes. In both cases, we obtain a list decoding algorithm that corrects up to n − √(n(n − d′)) errors, where n is the block length and d′ is the designed distance of the code. The improvement for the case of algebraic-geometric codes extends the methods of Shokrollahi and Wasserman (1998) and improves upon their bound for every choice of n and d′. We also present some other consequences of our algorithm, including a solution to a weighted curve-fitting problem, which is of use in soft-decision decoding algorithms for Reed-Solomon codes.
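The curve-fitting formulation can be stated very directly in code as a brute-force (exponential-time) reference solver: interpolate every (k+1)-subset of points and keep the polynomials that agree with at least n − e of them. This is only the problem statement, not the Guruswami-Sudan algorithm; the field size and the planted lines below are arbitrary choices for illustration:

```python
from itertools import combinations

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def interpolate(pts, p):
    """Lagrange interpolation over GF(p); coefficients low degree first."""
    coeffs = [0] * len(pts)
    for i, (xi, yi) in enumerate(pts):
        num, den = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = poly_mul(num, [(-xj) % p, 1], p)
                den = den * (xi - xj) % p
        scale = yi * pow(den, -1, p) % p       # modular inverse (Python 3.8+)
        for d, c in enumerate(num):
            coeffs[d] = (coeffs[d] + scale * c) % p
    return tuple(coeffs)

def poly_eval(c, x, p):
    r = 0
    for coef in reversed(c):
        r = (r * x + coef) % p
    return r

def curve_fit_list(points, k, e, p):
    """All polynomials of degree <= k agreeing with >= n - e of the points."""
    n, found = len(points), set()
    for subset in combinations(points, k + 1):
        cand = interpolate(subset, p)
        agree = sum(poly_eval(cand, x, p) == y for x, y in points)
        if agree >= n - e:
            found.add(cand)
    return found

# Six points over GF(13): four lie on y = x + 1, three on y = 2x
# (the two lines share the point (1, 2)); with e = 3 both must be reported.
pts = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 8), (5, 10)]
print(curve_fit_list(pts, 1, 3, 13))   # the two planted lines, as coefficient tuples
```

The point of the algorithms surveyed in the abstract is to find this same list in polynomial time, for agreement as low as √(kn).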

  • Research Article
  • 10.11999/jeit190851
Survey on Applications of List Decoding to Cryptography
  • Jun 4, 2020
  • Zhuoran Zhang + 2 more

Since the concept of list decoding was proposed in the 1950s, list decoding has not only been applied to communication and coding theory, but has also played a significant role in computational complexity and cryptography. In recent years, with the rapid development of quantum computing, traditional cryptographic schemes based on factorization and other hard problems are greatly threatened. Code-based cryptosystems, whose security relies on NP-hard problems in coding theory, are attracting more and more attention as candidates for post-quantum cryptography, and with them list decoding algorithms. This paper systematically reviews the applications of list decoding to cryptography, including early applications in proving that any one-way function has hard-core bits, designing traitor tracing schemes, designing public key schemes using polynomial reconstruction as a cryptographic primitive, improving traditional code-based cryptosystems and solving Discrete Logarithm Problems (DLP), as well as recent applications to designing secure interactive communication protocols, solving the elliptic curve discrete logarithm problem, and designing new cryptographic schemes based on error-correcting codes. Finally, open research issues are discussed: improving list decoding algorithms, applying them to the design of cryptographic protocols and to cryptanalysis, and exploring new application scenarios.

  • Addendum
  • Cited by 4
  • 10.1016/j.matpr.2021.07.182
WITHDRAWN: LSB based image steganography using McEliece cryptosystem
  • Jul 1, 2021
  • Materials Today: Proceedings
  • Hayder Abdulkudhur Mohammed + 1 more


  • Research Article
  • Cited by 1
  • 10.22060/eej.2016.814
Steganography Scheme Based on Reed-Muller Code with Improving Payload and Ability to Retrieval of Destroyed Data for Digital Images
  • Jun 1, 2017
  • Amir Masoud Molaei + 2 more

In this paper, a new steganography scheme with high embedding payload and good visual quality is presented. Before the embedding process, the secret information is encoded in blocks using a Reed-Muller error correction code. After data encoding and embedding into the low-order bits of the host image, a modulus function is used to increase the visual quality of the stego image. Since the proposed method is able to embed secret information into more significant bits of the image, it has an improved embedding payload. The steps for extracting data from the host image are independent of the original image; therefore, the proposed algorithm has a blind detection process, which makes it more suitable for practical and online applications. The simulation results show that, thanks to the error correction code, the proposed algorithm is also able to retrieve data destroyed by intentional or unintentional attacks such as the addition of noise and filtering. In addition, the payload is improved in comparison with similar techniques.
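The low-order-bit embedding step that the scheme builds on is easy to sketch as plain LSB substitution on a list of 8-bit pixel values; the paper's Reed-Muller encoding and modulus-function refinement are not reproduced here, so this is only the baseline idea:

```python
def embed_lsb(pixels, bits):
    """Hide one bit in the least significant bit of each pixel."""
    assert len(bits) <= len(pixels)
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b          # clear the LSB, then set it to b
    return out

def extract_lsb(pixels, n):
    """Recover the first n hidden bits; needs no copy of the cover image."""
    return [p & 1 for p in pixels[:n]]

cover = [203, 15, 88, 240, 97, 64, 129, 255]
secret = [1, 0, 1, 1, 0]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, len(secret)))      # -> [1, 0, 1, 1, 0]
```

Each pixel changes by at most 1, which is why the visual quality stays high; encoding `secret` with an error-correcting code before embedding is what lets the scheme survive noise and filtering.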

  • Conference Article
  • Cited by 12
  • 10.1145/2746539.2746575
Reed-Muller Codes for Random Erasures and Errors
  • Jun 14, 2015
  • Emmanuel Abbe + 2 more

This paper studies the parameters for which binary Reed-Muller (RM) codes can be decoded successfully on the BEC and BSC, and in particular when they can achieve capacity for these two classical channels. Necessarily, the paper also studies properties of evaluations of multi-variate GF(2) polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity both for very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity for very low rate regimes, and for very high rates, we show that they can uniquely decode at about square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about E(m,r), the matrix whose rows are truth tables of all monomials of degree ≤ r in m variables. What is the most (resp. least) number of random columns in E(m,r) that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees r, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from the following novel reduction. For every linear code C of sufficiently high rate we construct a new code C' obtained by tensoring C, such that for every subset S of coordinates, if C can recover from erasures in S, then C' can recover from errors in S. Specializing this to RM codes and using our results for erasures implies our result on unique decoding of RM codes at high rate. Finally, two of our capacity achieving results require tight bounds on the weight distribution of RM codes. We obtain such bounds extending the recent bounds of [27] from constant degree to linear degree polynomials.
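The matrix E(m, r) described above, with rows indexed by monomials of degree ≤ r and columns by the 2^m inputs, is exactly a generator matrix of the Reed-Muller code RM(r, m), and its basic parameters can be checked by brute force for tiny cases. This is an illustrative computation only; the paper's actual arguments concern random submatrices of E(m, r):

```python
from itertools import combinations, product

def prod_bits(x, vars_):
    """Evaluate the monomial over the variables in vars_ at input x."""
    out = 1
    for v in vars_:
        out &= x[v]
    return out

def rm_generator(r, m):
    """Rows of E(m, r): truth tables of all monomials of degree <= r."""
    inputs = list(product((0, 1), repeat=m))
    rows = []
    for deg in range(r + 1):
        for vars_ in combinations(range(m), deg):
            rows.append([prod_bits(x, vars_) for x in inputs])
    return rows

E = rm_generator(1, 3)      # RM(1, 3): length 8, dimension 1 + 3 = 4
codewords = set()
for coeffs in product((0, 1), repeat=len(E)):
    cw = tuple(sum(c & row[j] for c, row in zip(coeffs, E)) % 2
               for j in range(8))
    codewords.add(cw)

print(len(codewords))                            # 16 = 2^4 distinct codewords
print(min(sum(c) for c in codewords if any(c)))  # minimum weight 2^(m-r) = 4
```

RM(1, 3) is the [8, 4, 4] extended Hamming code, so the minimum weight 2^(m−r) = 4 confirms the classical Reed-Muller distance formula on this small instance.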

  • Research Article
  • Cited by 67
  • 10.1109/tit.2015.2462817
Reed–Muller Codes for Random Erasures and Errors
  • Oct 1, 2015
  • IEEE Transactions on Information Theory
  • Emmanuel Abbe + 2 more

This paper studies the parameters for which binary Reed-Muller (RM) codes can be decoded successfully on the binary erasure channel and binary symmetric channel, and, in particular, when they can achieve capacity for these two classical channels. Necessarily, this paper also studies the properties of evaluations of multivariate GF(2) polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity both for very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity for very low rate regimes, and for very high rates, we show that they can uniquely decode at about the square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about E(m, r), the matrix whose rows are the truth tables of all the monomials of degree ≤ r in m variables. What is the most (resp. least) number of random columns in E(m, r) that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees r, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from the following novel reduction. For every linear code C of sufficiently high rate, we construct a new code C' obtained by tensoring C, such that for every subset S of coordinates, if C can recover from erasures in S, then C' can recover from errors in S. Specializing this to the RM codes and using our results for erasures implies our result on the unique decoding of the RM codes at high rate. Finally, two of our capacity achieving results require tight bounds on the weight distribution of RM codes. We obtain such bounds extending the recent bounds from constant degree to linear degree polynomials.
